AI Protocol
Protocol for the use of artificial intelligence and
large language model tools and resources
1. Introduction
The growing availability of artificial intelligence (AI) and large language model (LLM) tools has facilitated new possibilities in scholarly research. Nonetheless, these tools raise concerns about the originality of research, the maintenance of accountability and academic integrity, and the credibility of both scholars’ work and that of the scientific journal.
To address these challenges, it is important to document the type and extent of AI and LLM tool use in research. These guidelines therefore describe Lexikos’ approach to the use of AI and LLM tools in research, and aim to balance transformative, contemporary research and learning in the AI-enabled world on the one hand with the ethical, responsible use of AI and LLM tools on the other. As a general rule, Lexikos follows the ASSAf and SciELO Guidelines for the Use of Artificial Intelligence (AI) Tools and Resources in Research Communication. This protocol is designed to guide authors in utilising AI and AI-assisted technologies responsibly and transparently, and is subject to change as technology continues to evolve.
Since AI and LLM tool usage can be closely linked to plagiarism, plagiarism is also briefly addressed.
2. Recommendations for authors
As set out in the author guidelines, only original contributions will be considered for publication. Authors bear full and sole responsibility for the originality, factual accuracy and integrity of the content of their contributions.
In this sense, originality in contributions is understood as the creation of entirely new, unborrowed and authentic research. Authors must avoid any form of plagiarism, including but not limited to direct copying, paraphrasing without proper citation, or presenting someone else’s ideas, words or findings as their own. Plagiarism in the form of duplicate or translated publication of the author’s own work, in whole or in part and without proper citation, is not tolerated by the journal. Furthermore, authors are prohibited from using AI or LLMs to generate content without proper disclosure, fabricate or falsify data, or manipulate research findings in any way. Authors are urged to be aware of the limitations and possible biases of these tools.
Any use of AI or LLMs in the creation or analysis of data must be disclosed and appropriately referenced, since it is not the work of the authors (see section 3). If authors use AI or LLMs to create content or analyse data in any way, the responsibility for ensuring the accuracy and integrity of that content remains with the authors. Authors are responsible for scrutinising any content created or manipulated in such a manner so as to safeguard against misinformation generated by AI or LLM tools.
Authors are permitted to use tools and resources that assist in the preparation, methodology, data analysis, writing support, review and translation of their articles. AI applications that detect grammatical, spelling and punctuation errors, for example, are allowed without disclosure. However, their use should maintain ethical and scientific integrity, and authors are advised to exercise caution and discretion. Authors are advised to scrutinise the suggestions and changes made by AI tools and resources, especially in matters of terminology or subject matter expertise, since AI or LLM tools may misinterpret authors’ work.
3. How AI and/or LLM usage should be referenced
Concealing the use of AI or LLM tools and resources is unethical and violates the principles of academic integrity and honesty in research. If content is generated by AI or LLM tools and resources, the content should be referenced as an unrecoverable source, similar to personal communication. When referencing the use of a tool such as ChatGPT, please adhere to this format:
Name of AI. Year. Medium of communication, Receiver of communication, Day Month of communication. URL
OpenAI ChatGPT. 2023. ChatGPT response to Jane Doe, 20 October. https://chat.openai.com/share/f45a1e23-2217-4443-a244-d56ab26ae940
If AI or LLM tools and resources are used to conduct the research itself, such as to generate examples or to illustrate a point or idea, this use should be discussed or disclosed in the relevant section of the article. The ‘prompt’ or plain-language instruction entered in the tool should also be provided, either in the manuscript or as supplementary material to the manuscript.
4. Recommendations for reviewers
As with authors, it is unethical for reviewers to conceal their use of AI or LLM tools and resources, as this violates the principles of academic integrity and honesty in research and in the peer-review process. Consequently, the work and evaluation of the reviewers may not be replaced by AI or LLM tools and resources. This means both that Lexikos will not substitute human reviewers with AI or LLMs, and that reviewers may not use these tools and resources to assess articles while presenting the report and evaluation as their own.
Articles and review articles submitted to Lexikos are subjected to strict anonymous evaluation by independent academic peers to ensure international research quality. Appointed reviewers are therefore responsible for evaluating submissions fairly and objectively, while maintaining a focus on originality and quality. Furthermore, they must not use any AI or LLM tools or resources to review an article, since doing so violates the confidentiality of the submitted work by making the manuscript public and compromises transparency.
Reviewers must also consider the impact and implications of AI-generated content used to conduct analysis or report results, which may be misleading, biased or otherwise constitute misinformation. While the responsibility for utilising anti-plagiarism software to detect both plagiarism and possible unreferenced AI and LLM use lies with the editors, reviewers are expected to identify and report any potential instances of plagiarism or unreferenced use of AI or LLM tools they notice.