The use of Artificial Intelligence (AI) and AI-assisted technologies in scientific discourse has been in the spotlight recently, particularly regarding ChatGPT and other generative AI platforms and tools that can support academic writing. In response to the increasing use of Generative AI and AI-assisted technologies by authors, we adopt Elsevier’s AI author policy, which focuses on ensuring the integrity of the scholarly record and aims to provide greater transparency and guidance to authors, readers, reviewers, editors and contributors.
We are conscious that GenAI development and adoption are changing rapidly, and that it is difficult to keep up and to adjust policies and practices continuously. In the spirit of CriSTaL, which focuses on critical studies in learning and teaching in higher education and is guided by an ethics of care, we value our relationships with our authors and recognise the importance of trust and transparency across all research phases. CriSTaL publishes studies predominantly from the Global South and, as such, is concerned with creating counternarratives to the dominant Eurocentric discourses in academia, whereas GenAI often reproduces and affirms dominant narratives. We therefore do not want to police our authors; rather, we expect contributors to CriSTaL to adhere to our ethos of social justice, equity, integrity, ethics, professionalism, and, most importantly, criticality. We support experimentation and encourage authors to explore different ways to co-create with GenAI, while affirming our values.
As editors, we also want to be transparent about our own use of GenAI. We do not use AI detection tools because we know they are unreliable, but we work with reference checkers, such as Reciteworks, to verify consistency between in-text references and the bibliography. We also use human oversight to identify typical GenAI writing patterns and terminology. We mostly apply a developmental approach, in which we address concerns about appropriate AI use and give authors the opportunity to resubmit. We have rejected, and will continue to reject, papers when there are serious concerns about the use of GenAI, such as an absence of authorial voice, lack of context, questionable data authenticity, limited information about the research process (e.g., ethics clearance), and/or hallucinated references. Reviewers are also encouraged to look out for unethical and inappropriate use of GenAI and to avoid using GenAI tools in their review process, as submitting authors' manuscripts to GenAI platforms would compromise scholarly integrity and violate privacy regulations.
Where authors use AI and AI-assisted technologies in the writing process:
- The use of technologies to improve readability and language, such as grammar and spelling checkers, is allowed. However, we value authors' distinct voices, and we find that these tools sometimes strip away the author's original voice, making the writing bland and repetitive.
- Many research tools, such as data analysis tools, now have built-in AI features. Where researchers use GenAI beyond editing, for example to map the literature, brainstorm research questions, or analyse and interpret data, this use must be disclosed. It is good practice to combine human and GenAI coding and to compare the two.
- If GenAI tools were used to support data collection, transcription, mapping, data analysis, or image generation, authors should describe in detail, in the research methodology or design section, how this was done and how human oversight was applied.
- We do not allow synthetic data generation.
- Core research components, such as discussing findings or drawing conclusions, should remain the author’s responsibility.
- When AI technology is applied, it must be with human oversight and control, and with careful review and editing of the results, as AI can generate authoritative-sounding output that may be incorrect, incomplete, decontextualised, or biased.
- Do not list AI and AI-assisted technologies as an author or co-author, or cite AI as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans, as outlined in Elsevier’s AI author policy.
- We urge authors to carefully check references for correctness, as this is often the first indication to editors that GenAI has not been used appropriately.
- All submissions must therefore include an AI statement. Authors should disclose in their manuscript the use of AI and AI-assisted technologies in the writing process by following the instructions below. A statement will appear in the published work. Please note that authors are ultimately responsible and accountable for the contents of the work.
- Use GenAI to support, not replace, the research process and voice.
Disclosure instructions
- Authors must disclose the use of generative AI and AI-assisted technologies in the writing process by adding a statement at the end of their manuscript in the core manuscript file, before the References list. The statement should be placed in a new section entitled ‘Declaration of Generative AI and AI-assisted technologies in the writing process’.
Statement: During the preparation of this work, the author(s) used [NAME TOOL / SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.
Example:
Particularly as practices with generative AI continue to evolve at a rapid pace, the authors recognize the need for openness and transparency around AI in the form of usage statements. These statements not only enhance trust and accountability, but also encourage the open sharing of practices, providing a foundation for ongoing discourse around the identity and positionality of AI in academic work. As such, the authors declare that generative AI was used to support a select number of tasks within the research process. The AI-supported search tools, Elicit, Consensus, and Semantic Scholar were used to assist in the process of selecting initial articles for exploration for the literature review. Transcripts were created of the Zoom interviews using Zoom’s native transcription tool, as well as Whisper AI. Chat GPT Plus was used to create transcripts of Zoom interviews and to generate summaries of transcript text from interview data (offered with informed consent) as a resource for comparison against the human-coded data summaries prepared by the research team. Generative AI was not used for the writing of the manuscript nor in the creation of images, graphics, tables, or corresponding captions.
https://aiopeneducation.pubpub.org/pub/fmktz5d3/release/5
The ASSAf Council Guidelines for the Use of AI
The ASSAf Council has endorsed the ASSAf and SciELO Guidelines for the Use of Artificial Intelligence (AI) Tools and Resources in Research Communication.
Background:
The original version of this document was tabled at the SciELO 25-year celebration in São Paulo, Brazil, in September 2023. The document was circulated to various stakeholders, and comments and input were received from members of the ASSAf Council, the Committee on Scholarly Publishing in South Africa (CSPiSA), the National Scholarly Editors Forum (NSEF), the SciELO SA editors and the SciELO SA Advisory Committee, and staff members of the ASSAf Scholarly Publishing Unit.
Council endorsement:
The ASSAf Council requested a few modifications to the Guidelines, and the revised Guidelines were subsequently endorsed on 17 September 2024.
Useful resources
Cleland, J., Driessen, E., Masters, K., Lingard, L., & Maggio, L. A. (2025). When and how to disclose AI use in academic publishing: AMEE Guide No. 192. Medical Teacher.
https://www.tandfonline.com/doi/full/10.1080/0142159X.2025.2607513#d1e337
