Generative AI Policy
Generative Artificial Intelligence (GenAI) technologies (such as ChatGPT, Bard, and Copilot) can be useful in research and scientific writing. However, their use raises important considerations for transparency, accountability, and academic integrity.
To ensure responsible use of GenAI in academic publishing, the RJCSM adopts the following policy:
Authorship
- GenAI tools cannot be listed as authors or co-authors. Authorship implies accountability, originality, and the ability to take responsibility for the work, qualities that AI tools do not possess.
Permitted Use
- Authors may use GenAI tools to improve the language, grammar, or clarity of manuscripts.
- Authors may use GenAI for coding assistance, idea generation, or drafting text, but they must critically review and validate all outputs.
Disclosure Requirement
- Any use of GenAI tools must be clearly disclosed in the manuscript (e.g., in the Acknowledgments or Methods section). Authors should specify the tool used and its purpose (e.g., “We used ChatGPT (OpenAI) to refine the language of this article”).
Accountability
- Authors are fully responsible for the accuracy, originality, integrity, and ethical standards of their work, regardless of whether GenAI tools were used.
- GenAI outputs must not introduce fabricated data, references, or misleading content.
Peer Review Process
- Reviewers and editors may not use GenAI tools to generate review reports or editorial decisions. Limited use for grammar improvement or summarization may be allowed, but the intellectual evaluation must always be human-driven.
Ethical Standards
- Plagiarism, data fabrication, image manipulation, and unethical use of GenAI will be treated as serious violations of publication ethics and may result in manuscript rejection or retraction.