Considering the Implications of ChatGPT for Academic Literature
By Robert J Stephens, MD
The Case
The resident editor for an emergency medicine academic journal has been asked to write an editorial on the ethics of use of ChatGPT and other similar artificial intelligences in academic writing. Writing the manuscript is not going as easily as he hoped, but while researching for the editorial, he decides to see if the tool can help him complete the article before his deadline. He quickly prepares a draft using this new online artificial intelligence technology and submits it to his project mentor for review.
Recent advances in artificial intelligence (AI) have entered public awareness, with large language models (LLMs) such as ChatGPT becoming remarkably popular and generating both excitement and controversy in the academic world. The dissemination of these tools raises multiple ethical questions about how they can and should be used in clinical research.
Falsification and Duplicate Publication
LLM AIs use a vast bank of example text to predict “ideal” responses to queries — the statistically most likely set of sentences to return in answer to a question. This does not guarantee that these responses are correct; according to the developers, the reinforcement learning model does not provide a “source of truth.” Simply put, it seeks plausibility rather than fact. The algorithm does not have a search engine function and cannot pull sources from the internet; it can only draw on its training data. Citations delivered by ChatGPT may not reflect the content of the cited work, and the tool has been reported to generate citations for nonexistent works, with one study finding only 6% of citations correctly referenced. Above all, ChatGPT is not an analysis tool and should not be used in this context. Most concerning, the AI’s output is plausible, and the tool seems adept at falsifying results in a credible way.
A similar concern is that these technologies will make duplicate publication more difficult to detect. Gao et al. used ChatGPT to regenerate 50 published abstracts, and all of the generated abstracts were rated as highly original by a plagiarism detection algorithm.
Authorship and “Plagiarism”
One of the stickiest ethical issues that we will face in the coming years is how these tools will affect the concept of authorship. LLMs do not understand the sentences they produce, regardless of how well the model has been trained. Authorship inherently implies accountability to the scientific community for the content of the work, and AI tools cannot carry this responsibility. In response to publications produced using AI, several of which listed ChatGPT as an author, many scientific publishers have issued specific author guidance on the use of AI and AI-assisted technologies in the publication of scientific work. Carefully reviewing these policies will be vital for authors who choose to use AI assistance, and a clear declaration of whether and how these technologies were used should be included in submitted manuscripts. Multiple scientific organizations and publishers have weighed in on this topic, unanimously ruling that AI cannot be credited as an author (COPE, JAMA Network, Nature).
Additionally, the use of AI to generate text for a manuscript raises ethical questions surrounding the definition of plagiarism. Although an author who uses AI to generate text would not be taking credit for another person’s work, the text certainly would not be original. Provided the ideas are original, does physically writing the text truly determine authorship? Is the use of AI-generated or AI-edited text an act of plagiarism? Currently, we do not have definitive answers to these questions, and as a scientific community we need to be actively seeking consensus on them.
Future Use
It is naive to believe that LLMs and similar AI will not play a role in the future of research. So how can these tools be leveraged to aid in publishing research in an ethical manner? The obvious answer is to enhance readability, which is perhaps most promising for authors for whom English is a second language. Another potential use would be to help authors format their work to meet journal requirements and prepare submission cover letters. For this application, however, ChatGPT has so far proven particularly inept, with the only study on this topic finding it unsuccessful across nearly all tested abstracts. That is not to say this technology will not develop significantly in the coming weeks to months as further iterations are released. In the future, AI may play a greater role in study design, hypothesis generation, data analysis, and manuscript production.
The Conclusion
The resident editor tells his mentor that he used ChatGPT to aid in drafting the paper. He and his mentor rework the manuscript to ensure that it is original and that it truly reflects reality.
About The Author
Dr. Stephens served as a 2022-2023 resident editor for Academic Emergency Medicine journal. He is a graduating emergency medicine resident at Washington University in St Louis and will be continuing his training as a critical care fellow at the University of Maryland.