Who can be listed as an author of a research paper? There seems to be some confusion about the so-called Vancouver rules for academic authorship, which serve as publication ethics guidelines primarily in medicine and the natural sciences (but sometimes also in the humanities and social sciences). According to these rules, an academic author must have contributed intellectually to the study, participated in the writing process, and approved the final version of the paper. The deepest confusion, however, seems to concern the fourth rule, which requires that an academic author take responsibility for the accuracy and integrity of the published research. The confusion is not lessened by the fact that artificial intelligences such as ChatGPT have begun to be used in the research and writing process. Researchers sometimes ask the AI to generate objections to their reasoning, which can of course make a significant contribution to the research process. The AI can also generate text that contributes to the writing of the article. Should such an AI count as a co-author?
No, says the Committee on Publication Ethics (COPE), with reference to the last requirement of the Vancouver rules: an AI cannot be an author of an academic publication, because it cannot take responsibility for the published research. The committee’s dismissal of AI authorship has sparked a small but instructive debate in the Journal of Medical Ethics. The first to write was Neil Levy, who argued that responsibility (for entire studies) is not a reasonable requirement for academic authorship, and that an AI could already count as an author (if the requirement is dropped). This prompted a response from Gert Helgesson and William Bülow, who argued that responsibility (realistically interpreted) is a reasonable requirement, and that an AI cannot count as an author, since it cannot take responsibility.
What is this debate about? What does the rule that gave rise to it say? It states that, to be considered an author of a scientific article, you must agree to be accountable for all aspects of the work: you must ensure that questions about the accuracy and integrity of the published research are satisfactorily investigated and resolved. In short, an academic author must be able to answer for the work. According to Neil Levy, this requirement is too strong. In medicine and the natural sciences, it is often the case that almost none of the researchers listed as co-authors can answer for the entire published study. The collaborations can be huge, and the researchers are specialists in their own narrow fields. They lack the overview and the competence to assess and answer for the study in its entirety. In many cases, not even the first author can do this, Levy points out. If we do not want to make it almost impossible to be listed as an author in many scientific disciplines, responsibility must be abolished as a requirement for authorship, he argues. We then have to accept that an AI can already count as a co-author of many scientific studies, provided that the AI made a significant intellectual contribution to the research.
However, Neil Levy leaves open a third possibility. The responsibility criterion could be reinterpreted so that it can be fulfilled by the researchers who are usually listed as authors today. What is the alternative interpretation? A researcher who has made a significant intellectual contribution to a research article must, in order to be listed as an author, accept responsibility for their “local” contribution to the study, not for the study as a whole. On this interpretation, an AI cannot count as an academic author, because it cannot answer or be held responsible even for its “local” contribution to the study.
According to Gert Helgesson and William Bülow, this third possibility is the obviously correct interpretation of the fourth Vancouver rule. The reasonable interpretation, they argue, is that anyone listed as an author of an academic publication has a responsibility to facilitate an investigation if irregularities or mistakes are suspected in the study: not only after the study is published, but throughout the research process. No one, however, can be held responsible for an entire study, sometimes not even the first author. You can only be held responsible for your own contribution, for the part of the study that you have the insight and competence to judge. But if you suspect irregularities in other parts of the study, you still have a responsibility as an academic author to call attention to this, and to act so that the suspicions are investigated if they cannot be immediately dismissed.
The confusion about the fourth criterion of academic authorship is natural: the criterion is actually not that easy to understand, and it should therefore be specified. The debate in the Journal of Medical Ethics provides an instructive picture of how differently the criterion can be interpreted, and it can thus motivate proposals on how the criterion should be specified. You can read Neil Levy’s article here: Responsibility is not required for authorship. The response from Gert Helgesson and William Bülow can be found here: Responsibility is an adequate requirement for authorship: a reply to Levy.
Personally, I want to ask whether an AI, which cannot take responsibility for research work, can even be said to make significant intellectual contributions to scientific studies. In academia, we are expected to be open to criticism from others, and not least from ourselves. We are expected to be able to critically assess our ideas, theories and methods: to judge whether objections are valid, and then to defend ourselves or change our minds. This is an important part of doctoral education and of the research seminar. We can therefore hardly be said to contribute intellectually to research, I suppose, if we lack the ability to self-critically assess the accuracy of our contributions. ChatGPT, I am inclined to say, can thus hardly be said to make significant intellectual contributions to research: not even when it generates self-critical or self-defending text on the basis of statistical calculations over huge databases of language. It is the researchers who judge whether the generated text provides good reasons to change their minds or to defend their views. For this reason, it would be a misunderstanding to acknowledge the contribution of ChatGPT in a research paper, as is usually done with research colleagues who contributed intellectually to the study without meeting the other requirements for academic authorship. Rather, the authors of the study should indicate how ChatGPT was used as a tool in the study, much as they describe the use of other tools and methods. How should this be done? In the debate, it is argued that this, too, needs to be specified.
Written by…
Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.
Levy N. Responsibility is not required for authorship. Journal of Medical Ethics. Published Online First: 15 May 2024. doi: 10.1136/jme-2024-109912
Helgesson G, Bülow W. Responsibility is an adequate requirement for authorship: a reply to Levy. Journal of Medical Ethics. Published Online First: 4 July 2024. doi: 10.1136/jme-2024-110245