A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: responsibility

Debate on responsibility and academic authorship

Who can be listed as an author of a research paper? There seems to be some confusion about the so-called Vancouver rules for academic authorship, which serve as publication-ethics guidelines primarily in medicine and the natural sciences (but sometimes also in the humanities and social sciences). According to these rules, an academic author must have contributed intellectually to the study, participated in the writing process, and approved the final version of the paper. The deepest confusion, however, seems to concern the fourth rule, which requires that an academic author take responsibility for the accuracy and integrity of the published research. The confusion is not lessened by the fact that artificial intelligences such as ChatGPT have begun to be used in the research and writing process. Researchers sometimes ask the AI to generate objections to their reasoning, which can of course make a significant contribution to the research process. The AI can also generate text that contributes to the process of writing the article. Should such an AI count as a co-author?

No, says the Committee on Publication Ethics (COPE) with reference to the last requirement of the Vancouver rules: an AI cannot be an author of an academic publication, because it cannot take responsibility for the published research. The committee’s dismissal of AI authorship has sparked a small but instructive debate in the Journal of Medical Ethics. The first to write was Neil Levy, who argued that responsibility (for entire studies) is not a reasonable requirement for academic authorship, and that an AI could already count as an author (if the requirement is dropped). This prompted a response from Gert Helgesson and William Bülow, who argued that responsibility (realistically interpreted) is a reasonable requirement, and that an AI cannot be counted as an author, as it cannot take responsibility.

What is this debate about? What does the rule that gave rise to it say? It states that, to be considered an author of a scientific article, you must agree to be accountable for all aspects of the work: you must ensure that questions about the accuracy and integrity of the published research are satisfactorily investigated and resolved. In short, an academic author must be able to answer for the work. According to Neil Levy, this requirement is too strong. In medicine and the natural sciences, it is often the case that almost none of the researchers listed as co-authors can answer for the entire published study. The collaborations can be huge, and the researchers are specialists in their own narrow fields. They lack the overview and competence to assess and answer for the study in its entirety. In many cases, not even the first author can do this, says Neil Levy. If we do not want to make it almost impossible to be listed as an author in many scientific disciplines, responsibility must be abolished as a requirement for authorship, he argues. We must then accept that an AI can already today be counted as a co-author of many scientific studies, provided that the AI made a significant intellectual contribution to the research.

However, Neil Levy suggests a third possibility: the responsibility criterion could be reinterpreted so that it can be fulfilled by the researchers who are usually listed as authors today. What is the alternative interpretation? A researcher who has made a significant intellectual contribution to a research article must, in order to be listed as an author, accept responsibility for their “local” contribution to the study, not for the study as a whole. On this interpretation, an AI cannot count as an academic author, because it cannot answer or be held responsible even for its “local” contribution to the study.

According to Gert Helgesson and William Bülow, this third possibility is the obviously correct interpretation of the fourth Vancouver rule. The reasonable interpretation, they argue, is that anyone listed as an author of an academic publication has a responsibility to facilitate an investigation if irregularities or mistakes are suspected in the study. This applies not only after the study is published, but throughout the research process. However, no one can be held responsible for an entire study, sometimes not even the first author. You can only be held responsible for your own contribution, for the part of the study that you have the insight and competence to judge. If you suspect irregularities in other parts of the study, then as an academic author you still have a responsibility to call attention to this, and to act so that the suspicions are investigated if they cannot be immediately dismissed.

The confusion about the fourth criterion of academic authorship is natural: the criterion is actually not that easy to understand and should therefore be specified. The debate in the Journal of Medical Ethics provides an instructive picture of how differently the criterion can be interpreted, and it can thus motivate proposals for how the criterion should be specified. You can read Neil Levy’s article here: Responsibility is not required for authorship. The response from Gert Helgesson and William Bülow can be found here: Responsibility is an adequate requirement for authorship: a reply to Levy.

Personally, I want to ask whether an AI, which cannot take responsibility for research work, can be said to make significant intellectual contributions to scientific studies. In academia, we are expected to be open to criticism from others, and not least from ourselves. We are expected to be able to critically assess our ideas, theories and methods: to judge whether objections are valid and then defend ourselves or change our minds. This is an important part of doctoral education and of the research seminar. We can therefore hardly be said to contribute intellectually to research if we lack the ability to self-critically assess the accuracy of our contributions. ChatGPT can thus hardly be said to make significant intellectual contributions to research, I am inclined to say. Not even when it generates self-critical or self-defending text on the basis of statistical calculations over huge language databases. It is the researchers who judge whether the generated text gives them good reason to either change their minds or defend themselves. If so, it would be a misunderstanding to acknowledge ChatGPT’s contribution in a research paper, as is usually done with research colleagues who contributed intellectually to the study without meeting the other requirements for academic authorship. Rather, the authors of the study should indicate how ChatGPT was used as a tool in the study, similar to how they describe the use of other tools and methods. How should this be done? In the debate, it is argued that this also needs to be specified.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Levy N. Responsibility is not required for authorship. Journal of Medical Ethics. Published Online First: 15 May 2024. doi: 10.1136/jme-2024-109912

Helgesson G, Bülow W. Responsibility is an adequate requirement for authorship: a reply to Levy. Journal of Medical Ethics. Published Online First: 04 July 2024. doi: 10.1136/jme-2024-110245

This post in Swedish

We participate in debates

The branding of psychotherapy and responsible practice

Clinical psychologists receive degrees from universities that train them to effectively apply psychotherapy programs in psychiatric settings. But after graduation, whose responsibility is it to train, or perhaps re-train, clinical psychologists to practice “proper” therapy? Is it the responsibility of the owner of a three-letter branded protocol, such as DBT (Dialectical Behavior Therapy), SFT (Schema-Focused Therapy) or MBSR (Mindfulness-Based Stress Reduction)? Or is it the responsibility of the health care systems that provide treatment? Perhaps they should ensure that a psychologist’s training is regularly updated, as in most other clinical professions?

As a clinical psychologist myself, with experience from practice in France, I want to address some challenges that I have experienced and reflected upon as I have tried to develop my own way of practicing therapy.

Medical training updates are widely encouraged for psychiatrists through credits for attending certified courses or international conferences (Continuing Medical Education points). However, when it comes to clinical psychologists, the psychiatrists’ side-kicks offering psychotherapy treatment, there is no such unified system. Psychotherapy is in essence social work, and its success depends on the relationship between the therapist and the patient. But in many countries, particularly in the English-speaking world, there is a tendency to brand specific therapeutic programs, such as the commonly known cognitive behavioral therapy, CBT, or versions of it such as dialectical behavior therapy, DBT. Being “branded” as a psychotherapist comes with the advantage of being recurrently involved in seminars, training and follow-ups on one’s practice. But as a therapist, you are offering more than a program: you are offering years of experience and training. It is neither practical nor possible to “label” every little piece of training that made your practice look the way it does now.

Yet psychotherapists face an entire branding system, with names such as DBT (Dialectical Behavior Therapy), TFT (Transference Focused Therapy), SFT (Schema Focused Therapy) and MIT (Metacognition and Insight Therapy). All these names give structure, labels to refer to, which help both patients and colleagues to identify what happens in the therapy. But at the same time we might be confusing everyone involved with a jargon of acronyms. Depending on the cultural context, even using the word “client” instead of “patient” can be seen as subversive. The very idea that psychotherapy could be branded may appear strange and unusual. Are we considering the values at stake? Might not branding shift the focus from values of care towards economic considerations (such as selling your brand)? On the surface, branding looks reasonable, an approach that supports a fair distribution of care.

As the author of a CBT protocol myself, using a much longer acronym, ECCCLORE, I have been forced to question the underlying dynamic of naming or branding a particular kind of psychotherapy. As with most CBT protocols, the effectiveness lies in the structure of the protocol. Although I wanted to protect that structure, I did not want to rule out potential changes or improvements, but to make the protocol open to practitioners’ own experiences of using it with their patients. Therefore, I always encourage my students and colleagues to integrate the protocol with their own experiences, strengths and discoveries along the way.

Why? Well, because in using a protocol to engage with people in the intimate setting that psychotherapy is, we must also examine our values as caregivers, always considering the ethical principles of non-malevolence, respect and justice. And just as you must find your way to practice any branded therapy, you must find your way to observe these ethical principles in your work.

I was not harming my patients with the ECCCLORE protocol, but I created something that requires training to apply. Otherwise, like any mechanically applied protocol, it could potentially harm patients. Can that risk be overcome by adding another branded sub-protocol? There are already names all over the place in the CBT world. We all use “branded skills” such as Beck’s Columns and Padesky’s Polygram, and they are free to use, but they are just names for very commonly used tools, which we must again learn to use in practice.

When you dive into the specificity of “certified programs,” things become even more complicated. If I did not brand my project, anyone could use (or abuse) the ECCCLORE name. In France, for example, one needs to declare intellectual property in order to protect a project or research results from being stolen (as the research outcome is not considered the intellectual property of the researcher, as it is in Sweden). This means that anyone could use the name, even in ways unrelated to the CBT framework. By acknowledging the creator’s intellectual property, it is possible for me to brand my own research protocol and evidence-based program, preventing misuse of the methods. But does branding help the replication and dissemination of the protocol? And if my ultimate goal is to offer the protocol to help as many patients as possible, is branding it the best solution?

I sensed an affinity between my own reflections and recent research that questions the ethical guidelines for social justice work in psychology and outlines the need for a social justice ethics. When I thought about branded CBT programs, I recognized ethical risks everywhere. If you pay a lot of money to be trained in Program A, you expect to be recognized as a Program A practitioner, and you expect to benefit from the specific expertise that you earned. Is it fair, then, to offer such services at premium prices? Or to withhold Program A training from most clinical psychologists? Does branding make the program more affordable and accessible to the patients who need it? Is a society fair where most of the latest advances are not available to everyone, but only in private practice? There are of course economic considerations, but on the clinical level it is not easy to sort out the pros and cons of these “acronymized” psychotherapies.

As a treatment developer, I do recognize that having a name to identify the program really helps. The social component of psychotherapy is known to be an important effectiveness factor. This was the case for me as well. Avoiding a stigmatizing name for my therapeutic group, such as “Borderline Group,” was a move toward justice, respect and non-malevolence. I decided to create the acronym together with the first patient group, which helped create motivation and reflected the collaborative process. Because in therapy, it is the patients who have the most at stake. Along the way, I also had the chance to be trained in a manual-based psychotherapy, and I saw the advantages, as a clinician, of having a tribe supporting me as I entered their group. Branded evidence-based psychotherapies organize trainings and conferences, which offer many resources for their practitioners. They build up more and more specific results around subgroups of patients, and they take responsibility for the full functioning of their practitioners.

Branded psychotherapies are probably here to stay, but I wanted to highlight some practical and ethical challenges that I have experienced and reflected upon as a treatment developer. Let me conclude with one final consideration about the future. In recent research on the effectiveness of personality disorder psychotherapy, the main factors were found to be the therapeutic attitude (active and collaborative) and the clarity of the protocol (the underlying theories). Future research may further investigate whether the branding of psychotherapies, which can be confusing, may also contribute to these factors.

Sylvia Martin

Sylvia Martin, Clinical Psychologist and Senior Researcher at the Centre for Research Ethics & Bioethics (CRB)

Sylvia Martin. (2022) Le programme ECCCLORE: Une nouvelle approche du trouble borderline. De Boeck Supérieur.

In dialogue with patients

An ethical strategy for improving the healthcare of brain-damaged patients

How can we improve the clinical care of brain-damaged patients? Individual clinicians, professional and patient associations, and other relevant stakeholders are struggling with this huge challenge.

A crucial step towards a better treatment of these very fragile patients is the elaboration and adoption of agreed-upon recommendations for their clinical treatment, both in emergency and intensive care settings. These recommendations should cover different aspects, from diagnosis to prognosis and rehabilitation plan. Both Europe and the US have issued relevant guidelines on Disorders of Consciousness (DoCs) in order to make clinical practice consistent and ultimately more beneficial to patients.

Nevertheless, these documents risk becoming ineffective or not having sufficient impact if they are not complemented with a clear strategy for operationalizing them. In other words, it is necessary to develop an adequate translation of the guidelines into actual clinical practice.

In a recent article that I wrote with Arleen Salles, we argue that ethics plays a crucial role in elaborating and implementing this strategy. The application of the guidelines is ethically very relevant, as it can directly impact the patients’ well-being, their right to the best possible care, communication between clinicians and family members, and overall shared decision-making. Failure to apply the guidelines in an ethically sound manner may inadvertently lead to unequal and unfair treatment of certain patients.

To illustrate, both documents recommend integrating behavioural and instrumental approaches to improve the diagnostic accuracy of DoCs (such as vegetative state/unresponsive wakefulness syndrome, minimally conscious state, and cognitive-motor dissociation). This recommendation is commendable, but not easy to follow because of a number of shortcomings and limitations in the actual clinical settings where patients with DoCs are diagnosed and treated. For instance, not all “ordinary,” non-research-oriented hospitals have the necessary financial, human, and technical resources to afford the dual approach recommended by the guidelines. The implementation of the guidelines is arguably a complex process, involving several actors at different levels of action (from the administration to the clinical staff, from the finances to the therapy, etc.). Therefore, it is crucial to clearly identify “who is responsible for what” at each level of the implementation process.

For this reason, we propose building a strategy to operationalize the guidelines, based on a clarification of the notion of responsibility. We introduce a Distributed Responsibility Model (DRM), which frames responsibility as multi-level and multi-dimensional. The main tenet of DRM is a shift from an individualistic to a modular understanding of responsibility, where several agents share professional and/or moral obligations across time. Moreover, specific responsibilities are assigned depending on the different areas of activity. In this way, each agent is granted a specific autonomy in relation to their field of activity, and the mutual interaction between different agents is clearly defined. As a result, DRM promotes trust between the various agents.

Neither the European nor the US guidelines explicitly address the issue of implementation in terms of responsibility. We argue that this is a problem, because in situations of scarce resources and financial and technological constraints, it is important to explicitly conceptualize responsibility as a distributed ethical imperative that involves several actors. This will make it easier to identify possible failures at different levels and to implement adequate corrective action.

In short, we identify three main levels of responsibility: institutional, clinical, and interpersonal. At the institutional level, responsibility refers to the obligations of the relevant institution or organization (such as the hospital or the research centre). At the clinical level, responsibility refers to the obligations of the clinical staff. At the interpersonal level, responsibility refers to the involvement of different stakeholders with individual patients (more specifically, institutions, clinicians, and families/surrogates).

Our proposal in the article is thus to combine these three levels, as formalized in DRM, in order to operationalize the guidelines. This can help reduce the gap between the recommendations and actual clinical practice.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, Michele; Salles, Arleen. American and European Guidelines on Disorders of Consciousness: Ethical Challenges of Implementation, Journal of Head Trauma Rehabilitation: April 13, 2022. doi: 10.1097/HTR.0000000000000776

We want solid foundations

Can consumers help counteract antimicrobial resistance?

Antimicrobial resistance (AMR) occurs when microorganisms (bacteria, viruses, etc.) survive treatment with antimicrobial drugs, such as antibiotics. However, the problem is not only caused by unwise use of such drugs in humans. These drugs are also used on a large scale in food-producing animals, which is a significant cause of AMR.

In an article in the journal Frontiers in Sustainable Food Systems, Mirko Ancillotti and three co-authors discuss the possibility that food consumers can contribute to counteracting AMR. This is a specific possibility that they argue is often overlooked when addressing the general public.

A difficulty that arises when AMR needs to be handled by several actors, such as authorities, food producers, consumers and retailers, is that the actors shift responsibility onto one another. Consumers can claim that they would buy antibiotic-smart goods if stores offered them, while retailers can claim that they would sell such goods if consumers demanded them. Both parties can also blame how, for example, the market or legislation governs them. Another problem is that if one actor, for example the authorities, takes great responsibility, other actors feel less or no responsibility.

The authors of the article propose that one way out of the difficulty could be to influence consumers to take individual responsibility for AMR. Mirko Ancillotti has previously found evidence that people care about antibiotic resistance. Perhaps a combination of social pressure and empowerment could engage consumers to individually act more wisely from an AMR perspective?

The authors make comparisons with the climate movement and suggest digital innovations in stores and online, which can inform, exert pressure and support AMR-smarter food choices. One example could be apps that help consumers see their purchasing pattern, suggest product alternatives, and inform about what is gained from an AMR perspective by choosing the alternative.

Read the article with its constructive proposal to engage consumers against antimicrobial resistance: The Status Quo Problem and the Role of Consumers Against Antimicrobial Resistance.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Ancillotti, Mirko; Nilsson, Elin; Nordvall, Anna-Carin; Oljans, Emma. The Status Quo Problem and the Role of Consumers Against Antimicrobial Resistance. Frontiers in Sustainable Food Systems, 2022.

This post in Swedish

Approaching future issues

Genetic risk entails genetic responsibility

Intellectual optimists have seen genetic risk information as a human victory over nature. The information gives us power over our future health. What previously would have been our fate, genetics now transforms into matters of personal choice.

Reality, however, is not as rosy as in this dream of intellectual power over life. Where there is risk, there is responsibility, Silke Schicktanz writes in an article on genetic risk and responsibility. This is probably how people experience genetic risk information when they face it: genetic risk gives us new forms of responsibility, rather than liberating us from nature.

Silke Schicktanz describes how responsibility emerges in situations where genetic risk is investigated, communicated and managed. The analysis exceeds what I can reproduce in a short blog post. However, I can give the reader a sense of how genetic risk information entails a broad spectrum of responsibilities. Sometimes in the individual who receives the information. Sometimes in the professional who provides the information. Sometimes in the family affected by the information. The examples are versions of the cases discussed in the article:

Suppose you have become strangely forgetful. You do a genetic test to determine if you have a gene associated with Alzheimer’s disease. You have the gene! The test result immediately makes you responsible for yourself. What can you do to delay or alleviate the disease? What practical measures can be taken at home to help you live with the disease? You can also feel responsibility for your family. Have you passed the gene on to your children and grandchildren? Should you urge them to test themselves? What can they do to protect themselves? The professional who administered the test also becomes responsible. Should she tell you that the validity of the test is low? Maybe you should not have been burdened with such a worrying test result, when the validity is so low?

Suppose you have colorectal cancer. The surgeon invites you to participate in a research study in which a genetic test of the tumor cells will allow individualized treatment. Here, the surgeon becomes responsible for explaining research in personalized medicine, which is not easy. There is also the responsibility of not presenting your participation in the study as an optimization of your treatment. You yourself may feel a responsibility to participate in research, as patients have done in the past. They contributed to the care you receive today. Now you can contribute to the use of genetic information in future cancer care. Moreover, the surgeon may have a responsibility to counteract a possible misunderstanding of the genetic test. You can easily believe that the test says something about disease genes that you may have passed on, and that the information should be relevant to your children. However, the test concerns mutations in the cancer cells. The test provides information only about the tumor.

Suppose you have an unusual neurological disorder. A geneticist informs you that you have a gene sequence that may be the cause of the disease. Here we can easily imagine that you feel responsibility for your family and children. Your 14-year-old son has started to show symptoms, but your 16-year-old daughter is healthy. Should she do a genetic test? You discuss the matter with your ex-partner. You explain how you found the genetic information helpful: you worry less, you have started going on regular check-ups and you have taken preventive measures. Together, you decide to tell your daughter about your test results, so that she can decide for herself if she wants to test herself.

These three examples are sufficient to illustrate how genetic risk entails genetic responsibility. How wonderful it would have been if the information simply allowed us to triumph over nature, without this burdensome genetic responsibility! A pessimist could object that the responsibility becomes overpowering instead of empowering. We must surrender to the course of nature; we cannot control everything but must accept our fate.

Neither optimists nor pessimists tend to be realistic. The article by Silke Schicktanz can help us look more realistically at the responsibilities entailed by genetic risk information.

Pär Segerdahl

Schicktanz, S. 2018. Genetic risk and responsibility: reflections on a complex relationship. Journal of Risk Research 21(2): 236-258

This post in Swedish

We like real-life ethics : www.ethicsblog.crb.uu.se

Trust, responsibility and the Volkswagen scandal

Volkswagen’s cheating with carbon emissions attracted a lot of attention this autumn. It has been suggested that the cheating will lead to a decrease in trust for the company, but also for the industry at large. That is probably true. But we need to reflect on the value of trust, what it is and why it is needed. Is trust a means or a result?

It would seem that trust has a strong instrumental value since it is usually discussed in business-related contexts. Volkswagen allegedly needs people’s trust to avoid losing money. If customers abandon the brand due to distrust, fewer cars will be sold.

This discussion potentially hides the real issue. Trust is not merely a means to create or maintain a brand name, or to make sure that money keeps coming in. Trust is the result of ethically responsible behaviour. The only companies that deserve our trust are the ones that behave responsibly. Trust, in this sense, is closely related to responsibility.

What is responsibility, then? One important distinction is between backward-looking and forward-looking responsibility. In the Volkswagen case, we are now looking for the one who caused the problem, who is to blame and therefore responsible for what happened. But responsibility is not only about blame. It is also a matter of looking ahead, preventing wrongful actions in the future and doing one’s utmost to make sure the organisation of which one is a member behaves responsibly.

One problem in our time is that so many activities take place in such large contexts. Organisations are global and complex, and it is hard to pinpoint who is responsible for what. The individuals involved each do only a small part, like cogs in a wheel. When a gigantic actor like Volkswagen causes damage to health or the environment, it is almost impossible to know who caused what and who should have acted otherwise. In order to avoid this, we need individuals who take responsibility and feel responsible. We should not conceive of people as powerless cogs in a wheel. The only companies that deserve our trust are the ones in which individuals at all levels take responsibility.

What is most important now is not that the company regains trust. Instead, we should demand that the individuals at Volkswagen raise their ethical awareness and start acting responsibly towards people, society and the environment. If they do that, trust will eventually be a result of their responsible behaviour.

Jessica Nihlén Fahlquist

(This text was originally published in Swedish, in the magazine, Unionen, industri och teknik, December 2015.)

Further reading:

Nihlén Fahlquist, J. 2015. “Responsibility as a virtue and the problem of many hands,” In: Ibo van de Poel, Lambèr Royakkers, Sjoerd Zwart. Moral Responsibility in Innovation Networks. Routledge.

Nihlén Fahlquist J. 2006. “Responsibility ascriptions and Vision Zero,” Accident Analysis and Prevention 38, pp. 1113-1118.

Van de Poel, I. and Nihlén Fahlquist, J. 2012. “Risk and responsibility.” In: Sabine Roeser, Rafaela Hillerbrand, Martin Peterson, Per Sandin (eds.), Handbook of Risk Theory. Springer, Dordrecht.

Nihlén Fahlquist J. 2009. “Moral responsibility for environmental problems – individual or institutional?” Journal of Agricultural and Environmental Ethics 22(2), pp. 109-124.

This post in Swedish

We challenge habits of thought : the Ethics Blog