A blog from the Centre for Research Ethics & Bioethics (CRB)

Author: Pär Segerdahl

Debate on responsibility and academic authorship

Who can be listed as an author of a research paper? There seems to be some confusion about the so-called Vancouver rules for academic authorship, which serve as publication ethics guidelines primarily in medicine and the natural sciences (but sometimes also in the humanities and social sciences). According to these rules, an academic author must have contributed intellectually to the study, participated in the writing process, and approved the final version of the paper. However, the deepest confusion seems to concern the fourth rule, which requires that an academic author take responsibility for the accuracy and integrity of the published research. The confusion is not lessened by the fact that artificial intelligences such as ChatGPT have begun to be used in the research and writing process. Researchers sometimes ask the AI to generate objections to their reasoning, which can of course make a significant contribution to the research process. The AI can also generate text that contributes to the writing of the article. Should such an AI count as a co-author?

No, says the Committee on Publication Ethics (COPE), with reference to the last requirement of the Vancouver rules: an AI cannot be an author of an academic publication, because it cannot take responsibility for the published research. The committee’s dismissal of AI authorship has sparked a small but instructive debate in the Journal of Medical Ethics. The first to write was Neil Levy, who argued that responsibility (for entire studies) is not a reasonable requirement for academic authorship, and that an AI could already count as an author (if the requirement is dropped). This prompted a response from Gert Helgesson and William Bülow, who argued that responsibility (realistically interpreted) is a reasonable requirement, and that an AI cannot be counted as an author, as it cannot take responsibility.

What is this debate about? What does the rule that gave rise to it say? It states that, to be considered an author of a scientific article, you must agree to be accountable for all aspects of the work: you must ensure that questions about the accuracy and integrity of the published research are satisfactorily investigated and resolved. In short, an academic author must be able to answer for the work. According to Neil Levy, this requirement is too strong. In medicine and the natural sciences, it is often the case that almost none of the researchers listed as co-authors can answer for the entire published study. The collaborations can be huge and the researchers are specialists in their own narrow fields. They lack the overview and competence to assess and answer for the study in its entirety. In many cases, not even the first author can do this, says Neil Levy. If we do not want to make it almost impossible to be listed as an author in many scientific disciplines, responsibility must be abolished as a requirement for authorship, he argues. We then have to accept that an AI can already today be counted as a co-author of many scientific studies, provided that the AI made a significant intellectual contribution to the research.

However, Neil Levy opens up a third possibility: the responsibility criterion could be reinterpreted so that it can be fulfilled by the researchers who today are usually listed as authors. What is the alternative interpretation? A researcher who has made a significant intellectual contribution to a research article must, in order to be listed as an author, accept responsibility for their “local” contribution to the study, not for the study as a whole. On this interpretation, an AI cannot count as an academic author, because it cannot answer or be held responsible even for its “local” contribution to the study.

According to Gert Helgesson and William Bülow, this third possibility is the obviously correct interpretation of the fourth Vancouver rule. The reasonable interpretation, they argue, is that anyone listed as an author of an academic publication has a responsibility to facilitate an investigation if irregularities or mistakes are suspected in the study: not only after the study is published, but throughout the research process. However, no one can be held responsible for an entire study, sometimes not even the first author. You can only be held responsible for your own contribution, for the part of the study that you have the insight and competence to judge. If you suspect irregularities in other parts of the study, then as an academic author you still have a responsibility to call attention to this, and to act so that the suspicions are investigated if they cannot be immediately dismissed.

The confusion about the fourth criterion of academic authorship is natural: the criterion is actually not that easy to understand, and should therefore be specified. The debate in the Journal of Medical Ethics provides an instructive picture of how differently the criterion can be interpreted, and it can thus motivate proposals on how the criterion should be specified. You can read Neil Levy’s article here: Responsibility is not required for authorship. The response from Gert Helgesson and William Bülow can be found here: Responsibility is an adequate requirement for authorship: a reply to Levy.

Personally, I want to ask whether an AI that cannot take responsibility for research work can be said to make significant intellectual contributions to scientific studies. In academia, we are expected to be open to criticism from others, and not least from ourselves. We are expected to critically assess our ideas, theories, and methods: to judge whether objections are valid and then defend ourselves or change our minds. This is an important part of doctoral education and the research seminar. We can therefore hardly be said to contribute intellectually to research if we lack the ability to self-critically assess the accuracy of our contributions. ChatGPT can thus hardly be said to make significant intellectual contributions to research, I am inclined to say. Not even when it generates self-critical or self-defending text on the basis of statistical calculations over huge language databases. It is the researchers who judge whether generated text provides good reasons to either change their minds or defend themselves. If so, it would be a misunderstanding to acknowledge the contribution of ChatGPT in a research paper, as is usually done with research colleagues who contributed intellectually to the study without meeting the other requirements for academic authorship. Rather, the authors should indicate how ChatGPT was used as a tool in the study, similar to how they describe the use of other tools and methods. How should this be done? In the debate, it is argued that this, too, needs to be specified.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Levy N. Responsibility is not required for authorship. Journal of Medical Ethics. Published Online First: 15 May 2024. doi: 10.1136/jme-2024-109912

Helgesson G, Bülow W. Responsibility is an adequate requirement for authorship: a reply to Levy. Journal of Medical Ethics. Published Online First: 04 July 2024. doi: 10.1136/jme-2024-110245

This post in Swedish

We participate in debates

How do we create sustainable research and development of new antibiotics?

Antibiotic resistance is a growing global challenge, particularly for modern healthcare, which relies on antibiotics to prevent and treat infectious diseases. Multi-resistant bacteria are already present across the globe, and without effective antibiotics, simple medical interventions will become risky in the future. Each year, several million deaths globally are associated with antibiotic resistance. With more and more drug-resistant microorganisms, one could expect an increase in research and development of new antibiotics or vaccines. However, in parallel with the growing global threat from antimicrobial resistance, or AMR as it is often called, the development rate of new antibiotics is instead decreasing. Reduced R&D also reduces the number of experts in the field, which in turn affects our society’s ability to develop new antibiotics.

Why is that so? One reason is that the return on investment is so low that many large pharmaceutical companies have scaled back or abandoned their development programs, resulting in a loss of expertise. The effort to slow down the development of antibiotic resistance requires us to save the most effective medicines for the most difficult cases, and this “stewardship” inhibits the will to invest, as companies cannot count on any new “blockbuster” drugs.

The problem of access to effective treatment is global, and on September 26 this year, the UN General Assembly is organizing a high-level meeting on AMR. The political declaration published ahead of the meeting highlights, among other things, the need for mechanisms for funding research and development, the need for functioning collaborations between private and public actors, and the need for measures to deal with the growing lack of competence in the area.

However, the picture is not only dark. During the last decade, several investments have been made in collaborations to meet the challenges for research and development in the field. One such investment is the European AMR Accelerator program, running since 2019 with funding from the Innovative Medicines Initiative (IMI). The program consists of nine projects that bring different stakeholders together to collaborate on the development of new treatments, for example against multi-resistant tuberculosis.

In a short article recently published in Nature Reviews Drug Discovery, representatives of the program discuss some of the important values and challenges associated with collaborations between academia and industry. Antibiotic development is expensive, and many drug candidates are discontinued already in the early stages of development. By sharing risks and costs between several organizations, the AMR Accelerator has so far been able to contribute to the development of a large portfolio of different antibiotics. In addition, the nine projects have developed research infrastructures for, among other things, modelling, data management, and clinical studies that can benefit the entire AMR research community. Moreover, the critical mass created when 98 organizations collaborate can generate new ideas and synergies in the work against AMR.

There are also challenges. Among them is balancing the perspectives and needs of the different actors in the program, not least in the collaborations between academia and industry, where cooperation agreements and regular meetings have been needed to manage differences in culture and approach. The AMR Accelerator program has also served as neutral ground for competing companies, which have been able to collaborate within the framework of the projects.

According to the authors, the biggest challenge remains: what happens after the projects end? The Innovative Medicines Initiative has invested €479 million in the program. The question now is how the nine projects and partners will find long-term sustainability for the assets and infrastructures they have put in place. Some form of continued funding is needed so that the resources created within the AMR Accelerator can be used in the next phase of the work, where the end goal is providing access to drugs that can treat antibiotic-resistant infections.

The article concludes with a call to governments, research funders, pharmaceutical companies and other actors to invest in research and development of new medicines and research to support the fight against antibiotic resistance. To ensure that we can benefit from investments such as the AMR Accelerator in the long term, regular funding calls are needed to maintain expertise, infrastructures, data and networks.

Read the highly topical article here: The AMR Accelerator: from individual organizations to efficient antibiotic partnerships.

Pär Segerdahl


Fernow J, Olliver M, Couet W, Lagrange S, Lamers MH, Olesen OF, Orrling K, Pieren M, Sloan DJ, Vaquero JJ, Miles TJ & Karlén A. The AMR Accelerator: from individual organizations to efficient antibiotic development partnerships. Nature Reviews Drug Discovery, first online 23 September 2024. doi: 10.1038/d41573-024-00138-9

This post in Swedish

Approaching future issues

Return of health data from clinical trials to the patients

During a clinical trial, large amounts of health data are generated that can be useful not only within the current study. If the trial data are made available for sharing, they can be reused within other research projects. Moreover, if the research participants’ individual health data are returned to them, this may benefit the patients in the study.

The opportunities to increase the usefulness of data from clinical trials in these two ways are not being exploited as well as today’s technology allows. The European project FACILITATE will therefore contribute to improved availability of data from clinical trials for other research purposes and strengthen the position of participating patients and their opportunity to gain access to their individual health data.

A policy brief article in Frontiers in Medicine presents the project’s work and recommendations regarding the position of patients in clinical studies and the possibility of communicating their health data back to them. The project develops an ethical framework that will put patients more at the center and increase their influence over the studies they participate in. For example, it tries to make it easier for patients to dynamically design and modify their consent, access information about the study and retrieve individual health data.

Within the project, an extended set of ethical principles is identified as essential for clinical trials. For example, one should not only respect the patients’ autonomy, but also strengthen their ability to make informed decisions about their own care on the basis of returned health data. Returned data must be judged to be of some benefit to the individuals, and the data must be communicated in such a way that they strengthen, as effectively as possible, the patients’ ability to make informed decisions about their care.

If you are interested in greater opportunities to use health data from clinical trials, mainly opportunities for the participating patients themselves, read the article here: Ethical framework for FACILITATE: a foundation for the return of clinical trial data to participants.

Pär Segerdahl


Ciara Staunton, Johanna M. C. Blom and Deborah Mascalzoni on behalf of the IMI FACILITATE Consortium. Ethical framework for FACILITATE: a foundation for the return of clinical trial data to participants. Frontiers in Medicine, 17 July 2024. https://doi.org/10.3389/fmed.2024.1408600

This post in Swedish

We recommend readings

Does knowing the patient make a moral difference?

Several ethical concepts and principles govern how patients should be treated in healthcare. For example, healthcare professionals should respect patients’ autonomy. Moreover, they should act in the patients’ best interest and avoid actions that can cause harm. Patients must also be treated fairly. However, exactly how such ethical concepts and principles should be applied can vary in different situations.

A new article examines whether the application may depend on whether the healthcare personnel know the patient (in the sense of having knowledge about the patient). Some healthcare situations are characterized by the fact that the patient is unknown to the personnel: they have never met the patient before. Other situations are characterized by familiarity: the personnel have had continuous contact with the patient for a long time. In the latter situations, the personnel know the patient’s personality, living conditions, preferences and needs. Does such familiarity with the patient make any difference to how patients should be treated ethically by the healthcare staff, ask the authors of the article, Joar Björk and Anna Hirsch.

It may be tempting to reply that knowing the patient should not be allowed to play any role: that it follows from the principle of justice that familiarity should not be allowed to make any difference. Of course, the principle of justice places limits on the importance of familiarity with the patient. But in healthcare there is still this difference between situations marked by unfamiliarity and situations marked by familiarity. Consider the difference between screening and palliative home care. Should this difference not sometimes make a moral difference?

Presumably familiarity can sometimes make a moral difference, the authors argue. They give examples of how, not least, autonomy can take different forms depending on whether the situation is characterized by familiarity or unfamiliarity. Take the question of when and how patients should be allowed to delegate their decision-making to the healthcare personnel. If the personnel do not know the patient at all, it seems to be at odds with autonomy to take over the patient’s decision-making, even if the patient wishes it. However, if the personnel are well acquainted with the patient, it may be more consistent with autonomy to take over parts of the decision-making, if the patient so wishes. The authors provide additional examples. Suppose a patient has asked not to be informed prior to treatment, but the staff know the patient well and know that a certain part of the information could make this particular patient want to change certain decisions about the treatment. Would it then not be ethically correct to give the patient at least that part of the information and problematic not to do so? Or suppose a patient begins to change their preferences back and forth. If the patient is unfamiliar to the staff, it may be correct to always let the most recent preference apply. (One may not even be aware that the patient had other preferences before.) If, on the other hand, the patient is well known, the staff may have to take into account both past and present preferences and make a more global assessment of the changes and of autonomy.

The authors also exemplify how the application of other moral concepts and principles can take different forms, depending on whether the relationship with the patient is characterized by familiarity or unfamiliarity. Even the principle of justice could in some cases take a different form, depending on whether the personnel know the patient or not, they suggest. If you want to see a possible example of this, read the article here: An “ethics of strangers”? On knowing the patient in clinical ethics.

The authors finally argue that care decisions regarding autonomy, justice and acting in the best interest of the patient are probably made with greater precision if the personnel know the patient well. Healthcare professionals should therefore strive to get to know their patients, they argue. They also argue that healthcare systems in which a greater proportion of the staff know a greater proportion of the patients are preferable from an ethical point of view, for example systems that promote therapeutic continuity.

Pär Segerdahl


Björk, J., Hirsch, A. An “ethics of strangers”? On knowing the patient in clinical ethics. Med Health Care and Philosophy 27, 389–397 (2024). https://doi.org/10.1007/s11019-024-10213-y

This post in Swedish

We have a clinical perspective

Philosophy on a chair

Philosophy is an unusual activity, partly because it can be conducted to such a large extent while sitting still. Philosophers do not need research vessels, laboratories or archives to work on their questions. Just a chair to sit on. Why is it like that?

The answer is that philosophers examine our ways of thinking, and we are never anywhere but where we are. A chair takes us exactly as far as we need: to ourselves. Philosophizing on a chair can of course look self-absorbed. How can we learn anything significant from “thinkers” who neither seem to move nor look around the world? If we happen to see them sitting still in their chairs and thinking, they can undeniably appear to be cut off from the complex world in which the rest of us must live and navigate. Through its focus on human thought, philosophy can seem to ignore our human world and not be of any use to the rest of us.

What we overlook with such an objection to philosophy is that our complex human world already reflects to a large extent our human ways of thinking. To the extent that these ways of thinking are confused, limited, one-sided and unjust, our world will also be confused, limited, one-sided and unjust. When we live and move in this human world, which reflects our ways of thinking, can it not be said that we live somewhat inwardly, without noticing it? We act in a world that reflects ourselves, including the shortcomings in our ways of thinking.

If so, maybe it is not so introverted to sit down and examine these ways of thinking? On the contrary, this seems to enable us to free ourselves and the world from human thought patterns that sometimes limit and distort our perspectives without us realizing it. Of course, research vessels, laboratories and archives also broaden our perspectives on the world. But we already knew that. I just wanted to open our eyes to a more unexpected possibility: that even a chair can take us far, if we practice philosophy on it.

Pär Segerdahl


This post in Swedish

We challenge habits of thought

End-of-life care: ethical challenges experienced by critical care nurses

In an intensive care unit, seriously ill patients who need medical and technical support for central bodily functions, such as breathing and circulation, are monitored and treated. Usually it goes well, but not all patients survive, despite the advanced and specialized care. An intensive care unit can be a stressful environment for the patient, not least because of the technical equipment to which the patient is connected. When transitioning to end-of-life care, one therefore tries to create a calmer and more dignified environment for the patient, among other things by reducing the use of life-sustaining equipment and focusing on reducing pain and anxiety.

The transition to end-of-life care can create several ethically challenging situations for critical care nurses. What do these challenges look like in practice? The question is investigated in an interview study with nurses at intensive care units in a Swedish region. What did the interviewees say about the transition to end-of-life care?

A challenge that many interviewees mentioned was when life-sustaining treatment was continued at the initiative of the physician, despite the fact that the nurses saw no signs of improvement in the patient and judged that the probability of survival was very low. There was concern that the patient’s suffering was thus prolonged and that the patient was deprived of the right to a peaceful and dignified death. There was also concern that continued life-sustaining treatment could give relatives false hope that the patient would survive, and that this prevented the family from supporting the patient at the end of life. Other challenges had to do with the dosage of pain and anti-anxiety drugs. The nurses naturally sought a good effect, but were at the same time afraid that too high doses could harm the patient and risk hastening death. The critical care nurses also pointed out that family members could request higher doses for the patient, which heightened the concern about possibly shortening the patient’s life.

Other challenges had to do with situations where the patient’s preferences are unknown, perhaps because the patient is unconscious. Another challenge mentioned was when conscious patients have preferences that conflict with the nurses’ professional judgments and values. A patient may request that life-sustaining treatment cease, while the assessment is that the patient’s life could be significantly extended by continued treatment. Additional challenging situations can arise when the family wants to protect the patient from the information that death is imminent, which violates the patient’s right to information about diagnosis and prognosis.

Finally, various situations surrounding organ donation were mentioned as ethically challenging. For example, family members may oppose the patient’s decision to donate organs. It may also happen that the family does not understand that the patient suffered a total cerebral infarction, and believes that the patient died during the donation surgery.

The results provide a good insight into ethical challenges in end-of-life care that critical care nurses experience. Read the article here: Critical care nurses’ experiences of ethical challenges in end-of-life care.

Pär Segerdahl


Palmryd L, Rejnö Å, Alvariza A, Godskesen T. Critical care nurses’ experiences of ethical challenges in end-of-life care. Nursing Ethics. 2024;0(0). doi:10.1177/09697330241252975

This post in Swedish

Ethics needs empirical input

Of course, but: ethics in palliative practice

What is obvious in principle may turn out to be less obvious in practice. That, at least, is one possible interpretation of a new study on ethics in palliative care.

Palliative care is given to patients with life-threatening illnesses that cannot be cured. Although palliative care can sometimes contribute to extending life somewhat, the focus is on preventing and alleviating symptoms in the final stages of life. The patient can also receive support to deal with worries about death, as well as guidance on practical issues regarding finances and relationships with relatives.

As in all care, respect for the patient’s autonomy is central in palliative care. To the extent possible, the patient should be given the opportunity to participate in the medical decision-making and receive information that corresponds to the patient’s knowledge and wishes for information. This means that if a patient does not wish information about their health condition and future prospects, this should also be respected. How do palliative care professionals handle such a situation, where a patient does not want to know?

The question is investigated in an interview study by Joar Björk, who is a clinical ethicist and physician in palliative home care. He conducted six focus group interviews with staff in palliative care in Sweden, a total of 33 participants. Each interview began with an outline of an ethically challenging patient case. A man with disseminated prostate cancer is treated by a palliative care team. He has repeatedly stated that it is important for him to gain complete knowledge of the illness and of how his death may look. Because the team has had to deal with many physical symptoms, they have not yet had time to answer his questions. When they finally get time to talk to him, he suddenly says that he does not want more information and that the issue should not be raised again. He gives no reason for his changed position, but nothing else seems to have changed and he seems to be in his right mind.

What did the interviewees say about the made-up case? The initial reaction was that it goes without saying that the patient has the right not to be informed. If a patient does not want information, then you must not impose the information on him, but must “meet the patient where he is.” But the interviewees still began to wonder about the context. Why did the man suddenly change his mind? Although the case description states that the man is competent to make decisions, this began to be doubted. Could someone close to him have influenced him? What at first seemed obvious later appeared problematic.

The interviewees emphasized that in a case like this one must dig deeper and investigate whether it is really true that the patient does not want to be informed. Maybe he said that he does not want to know to appear brave, or to protect loved ones from disappointing information? Preferences can also change over time. Suddenly you do not want what you just wanted, or thought you wanted. Palliative care is a process, it was emphasized in the interviews. Thanks to the fact that the care team has continuous contact with the patient, it was felt that one could carefully probe what he really wants at regular intervals.

Other values were also at stake for the interviewees, which could further contribute to undermining what at first seemed obvious. For example, that the patient has the right to a dignified, peaceful and good death. If he is uninformed that he has a very short time left to live, he cannot prepare for death, say goodbye to loved ones, or finish certain practical tasks. It may also be more difficult to plan and provide good care to an uninformed patient, and it may feel dishonest to know something important but not tell the person concerned. The interviewees also considered the consequences for relatives of the patient’s reluctance to be informed.

The main result of the study is that the care teams found it difficult to handle a situation where a patient suddenly changes his mind and does not want to be informed. Should they not have experienced these difficulties? Should they accept what at first seemed self-evident in principle, namely that the patient has the right not to know? The interviewees themselves emphasized that care is a process, a gradually unfolding relationship, and that it is important to be flexible and continuously probe the changing will of the patient. Perhaps, after all, it is not so difficult to deal with the case in practice, even if it is not as simple as it first appeared?

The interviewees seemed unhappy about the patient’s decision, but at the same time seemed to feel that there were ways forward and that time worked in their favor. In the end, the patient probably wants to know, after all, they seemed to think. Should they not have had such an attitude towards the patient’s decision?

Read the author’s interesting discussion of the study results here: “It is very hard to just accept this” – a qualitative study of palliative care teams’ ethical reasoning when patients do not want information.

Pär Segerdahl


Björk, J. “It is very hard to just accept this” – a qualitative study of palliative care teams’ ethical reasoning when patients do not want information. BMC Palliative Care 23, 91 (2024). https://doi.org/10.1186/s12904-024-01412-8

This post in Swedish

We like real-life ethics

What is hidden behind the concept of research integrity?

In order to counteract scientific misconduct and harmful research, one often talks about protecting and supporting research integrity. The term seems to cover three different aspects of research, although the differences may not always be clearly kept in mind. The term can refer to the character traits of individual researchers: for example, that the researcher values truth and precision and has good intentions. But the term can also refer to the research process: for example, that the method, data and results are correctly chosen, well executed and faithfully reproduced in scientific publications. Third, the term can refer to research-related institutions and systems, such as universities, ethical review, legislation and scientific journals. In the latter case, it is usually emphasized that research integrity presupposes institutional conditions beyond the moral character of individual researchers.

Does such a varied concept have to be problematic? Of course not, but the concept of research integrity may nevertheless be less suitable than it seems, argue Gert Helgesson and William Bülow in an article that you can read here: Research Integrity and Hidden Value Conflicts.

In the article, they first discuss some ambiguities in the three uses of the concept of research integrity. Which personal traits are desirable in researchers and which values should they endorse? Does the integrity of the research process cover all ethically relevant aspects of research, including the application process, for example? Are research-related institutions actors with research integrity, or are they rather means that support research integrity?

Mentioning these ambiguities is not, as I understand it, intended as a decisive objection. Nor do the authors think that it is generally a shortcoming if concepts have a wide and varied use. But the concept of research integrity risks hiding value conflicts through its varying use, they argue. Suppose someone claims that, in order to protect and support research integrity, we should criminalize serious forms of scientific misconduct. This is perhaps true if by research integrity we refer to aspects of the research process, for example, that results are accurate and reliable. But the stricter regulation of research that this entails risks reducing the responsibility of individual researchers, which can undermine research integrity in the first sense. How should we compare the value of research integrity in the different senses? What does it mean to “increase research integrity”?

The concept of research integrity is not useless, the authors point out. But if we want to make value conflicts visible, if we want to clarify what we mean by research integrity and which forms of integrity are most important, and if we want to clear up the ambiguities mentioned above, then we are in fact examining issues that are more appropriately described as issues of research ethics.

If I understand the authors correctly, they mean that ethical questions about research should be characterized as research ethics. It is unfortunate that “research integrity” has come to function as an alternative designation for ethical questions about research. Everything becomes clearer if questions about “research integrity,” insofar as we want to use the concept, are treated as falling under research ethics.

Pär Segerdahl


Helgesson, G., Bülow, W. Research Integrity and Hidden Value Conflicts. Journal of Academic Ethics 21, 113–123 (2023). https://doi.org/10.1007/s10805-021-09442-0

This post in Swedish

We like ethics

Finding the way when there is none

A difficulty for academic writers is managing the dual role of both knowing and not knowing, of both showing the way and not finding it. There is an expectation that such writers should already have the knowledge they are writing about, that they should know the way they show others right from the start. As readers, we are naturally delighted and grateful to share the authors’ knowledge and insight.

But academic writers usually write because something strikes them as puzzling. They write for the same reason that readers read: because they lack the knowledge and clarity required to find the way through their questions. This lack stimulates them to research and write. The way that did not exist takes shape as they tackle their questions.

This dual role as a writer often worries students who are writing an essay or dissertation for the first time. They can easily perceive themselves as insufficiently knowledgeable to have the right to tackle the work. Since they lack the expertise that they believe is required of academic writers from the outset, does it not follow that they are not yet mature enough to begin the work? Students are easily paralyzed by the knowledge demands they place on themselves. Therefore, they hide their questions instead of tackling them.

It always comes as a surprise that the way actually takes shape as soon as we ask for it. Who dares to believe that? Research is a dynamic interplay with our questions: with ignorance and lack of clarity. An academic writer is not primarily someone who knows a lot and can therefore show others the way, but someone who dares, and is even stimulated by, this duality of both knowing and not knowing, of both finding and not finding the way.

If we have something important to learn from the exploratory writers, it is perhaps that living knowledge cannot be separated as pure knowledge and nothing but knowledge. Knowledge always interacts with its opposite. Therefore, essay writing students already have the most important asset to be able to write in an exploratory way, namely the questions they are wondering about. Do not hide the questions, but let them take center stage. Let the text revolve around what you do not know. Knowledge without contact with ignorance is dead. It solves no one’s problem, it answers no one’s question, it removes no one’s confusion. So let the questions sprout in the soil of the text, and the way will soon take shape.

Pär Segerdahl


This post in Swedish

Thinking about authorship

Objects that behave humanly

Many forms of artificial intelligence could be considered objects that behave humanly. However, it does not take much for us humans to personify non-living objects. We get angry at the car that does not start or the weather that does not let us have a picnic, as if they were against us. Children spontaneously personify simple toys and can describe the relationship between geometric shapes as, “the small circle is trying to escape from the big triangle.”

We are increasingly encountering artificial intelligence designed to give a human impression, for example in the form of chatbots for customer service when shopping online. Such AI can even be equipped with personal traits, a persona that becomes an important part of the customer experience. The chatbot can suggest even more products for you and effectively generate additional sales based on the data collected about you. No wonder the interest in developing human-like AI is huge. Part of it has to do with user-friendliness, of course, but at the same time, an AI that you find personally attractive will grab your attention. You might even like the chatbot or feel it would be impolite to turn it off. While the chatbot has your attention, you are exposed to increasingly customized advertising and receive more and more package offers.

You can read about this and much more in an article about human relationships with AI designed to give a human impression: Human/AI relationships: challenges, downsides, and impacts on human/human relationships. The authors discuss a large number of examples of such AI, ranging from the chatbots above to care robots and AI that offers psychotherapy, or AI that people chat with to combat loneliness. The opportunities are great, but so are the challenges and possible drawbacks, which the article highlights.

Perhaps particularly interesting is the insight into how effectively AI can create confusion by exposing us to objects equipped with human response patterns. Our natural tendency to anthropomorphize non-human things meets high-tech efforts to produce objects that are engineered to behave humanly. Here it is no longer about imaginatively projecting social relations onto non-human objects, as in the geometric example above. In interaction with AI objects, we react to subtle social cues that the objects are equipped with. We may even feel a moral responsibility for such AI and grieve when companies terminate or modify it.

The authors urge caution so that we do not overinterpret AI objects as persons. At the same time, they warn of the risk that, by avoiding empathic responses, we become less sensitive to real people in need. Truly confusing!

Pär Segerdahl


Zimmerman, A., Janhonen, J. & Beer, E. Human/AI relationships: challenges, downsides, and impacts on human/human relationships. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00348-8

This post in Swedish

We recommend readings
