A blog from the Centre for Research Ethics & Bioethics (CRB)

Category: In the research debate

Nurses’ experiences of tube feeding under restraint for anorexia

The eating disorder anorexia (anorexia nervosa) is a mental disorder that can be life-threatening if it is not treated. It is characterized by a fear of gaining weight: you starve yourself to lose weight and do not understand that being underweight is dangerous. Although most recover, the disease is associated with increased mortality, and the most severely ill may need to be hospitalized.

Hospital care can involve both psychotherapy and drug treatment, but not everyone wants or is able to participate in the treatment, which of course also involves eating. They may lack motivation to change or refuse to see that they need treatment. If the malnutrition becomes life-threatening, it may be necessary to decide on tube feeding as a compulsory measure. Liquid nutrition is then given via a thin tube that is inserted through one nostril and down into the stomach.

Tube-feeding an adult who does not want to eat is presumably a challenge for the nurses who have to perform the procedure. What are their experiences of the measure? One study investigated the issue by interviewing nurses at a Norwegian inpatient ward where adult patients with severe anorexia were cared for. What did the nurses have to say?

An important theme was the effort to provide good care even during the coercive measure. The care must be so good that the patient voluntarily wants to stay on the ward after tube feeding. For example, the measure is never taken until the nurses have gradually tried to encourage the patient to eat, asked the patient about the situation and discussed whether to use the tube instead. If tube feeding becomes necessary, they still try to give the patient options and to respect the patient’s autonomy as far as possible, even though it is a coercive measure. The nurses also described difficulties in balancing kindness and firmness during the procedure, and in combining the role of helper with the role of controller.

Another theme was ethical concerns when the doctor decided on tube feeding even though the patient’s BMI was not so low that the condition was life-threatening. One nurse stated that she sometimes found such situations so problematic that she refused to take part in the procedure.

The third theme was concerns about calling in staff from another ward to help restrain the patient while the nurse performed the tube feeding. Some nurses were concerned about how this might be experienced by patients with a history of abuse. Others saw the tube feeding as a life-saving measure and experienced no ethical concerns. However, participants in the study emphasized that tube feeding affects the relationship with the patient and that restraint can disrupt the relationship. One nurse described how she once performed tube feeding on a patient she had never met before, with whom she had therefore not established a relationship, and how this then prevented a good relationship with that patient.

If you want to read for yourself what the nurses said and how the authors discussed their findings, read the study here: Nurses’ experience of nasogastric tube feeding under restraint for Anorexia Nervosa in a psychiatric hospital.

Interview studies that capture human experience through the participants’ own stories often yield unexpectedly meaningful insights. Subtle details of human life that you would not otherwise have thought of appear in the interview material. One such insight from this study was how the nurses made great efforts so that tube feeding could be perceived as good care with respect for the patient’s autonomy and dignity, despite the fact that it is a coercive measure. It also became clear that there were tensions in the situation that the nurses had difficulty dealing with, such as first performing the coercive measure and then comforting the patient and re-establishing the relationship that had been disrupted. One of the conclusions in the article is therefore that even the nurses who perform tube feeding are vulnerable.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Brinchmann, B.S., Ludvigsen, M.S. & Godskesen, T. Nurses’ experience of nasogastric tube feeding under restraint for Anorexia Nervosa in a psychiatric hospital. BMC Medical Ethics 25, 111 (2024). https://doi.org/10.1186/s12910-024-01108-x

This post in Swedish

Ethics needs empirical input

Psychological distress: an overlooked issue in immigrants

The psychological distress that ethnic minorities experience is an often overlooked problem. In France, the mental well-being of ethnic minorities, particularly those with North African immigrant backgrounds, is an important issue to study. Both first- and second-generation immigrants face unique challenges that may make them more vulnerable to general mental health issues and psychological disorders. A fresh report from the European Union Agency for Fundamental Rights on being a Muslim in the EU (published on October 24, 2024) sheds some light on issues related to health and to racial harassment and violence. The report did not study psychological issues specifically, but it is worth noting that race-related violence had a psychological impact on 55 percent of the respondents (p. 21).

Vulnerability is frequently linked to ethnic minority status, leading to recurring experiences of discrimination and difficulties in reconciling cultural identity with a society that often prioritizes assimilation. In this context, assimilation tends to erase or disregard the original cultural heritage in favor of integration into the dominant culture. Such dynamics can lead to feelings of isolation, invalidation, and psychological distress among affected individuals.

Research on the mental health of French populations of North African descent remains largely neglected. In other regions, for example North America, mental health and immigration are much better studied. While the topic of discrimination has been explored in some areas, few studies have focused on the psychological effects of these experiences and the coping strategies adopted by these populations in France. Some research does indicate a rise in discrimination, but the lack of comprehensive studies on this issue creates both a scientific and a social void, keeping these topics largely invisible.

In other southern European countries such as Italy and Spain, the mental health problems of ethnic minorities are recognized, but do not yet receive the same attention as in North America. In Italy, studies on the mental health of minorities are mainly focused on recent migrants and refugees, not least because of the importance of migratory flows in the Mediterranean. Researchers are mainly interested in the traumas associated with exile and the precarious conditions of migrants, but issues of discrimination or systemic racism are less well explored.

In Spain, there is also research on the mental health of migrants, particularly from Latin America and North Africa. However, the framework remains focused on social integration and economic issues, and less on the dynamics of discrimination and ethnicity. Both countries are beginning to recognize the importance of these issues, but in-depth studies on the impact of racial discrimination on the mental health of ethnic minorities are still limited, as in the rest of Europe.

One psychological phenomenon that is still underexplored in this context is “racial battle fatigue.” Introduced in the early 2000s by William A. Smith, this concept refers to the emotional and psychological stress accumulated by individuals who repeatedly face racism. It represents the emotional burden that ethnic minorities carry as a result of racial discrimination and societal expectations. This burden can drive individuals to minimize or suppress their own suffering to avoid being perceived as “weak” or “complaining.” These coping mechanisms can exacerbate psychological issues, creating a vicious cycle of untreated distress.

In academic and professional settings, there is often reluctance to openly discuss these challenges. Some individuals may regard these topics as taboo or controversial, limiting the opportunities for open dialogue and scientific advancement. This reflects a broader trend in the mental health field, where the specific needs of ethnic minorities, particularly in terms of tailored psychological care, are not adequately addressed.

If we are going to provide concrete answers to these questions, we need to study this phenomenon and shed light on the mechanisms underlying the psychological suffering of ethnic minorities. Research on the psychological distress experienced by ethnic minorities could also help develop therapeutic interventions better suited to these populations. A recent French pilot study can lead the way: in Rania Driouach’s sample of people of North African descent, 226 of 387 participants indicated heightened psychological distress on a transgenerational level. Her study is a first step towards a scientific framework that acknowledges the specific needs of these groups while promoting an inclusive and rigorous therapeutic approach. Perhaps such a framework can help pave the way for a better understanding of the effects of migration on psychological distress across generations, and provide better tools for the (mental) health care providers that provide both first- and second-line care.

This post is written by Rania Driouach (Nîmes University) and:

Sylvia Martin

Sylvia Martin, Clinical Psychologist and Senior Researcher at the Centre for Research Ethics & Bioethics (CRB)

We transcend disciplinary borders

Digitization of healthcare requires a national strategy to increase individuals’ ability to handle information digitally

There is consensus that the digitization of healthcare can make it easier to keep in touch with healthcare and get information that supports individual decision-making about one’s own health. However, the ability to understand and use health information digitally varies. The promising digitization therefore risks creating unequal care and health.

In this context, one usually speaks of digital health literacy. The term refers to the ability to retrieve, understand and use health information digitally to maintain or improve one’s health. This ability varies not only between individuals, but also within the same individual. Illness can, for example, reduce the ability to use a computer or a smartphone to maintain contact with healthcare and to understand and manage health information digitally. Your digital health literacy is dependent on your health.

How do Swedish policy makers think about the need for strategies to increase digital health literacy in Sweden? An article with Karin Schölin Bywall as the main author examines the question. Material was collected during three recorded focus group discussions (or workshops) with a total of 10 participants. The study is part of a European project to increase digital health literacy in Europe. What did Swedish decision-makers think of the need for a national strategy?

The participants in the study said that the issue of digital health literacy was not as much on the agenda in Sweden as in many other countries in Europe and that governmental agencies have limited knowledge of the problem. Digital services in healthcare also usually require that you identify yourself digitally, but a large group of adults in Sweden lack e-identification. The need for a national strategy is therefore great.

Participants further discussed how digital health literacy manifests itself in individuals’ ability to find the right website and reliable information on the internet. People with lower digital health literacy may not be able to identify appropriate keywords or may have difficulty assessing the credibility of the information source. The problem is not lessened by the fact that algorithms control where we end up when we search for information. Often the algorithms make companies more visible than government organizations.

The policy makers in the study also identified specific groups that are at risk of digital exclusion (digital divide) and that need different types of support. Among others, they mentioned people with intellectual disabilities and young people who do not sufficiently master source criticism (even though they are skilled users of the internet and various apps). Specific measures to counteract the digital divide in healthcare were discussed, such as regular mailings with information about good websites, adaptation of website content for people with special needs, and teaching in source criticism. It was also emphasized that individuals may have different combinations of conditions that affect the ability to manage health information digitally in different ways, and that a strategy to increase digital health literacy must therefore be nuanced.

In summary, the study emphasizes that the need for a national strategy for increased digital health literacy is great. While digital technologies have huge potential to improve public health, they also risk reinforcing already existing inequalities, the authors conclude. Read the study here: Calling for allied efforts to strengthen digital health literacy in Sweden: perspectives of policy makers.

Something that struck me was that the policy makers in the study, as far as I could see, did not emphasize the growing group of elderly people in the population. Elderly people may have a particularly broad combination of conditions that affect digital health literacy in many different ways. In addition, the elderly’s ability to handle information digitally not only varies from day to day; it can also be expected to deteriorate steadily, probably at the same rate as the need to use it increases.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Bywall, K.S., Norgren, T., Avagnina, B. et al. Calling for allied efforts to strengthen digital health literacy in Sweden: perspectives of policy makers. BMC Public Health 24, 2666 (2024). https://doi.org/10.1186/s12889-024-20174-9

This post in Swedish

Ethics needs empirical input

Debate on responsibility and academic authorship

Who can be listed as an author of a research paper? There seems to be some confusion about the so-called Vancouver rules for academic authorship, which serve as publication ethics guidelines primarily in medicine and the natural sciences (but sometimes also in the humanities and social sciences). According to these rules, an academic author must have contributed intellectually to the study, participated in the writing process, and approved the final version of the paper. However, the deepest confusion seems to concern the fourth rule, which requires that an academic author take responsibility for the accuracy and integrity of the published research. The confusion is not lessened by the fact that artificial intelligences such as ChatGPT have started to be used in the research and writing process. Researchers sometimes ask the AI to generate objections to the researchers’ reasoning, which of course can make a significant contribution to the research process. The AI can also generate text that contributes to the process of writing the article. Should such an AI count as a co-author?

No, says the Committee on Publication Ethics (COPE) with reference to the last requirement of the Vancouver rules: an AI cannot be an author of an academic publication, because it cannot take responsibility for the published research. The committee’s dismissal of AI authorship has sparked a small but instructive debate in the Journal of Medical Ethics. The first to write was Neil Levy who argued that responsibility (for entire studies) is not a reasonable requirement for academic authorship, and that an AI could already count as an author (if the requirement is dropped). This prompted a response from Gert Helgesson and William Bülow, who argued that responsibility (realistically interpreted) is a reasonable requirement, and that an AI cannot be counted as an author, as it cannot take responsibility.

What is this debate about? What does the rule that gave rise to it say? It states that, to be considered an author of a scientific article, you must agree to be accountable for all aspects of the work. You must ensure that questions about the accuracy and integrity of the published research are satisfactorily investigated and resolved. In short, an academic author must be able to answer for the work. According to Neil Levy, this requirement is too strong. In medicine and the natural sciences, it is often the case that almost none of the researchers listed as co-authors can answer for the entire published study. The collaborations can be huge and the researchers are specialists in their own narrow fields. They lack the overview and competence to assess and answer for the study in its entirety. In many cases, not even the first author can do this, says Neil Levy. If we do not want to make it almost impossible to be listed as an author in many scientific disciplines, responsibility must be abolished as a requirement for authorship, he argues. We then have to accept that AI can already be counted as a co-author of many scientific studies, if the AI made a significant intellectual contribution to the research.

However, Neil Levy opens up a third possibility. The responsibility criterion could be reinterpreted so that it can be fulfilled by the researchers who today are usually listed as authors. What is the alternative interpretation? A researcher who has made a significant intellectual contribution to a research article must, in order to be listed as an author, accept responsibility for their “local” contribution to the study, not for the study as a whole. On this interpretation, an AI cannot count as an academic author, because it cannot answer or be held responsible even for its “local” contribution to the study.

According to Gert Helgesson and William Bülow, this third possibility is the obviously correct interpretation of the fourth Vancouver rule. The reasonable interpretation, they argue, is that anyone listed as an author of an academic publication has a responsibility to facilitate an investigation, if irregularities or mistakes can be suspected in the study. Not only after the study is published, but throughout the research process. However, no one can be held responsible for an entire study, sometimes not even the first author. You can only be held responsible for your own contribution, for the part of the study that you have insight into and competence to judge. However, if you suspect irregularities in other parts of the study, then as an academic author you still have a responsibility to call attention to this, and to act so that the suspicions are investigated if they cannot be immediately dismissed.

The confusion about the fourth criterion of academic authorship is natural: the criterion is actually not that easy to understand, and it should therefore be specified. The debate in the Journal of Medical Ethics provides an instructive picture of how differently the criterion can be interpreted, and it can thus motivate proposals on how the criterion should be specified. You can read Neil Levy’s article here: Responsibility is not required for authorship. The response from Gert Helgesson and William Bülow can be found here: Responsibility is an adequate requirement for authorship: a reply to Levy.

Personally, I want to ask whether an AI that cannot take responsibility for research work can be said to make significant intellectual contributions to scientific studies. In academia, we are expected to be open to criticism from others, and not least from ourselves. We are expected to be able to critically assess our ideas, theories, and methods: to judge whether objections are valid and then defend ourselves or change our minds. This is an important part of doctoral education and the research seminar. We can therefore hardly be said to contribute intellectually to research if we lack the ability to self-critically assess the accuracy of our contributions. ChatGPT can thus hardly be said to make significant intellectual contributions to research, I am inclined to say. Not even when it generates self-critical or self-defending text on the basis of statistical calculations in huge language databases. It is the researchers who judge whether the generated text provides good reasons to either change their minds or defend their positions. If so, it would be a misunderstanding to acknowledge the contribution of ChatGPT in a research paper, as is usually done with research colleagues who contributed intellectually to the study without meeting the other requirements for academic authorship. Rather, the authors of the study should indicate how ChatGPT was used as a tool in the study, similar to how they describe the use of other tools and methods. How should this be done? In the debate, it is argued that this also needs to be specified.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Levy N. Responsibility is not required for authorship. Journal of Medical Ethics. Published Online First: 15 May 2024. doi: 10.1136/jme-2024-109912

Helgesson G, Bülow W. Responsibility is an adequate requirement for authorship: a reply to Levy. Journal of Medical Ethics. Published Online First: 04 July 2024. doi: 10.1136/jme-2024-110245

This post in Swedish

We participate in debates

Why should we try to build conscious AI?

In a recent post on this blog I summarized the main points of a pre-print where I analyzed the prospect of artificial consciousness from an evolutionary perspective. I took the brain and its architecture as a benchmark for addressing the technical feasibility and conceptual plausibility of engineering consciousness in artificial intelligence systems. The pre-print has been accepted and it is now available as a peer-reviewed article online.

In this post I want to focus on one particular point that I analyzed in the paper, and which I think is not always adequately accounted for in the debate about AI consciousness: what are the benefits of pursuing artificial consciousness in the first place, for science and for society at large? Why should we attempt to engineer subjective experience in AI systems? What can we realistically expect from such an endeavour?

There are several possible answers to these questions. At the epistemological level (with reference to what we can know), it is possible that developing artificial systems that replicate some features of our conscious experience could enable us to better understand biological consciousness, through similarities as well as through differences. At the technical level (with reference to what we can do), it is possible that the development of artificial consciousness would be a game-changer in AI, for instance giving AI the capacity for intentionality and theory of mind, and for anticipating the consequences not only of human decisions, but also of its own “actions.” At the societal and ethical level (with reference to our co-existence with others and to what is good and bad for us), especially the latter capabilities (intentionality, theory of mind, and anticipation) could arguably help AI to better inform humans about potential negative impacts of its functioning and use on society, and to help avoid them while favouring positive impacts.

Of course, on the negative side, as shown by human history, both intentionality and theory of mind may be used by the AI for negative purposes, for instance to favour the AI’s own interests or the interests of the limited groups that control it. Human intentionality has not always favoured out-group individuals or species, or indeed the planet as a whole. This point connects to one of the most debated issues in AI ethics, the so-called AI alignment problem: how can we be sure that AI systems conform to human values? How can we make AI aligned with our own interests? And whose values and interests should we take as reference? Cultural diversity is an important and challenging factor to take into account in these reflections.

I think there is also a question that precedes that of AI value alignment: can AI really have values? In other words, is the capacity for evaluation that possibly drives the elaboration of values in AI the same as in humans? And is AI capable of evaluating its own values, including its ethical values, the reflective process that drives the self-critical elaboration of values in humans, making us evaluative subjects? In fact, the capacity for evaluation (which may be defined as the sensitivity to reward signals and the ability to discriminate between good and bad things in the world on the basis of specific needs, motivations, and goals) is a defining feature of biological organisms, rooted in the brain. AI may be programmed to discriminate between what humans consider to be good and bad things in the world, and it is also conceivable that AI will become less dependent on humans in applying this distinction. However, this does not entail that it “evaluates” in the sense that it autonomously performs an evaluation and subjectively experiences its evaluation.

It is possible that an AI system may approximate the diversity of cognitive processes that the brain has access to, for instance the processing of various sensory modalities, while remaining unable to incorporate the values attributed to the processed information and to its representation, as the human brain can. In other words, to date AI remains devoid of any experiential content, and for this reason, for the time being, AI differs from the human brain in its inability to attribute experiential value to information. This is the fundamental reason why present AI systems lack subjective experience. If we want to refer to needs (which are a prerequisite for the capacity for evaluation), current AI appears limited to epistemic needs, without access to, for example, moral and aesthetic needs. Therefore, the values that AI has so far been able to develop or be sensitive to are limited to the epistemic level, while morality and aesthetics are beyond our present technological capabilities. I do not deny that overcoming this limitation may be a matter of further technological progress, but for the time being we should carefully consider this limitation when reflecting on whether it is wise to strive for conscious AI systems. If the form of consciousness that we can realistically aspire to engineer today is limited to the cognitive dimension, without any sensibility to ethical deliberation and aesthetic appreciation, I am afraid that the risk of misusing or exploiting it for selfish purposes is quite high.

One could object that an AI system limited to epistemic values is not really conscious (at least not in a fully human sense). However, the fact remains that its capacity to interact with the world to achieve the goals it has been programmed to achieve would be greatly enhanced if it had this cognitive form of consciousness. This increases our responsibility to hypothetically consider whether conscious AI, even if limited and much more rudimentary than human consciousness, may be for the better or for the worse.

Written by…

Michele Farisco, Postdoctoral Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Michele Farisco, Kathinka Evers, Jean-Pierre Changeux. Is artificial consciousness achievable? Lessons from the human brain. Neural Networks, Volume 180, 2024. https://doi.org/10.1016/j.neunet.2024.106714

We like challenging questions

How do we create sustainable research and development of new antibiotics?

Antibiotic resistance is a growing global challenge, particularly for modern healthcare, which relies on antibiotics to prevent and treat infectious diseases. Multi-resistant bacteria are already present across the globe and without effective antibiotics, simple medical interventions will become risky in the future. Each year, several million deaths globally are associated with antibiotic resistance. With more and more drug-resistant microorganisms, one could expect an increase in research and development of new antibiotics or vaccines. However, in parallel with the growing global threat from antimicrobial resistance, or AMR as it is often called, the development rate of new antibiotics is instead decreasing. Reduced R&D also reduces the number of experts in the field, which in turn affects our society’s ability to develop new antibiotics.

Why is that so? One reason is that the return on investment is so low that many large pharmaceutical companies have scaled back or abandoned their development programs, resulting in a loss of expertise. The effort to slow down the development rate of antibiotic resistance requires us to save the most effective medicines for the most difficult cases, and this “stewardship” contributes to inhibiting the will to invest, as the companies are unable to count on any new “blockbuster” drugs.

The problem of access to effective treatment is global, and on September 26 this year, the UN General Assembly is organizing a high-level meeting on AMR. The political declaration published ahead of the meeting highlights, among other things, the need for mechanisms for funding research and development, the need for functioning collaborations between private and public actors, and the need for measures to deal with the growing lack of competence in the area.

However, the picture is not only dark. During the last decade, several investments have been made in collaborations to meet the challenges for research and development in the field. One such investment is the European AMR Accelerator program, running since 2019 with funding from the Innovative Medicines Initiative (IMI). The program consists of nine projects that bring different stakeholders together to collaborate on the development of new treatments, for example against multi-resistant tuberculosis.

In a short article recently published in Nature Reviews Drug Discovery, representatives of the program discuss some of the important values and challenges associated with collaborations between academia and industry. Antibiotic development is expensive, and many drug candidates are discontinued already in the early stages of development. By sharing risks and costs between several organizations, the AMR Accelerator has so far been able to contribute to the development of a large portfolio of different antibiotics. In addition, the nine projects have developed research infrastructures for, among other things, modelling, data management, and clinical studies that can benefit the entire AMR research community. Moreover, the critical mass created when 98 organizations collaborate can generate new ideas and synergies in the work against AMR.

There are also challenges. Among them is balancing the perspectives and needs of different actors in the program, not least in the collaborations between academia and industry, where cooperation agreements and regular meetings have been needed to manage differences in culture and approach. The AMR Accelerator program has also served as neutral ground for competing companies, which have been able to collaborate within the framework of the projects.

According to the authors, the biggest challenge remains: what happens after the projects end? The Innovative Medicines Initiative has invested €479 million in the program. The question now is how the nine projects and their partners will achieve long-term sustainability for the assets and infrastructures they have put in place. Some form of continued funding is needed so that the resources created within the AMR Accelerator can be used in the next phase of the work, where the end goal is to provide access to drugs that can treat antibiotic-resistant infections.

The article concludes with a call to governments, research funders, pharmaceutical companies and other actors to invest in the research and development of new medicines to support the fight against antibiotic resistance. To ensure that we can benefit from investments such as the AMR Accelerator in the long term, regular funding calls are needed to maintain expertise, infrastructures, data and networks.

Read the highly topical article here: The AMR Accelerator: from individual organizations to efficient antibiotic development partnerships.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Fernow J, Olliver M, Couet W, Lagrange S, Lamers MH, Olesen OF, Orrling K, Pieren M, Sloan DJ, Vaquero JJ, Miles TJ & Karlén A. The AMR Accelerator: from individual organizations to efficient antibiotic development partnerships. Nature Reviews Drug Discovery, first online 23 September 2024. DOI: https://doi.org/10.1038/d41573-024-00138-9

This post in Swedish

Approaching future issues

Return of health data from clinical trials to the patients

During a clinical trial, large amounts of health data are generated that can be useful not only within the current study. If the trial data are made available for sharing, they can be reused within other research projects. Moreover, if the research participants’ individual health data are returned to them, this may benefit the patients in the study.

The opportunities to increase the usefulness of data from clinical trials in these two ways are not being exploited as well as today’s technology allows. The European project FACILITATE will therefore contribute to improved availability of data from clinical trials for other research purposes and strengthen the position of participating patients and their opportunity to gain access to their individual health data.

A policy brief article in Frontiers in Medicine presents the project’s work and recommendations regarding the position of patients in clinical studies and the possibility of communicating their health data back to them. The project develops an ethical framework that will put patients more at the center and increase their influence over the studies they participate in. For example, it tries to make it easier for patients to dynamically design and modify their consent, access information about the study and retrieve individual health data.

An extended set of ethical principles is identified within the project as essential for clinical trials. For example, one should not only respect the patients’ autonomy, but also strengthen their opportunities to make informed decisions about their own care on the basis of returned health data. Returned data must be judged to be of some benefit to the individuals, and the data must be communicated in such a way that they strengthen, as effectively as possible, the patients’ ability to make informed decisions about their care.

If you are interested in greater opportunities to use health data from clinical trials, mainly opportunities for the participating patients themselves, read the article here: Ethical framework for FACILITATE: a foundation for the return of clinical trial data to participants.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Ciara Staunton, Johanna M. C. Blom and Deborah Mascalzoni on behalf of the IMI FACILITATE Consortium. Ethical framework for FACILITATE: a foundation for the return of clinical trial data to participants. Frontiers in Medicine, 17 July 2024. https://doi.org/10.3389/fmed.2024.1408600

This post in Swedish

We recommend readings

Does knowing the patient make a moral difference?

Several ethical concepts and principles govern how patients should be treated in healthcare. For example, healthcare professionals should respect patients’ autonomy. Moreover, they should act in the patients’ best interest and avoid actions that can cause harm. Patients must also be treated fairly. However, exactly how such ethical concepts and principles should be applied can vary in different situations.

A new article examines whether the application may depend on whether the healthcare personnel know the patient (in the sense of having knowledge about the patient). Some healthcare situations are characterized by the fact that the patient is unknown to the personnel: they have never met the patient before. Other situations are characterized by familiarity: the personnel have had continuous contact with the patient for a long time. In the latter situations, the personnel know the patient’s personality, living conditions, preferences and needs. Does such familiarity with the patient make any difference to how patients should be treated ethically by the healthcare staff, ask the authors of the article, Joar Björk and Anna Hirsch.

It may be tempting to reply that knowing the patient should not be allowed to play any role: that it follows from the principle of justice that familiarity should make no difference. Of course, the principle of justice places limits on the importance of familiarity with the patient. But in healthcare there is still this difference between situations marked by unfamiliarity and situations marked by familiarity. Consider the difference between screening and palliative home care. Should not this difference sometimes make a moral difference?

Presumably familiarity can sometimes make a moral difference, the authors argue. They give examples of how, not least, autonomy can take different forms depending on whether the situation is characterized by familiarity or unfamiliarity. Take the question of when and how patients should be allowed to delegate their decision-making to the healthcare personnel. If the personnel do not know the patient at all, it seems to be at odds with autonomy to take over the patient’s decision-making, even if the patient wishes it. However, if the personnel are well acquainted with the patient, it may be more consistent with autonomy to take over parts of the decision-making, if the patient so wishes. The authors provide additional examples. Suppose a patient has asked not to be informed prior to treatment, but the staff know the patient well and know that a certain part of the information could make this particular patient want to change certain decisions about the treatment. Would it then not be ethically correct to give the patient at least that part of the information and problematic not to do so? Or suppose a patient begins to change their preferences back and forth. If the patient is unfamiliar to the staff, it may be correct to always let the most recent preference apply. (One may not even be aware that the patient had other preferences before.) If, on the other hand, the patient is well known, the staff may have to take into account both past and present preferences and make a more global assessment of the changes and of autonomy.

The authors also exemplify how the application of other moral concepts and principles can take different forms, depending on whether the relationship with the patient is characterized by familiarity or unfamiliarity. Even the principle of justice could in some cases take a different form, depending on whether the personnel know the patient or not, they suggest. If you want to see a possible example of this, read the article here: An “ethics of strangers”? On knowing the patient in clinical ethics.

The authors finally argue that care decisions regarding autonomy, justice and acting in the best interest of the patient are probably made with greater precision if the personnel know the patient well. They argue that healthcare professionals therefore should strive to get to know their patients. They also argue that healthcare systems where a greater proportion of the staff know a greater proportion of the patients are preferable from an ethical point of view, for example systems that promote therapeutic continuity.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Björk, J., Hirsch, A. An “ethics of strangers”? On knowing the patient in clinical ethics. Med Health Care and Philosophy 27, 389–397 (2024). https://doi.org/10.1007/s11019-024-10213-y

This post in Swedish

We have a clinical perspective

End-of-life care: ethical challenges experienced by critical care nurses

In an intensive care unit, seriously ill patients who need medical and technical support for central bodily functions, such as breathing and circulation, are monitored and treated. Usually it goes well, but not all patients survive, despite the advanced and specialized care. An intensive care unit can be a stressful environment for the patient, not least because of the technical equipment to which the patient is connected. When transitioning to end-of-life care, one therefore tries to create a calmer and more dignified environment for the patient, among other things by reducing the use of life-sustaining equipment and focusing on reducing pain and anxiety.

The transition to end-of-life care can create several ethically challenging situations for critical care nurses. What do these challenges look like in practice? The question is investigated in an interview study with nurses at intensive care units in a Swedish region. What did the interviewees say about the transition to end-of-life care?

A challenge that many interviewees mentioned was when life-sustaining treatment was continued at the initiative of the physician, despite the fact that the nurses saw no signs of improvement in the patient and judged that the probability of survival was very low. There was concern that the patient’s suffering was thus prolonged and that the patient was deprived of the right to a peaceful and dignified death. There was also concern that continued life-sustaining treatment could give relatives false hope that the patient would survive, and that this prevented the family from supporting the patient at the end of life. Other challenges had to do with the dosage of pain and anti-anxiety drugs. The nurses naturally sought a good effect, but at the same time were afraid that too high doses could harm the patient and risk hastening death. The critical care nurses also pointed out that family members could request higher doses for the patient, which increased the concern about the risk of possibly shortening the patient’s life.

Other challenges had to do with situations where the patient’s preferences are unknown, perhaps because the patient is unconscious. Another challenge that was mentioned is when conscious patients have preferences that conflict with the nurses’ professional judgments and values. A patient may request that life-sustaining treatment cease, while the assessment is that the patient’s life can be significantly extended by continued treatment. Additional challenging situations can arise when the family wants to protect the patient from information that death is imminent, which violates the patient’s right to information about diagnosis and prognosis.

Finally, various situations surrounding organ donation were mentioned as ethically challenging. For example, family members may oppose the patient’s decision to donate organs. It may also happen that the family does not understand that the patient suffered a total brain infarction (the criterion for determining brain death), and believes that the patient died during the donation surgery.

The results provide a good insight into ethical challenges in end-of-life care that critical care nurses experience. Read the article here: Critical care nurses’ experiences of ethical challenges in end-of-life care.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Palmryd L, Rejnö Å, Alvariza A, Godskesen T. Critical care nurses’ experiences of ethical challenges in end-of-life care. Nursing Ethics. 2024;0(0). doi:10.1177/09697330241252975

This post in Swedish

Ethics needs empirical input

Artificial consciousness and the need for epistemic humility

As I wrote in previous posts on this blog, the discussion about the possibility of engineering an artificial form of consciousness is growing along with the impressive advances of artificial intelligence (AI). Indeed, there are many questions arising from the prospect of an artificial consciousness, including its conceivability and its possible ethical implications. We deal with these kinds of questions as part of an EU multidisciplinary project, which aims to advance towards the development of artificial awareness.

Here I want to describe the approach to the issue of artificial consciousness that I am inclined to consider the most promising. In a nutshell, the research strategy I propose for clarifying the empirical and theoretical issues of the feasibility and conceivability of artificial consciousness consists in starting from the form of consciousness we are familiar with (biological consciousness) and from its correlation with the organ that science has revealed is crucial for it (the brain).

In a recent paper, available as a pre-print, I analysed the question of the possibility of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relationship to consciousness as a benchmark. In other words, to avoid vague and abstract speculations about artificial consciousness, I believe it is necessary to consider the correlation between brain and consciousness that resulted from biological evolution, and use this correlation as a reference model for the technical attempts to engineer consciousness.

In fact, there are several structural and functional features of the human brain that appear to be key for reaching human-like complex conscious experience, which current AI is still limited in emulating or accounting for. Among these are:

  • massive biochemical and neuronal diversity
  • a long period of epigenetic development, that is, interaction with the external environment that gradually changes the number of neurons and their connections in the brain network
  • embodied sensorimotor experience of the world
  • spontaneous brain activity, that is, an intrinsic ability to act which is independent of external stimulation
  • autopoiesis, that is, the capacity to constantly reproduce and maintain itself
  • emotion-based reward systems
  • clear distinction between conscious and non-conscious representations, and the consequent unitary and specific properties of conscious representations
  • semantic competence of the brain, expressed in the capacity for understanding
  • the principle of degeneracy, which means that the same neuronal networks may support different functions, leading to plasticity and creativity.

These are just some of the brain features that arguably play a key role for biological consciousness and that may inspire current research on artificial consciousness.

Note that I am not claiming that the way consciousness arises from the brain is in principle the only possible way for consciousness to exist: this would amount to a form of biological chauvinism or anthropocentric narcissism. In fact, current AI is limited in its ability to emulate human consciousness. The reasons for these limitations are both intrinsic, that is, dependent on the structure and architecture of AI, and extrinsic, that is, dependent on the current stage of scientific and technological knowledge. Nevertheless, these limitations do not logically exclude that AI may achieve alternative forms of consciousness that are qualitatively different from human consciousness, and that these artificial forms of consciousness may be either more or less sophisticated, depending on the perspectives from which they are assessed.

In other words, we cannot exclude in advance that artificial systems are capable of achieving alien forms of consciousness, so different from ours that it may not even be appropriate to continue to call it consciousness, unless we clearly specify what is common and what is different in artificial and human consciousness. The problem is that we are limited in our language as well as in our thinking and imagination. We cannot avoid relying on what is within our epistemic horizon, but we should also avoid the fallacy of hasty generalization. Therefore, we should combine the need to start from the evolutionary correlation between brain and consciousness as a benchmark for artificial consciousness, with the need to remain humble and acknowledge the possibility that artificial consciousness may be of its own kind, beyond our view.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Approaching future issues
