A blog from the Centre for Research Ethics & Bioethics (CRB)


Existential conversations in palliative care

In palliative care of seriously ill and dying patients, healthcare professionals deal not only with medical needs, but also with the existential needs of patients and their families. Although palliative healthcare teams can receive support from professions that specialize in existential conversations, it is the physicians and, not least, the nurses, care assistants, physiotherapists and occupational therapists who most regularly talk with patients about life, dying and death. Sometimes these conversations are planned in advance, but often they arise spontaneously in connection with care interventions.

A Swedish interview study investigated experiences of spontaneous existential conversations among the healthcare professions that meet patients and their families daily: nurses, care assistants, physiotherapists and occupational therapists. They were asked when existential conversations could arise and what influenced the quality of those conversations. They were also asked how they talked with patients about their thoughts on death, how they responded to patients’ existential questions, and how they reacted when relatives had difficulty accepting the situation.

The aim of the study was to create a structured overview of the experiences of the healthcare professionals, a model of what was considered important for existential conversations to arise and function well. Strategies used by the palliative teams were identified, as well as obstacles to meaningful existential conversations.

The main concern for the healthcare professionals was to establish a trusting relationship with patients and next of kin. Without such a relationship, no meaningful conversations about life, dying and death could arise. A core category that emerged from the interview material was maintaining presence: being like a stable rock under all circumstances. When meeting patients and relatives, the professionals stayed physically close and were calmly present during quiet moments. This low-key presence could spark conversations about the end of life, about memories, about support for quality of life, even in situations where patients and relatives were afraid or upset. By maintaining a calm presence, the professionals felt that they became receptive to existential conversations.

The palliative teams tried to initiate conversations about death early. As soon as patients entered the ward, open-ended questions were asked about how they were feeling. The patients’ thoughts about the future, their hopes and fears, were carefully probed. Here, the main thing was to listen attentively. Another strategy was to capture wishes and needs by talking about memories, or by informing about the diagnosis and how symptoms can be alleviated. The healthcare professionals must also guide relatives, who may be anxious, angry and frustrated. Here, it is important not to take criticism or threats personally, but to calmly acknowledge relatives’ concerns and inform them about possible future scenarios. Relatives may also need information on how they can help care for the patient, as well as support to say goodbye peacefully when the patient has died.

Something that also emerged in the interviews was the importance of maintaining one’s professional role in the team. For example, a physiotherapist must stay focused on the task of getting patients, who may lack motivation, to get up and exercise. A strategy for dealing with such difficulties was to seek support from others in the care team and to talk about challenges that one otherwise felt alone with.

Something that could hinder existential conversations was the fear of making mistakes: then one dares neither to ask nor to listen. Another obstacle could be anxious relatives: if relatives are frustrated and in disagreement, this can hinder the existential conversations that would help them say goodbye and let the patient die peacefully. A third obstacle was lack of time and feeling strained: sometimes the healthcare professionals have other work tasks and do not have time to stop and talk. And if relatives do not accept that the patient is dying, but demand that the patient be moved to receive effective hospital care, the tension can hinder existential conversations. Finally, lack of continuous training and education in conducting existential conversations was perceived as an obstacle, as was lack of support from colleagues and from the healthcare organization.

Hopefully, the article can motivate educational efforts within palliative care for those professions that manage the existential needs of patients and relatives on a daily basis. You can find the article here: Interdisciplinary strategies for establishing a trusting relation as a pre-requisite for existential conversations in palliative care: a grounded theory study.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Lagerin, A., Melin-Johansson, C., Holmberg, B. et al. Interdisciplinary strategies for establishing a trusting relation as a pre-requisite for existential conversations in palliative care: a grounded theory study. BMC Palliative Care 24, 47 (2025). https://doi.org/10.1186/s12904-025-01681-x

This post in Swedish

We recommend readings

Nurses’ experiences of dehumanization

Many healthcare professionals who work in nursing report that they experience a sense of dehumanization in their work. Although this is an increasingly recognized problem, it is still unclear how it manifests itself in practice and how it should be addressed. Previous studies indicate that the experience of dehumanization is often linked to excessive workload, lack of institutional support, and the growing bureaucratization of medical care. As healthcare becomes more standardized and protocol-driven, nurses find themselves constrained by rigid structures that limit their ability to provide personalized and compassionate care. Over time, these conditions contribute to professional exhaustion, a loss of meaning in work, and in some cases institutional mistreatment that is not intentional but arises as a byproduct of a dysfunctional organization of work.

The ethical implications of this phenomenon are significant. Respecting the dignity of both healthcare professionals and patients is fundamental to medical ethics, but this principle is increasingly challenged by current working conditions. The erosion of humanity in nurse-patient interactions not only affects the emotional well-being of nurses but also impacts the quality of care itself. Studies have shown that depersonalization in healthcare settings is associated with higher rates of medical errors. Furthermore, institutions bear a collective responsibility to ensure ethical working conditions, providing nurses with the necessary resources and support to maintain both their professional integrity and personal well-being.

Dehumanization of care is one of the topics of Marie-Charlotte Mollet’s soon-to-be completed dissertation at Paris Nanterre University. In one of her most recent studies, 263 French nurses, working in a variety of healthcare settings (public, private, nursing homes), were surveyed regarding factors related to their working conditions. They answered questionnaires about their workload, emotional demands, and organizational dehumanization. They also answered questions about their mental states, psychological flexibility, psychological distress, stress, and burnout. They moreover provided sociodemographic data on age, seniority, and gender.

In the analysis of the data, gender was found to be a relevant factor, raising new questions about dehumanization. For example, a significant difference between men and women was observed regarding dehumanization of patients: male nurses reported dehumanizing patients to a greater extent than female nurses did. This difference was measured by having study participants answer questions about “depersonalization” in a psychological assessment instrument for burnout (the Maslach Burnout Inventory). Marie-Charlotte Mollet’s work thus suggests that dehumanization in healthcare needs to be examined through a gendered lens. For example, several studies have demonstrated that female nurses often face different expectations than their male counterparts, especially when it comes to emotional labor. Female nurses are more often expected to show empathy and provide emotional support, which places an additional burden on them and increases their vulnerability to burnout.

Addressing challenges related to dehumanization requires serious rethinking of the ethical and institutional frameworks of healthcare. Systemic reforms are necessary to uphold humanistic values and ethical standards in medical practice and to ensure that nurses are not merely treated as functional units within an overburdened system. Empirically informed reflection on equity, recognition, and gender in nursing is crucial to fostering a more sustainable and just profession: one where both patients and nurses are treated with the dignity they deserve. It is in the context of this need for well-founded reflection on the working conditions of nursing that this study and similar research efforts should be understood: they matter for the nurses’ own sake, but also for the well-being of the patients and the quality of the care they receive.

This post is written by…

Sylvia Martin

Sylvia Martin, Clinical Psychologist and Senior Researcher at the Centre for Research Ethics & Bioethics (CRB).

Marie-Charlotte Mollet, PhD student at Paris Nanterre University.

Ethics needs empirical input

Why does science ask the question of artificial consciousness?

The possibility of conscious AI is increasingly perceived as a legitimate and important scientific question. This interest has arisen after a long history of scientific doubts about the possibility of consciousness not only in other animals, but sometimes even in humans. The very concept of consciousness was for a period considered scientifically suspect. But now the question of conscious AI is being raised within science.

For anyone interested in how such a mind-boggling question can be answered philosophically and scientifically, I would like to recommend an interesting AI-philosophical exchange of views in the French journal Intellectica. The exchange (which is in English) revolves around an article by two philosophers, Jonathan Birch and Kristin Andrews, who for several years have discussed consciousness not only in mammals, but also in birds, fish, cephalopods, crustaceans, reptiles, amphibians and insects. The two philosophers carefully distinguish between psychological questions about what might make us emotionally inclined to believe that an AI system is conscious, and logical questions about what can philosophically and scientifically count as evidence for conscious AI. It is to this logical perspective that they want to contribute. How can we determine whether an artificial system is truly conscious, and not just be seduced into believing it because the system mirrors the behavior of subjectively experiencing humans in an emotionally convincing way? Their basic idea is that we should first study consciousness in a wide range of animal species beyond mammals. Partly because the human brain is too different from (today’s) artificial systems to serve as a suitable reference point, but above all because such a broad comparison can help us identify the essential features of consciousness: features that could be used as markers of consciousness in artificial systems. The two philosophers’ proposal is thus that by starting from different forms of animal consciousness, we can better understand how we should philosophically and scientifically seek evidence for or against conscious AI.

One of my colleagues at CRB, Kathinka Evers, also a philosopher, comments on the article. She appreciates Birch and Andrews’ discussion as philosophically clarifying and sees the proposal to approach the question of conscious AI by studying forms of consciousness in a wide range of animal species as well argued. However, she believes that a number of issues require more attention. Among other things, she asks whether the transition from carbon- to silicon-based substrates does not require more attention than Birch and Andrews give it.

Birch and Andrews propose a thought experiment in which a robot rat behaves exactly like a real rat. It passes the same cognitive and behavioral tests. They further assume that the rat brain is accurately replicated in the robot, neuron for neuron. In such a case, they argue, it would be inconsistent not to accept for the robot the same pain markers that apply to the rat. The cases are similar, they argue: the transition from carbon to silicon does not provide sufficient reason to doubt that the robot rat can feel pain when it exhibits the same features that mark pain in the real rat. But the cases are not similar, Kathinka Evers points out, because the real rat, unlike the robot, is alive. If life is essential for consciousness, then it is not inconsistent to doubt that the robot can feel pain even in this thought experiment. Someone could of course associate life with consciousness and argue that a robot rat that exhibits the essential features of consciousness must also be considered alive. But if the purpose is to identify what can logically serve as evidence for conscious AI, the problem remains, says Kathinka Evers, because we then need to clarify how the relationship between life and consciousness should be investigated and how the concepts should be defined.

Kathinka Evers thus suggests several questions of relevance to what can logically be considered evidence for conscious AI. But she also asks a more fundamental question, which can be sensed throughout her commentary. She asks why the question of artificial consciousness is even being raised in science today. As mentioned, one of Birch and Andrews’ aims was to avoid the answer being influenced by psychological tendencies to interpret an AI that convincingly reflects human emotions as if it were conscious. But Kathinka Evers asks, as I read her, whether this logical purpose may not come too late. Is not the question already a temptation? AI is trained on human-generated data to reflect human behavior, she points out. Are we perhaps seeking philosophical and scientific evidence regarding a question that seems significant simply because we have a psychological tendency to identify with our digital mirror images? For a question to be considered scientific and worth funding, some kind of initial empirical support is usually required, but there is no evidence whatsoever for the possibility of consciousness in non-living entities such as AI systems. The question of whether an AI can be conscious has no more empirical support than the question of whether volcanoes can experience their eruptions, Kathinka Evers points out. There is a great risk that we will scientifically try to answer a question that lacks scientific basis. No matter how carefully we seek the longed-for answer, the question itself seems imprudent.

I am reminded of the myth of Narcissus. After a long history of rejecting the love of others (the consciousness of others), he finally fell in love with his own (digital) reflection, tried hopelessly to hug it, and was then tormented by an eternal longing for the image. Are you there? Will the reflection respond? An AI will certainly generate a response that speaks to our human emotions.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Birch, Jonathan & Andrews, Kristin (2024/2). To Understand AI Sentience, First Understand it in Animals. In Gefen, Alexandre & Huneman, Philippe (Eds.), Philosophies of AI: thinking and writing with LLMs, Intellectica, 81, pp. 213-226.

Evers, Kathinka (2024/2). To understand sentience in AI first understand it in animals. Commentary on Jonathan Birch and Kristin Andrews. In Gefen, Alexandre & Huneman, Philippe (Eds.), Philosophies of AI: thinking and writing with LLMs, Intellectica, 81, pp. 229-232.

This post in Swedish

We challenge habits of thought