A blog from the Centre for Research Ethics & Bioethics (CRB)

Category: In the research debate

Consciousness and complexity: theoretical challenges for a practically useful idea

Contemporary research on consciousness is ambiguous, like the double-faced god Janus. On the one hand, it has achieved impressive practical results. Today we can detect conscious activity in the brain for a number of purposes, including developing better therapeutic approaches to people affected by disorders of consciousness such as coma, the vegetative state and the minimally conscious state. On the other hand, the field is marked by a deep controversy about methodology and basic definitions. As a result, we still lack an overarching theory of consciousness, that is to say, a theoretical account that scholars agree upon.

Developing a common theoretical framework is recognized as increasingly crucial to understanding consciousness and assessing related issues, such as emerging ethical issues. The challenge is to find a common ground among the various experimental and theoretical approaches. A strong candidate that is attracting increasing consensus is the notion of complexity. The basic idea is that consciousness can be explained as a particular kind of neural information processing. The idea of associating consciousness with complexity was originally suggested by Giulio Tononi and Gerald Edelman in a 1998 paper titled Consciousness and Complexity. Since then, several papers have explored its potential as the key to a common understanding of consciousness.

Despite the increasing popularity of the notion, there are some theoretical challenges that need to be faced, particularly concerning the supposed explanatory role of complexity. These challenges are not only philosophically relevant. They might also affect the scientific reliability of complexity and the legitimacy of invoking this concept in the interpretation of emerging data and in the elaboration of scientific explanations. In addition, the theoretical challenges have a direct ethical impact, because an unreliable conceptual assumption may lead to misplaced ethical choices. For example, we might wrongly assume that a patient with low complexity is not conscious, or vice versa, and consequently make medical decisions that are inappropriate to the actual clinical condition.

The claimed explanatory power of complexity is challenged in two main ways: semantically and logically. Let us take a quick look at both.

Semantic challenges arise from the fact that complexity is such a general and open-ended concept. It lacks a shared definition among different people and different disciplines. This open-ended generality and lack of definition can be a barrier to a common scientific use of the term, which may impact its explanatory value in relation to consciousness. In the landmark paper by Tononi and Edelman, complexity is defined as the sum of integration (conscious experience is unified) and differentiation (we can experience a large number of different states). It is important to recognise that this technical definition of complexity refers only to the state of consciousness, not to its contents. This means that complexity-related measures can give us relevant information about the level of consciousness, yet they remain silent about the corresponding contents and their phenomenology. This is an ethically salient point, since the dimensions of consciousness that appear most relevant to making ethical decisions are those related to subjective positive and negative experiences. For instance, while it is generally considered ethically neutral how we treat a machine, it is considered ethically wrong to cause negative experiences to other humans or to animals.

Logical challenges arise concerning the justification for referring to complexity in explaining consciousness. This justification usually takes one of two alternative forms: either bottom-up (from data to theory) or top-down (from phenomenology to physical structure). Both raise specific issues.

Bottom-up: Starting from empirical data indicating that particular brain structures or functions correlate with particular conscious states, relevant theoretical conclusions are inferred. More specifically, since the brains of subjects who are manifestly conscious exhibit complex patterns (integrated and differentiated patterns), we are supposed to be justified in inferring that complexity indexes consciousness. This conclusion is a sound inference to the best explanation, but the fact that a conscious state correlates with a complex brain pattern in healthy subjects does not justify generalising it to all possible conditions (for example, disorders of consciousness), and it does not logically imply that complexity is a necessary and/or sufficient condition for consciousness.

Top-down: Starting from certain characteristics of personal experience, we are supposed to be justified in inferring corresponding characteristics of the underlying physical brain structure. More specifically, if some conscious experience is complex in the technical sense of being both integrated and differentiated, we are supposed to be justified in inferring that the correlated brain structures must be complex in the same technical sense. This conclusion does not seem logically justified unless we start from the assumption that consciousness and the corresponding physical brain structures must be similarly structured. Otherwise it is logically possible that conscious experience is complex while the corresponding brain structure is not, and vice versa. In other words, it does not appear justified to infer that since our conscious experience is integrated and differentiated, the corresponding brain structure must be integrated and differentiated. This is a possibility, but not a necessity.

The abovementioned theoretical challenges do not deny the practical utility of complexity as a relevant measure in specific clinical contexts, for example, to quantify residual consciousness in patients with disorders of consciousness. What is at stake is the explanatory status of the notion. Even if we question complexity as a key factor in explaining consciousness, we can still acknowledge that complexity is practically relevant and useful, for example, in the clinic. In other words, while complexity as an explanatory category raises serious conceptual challenges that remain to be faced, complexity represents at the practical level one of the most promising tools that we have to date for improving the detection of consciousness and for implementing effective therapeutic strategies.

I assume that Giulio Tononi and Gerald Edelman were hoping that their theory about the connection between consciousness and complexity would finally erase the embarrassing ambiguity of consciousness research, but the deep theoretical challenges suggest that we have to live with the resemblance to the double-faced god Janus for a while longer.

Written by…

Michele Farisco, Postdoc Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Tononi, G. and G. M. Edelman. 1998. Consciousness and complexity. Science 282(5395): 1846-1851.

We like critical thinking

Dynamic consent: broad and specific at the same time

The challenge of finding an appropriate way to handle informed consent to biobank research is considerable and has often been discussed here on the Ethics Blog. Personal data and biological samples are collected and saved for a long time to be used in future research, for example, on how genes and the environment interact in various diseases. Informed consent to such research is, for natural reasons, broad, because when data and samples are collected it is not yet possible to specify which future research studies the material will be used in.

An unusually clear and concise article on biobank research presents a committed approach to the possible ethical challenges of broad consent. The initial broad consent to research is combined with clearly specified, strong governance and oversight mechanisms. The approach is also characterized by continuous communication with the research participants, through which they receive updated information that could not be given at the time of the original consent. This enables participants to stay specifically informed and make autonomous choices about their research participation over time.

The model is called dynamic consent. This form of consent can be viewed as broad and specific at the same time. The article describes experiences from a long-term biobank study in South Tyrol in Italy, the CHRIS study, where dynamic consent has been implemented since 2011. The model is now being used to initiate the first follow-up phase, where participants are contacted for further sampling and data collection in new studies.

The article on dynamic consent in the CHRIS study is written by Roberta Biasiotto, Peter P. Pramstaller and Deborah Mascalzoni. In addition to describing their experiences of dynamic consent, they also respond to common objections to the model, for example, that participants would be burdened by constant requests for consent or that participants would have an unreasonable influence over research.

I would like to emphasize once again the clarity of the article, which shows great integrity and courage. The authors do not hide behind a facade of technical terminology and jargon that only members of a certain academic discipline could understand. They write broadly and specifically at the same time, I am inclined to say! This inspires confidence and indicates how sincerely the authors have approached the ethical challenges of involving and communicating with research participants in the CHRIS study.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Biasiotto, R., Pramstaller, P.P. and Mascalzoni, D. 2021. The dynamic consent of the Cooperative Health Research in South Tyrol (CHRIS) study: broad aim within specific oversight and communication. BioLaw Journal – Rivista di BioDiritto, pp. 277-287. http://dx.doi.org/10.15168/2284-4503-786

This post in Swedish

We care about communication

Challenges in end-of-life care of people with severe dementia

In order to improve care, insight is needed into the challenges that staff experience in their daily care work. One way to gain insight is to conduct interview studies with healthcare staff. The analysis of the interviews can provide a well-founded perspective on the challenges, as they are experienced from within the care practices.

In Sweden, people with severe dementia usually die in nursing homes. Compared to the specialised palliative care of cancer patients, the general care of people with severe dementia at the end of life is less advanced, with fewer opportunities to relieve pain and other ailments. To gain a clearer insight into the challenges, Emma Lundin and Tove Godskesen conducted an interview study with nurses in various nursing homes in Stockholm. They approached the profession that is largely responsible for relieving pain and other ailments in people dying with severe dementia.

The content of the interviews was thematically analysed into three types of challenges: communicative, relational and organisational. The communicative challenges have to do with the difficulty of assessing the type and level of pain in people with severe dementia, as they often cannot understand and answer questions. Assessment becomes particularly difficult if the nurse does not already know the person with dementia and therefore cannot assess the difference between the person’s current and previous behaviour. Communication difficulties also make it difficult to find the right dose of pain medication. In addition, they make it difficult to assess whether the person’s behaviour expresses pain or rather anxiety, which may require different treatment.

Visiting relatives can often help nurses interpret the behaviour of the person with dementia. However, they can also interfere with nurses’ work to relieve pain, since they can have different opinions about the use of, for example, morphine. Some relatives want to increase the dose to be sure that the person with dementia does not suffer from pain, while others are worried that morphine may cause death or create addiction.

The organisational challenges have to do in part with understaffing. The nurses do not have enough time to spend with the persons with dementia, who sometimes die alone, perhaps without optimal pain relief. Furthermore, there is often a lack of professional competence and experience at the nursing homes regarding palliative care for people with severe dementia: it is a difficult art.

The authors of the article argue that these challenges point to the need for specialist nurses who are trained in palliative care for people with dementia. They further argue that resources and strategies are needed to inform relatives about end-of-life care, and to involve them in decision-making where they can represent the person with dementia. Relatives may need to be informed that increased morphine doses are probably not due to drug addiction. Rather, they are due to the fact that the need for pain relief increases as more and more complications arise near death. If the intention is to relieve symptoms at the end of life, you may end up in a situation where large doses of morphine need to be given to relieve pain, despite the risk to the patient.

If you want a deeper insight into the challenges, read the article in BMC Nursing: End-of-life care for people with advanced dementia and pain: a qualitative study in Swedish nursing homes.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Lundin, E., Godskesen, T.E. End-of-life care for people with advanced dementia and pain: a qualitative study in Swedish nursing homes. BMC Nurs 20, 48 (2021). https://doi.org/10.1186/s12912-021-00566-7

This post in Swedish

We like real-life ethics

To change the changing human

Neuroscience contributes to human self-understanding, but it also raises concerns that it might change humanness, for example, through new neurotechnology that affects the brain so deeply that humans are no longer truly human, or no longer experience themselves as human. Patients who are treated with deep brain stimulation, for example, can state that they feel like robots.

What ethical and legal measures could such a development justify?

Arleen Salles, neuroethicist in the Human Brain Project, argues that the question is premature, since we have not clarified our concept of humanness. The matter is complicated by the fact that several concepts of humanness may be at stake. If we believe that our humanness consists of certain unique abilities that distinguish humans from animals (such as morality), then we tend to dehumanize beings who we believe lack these abilities as “animal like.” If we believe that our humanness consists in certain abilities that distinguish humans from inanimate objects (such as emotions), then we tend to dehumanize beings who we believe lack these abilities as “mechanical.” It is probably in the latter sense that the patients mentioned above state that they do not feel human but rather like robots.

After a review of basic features of central philosophical concepts of human nature, Arleen Salles’ reflections take a surprising turn. She presents a concept of humanness that is based on the neuroscientific research that one worries could change our humanness! What is truly surprising is that this concept of humanness to some extent questions the question itself. The concept emphasizes the profound changeability of the human.

What does it mean to worry that neuroscience can change human nature, if human nature is largely characterized by its ability to change?

If you follow the Ethics Blog and remember a post about Kathinka Evers’ idea of a neuroscientifically motivated responsibility for human nature, you are already familiar with the dynamic concept of human nature that Arleen Salles presents. In simple terms, it can be said to be a matter of complementing human genetic evolution with an “epigenetic” selective stabilization of synapses, which every human being undergoes during upbringing. These connections between brain cells are not inherited genetically but are selected in the living brain while it interacts with its environments. Language can be assumed to belong to the human abilities that largely develop epigenetically. I have proposed a similar understanding of language in collaboration with two ape language researchers.

Do not assume that this dynamic concept of human nature presupposes that humanness is unstable. As if the slightest gust of wind could disrupt human evolution and change human nature. On the contrary, the language we develop during upbringing probably contributes to stabilizing the many human traits that develop simultaneously. Language probably supports the transmission to new generations of the human forms of life where language has its uses.

Arleen Salles’ reflections are important contributions to the neuroethical discussion about human nature, the brain and neuroscience. In order to take ethical responsibility, we need to clarify our concepts, she emphasizes. We need to consider that humanness develops in three interconnected dimensions. It is about our genetics together with the selective stabilization of synapses in living brains in continuous interaction with social-cultural-linguistic environments. All at the same time!

Arleen Salles’ reflections are published as a chapter in a new anthology, Developments in Neuroethics and Bioethics (Elsevier). I am not sure if the publication will be open access, but hopefully you can find Arleen Salles’ contribution via this link: Humanness: some neuroethical reflections.

The chapter is recommended as an innovative contribution to the understanding of human nature and the question of whether neuroscience can change humanness. The question takes a surprising turn, which suggests that we all share an ongoing responsibility for our changing humanness.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Arleen Salles (2021). Humanness: some neuroethical reflections. Developments in Neuroethics and Bioethics. https://doi.org/10.1016/bs.dnb.2021.03.002

This post in Swedish

We think about bioethics

New dissertation on patient preferences in medical approvals

During the spring, several doctoral students at CRB successfully defended their dissertations. Karin Schölin Bywall defended her dissertation on May 12, 2021. The dissertation, like the two previous ones, reflects a trend in bioethics from theoretical investigations to empirical studies of people’s perceptions of bioethical issues.

An innovative approach in Karin Schölin Bywall’s dissertation is that she identifies a specific area of application where the preference studies that are increasingly used in bioethics can be particularly beneficial. It is about patients’ influence on the process of medical approval. Patients already have such an influence, but their views are obtained somewhat informally, from a small number of invited patients. Karin Schölin Bywall explores the possibility of strengthening patients’ influence scientifically. Preference studies can give decision-makers an empirically better-founded understanding of what patients actually prefer when they weigh efficacy against side effects and other drug properties.

If you want to know more about the possibility of using preference studies to scientifically strengthen patients’ influence in medical approvals, read Karin Schölin Bywall’s dissertation: Getting a Say: Bringing patients’ views on benefit-risk into medical approvals.

If you want a concise summary of the dissertation, read Anna Holm’s news item on our website: Bringing patients’ views into medical approvals.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Schölin Bywall, K. (2021) Getting a Say: Bringing patients’ views on benefit-risk into medical approvals. [Dissertation]. Uppsala University.

This post in Swedish

We want solid foundations

Can AI be conscious? Let us think about the question

Artificial Intelligence (AI) has achieved remarkable results in recent decades, especially thanks to the refinement of an old and long-neglected technology called Deep Learning (DL), a class of machine learning algorithms. Some achievements of DL have had a significant impact on public opinion thanks to extensive media coverage, like the case of the program AlphaGo, which defeated the Go world champion Lee Sedol, and its successor AlphaGo Zero.

This triumph of AlphaGo was a kind of profane consecration of AI’s operational superiority in an increasing number of tasks. This manifest superiority of AI gave rise to mixed feelings in human observers: pride at being its creators; admiration for what it was able to do; fear of what it might eventually learn to do.

AI research has generated a linguistic and conceptual process of re-thinking traditionally human features, stretching their meaning or even reinventing their semantics in order to attribute these traits also to machines. Think of how learning, experience, training, prediction, to name just a few, are attributed to AI. Even if they have a specific technical meaning among AI specialists, lay people tend to interpret them within an anthropomorphic view of AI.

One human feature in particular is considered the Holy Grail when AI is interpreted according to an anthropomorphic pattern: consciousness. The question is: can AI be conscious? It seems to me that we can answer this question only after considering a number of preliminary issues.

First we should clarify what we mean by consciousness. In philosophy and in cognitive science, there is a useful distinction, originally introduced by Ned Block, between access consciousness and phenomenal consciousness. The first refers to the interaction between different mental states, particularly the availability of one state’s content for use in reasoning and rationally guiding speech and action. In other words, access consciousness refers to the possibility of using what I am conscious of. Phenomenal consciousness refers to the subjective feeling of a particular experience, “what it is like to be” in a particular state, to use the words of Thomas Nagel. So, in what sense of the word “consciousness” are we asking if AI can be conscious?

To illustrate how the sense in which we choose to talk about consciousness makes a difference in the assessment of the possibility of conscious AI, let us take a look at an interesting article written by Stanislas Dehaene, Hakwan Lau and Sid Kouider. They frame the question of AI consciousness within the Global Neuronal Workspace Theory, one of the leading contemporary theories of consciousness. As the authors write, according to this theory, conscious access corresponds to the selection, amplification, and global broadcasting of particular information, selected for its salience or relevance to current goals, to many distant brain areas. More specifically, Dehaene and colleagues explore the question of conscious AI along two lines within an overall computational framework:

  1. Global availability of information (the ability to select, access, and report information)
  2. Metacognition (the capacity for self-monitoring and confidence estimation).

Their conclusion is that AI might implement the first meaning of consciousness, while it currently lacks the necessary architecture for the second one.

As mentioned, the premise of their analysis is a computational view of consciousness. In other words, they choose to reduce consciousness to specific types of information-processing computations. We can legitimately ask whether such a choice covers the richness of consciousness, particularly whether a computational view can account for the experiential dimension of consciousness.

This shows how the main obstacle in assessing the question whether AI can be conscious is a lack of agreement about a theory of consciousness in the first place. For this reason, rather than asking whether AI can be conscious, maybe it is better to ask what might indicate that AI is conscious. This brings us back to the indicators of consciousness that I wrote about in a blog post some months ago.

Another important preliminary issue to consider, if we want to seriously address the possibility of conscious AI, is whether we can use the same term, “consciousness,” to refer to a different kind of entity: a machine instead of a living being. Should we expand our definition to include machines, or should we rather create a new term to denote it? I personally think that the term “consciousness” is too charged, from several different perspectives, including ethical, social, and legal perspectives, to be extended to machines. Using the term to qualify AI risks extending it so far that it eventually becomes meaningless.

If we create AI that manifests abilities that are similar to those that we see as expressions of consciousness in humans, I believe we need a new language to denote and think about it. Otherwise, important preliminary philosophical questions risk being dismissed or lost sight of behind a conceptual veil of possibly superficial linguistic analogies.

Written by…

Michele Farisco, Postdoc Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

We want solid foundations

When established treatments do not help

What should the healthcare team do when established treatments do not help the patient? Should one be allowed to test a so-called non-validated treatment, whose efficacy and side effects have not yet been scientifically determined, on the patient?

Gert Helgesson comments on this problem in Theoretical Medicine and Bioethics. His comment concerns suggestions from authors who in the same journal propose a specific restrictive policy. They argue that if you want to test a non-validated treatment, you should from the beginning plan this as a research project where the treatment is tested on several subjects. Only in this way do you get data that can form the basis for scientific conclusions about the treatment. Above all, the test will undergo ethical review, where the risks to the patient and the reasons for trying the treatment are carefully assessed.

Of course, it is important to be restrictive. At the same time, there are disadvantages with the specific proposal above. If the patient has a rare disease, for example, it can be difficult to gather enough patients to draw scientific conclusions from. Here it may be more reasonable to allow case reports and open storage of data, rather than requiring ethically approved clinical trials. Another problem is that clinical trials take place under conditions that differ from those of patient care. If the purpose is to treat an individual patient because established treatments do not work, then it becomes strange if the patient is included in a randomized study where he or she may end up in the control group, which receives the standard treatment. A third problem is when the need for treatment is urgent and there is no time to approach an ethical review board and await its response. Moreover, is it reasonable that research ethics review boards make treatment decisions about individual patients?

Gert Helgesson is well aware of the complexity of the problem and the importance of being careful. Patients must not be used as if they were guinea pigs by clinicians who want to make quick, prestigious discoveries without undergoing proper research ethics review. At the same time, one can do a lot of good for patients by identifying new effective treatments when established treatments do not work. But who should make the decision to test a non-validated treatment if it is unreasonable to leave the decision to a research ethics board?

Gert Helgesson suggests that such decisions on non-validated treatments can reasonably be made by the head of the clinic, and that a procedure for such decisions at the clinic level should exist. For example, an advisory hospital board can be appointed, which supports discussions and decisions at the clinic level about new treatments. The fact that a treatment is non-validated does not mean that there are no empirical and theoretical reasons to believe that it might work. Making a careful assessment of these reasons is an important task in these discussions and decisions.

I hope I have done justice to Gert Helgesson’s balanced discussion of a complex question: What is a reasonable framework for new non-validated treatments? In some last-resort cases, for example where the need for care is urgent or the disease is rare, decisions about non-validated treatments should be made in the clinic rather than through research ethics review, Gert Helgesson concludes. The patient must, of course, consent, and a careful assessment must be made of the available knowledge about the treatment.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Helgesson, G. What is a reasonable framework for new non-validated treatments?. Theor Med Bioeth 41, 239–245 (2020). https://doi.org/10.1007/s11017-020-09537-6

This post in Swedish

We recommend readings

An unusually big question

Sometimes the intellectual claims placed on science are so big that they risk obscuring the actual research. This seems to happen not least when the claims are associated with some great prestigious question, such as the origin of life or the nature of consciousness. By emphasizing the big question, one often wants to show that modern science is better suited than older human traditions to answer the riddles of life. Better than philosophy, for example.

I think of this when I read a short article about such a riddle: “What is consciousness? Scientists are beginning to unravel a mystery that has long vexed philosophers.” The article by Christof Koch gives the impression that it is only a matter of time before science determines not only where in the brain consciousness arises (one already seems to have a suspect), but also the specific neural mechanisms that give rise to – everything you have ever experienced. At least if one is to believe one of the fundamental theories about the matter.

Reading about the discoveries behind the identification of where in the brain consciousness arises is as exciting as reading a whodunit. It is obvious that important research is being done here on the effects that loss or stimulation of different parts of the brain can have on people’s experiences, mental abilities and personalities. The description of a new technology and mathematical algorithm for determining whether patients are conscious or not is also exciting and indicates that research is making fascinating progress, which can have important uses in healthcare. But when mathematical symbolism is used to suggest a possible fundamental explanation for everything you have ever experienced, the article becomes as difficult to understand as the most obscure philosophical text from times gone by.

Since even representatives of science sometimes make philosophical claims, namely, when they want to answer prestigious riddles, it is perhaps wiser to be open to philosophy than to compete with it. Philosophy is not just about speculating about big questions. Philosophy is also about humbly clarifying the questions, which otherwise tend to grow beyond all reasonable limits. Such openness to philosophy flourishes in the Human Brain Project, where some of my philosophical colleagues at CRB collaborate with neuroscientists to conceptually clarify questions about consciousness and the brain.

Something I myself wondered about when reading the scientifically exciting but at the same time philosophically ambitious article is the idea that consciousness is everything we experience: “It is the tune stuck in your head, the sweetness of chocolate mousse, the throbbing pain of a toothache, the fierce love for your child and the bitter knowledge that eventually all feelings will end.” What does it mean to take such an all-encompassing claim seriously? What is not consciousness? If everything we can experience is consciousness, from the taste of chocolate mousse to the sight of the stars in the sky and our human bodies with their various organs, where is the objective reality to which science wants to relate consciousness? Is it in consciousness?

If consciousness is our inevitable vantage point, if everything we experience as real is consciousness, it becomes unclear how we can treat consciousness as an objective phenomenon in the world along with the body and other objects. Of course, I am not talking here about actual scientific research about the brain and consciousness, but about the limitless intellectual claim that scientists sooner or later will discover the neural mechanisms that give rise to everything we can ever experience.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Christof Koch, What Is Consciousness? Scientists are beginning to unravel a mystery that has long vexed philosophers, Nature 557, S8-S12 (2018) https://doi.org/10.1038/d41586-018-05097-x

This post in Swedish

We transgress disciplinary borders

Patient integrity at the end of life

When we talk about patient integrity, we often talk about patients’ medical records and the handling of their personal data. But patient integrity is not just about how information about patients is handled; it is also about how the patients themselves are treated. For example, can they talk about their problems without everyone in the waiting room hearing them?

This more tangible aspect of patient integrity is perhaps especially challenging in an intensive care unit. Here, patients can be more or less sedated and connected to life-sustaining equipment. The patients are extremely vulnerable, in some cases dying. It can be difficult to see the human being for all the medical devices. Protecting the integrity of these patients is a challenge, not least for the nurses, who have close contact with them around the clock (and with their relatives). How do nurses perceive and manage the integrity of patients who end their lives in an intensive care unit?

This important question is examined in an article in the journal Annals of Intensive Care, written by Lena Palmryd, Åsa Rejnö and Tove Godskesen. They conducted an interview study with nurses in four intensive care units in Sweden. Many of the nurses had difficulty defining integrity and explaining what the concept means in the care of dying patients. This is not surprising. Not even the philosopher Socrates would have succeeded in defining integrity. However, the nurses used other words that emphasised respect for the patient and patient-centred attitudes, such as listening and being sensitive to the patient. They also tried to describe good care.

When I read the article, I was struck by how ethically central concepts, such as integrity and autonomy, often obscure reality and paralyse us. Just when we need to see clearly and act wisely. When the authors of the article analyse the interviews with the nurses, they use five categories instead, which in my opinion speak more clearly than the overall concept of integrity does:

  1. Seeing the unique individual
  2. Being sensitive to the patient’s vulnerability
  3. Observing the patient’s physical and mental sphere
  4. Taking into account the patient’s religion and culture
  5. Being respectful during patient encounters

How transparent to reality these words are! They let us see what it is about. Of course, it is not wrong to talk about integrity and it is no coincidence that these categories emerged in the analysis of the conversations with the nurses about integrity. However, sometimes it is perhaps better to refrain from ethically central concepts, because such concepts often hide more than they reveal.

The presentation of the interviews under these five headings, with well-chosen quotes from the conversations, is even more clarifying. This shows the value of qualitative research. In interview studies, reality is revealed through people’s own words. Strangely enough, such words can help us to see reality more clearly than the technical concepts that the specialists in the field consider to be the core of the matter. Under heading (2), for example, a nurse tells of a patient who suffered from hallucinations, and who became anxious when people the patient did not recognize showed up. One evening, the doctors came in with 15 people from the staff to give a report to the staff at the patient’s bedside: “So I also drove them all out; it’s forbidden, 15 people can’t stand there, for the sake of the patient.” These words are as clarifying as the action itself is.

I do not think that the nurse who drove out the crowd for the sake of the patient thought that she was doing it “to protect the patient’s integrity.” Ethically weighty concepts can divert our attention, as if they were of greater importance than the actual human being. Talking about patient integrity can, oddly enough, make us blind to the patient.

Perhaps that is why many of Socrates’ conversations about concepts end in silence instead of in definitions. Should we define silence as an ethical concept? Should we arrange training where we have the opportunity to talk more about silence? The instinct to control reality by making concepts of it diverts attention from reality.

Read the qualitative study of patient integrity at the end of life, which draws attention to what it is really about.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Palmryd, L., Rejnö, Å. & Godskesen, T.E. Integrity at end of life in the intensive care unit: a qualitative study of nurses’ views. Ann. Intensive Care 11, 23 (2021). https://doi.org/10.1186/s13613-021-00802-y

This post in Swedish

We like real-life ethics

Two new dissertations!

Two of our doctoral students at CRB recently successfully defended their dissertations. Both dissertations reflect a trend in bioethics from purely theoretical studies towards also including empirical studies of people’s perceptions of bioethical issues.

Åsa Grauman’s dissertation explores the public’s view of risk information about cardiovascular disease. The risk of cardiovascular disease depends on many factors; both lifestyle and heredity influence it. Many find it difficult to understand such risk information, and many underestimate their risk, while others worry unnecessarily. For risk information to make sense to people, it must be designed so that recipients can benefit from it in practice. That requires knowing more about their perspective on risk, how health information affects them, and what they think is important and unimportant when it comes to risk information about cardiovascular disease. One of Åsa Grauman’s conclusions from her studies of these issues is that people often estimate their risk on the basis of self-assessed health and family history. As this can lead to the risk being underestimated, she argues that health examinations are important because they can nuance individuals’ risk assessments and draw their attention to risk factors that they themselves can influence.

If you want more conclusions and see the studies behind them, read Åsa Grauman’s dissertation: The publics’ perspective on cardiovascular risk information: Implications for practice.

Mirko Ancillotti’s dissertation explores the Swedish public’s view of antibiotic resistance and our responsibility to reduce its prevalence. The rise of antibiotic-resistant bacteria is one of the major global threats to public health. The increase is related to our often careless overuse of antibiotics in society. The problem needs to be addressed both nationally and internationally, both collectively and individually. Mirko Ancillotti focuses on our individual responsibility for antibiotic resistance. He examines how such a responsibility can be supported through more effective health communication and improved institutional conditions that can help people to use antibiotics more judiciously. Such support requires knowledge of the public’s beliefs, values and preferences regarding antibiotics, which may affect their willingness and ability to take responsibility for their own use of antibiotics. One of the studies in the dissertation indicates that people are prepared to make significant sacrifices to reduce their contribution to antibiotic resistance.

If you want to know more about the Swedish public’s view of antibiotic resistance and the possibility of supporting judicious behaviour, read Mirko Ancillotti’s dissertation: Antibiotic Resistance: A Multimethod Investigation of Individual Responsibility and Behaviour.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Åsa Grauman. 2021. The publics’ perspective on cardiovascular risk information: Implications for practice. Uppsala: Acta Universitatis Upsaliensis.

Mirko Ancillotti. 2021. Antibiotic Resistance: A Multimethod Investigation of Individual Responsibility and Behaviour. Uppsala: Acta Universitatis Upsaliensis.

This post in Swedish

Ethics needs empirical input
