A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: disorders of consciousness

An ethical strategy for improving the healthcare of brain-damaged patients

How can we improve the clinical care of brain-damaged patients? Individual clinicians, professional and patient associations, and other relevant stakeholders are struggling with this huge challenge.

A crucial step towards better treatment of these very fragile patients is the elaboration and adoption of agreed-upon recommendations for their clinical care, in both emergency and intensive care settings. These recommendations should cover different aspects, from diagnosis to prognosis and rehabilitation planning. Both Europe and the US have issued relevant guidelines on Disorders of Consciousness (DoCs) in order to make clinical practice consistent and ultimately more beneficial to patients.

Nevertheless, these documents risk becoming ineffective or not having sufficient impact if they are not complemented with a clear strategy for operationalizing them. In other words, it is necessary to develop an adequate translation of the guidelines into actual clinical practice.

In a recent article that I wrote with Arleen Salles, we argue that ethics plays a crucial role in elaborating and implementing this strategy. The application of the guidelines is ethically very relevant, as it can directly impact the patients’ well-being, their right to the best possible care, communication between clinicians and family members, and overall shared decision-making. Failure to apply the guidelines in an ethically sound manner may inadvertently lead to unequal and unfair treatment of certain patients.

To illustrate, both documents recommend integrating behavioural and instrumental approaches to improve the diagnostic accuracy of DoCs (such as vegetative state/unresponsive wakefulness syndrome, minimally conscious state, and cognitive-motor dissociation). This recommendation is commendable, but not easy to follow, because of a number of shortcomings and limitations in the actual clinical settings where patients with DoCs are diagnosed and treated. For instance, not all “ordinary,” non-research oriented hospitals have the financial, human, and technical resources needed to afford the dual approach recommended by the guidelines. The implementation of the guidelines is arguably a complex process, involving several actors at different levels of action (from administration to clinical staff, from financing to therapy). It is therefore crucial to clearly identify “who is responsible for what” at each level of the implementation process.

For this reason, we propose building a strategy for operationalizing the guidelines based on a clarification of the notion of responsibility. We introduce a Distributed Responsibility Model (DRM), which frames responsibility as multi-level and multi-dimensional. The main tenet of DRM is a shift from an individualistic to a modular understanding of responsibility, in which several agents share professional and/or moral obligations across time. Moreover, specific responsibilities are assigned according to the different areas of activity. In this way, each agent is granted a specific autonomy within their own field of activity, and the mutual interactions between the different agents are clearly defined. As a result, DRM promotes trust between the various agents.

Neither the European nor the US guidelines explicitly address the issue of implementation in terms of responsibility. We argue that this is a problem, because in situations of scarce resources and financial and technological constraints, it is important to explicitly conceptualize responsibility as a distributed ethical imperative that involves several actors. This will make it easier to identify possible failures at different levels and to implement adequate corrective action.

In short, we identify three main levels of responsibility: institutional, clinical, and interpersonal. At the institutional level, responsibility refers to the obligations of the relevant institution or organization (such as the hospital or the research centre). At the clinical level, responsibility refers to the obligations of the clinical staff. At the interpersonal level, responsibility refers to the involvement of different stakeholders with individual patients (more specifically, institutions, clinicians, and families/surrogates).

Our proposal in the article is thus to combine these three levels, as formalized in DRM, in order to operationalize the guidelines. This can help reduce the gap between the recommendations and actual clinical practice.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, Michele; Salles, Arleen. American and European Guidelines on Disorders of Consciousness: Ethical Challenges of Implementation. Journal of Head Trauma Rehabilitation, April 13, 2022. doi: 10.1097/HTR.0000000000000776

We want solid foundations

How can we detect consciousness in brain-damaged patients?

Detecting consciousness in brain-damaged patients can be a huge challenge, and the results are often uncertain or misinterpreted. In a previous post on this blog I described six indicators of consciousness that I introduced together with a neuroscientist and another philosopher. Those indicators were originally developed for animals and AI systems. Our question was: which capacities (deducible from behavior and performance, or from relevant cerebral underpinnings) make it reasonable to attribute consciousness to these non-human agents? In the same post, I mentioned that we were engaged in a multidisciplinary exploration of the clinical relevance of selected indicators, specifically testing them on patients with Disorders of Consciousness (DoCs, for instance Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State, and Cognitive-Motor Dissociation). While this multidisciplinary work is still in progress, we recently published an ethical reflection on the clinical relevance of the indicators of consciousness, taking DoCs as a case study.

To recapitulate, indicators of consciousness are conceived as particular capacities that can be deduced from the behavior or cognitive performance of a subject and that serve as a basis for a reasonable inference about the subject’s level of consciousness. Importantly, the indicators may also be deduced from the neural correlates of the relevant behavior or cognitive performance. This makes them relevant to patients with DoCs, who are often unable to behave or communicate overtly: responses in the brain can be used to deduce the indicators of consciousness in these patients.

On the basis of this relevance, we illustrate how the different indicators of consciousness might be applied to patients with DoCs, with the final goal of helping to improve the assessment of their residual conscious activity. An astonishingly high rate of misdiagnosis still affects this clinical population: it is estimated that up to 40% of patients with DoCs are wrongly diagnosed as being in Vegetative State/Unresponsive Wakefulness Syndrome when they are actually in a Minimally Conscious State. The difference between these diagnoses is not trivial, since they have importantly different prognostic implications, which raises a huge ethical problem.

We also argue for the need to recognize and explore the specific quality of the consciousness possibly retained by patients with DoCs. Because of the devastating damage to their brains, their residual consciousness is likely very different from that of healthy subjects, who are usually assumed as the reference standard in diagnostic classification. To illustrate: while consciousness in healthy subjects is characterized by several distinct sensory modalities (for example, seeing, hearing and smelling), it is possible that the conscious contents of patients with DoCs (if any) are very limited in their sensory modalities. These limitations may be evaluated on the basis of the extent of the brain damage and of the patients’ residual behaviors (for instance, sniffing for smelling). Also, consciousness in healthy subjects is characterized by both dynamics and stability: it includes both dynamic changes and short-term stabilization of contents. Again, the residual consciousness of patients with DoCs is likely very unstable and flickering, without any capacity for stabilization. If we approach patients with DoCs without acknowledging that consciousness is a spectrum that accommodates different possible shapes and grades, we exclude a priori the possibility of recognizing the peculiar consciousness these patients may retain.

The indicators of consciousness we introduced offer potential help in identifying the specific conscious abilities of these patients. While in this paper we argue for the rationale behind the clinical use of these indicators, and for their relevance to patients with DoCs, we also acknowledge that they open up new lines of research with concrete application to patients with DoCs. As already mentioned, this more applied work is in progress, and we are confident that we will be able to present relevant results in the weeks to come.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, M., Pennartz, C., Annen, J. et al. Indicators and criteria of consciousness: ethical implications for the care of behaviourally unresponsive patients. BMC Med Ethics 23, 30 (2022). https://doi.org/10.1186/s12910-022-00770-3

We have a clinical perspective

Consciousness and complexity: theoretical challenges for a practically useful idea

Contemporary research on consciousness is ambiguous, like the double-faced god Janus. On the one hand, it has achieved impressive practical results. Today we can detect conscious activity in the brain for a number of purposes, including better therapeutic approaches for people affected by disorders of consciousness such as coma, vegetative state and minimally conscious state. On the other hand, the field is marked by a deep controversy about methodology and basic definitions. As a result, we still lack an overarching theory of consciousness, that is to say, a theoretical account that scholars agree upon.

Developing a common theoretical framework is recognized as increasingly crucial to understanding consciousness and assessing related issues, such as emerging ethical issues. The challenge is to find common ground among the various experimental and theoretical approaches. A strong candidate, attracting increasing consensus, is the notion of complexity. The basic idea is that consciousness can be explained as a particular kind of neural information processing. The idea of associating consciousness with complexity was originally suggested by Giulio Tononi and Gerald Edelman in a 1998 paper titled Consciousness and Complexity. Since then, several papers have explored its potential as the key to a common understanding of consciousness.

Despite the increasing popularity of the notion, there are some theoretical challenges that need to be faced, particularly concerning the supposed explanatory role of complexity. These challenges are not only philosophically relevant. They might also affect the scientific reliability of complexity and the legitimacy of invoking this concept in the interpretation of emerging data and in the elaboration of scientific explanations. In addition, the theoretical challenges have a direct ethical impact, because an unreliable conceptual assumption may lead to misplaced ethical choices. For example, we might wrongly assume that a patient with low complexity is not conscious, or vice versa, and consequently make medical decisions that are inappropriate to the actual clinical condition.

The claimed explanatory power of complexity is challenged in two main ways: semantically and logically. Let us take a quick look at both.

Semantic challenges arise from the fact that complexity is such a general and open-ended concept. It lacks a shared definition across different people and different disciplines. This open-ended generality and lack of definition can be a barrier to a common scientific use of the term, which may impact its explanatory value in relation to consciousness. In the landmark paper by Tononi and Edelman, complexity is defined as the conjunction of integration (conscious experience is unified) and differentiation (we can experience a large number of different states). It is important to recognise that this technical definition of complexity refers only to the state of consciousness, not to its contents. This means that complexity-related measures can give us relevant information about the level of consciousness, yet they remain silent about the corresponding contents and their phenomenology. This is an ethically salient point, since the dimensions of consciousness that appear most relevant to ethical decision-making are those related to subjective positive and negative experiences. For instance, while it is generally considered ethically neutral how we treat a machine, it is considered ethically wrong to cause negative experiences to other humans or to animals.
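
For readers who want the formal core of this idea, the 1998 paper draws on the neural complexity measure that Tononi, Sporns and Edelman had introduced in 1994. A rough sketch, with notation adapted and simplified here: for a system X of n units, neural complexity sums, over subset sizes k, the average mutual information between subsets of that size and the rest of the system.

```latex
% Neural complexity (after Tononi, Sporns & Edelman, 1994; notation adapted).
% X_j^k is the j-th subset of k units; the angle brackets average over all
% such subsets; MI is mutual information.
C_N(X) \;=\; \sum_{k=1}^{n/2} \Big\langle \mathrm{MI}\big(X_j^k \,;\, X \setminus X_j^k\big) \Big\rangle_j
```

C_N is low both for fully independent units (differentiation without integration) and for fully homogeneous systems (integration without differentiation), and it peaks only when the two coexist. Note that it quantifies a structural property of the state of the system, not its contents, which is precisely the semantic limitation discussed above.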

Logical challenges arise concerning the justification for referring to complexity in explaining consciousness. This justification usually takes one of two forms: it is either bottom-up (from data to theory) or top-down (from phenomenology to physical structure). Both raise specific issues.

Bottom-up: Starting from empirical data indicating that particular brain structures or functions correlate with particular conscious states, theoretical conclusions are inferred. More specifically, since the brains of subjects who are manifestly conscious exhibit complex (integrated and differentiated) patterns, we are supposedly justified in inferring that complexity indexes consciousness. This conclusion is a sound inference to the best explanation, but the fact that a conscious state correlates with a complex brain pattern in healthy subjects does not justify generalizing it to all possible conditions (for example, disorders of consciousness), and it does not logically imply that complexity is a necessary and/or sufficient condition for consciousness.

Top-down: Starting from certain characteristics of personal experience, we are supposedly justified in inferring corresponding characteristics of the underlying physical brain structure. More specifically, if some conscious experience is complex in the technical sense of being both integrated and differentiated, we are supposedly justified in inferring that the correlated brain structures must be complex in the same technical sense. This conclusion does not seem logically justified unless we start from the assumption that consciousness and the corresponding physical brain structures must be similarly structured. Otherwise it is logically possible that conscious experience is complex while the corresponding brain structure is not, and vice versa. In other words, it does not appear justified to infer that, since our conscious experience is integrated and differentiated, the corresponding brain structure must be integrated and differentiated. This is a possibility, but not a necessity.

The abovementioned theoretical challenges do not deny the practical utility of complexity as a relevant measure in specific clinical contexts, for example to quantify residual consciousness in patients with disorders of consciousness. What is at stake is the explanatory status of the notion. Even if we question complexity as a key factor in explaining consciousness, we can still acknowledge that it is practically relevant and useful, for example in the clinic. In other words, while complexity as an explanatory category raises serious conceptual challenges that remain to be faced, at the practical level it represents one of the most promising tools we have to date for improving the detection of consciousness and for implementing effective therapeutic strategies.
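
To make the practical side concrete: clinically used measures such as the perturbational complexity index (PCI) essentially ask how incompressible the brain’s response to a magnetic pulse is, using a Lempel-Ziv compression scheme. The following Python sketch illustrates only that core idea; it uses a simple LZ78-style phrase count on toy signals, not the exact algorithm, preprocessing or thresholds of the published index.

```python
import numpy as np

def lz78_phrase_count(bits: str) -> int:
    """Count distinct phrases in an LZ78-style parsing of a binary string.

    More phrases means less compressible, hence more "complex"
    in the Lempel-Ziv sense.
    """
    phrases, current = set(), ""
    for ch in bits:
        current += ch
        if current not in phrases:
            phrases.add(current)  # a new phrase ends here; start the next one
            current = ""
    return len(phrases) + (1 if current else 0)  # count a trailing partial phrase

def binarize(signal: np.ndarray) -> str:
    """Threshold a 1-D signal at its median, as a crude stand-in for the
    binarization step used in perturbational complexity measures."""
    return "".join("1" if above else "0" for above in signal > np.median(signal))

rng = np.random.default_rng(seed=0)
predictable = np.sin(np.linspace(0, 20 * np.pi, 1000))  # regular oscillation
irregular = rng.normal(size=1000)                       # white noise

print(lz78_phrase_count(binarize(predictable)))  # low count: highly compressible
print(lz78_phrase_count(binarize(irregular)))    # high count: incompressible
```

Note that raw Lempel-Ziv complexity alone is maximal for pure noise. Measures like PCI therefore apply the compression step only to the statistically significant, deterministic part of the evoked response, so that a high score requires both integration and differentiation rather than mere randomness.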

I assume that Giulio Tononi and Gerald Edelman were hoping that their theory about the connection between consciousness and complexity finally would erase the embarrassing ambiguity of consciousness research, but the deep theoretical challenges suggest that we have to live with the resemblance to the double-faced god Janus for a while longer.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Tononi, G. and G. M. Edelman. 1998. Consciousness and complexity. Science 282(5395): 1846-1851.

We like critical thinking

Are you conscious? Looking for reliable indicators

How can we be sure that a person in front of us is conscious? This might seem like a naïve question, but it has actually given rise to one of the trickiest and most intriguing philosophical problems, classically known as “the other minds problem.”

Yet this is more than just a philosophical game: reliable detection of conscious activity is among the main neuroscientific and technological enterprises today. Moreover, it is a problem that touches our daily lives. Think, for instance, of animals: we are (at least today) inclined to attribute a certain level of consciousness to them, depending on the behavioural complexity they exhibit. Or think of Artificial Intelligence, which exhibits astonishing practical abilities, even superior to humans in some specific contexts.

Both examples above raise a fundamental question: can we rely on behaviour alone in order to attribute consciousness? Is that sufficient?

It is now clear that it is not. The case of patients with devastating neurological impairments, like disorders of consciousness (unresponsive wakefulness syndrome, minimally conscious state, and cognitive-motor dissociation), is highly illustrative. A number of these patients might retain residual conscious abilities although they are unable to show them behaviourally. In addition, subjects with locked-in syndrome have a fully conscious mind even though they exhibit no behaviours other than blinking.

We can conclude that absence of behavioural evidence for consciousness is not evidence for the absence of consciousness. If so, what other indicators can we rely on in order to attribute consciousness?

The identification of indicators of consciousness is necessarily both a conceptual and an empirical task: we need a clear idea of what to look for in order to define appropriate empirical strategies. Accordingly, we (a group of two philosophers and one neuroscientist) conducted joint research, eventually publishing a list of six indicators of consciousness. These indicators do not rely only on behaviour, but can also be assessed through technological and clinical approaches:

  1. Goal directed behaviour (GDB) and model-based learning. In GDB, I am driven by the expected consequences of my action, and I know that my action is causal in obtaining a desirable outcome. Model-based learning depends on my ability to have an explicit model of myself and of the world surrounding me.
  2. Brain anatomy and physiology. Since the consciousness of mammals depends on the integrity of particular cerebral systems (i.e., thalamocortical systems), it is reasonable to think that similar structures indicate the presence of consciousness.
  3. Psychometrics and meta-cognitive judgement. If I can detect and discriminate stimuli, and can make meta-cognitive judgements about the perceived stimuli, I am probably conscious (a minimal quantitative sketch of such a psychometric measure follows this list).
  4. Episodic memory. If I can remember events (“what”) I experienced at a particular place (“where”) and time (“when”), I am probably conscious.
  5. Acting out one’s subjective, situational survey: illusion and multistable perception. If I am susceptible to illusions and perceptual ambiguity, I am probably conscious.
  6. Acting out one’s subjective, situational survey: visuospatial behaviour. Our last proposed indicator of consciousness is the ability to perceive objects as stably positioned, even when I move in my environment and scan it with my eyes.
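
As a concrete illustration of the psychometric component of indicator 3, here is a sketch of the classic signal-detection index d′, which quantifies how well a subject discriminates stimuli from noise. The correction and the numbers are illustrative assumptions for this post, not taken from our paper.

```python
from scipy.stats import norm

def d_prime(hits: int, misses: int,
            false_alarms: int, correct_rejections: int) -> float:
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction keeps the z-transform finite when a rate
    would otherwise be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# A subject who detects most targets while rarely responding to noise alone:
print(d_prime(hits=80, misses=20, false_alarms=10, correct_rejections=90))  # ~2.1
```

A d′ reliably above zero indicates discrimination. The meta-cognitive part of the indicator then asks a further, second-order question: can the subject also judge the accuracy of their own detections, for example through confidence ratings?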

This list is conceived to be provisional and heuristic but also operational: it is not a definitive answer to the problem, but it is sufficiently concrete to help identify consciousness in others.

The second step in our task is to explore the clinical relevance of the indicators and their ethical implications. For this reason, we selected disorders of consciousness as a case study. We are now working together with cognitive and clinical neuroscientists, as well as computer scientists and modellers, in order to explore the potential of the indicators to quantify the extent to which consciousness is present in affected patients, and ultimately to improve diagnostic and prognostic accuracy. The results of this research will be published in what the Human Brain Project Simulation Platform defines as a “live paper”: an interactive paper that allows readers to download, visualize or simulate the presented results.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Pennartz CMA, Farisco M and Evers K (2019) Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach. Front. Syst. Neurosci. 13:25. doi: 10.3389/fnsys.2019.00025

We transcend disciplinary borders