A blog from the Centre for Research Ethics & Bioethics (CRB)


A way out of the Babylonian confusion of tongues in the theorizing of consciousness?

There is today a wide range of competing theories, each in its own way trying to account for consciousness in neurobiological terms. Parallel to this “Babylonian confusion of tongues” in the theorizing of consciousness, and the inability to collaborate that it entails, progress has been made in the empirical study of the brain. Advanced methods for imaging and measuring the brain and its activities map structures and functions that are possibly relevant for consciousness. The problem is that these empirical data once again inspire a wide range of theories about the place of consciousness in the brain.

It has been pointed out that a fragmented intellectual state such as this, where competing schools of thought advocate their own theories based on their own starting points – with no common framework or paradigm within which the proposals can be compared and assessed – is typical of a pre-scientific stage of a possibly nascent science. Given that the divergent theories each claim scientific status, this is of course troubling. But maybe the theories are not as divergent as they seem?

It has been suggested that several of the theories, upon closer analysis, possibly share certain fundamental ideas about consciousness, which could form the basis of a future unified theory. Today I want to recommend an article that self-critically examines this hope for a way out of the Babylonian confusion. If the pursuit of a unified theory of consciousness is not to degenerate into a kind of “manufactured uniformity,” we must first establish that the theories being integrated are indeed comparable in relevant respects. But can we identify such common denominators among the competing theories, which could support the development of an overarching framework for scientific research? That is the question that Kathinka Evers, Michele Farisco and Cyriel Pennartz investigate for some of the most debated neuroscientifically oriented theories of consciousness.

What do the authors conclude? Something surprising! They come to the conclusion that it is actually quite possible to identify a number of common denominators, which show patterns of similarities and differences among the theories, but that this is still not the way to an overall theory of consciousness that supports hypotheses that can be tested experimentally. Why? Partly because the common denominators, such as “information,” are sometimes too general to function as core concepts in research specifically about consciousness. Partly because theories that have common denominators can, after all, be conceptually very different.

The authors therefore suggest, as I understand them, that a more practicable approach could be to develop a common methodological approach to testing hypotheses about relationships between consciousness and the brain. It is perhaps only in the empirical workshop, open to the unexpected, so to speak, that a scientific framework, or paradigm, can possibly begin to take shape. Not by deliberately formulating a unified theory based on the identification of common denominators among competing theories, which risks manufacturing a facade of uniformity.

The article is written in a philosophically open-minded spirit, without ties to specific theories. It can thereby stimulate the creative collaboration that has so far been inhibited by self-absorbed competition between schools of thought. Read the article here: Assessing the commensurability of theories of consciousness: On the usefulness of common denominators in differentiating, integrating and testing hypotheses.

I would like to conclude by mentioning an easily neglected aspect of how scientific paradigms work (according to Thomas Kuhn). A paradigm does not only generate possible explanations of phenomena. It also generates the problems that researchers try to solve within the paradigm. Quantum mechanics and evolutionary biology enabled new questions that made nature problematic in new explorable ways. A possible future paradigm for scientific consciousness research would, if this is correct, not answer the questions about consciousness that baffle us today (at least not without first reinterpreting them). Rather, it would create new, as yet unasked questions, which are explorable within the paradigm that generates them.

The authors of the article may therefore be right that the most fruitful thing at the moment is to ask probing questions that help us delineate what actually lends itself to investigation, rather than to start by manufacturing overall theoretical uniformity. The latter approach would possibly put the cart before the horse.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

K. Evers, M. Farisco, C.M.A. Pennartz, “Assessing the commensurability of theories of consciousness: On the usefulness of common denominators in differentiating, integrating and testing hypotheses,” Consciousness and Cognition, Volume 119, 2024.

This post in Swedish

Minding our language

Taking care of the legacy: curating responsible research and innovation practice

Responsible research and innovation, or RRI as it is often called in EU-project language, is both scholarship and practice. Over the last decade, the Human Brain Project has used structured and strategic approaches to embed responsible research and innovation practices across the project. The efforts to curate the legacy of this work include the development of an online Ethics & Society toolkit. But how does that work? And what does a toolkit need in order to ensure it has a role to play?

A recent paper by Lise Bitsch and Bernd Stahl in Frontiers in Research Metrics and Analytics explores whether this kind of toolkit can help embed the legacy of RRI activities in a large research project. According to them, a toolkit has the potential to play an important role in preserving RRI legacy. But they also point out that this potential can only be realised if organisational structures and funding are in place to make sure that the legacy is retained. Because, like all resources, it needs to be maintained, shared, used, and curated in order to play a role in the long term.

Even though this particular toolkit is designed to integrate insights and practices of responsible research and innovation in the Human Brain Project, there are lessons to be learned for other efforts to ensure the acceptability, desirability and sustainability of processes and outcomes of research and innovation activities. The Human Brain Project is a ten-year European Flagship project that has gone through several phases. Bernd Stahl is the ethics director of the Human Brain Project, and Lise Bitsch has led the project’s responsible research and innovation work stream for the past three years. And there is a lot to be learned: for projects that are considering developing similar tools, they describe the process of designing and developing the toolkit.

But there are parts of the RRI legacy that cannot fit in a toolkit. The impact of the ethical, social and reflective work in the Human Brain Project is visible in governance structures, in how the project manages and handles data, and in its publications and communications. The authors are part of those structures.

In addition to the Ethics & Society toolkit, the work has been published in journals, shared on the Ethics Dialogues blog (where a first version of this post was published) and the HBP Society Twitter handle, offering more opportunities to engage and discuss in the EBRAINS community Ethics & Society space. The capacity building efforts carried out for the project and EBRAINS research infrastructure have been developed into an online ethics & society training resource, and the work with gender and diversity has resulted in a toolkit for equality, diversity and inclusion in project themes and teams.

Read the paper by Bernd Carsten Stahl and Lise Bitsch: Building a responsible innovation toolkit as project legacy.

(A first version of this post was originally published on the Ethics Dialogues blog, March 13, 2023)

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects

Bernd Carsten Stahl and Lise Bitsch: Building a responsible innovation toolkit as project legacy, Frontiers in Research Metrics and Analytics, 13 March 2023, Sec. Research Policy and Strategic Management, Volume 8 – 2023, https://doi.org/10.3389/frma.2023.1112106

Part of international collaborations

A charming idea about consciousness

Some ideas can have such a charm that you only need to hear them once to immediately feel that they are probably true: “there must be some grain of truth in it.” Conspiracy theories and urban myths probably spread in part because of how they manage to charm susceptible human minds by ringing true. It is said that even some states of illness are spread because the idea of the illness has such a strong impact on many of us. In some cases, we only need to hear about the diagnosis to start showing the symptoms, and perhaps even to receive the treatment. But even the idea of diseases spread by ideas has charm, so we should be on our guard.

The temptation to fall for the charm of certain ideas naturally also exists in academia. At the same time, philosophy and science are characterized by self-critical examination of ideas that may sound so attractive that we do not notice the lack of examination. As long as the ideas are limited hypotheses that can in principle be tested, it is relatively easy to correct one’s hasty belief in them. But sometimes these charming ideas consist of grand hypotheses about elusive phenomena that no one knows how to test. People can be so convinced by such ideas that they predict that future science just needs to fill in the details. A dangerous rhetoric to get caught up in, which also has its charm.

Last year I wrote a blog post about a theory at the border between science and philosophy that I would like to characterize as both grand and charming. This is not to say that the theory must be false, just that in our time it may sound immediately convincing. The theory is an attempt to explain an elusive “phenomenon” that perplexes science, namely the nature of consciousness. Many feel that if we could explain consciousness on purely scientific grounds, it would be an enormously significant achievement.

The theory claims that consciousness is a certain mathematically defined form of information processing. Associating consciousness with information is timely, we are immediately inclined to listen. What type of information processing would consciousness be? The theory states that consciousness is integrated information. Integration here refers not only to information being stored as in computers, but to all this diversified information being interconnected and forming an organized whole, where all parts are effectively available globally. If I understand the matter correctly, you can say that the integrated information of a system is the amount of generated information that exceeds the information generated by the parts. The more information a system manages to integrate, the more consciousness the system has.
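To make the arithmetic behind this idea of “information beyond the parts” concrete, here is a small sketch of my own. It is a toy illustration only, not IIT’s actual Phi (which involves minimizing over partitions of a system); it computes the simpler multi-information of two correlated binary units: the information the whole generates beyond what its parts generate independently.

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution given as {outcome: p}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Toy joint distribution over two correlated binary units A and B (illustrative numbers).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginal distribution of each part, obtained by summing out the other unit.
pA, pB = {}, {}
for (a, b), p in joint.items():
    pA[a] = pA.get(a, 0) + p
    pB[b] = pB.get(b, 0) + p

# "Integration" in this toy sense: information generated by the whole beyond
# its independent parts, i.e. the multi-information H(A) + H(B) - H(A,B).
integration = entropy(pA) + entropy(pB) - entropy(joint)
print(round(integration, 3))  # → 0.278
```

If the two units were statistically independent, the measure would be exactly zero; the more their states hang together, the larger it grows. This captures only the crude intuition that an integrated whole “exceeds” its parts informationally, which is all the paragraph above claims.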

What, then, is so charming about the idea that consciousness is integrated information? Well, the idea might seem to fit with how we experience our conscious lives. At this moment you are experiencing multitudes of different sensory impressions, filled with details of various kinds. Visual impressions are mixed with impressions from the other senses. At the same time, however, these sensory impressions are integrated into a unified experience from a single viewpoint, your own. The mathematical theory of information processing where diversification is combined with integration of information may therefore sound attractive as a theory of consciousness. We may be inclined to think: Perhaps it is because the brain processes information in this integrative way that our conscious lives are characterized by a personal viewpoint and all impressions are organized as an ego-centred subjective whole. Consciousness is integrated information!

It becomes even more enticing when it turns out that the theory, called Integrated Information Theory (IIT), contains a calculable measure (Phi) of the amount of integrated information. If the theory is correct, then one would be able to quantify consciousness and give different systems different Phi for the amount of consciousness. Here the idea becomes charming in yet another way. Because if you want to explain consciousness scientifically, it sounds like a virtue if the theory enables the quantification of how much consciousness a system generates. The desire to explain consciousness scientifically can make us extra receptive to the idea, which is a bit deceptive.

In an article in Behavioral and Brain Sciences, Björn Merker, Kenneth Williford and David Rudrauf examine the theory of consciousness as integrated information. The review is detailed and comprehensive. It is followed up by comments from other researchers, and ends with the authors’ response. What the three authors try to show in the article is that even if the brain does integrate information in the sense of the theory, the identification of consciousness with integrated information is mistaken. What the theory describes is efficient network organization, rather than consciousness. Phi is a measure of network efficiency, not of consciousness. What the authors examine in particular is that charming feature I just mentioned: the theory can seem to “fit” with how we experience our conscious lives from a unified ego-centric viewpoint. It is true that integrated information constitutes a “unity” in the sense that many things are joined in a functionally organized way. But that “unity” is hardly the same “unity” that characterizes consciousness, where the unity is your own point of view on your experiences. Effective networks can hardly be said to have a “viewpoint” from a subjective “ego-centre” just because they integrate information. The identification of features of our conscious lives with the basic concepts of the theory is thus hasty, tempting though it may be.

The authors do not deny that the brain integrates information in accordance with the theory. The theory mathematically describes an efficient way to process information in networks with limited energy resources, something that characterizes the brain, the authors point out. But if consciousness is identified with integrated information, then many other systems that process information in the same efficient way would also be conscious. Not only other biological systems besides the brain, but also artifacts such as some large-scale electrical power grids and social networks. Proponents of the theory seem to accept this, but we have no independent reason to suppose that systems other than the brain would have consciousness. Why then insist that other systems are also conscious? Well, perhaps because one is already attracted by the association between the basic concepts of the theory and the organization of our conscious experiences, as well as by the possibility of quantifying consciousness in different systems. The latter may sound like a scientific virtue. But if the identification is false from the beginning, then the virtue appears rather as a departure from science. The theory might flood the universe with consciousness. At least that is how I understand the gist of the article.

Anyone who feels the allure of the theory that consciousness is integrated information should read the careful examination of the idea: The integrated information theory of consciousness: A case of mistaken identity.

The last word has certainly not been said and even charming ideas can turn out to be true. The problem is that the charm easily becomes the evidence when we are under the influence of the idea. Therefore, I believe that the careful discussion of the theory of consciousness as integrated information is urgent. The article is an excellent example of the importance of self-critical examination in philosophy and science.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Merker, B., Williford, K., & Rudrauf, D. (2022). The integrated information theory of consciousness: A case of mistaken identity. Behavioral and Brain Sciences, 45, E41. doi:10.1017/S0140525X21000881

This post in Swedish

We like critical thinking

AI narratives from the Global North

The way we develop, adopt, regulate and accept artificial intelligence is embedded in our societies and cultures. Our narratives about intelligent machines take on a flavour of the art, literature and imaginations of the people who live today, and of those that came before us. But some of us are missing from the stories that are told about thinking machines. A recent paper about forgotten African AI narratives and the future of AI in Africa shines a light on some of the missing narratives.

In the paper, Damian Eke and George Ogoh point to the fact that how artificial intelligence is developed, adopted, regulated and accepted is hugely influenced by socio-cultural, ethical, political, media and historical narratives. But most of the stories we tell about intelligent machines are imagined and conceptualised in the Global North. The paper raises the question of whether this is a problem, and if so, in what way. When machine narratives put the emphasis on technology neutrality, the problem goes beyond AI.

What happens when Global North narratives set the agenda for research and innovation also in the Global South, and what happens more specifically to the agenda for artificial intelligence? The impact is difficult to quantify. But when historical, philosophical, socio-cultural and political narratives from Africa are missing, we need to understand why, and what it might imply. Damian Eke and George Ogoh provide a list of reasons why this is important. One is concern about the state of STEM education (science, technology, engineering and mathematics) in many African countries. Another is the well-documented issue of epistemic injustice: unfair discrimination against people because of prejudices about their knowledge. The dominance of Global North narratives could lead to devaluing the expertise of Africans in the tech community. This brings us to the point of the argument: African socio-cultural, ethical and political contexts and narratives are absent from the global debate about responsible AI.

The paper makes the case for including African AI narratives not only into the research and development of artificial intelligence, but also into the ethics and governance of technology more broadly. Such inclusion would help counter epistemic injustice. If we fail to include narratives from the South into the AI discourse, the development can never be truly global. Moreover, excluding African AI narratives will limit our understanding of how different cultures in Africa conceptualise AI, and we miss an important perspective on how people across the world perceive the risks and benefits of machine learning and AI powered technology. Nor will we understand the many ways in which stories, art, literature and imaginations globally shape those perceptions.

If we want to develop an “AI for good,” it needs to be good for Africa and other parts of the Global South. According to Damian Eke and George Ogoh, it is possible to create a more meaningful and responsible narrative about AI. That requires that we identify and promote people-centred narratives, and that we anchor AI ethics for Africa in African ethical principles, such as ubuntu. But the key for African countries to participate in the AI landscape is a greater focus on STEM education and research. The authors end their paper with a call to improve the diversity of voices in the global discourse about AI. Culturally sensitive and inclusive AI applications would benefit us all, for epistemic injustice is not just a geographical problem. Our view of whose knowledge has value is powered by a broad variety of forms of prejudice.

Damian Eke and George Ogoh are both actively contributing to the Human Brain Project’s work on responsible research and innovation. The Human Brain Project is a European Flagship project providing in-depth understanding of the complex structure and function of the human brain, using interdisciplinary approaches.

Do you want to learn more? Read the article here: Forgotten African AI Narratives and the future of AI in Africa.

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects

Eke D, Ogoh G, Forgotten African AI Narratives and the future of AI in Africa, International Review of Information Ethics, 2022;31(08).

We want to be just

Does the brain make room for free will?

The question of whether we have free will has been debated throughout the ages and everywhere in the world. Can we influence our future or is it predetermined? If everything is predetermined and we lack free will, why should we act responsibly and by what right do we hold each other accountable?

There have been different ideas about what predetermines the future and excludes free will. People have talked about fate and about the gods. Today, we rather imagine that it is about necessary causal relationships in the universe. It seems that the strict determinism of the material world must preclude the free will that we humans perceive ourselves to have. If we really had free will, we think, then nature would have to give us a space of our own to decide in. A causal gap where nature does not determine everything according to its laws, but allows us to act according to our will. But this seems to contradict our scientific world view.

In an article in the journal Intellectica, Kathinka Evers at CRB examines the plausibility of this choice between two extreme positions: either strict determinism that excludes free will, or free will that excludes determinism.

Kathinka Evers approaches the problem from a neuroscientific perspective. This particular perspective has historically tended to support one of the positions: strict determinism that excludes free will. How can the brain make room for free will, if our decisions are the result of electrochemical processes and of evolutionarily developed programs? Is it not right there, in the brain, that our free will is thwarted by material processes that give us no space to act?

Some authors who have written about free will from a neuroscientific perspective have at times explained away freedom as the brain’s user’s illusion: as a necessary illusion, as a fictional construct. Some have argued that since social groups function best when we as individuals assume ourselves to be responsible actors, we must, after all, keep this old illusion alive. Free will is a fiction that works and is needed in society!

This attitude is unsound, says Kathinka Evers. We cannot build our societies on assumptions that contradict our best knowledge. It would be absurd to hold people responsible for actions that they in fact have no ability to influence. At the same time, she agrees that the notion of free will is socially important. But if we are to retain the notion, it must be consistent with our knowledge of the brain.

One of the main points of the article is that our knowledge of the brain could actually provide some room for free will. The brain could function beyond the opposition between indeterminism and strict determinism, some neuroscientific theories suggest. This does not mean that there would be uncaused neural events. Rather, a determinism is proposed where the relationship between cause and effect is variable and contingent, not invariable and necessary, as we commonly assume. As far as I understand, it is about the fact that the brain has been shown to function much more independently, actively and flexibly than in the image of it as a kind of programmed machine. Different incoming nerve signals can stabilize different neural patterns of connections in the brain, which support the same behavioural ability. And the same incoming nerve signal can stabilize different patterns of connections in the brain that result in the same behavioural ability. Despite great variation in how individuals’ neural patterns of connections are stabilized, the same common abilities are supported. This model of the brain is thus deterministic, while being characterized by variability. It describes a kind of kaleidoscopically variable causality in the brain between incoming signals and resulting behaviours and abilities.

Kathinka Evers thus hypothetically suggests that this variability in the brain, if real, could provide empirical evidence that free will is compatible with determinism.

Read the philosophically exciting article here: Variable determinism in social applications: translating science to society

Although Kathinka Evers suggests that a certain amount of free will could be compatible with what we know about the brain, she emphasizes that neuroscience gives us increasingly detailed knowledge about how we are conditioned by inherited programs, for example, during adolescence, as well as by our conditions and experiences in childhood. We should, after all, be cautiously restrained in praising and blaming each other, she concludes the article, referring to the Stoic Epictetus, one of the philosophers who thought about free will and who rather emphasized freedom from the notion of a free will.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Evers, Kathinka (2021). Variable Determinism in Social Applications: Translating Science to Society. In Monier, Cyril & Khamassi, Mehdi (Eds.), Liberty and Cognition, Intellectica, 75, pp. 73-89.

This post in Swedish

We like challenging questions

Artificial intelligence: augmenting intelligence in humans or creating human intelligence in machines?

Sometimes you read articles at the intersection of philosophy and science that contain really exciting visionary thoughts, which are at the same time difficult to really understand and assess. The technical elaboration of the thoughts grows as you read, and in the end you do not know if you are capable of thinking independently about the ideas or if they are about new scientific findings and trends that you lack the expertise to judge.

Today I dare to recommend the reading of such an article. The post must, of course, be short. But the fundamental ideas in the article are so interesting that I hope some readers of this post will also become readers of the article and make a serious attempt to understand it.

What is the article about? It is about an alternative approach to the highest aims and claims in artificial intelligence. Instead of trying to create machines that can do what humans can do, machines with higher-level capacities such as consciousness and morality, the article focuses on the possibility of creating machines that augment the intelligence of already conscious, morally thinking humans. However, this idea is not entirely new. It has existed for over half a century in, for example, cybernetics. So what is new in the article?

Something I myself was struck by was the compassionate voice in the article, which is otherwise not prominent in the AI ​​literature. The article focuses not on creating super-smart problem solvers, but on strengthening our connections with each other and with the world in which we live. The examples that are given in the article are about better moral considerations for people far away, better predictions of natural disasters in a complex climate, and about restoring social contacts in people suffering from depression or schizophrenia.

But perhaps the most original idea in the article is the suggestion that the development of these human self-augmenting machines would draw inspiration from how the brain already maintains contact with its environment. Here one should keep in mind that we are dealing with mathematical models of the brain and with innovative ways of thinking about how the brain interacts with the environment.

It is tempting to see the brain as an isolated organ. But the brain, via the senses and nerve-paths, is in constant dynamic exchange with the body and the world. You would not experience the world if the world did not constantly make new imprints in your brain and you constantly acted on those imprints. This intense interactivity on multiple levels and time scales aims to maintain a stable and comprehensible contact with a surrounding world. The way of thinking in the article reminds me of the concept of a “digital twin,” which I previously blogged about. But here it is the brain that appears to be a neural twin of the world. The brain resembles a continuously updated neural mirror image of the world, which it simultaneously continuously changes.

Here, however, I find it difficult to properly understand and assess the thoughts in the article, especially regarding the mathematical model that is supposed to describe the “adaptive dynamics” of the brain. But as I understand it, the article suggests the possibility of recreating a similar dynamic in intelligent machines, which could enhance our ability to see complex patterns in our environment and be in contact with each other. A little poetically, one could perhaps say that it is about strengthening our neural twinship with the world. A kind of neural-digital twinship with the environment? A digitally augmented neural twinship with the world?

I dare not say more here about the visionary article. Maybe I have already taken too many poetic liberties? I hope that I have at least managed to make you interested enough to read the article and assess it for yourself: Augmenting Human Selves Through Artificial Agents – Lessons From the Brain.

Well, maybe one concluding remark. I mentioned the difficulty of sometimes understanding and assessing visionary ideas that are formulated at the intersection of philosophy and science. Is not that difficulty itself an example of how our contact with the world can sometimes weaken? However, I do not know if I would have been helped by digital intelligence augmentation that quickly took me through the philosophical difficulties that can arise during reading. Some questions seem essentially to require time: that you stop and think!

Giving yourself time to think is a natural way to deepen your contact with reality, known by philosophers for millennia.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Northoff G, Fraser M, Griffiths J, Pinotsis DA, Panangaden P, Moran R and Friston K (2022) Augmenting Human Selves Through Artificial Agents – Lessons From the Brain. Front. Comput. Neurosci. 16:892354. doi: 10.3389/fncom.2022.892354

This post in Swedish

We recommend readings

An ethical strategy for improving the healthcare of brain-damaged patients

How can we improve the clinical care of brain-damaged patients? Individual clinicians, professional and patient associations, and other relevant stakeholders are struggling with this huge challenge.

A crucial step towards a better treatment of these very fragile patients is the elaboration and adoption of agreed-upon recommendations for their clinical treatment, both in emergency and intensive care settings. These recommendations should cover different aspects, from diagnosis to prognosis and rehabilitation plan. Both Europe and the US have issued relevant guidelines on Disorders of Consciousness (DoCs) in order to make clinical practice consistent and ultimately more beneficial to patients.

Nevertheless, these documents risk becoming ineffective or not having sufficient impact if they are not complemented with a clear strategy for operationalizing them. In other words, it is necessary to develop an adequate translation of the guidelines into actual clinical practice.

In a recent article that I wrote with Arleen Salles, we argue that ethics plays a crucial role in elaborating and implementing this strategy. The application of the guidelines is ethically very relevant, as it can directly impact the patients’ well-being, their right to the best possible care, communication between clinicians and family members, and overall shared decision-making. Failure to apply the guidelines in an ethically sound manner may inadvertently lead to unequal and unfair treatment of certain patients.

To illustrate, both documents recommend integrating behavioural and instrumental approaches to improve the diagnostic accuracy of DoCs (such as vegetative state/unresponsive wakefulness syndrome, minimally conscious state, and cognitive-motor dissociation). This recommendation is commendable, but not easy to follow because of a number of shortcomings and limitations in the actual clinical settings where patients with DoCs are diagnosed and treated. For instance, not all “ordinary,” non-research-oriented hospitals have the necessary financial, human, and technical resources to afford the dual approach recommended by the guidelines. The implementation of the guidelines is arguably a complex process, involving several actors at different levels of action (from the administration to the clinical staff, from the finances to the therapy, etc.). Therefore, it is crucial to clearly identify “who is responsible for what” at each level of the implementation process.

For this reason, we propose building a strategy to operationalize the guidelines, based on a clarification of the notion of responsibility. We introduce a Distributed Responsibility Model (DRM), which frames responsibility as multi-level and multi-dimensional. The main tenet of DRM is a shift from an individualistic to a modular understanding of responsibility, where several agents share professional and/or moral obligations across time. Moreover, specific responsibilities are assigned depending on the different areas of activity. In this way, each agent is granted a specific autonomy in relation to their field of activity, and the mutual interaction between different agents is clearly defined. As a result, DRM promotes trust between the various agents.

Neither the European nor the US guidelines explicitly address the issue of implementation in terms of responsibility. We argue that this is a problem, because in situations of scarce resources and financial and technological constraints, it is important to explicitly conceptualize responsibility as a distributed ethical imperative that involves several actors. This will make it easier to identify possible failures at different levels and to implement adequate corrective action.

In short, we identify three main levels of responsibility: institutional, clinical, and interpersonal. At the institutional level, responsibility refers to the obligations of the relevant institution or organization (such as the hospital or the research centre). At the clinical level, responsibility refers to the obligations of the clinical staff. At the interpersonal level, responsibility refers to the involvement of different stakeholders with individual patients (more specifically, institutions, clinicians, and families/surrogates).

Our proposal in the article is thus to combine these three levels, as formalized in DRM, in order to operationalize the guidelines. This can help reduce the gap between the recommendations and actual clinical practice.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, Michele; Salles, Arleen. American and European Guidelines on Disorders of Consciousness: Ethical Challenges of Implementation, Journal of Head Trauma Rehabilitation: April 13, 2022. doi: 10.1097/HTR.0000000000000776

We want solid foundations

How can we detect consciousness in brain-damaged patients?

Detecting consciousness in brain-damaged patients can be a huge challenge and the results are often uncertain or misinterpreted. In a previous post on this blog I described six indicators of consciousness that I introduced together with a neuroscientist and another philosopher. Those indicators were originally developed with animals and AI systems in mind. Our question was: what capacities (deducible from behavior and performance or relevant cerebral underpinnings) make it reasonable to attribute consciousness to these non-human agents? In the same post, I mentioned that we were engaged in a multidisciplinary exploration of the clinical relevance of selected indicators, specifically for testing them on patients with Disorders of Consciousness (DoCs, for instance, Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State, Cognitive-Motor Dissociation). While this multidisciplinary work is still in progress, we recently published an ethical reflection on the clinical relevance of the indicators of consciousness, taking DoCs as a case study.

To recapitulate, indicators of consciousness are conceived as particular capacities that can be deduced from the behavior or cognitive performance of a subject and that serve as a basis for a reasonable inference about the level of consciousness of the subject in question. Importantly, the neural correlates of the relevant behavior or cognitive performance may also make it possible to deduce the indicators of consciousness. This implies the relevance of the indicators to patients with DoCs, who are often unable to behave or to communicate overtly. Responses in the brain can be used to deduce the indicators of consciousness in these patients.

On the basis of this relevance, we illustrate how the different indicators of consciousness might be applied to patients with DoCs, with the final goal of contributing to improving the assessment of their residual conscious activity. In fact, an astonishingly high rate of misdiagnosis still affects this clinical population. It is estimated that up to 40% of patients with DoCs are wrongly diagnosed as being in Vegetative State/Unresponsive Wakefulness Syndrome, while they are actually in a Minimally Conscious State. The difference between these diagnoses is not minor, since they have importantly different prognostic implications, which raises a huge ethical problem.

We also argue for the need to recognize and explore the specific quality of the consciousness possibly retained by patients with DoCs. Because of the devastating damage to their brains, it is likely that their residual consciousness is very different from that of healthy subjects, usually taken as the reference standard in diagnostic classification. To illustrate, while consciousness in healthy subjects is characterized by several distinct sensory modalities (for example, seeing, hearing and smelling), it is possible that in patients with DoCs, conscious contents (if any) are very limited in sensory modalities. These limitations may be evaluated based on the extent of the brain damage and on the patients’ residual behaviors (for instance, sniffing for smelling). Also, consciousness in healthy subjects is characterized by both dynamics and stability: it includes both dynamic changes and short-term stabilization of contents. Again, in the case of patients with DoCs, it is likely that their residual consciousness is very unstable and flickering, without any capacity for stabilization. If we approach patients with DoCs without acknowledging that consciousness is like a spectrum that accommodates different possible shapes and grades, we exclude a priori the possibility of recognizing the peculiarity of the consciousness possibly retained by these patients.

The indicators of consciousness we introduced offer potential help in identifying the specific conscious abilities of these patients. While in this paper we argue for the rationale behind the clinical use of these indicators, and for their relevance to patients with DoCs, we also acknowledge that they open up new lines of research with concrete application to patients with DoCs. As already mentioned, this more applied work is in progress and we are confident that we will be able to present relevant results in the weeks to come.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, M., Pennartz, C., Annen, J. et al. Indicators and criteria of consciousness: ethical implications for the care of behaviourally unresponsive patients. BMC Med Ethics 23, 30 (2022). https://doi.org/10.1186/s12910-022-00770-3

We have a clinical perspective

Fact resistance, human nature and contemplation

Sometimes we all resist facts. I saw a cyclist slip on the icy road. When I asked if it went well, she was on her feet in an instant and denied everything: “I did not fall!” It is human to deny facts. They can hurt and be disturbing.

What are we resisting? The usual answer is that fact-resistant individuals or groups resist facts about the world around us, such as statistics on violent crime, on vaccine side effects, on climate change or on the spread of disease. It then becomes natural to offer resistance to fact resistance by demanding more rigour in the field of knowledge. People should learn to turn more rigorously to the world they live in! The problem is that fact-resistant attitudes do just that. They are almost bewitched by the world and by the causes of what are perceived as outrageous problems in it. And now we too are bewitched by fact resistance and speculate about the causes of this outrageous problem.

Of course, we believe that our opposition is justified. But who does not think so? Legitimate resistance is met by legitimate resistance, and soon the conflict escalates around its double spiral of legitimacy. The possibility of resolving it is blocked by the conflict itself, because all parties are equally legitimate opponents of each other. Everyone hears their own inner voices warning them against acknowledging their mistakes, against acknowledging their uncertainty, against acknowledging their human resistance to reality, as when we fall off the bike and wish it had never happened. The opposing side would immediately seize the opportunity! Soon, our mistake is a scandal on social media. So we do as the person who slipped on the icy road, we deny everything without thinking: “I was not wrong, I had my own facts!” We ignore the fact that life thereby becomes a lie, because our inner voices warn us against acknowledging our uncertainty. We have the right to be recognized, our voices insist, at least as an alternative to the “established view.”

Conflicts give us no time for reflection. Yet, there is really nothing stopping us from sitting down, in the midst of conflict, and resolving it within ourselves. When we give ourselves time to think for ourselves, we are freer to acknowledge our uncertainty and examine our spirals of thought. Of course, this philosophical self-examination does not resolve the conflict between legitimate opponents which escalates around us as increasingly impenetrable and real. It only resolves the conflict within ourselves. But perhaps our thoughtful philosophical voice still gives a hint of how, just by allowing us to soar in uncertainty, we already see the emptiness of the conflict and are free from it?

If we more often dared to soar in uncertainty, if it became more permissible to say “I do not know,” if we listened more attentively to thoughtful voices instead of silencing them with loud knowledge claims, then perhaps fact resistance would also decrease. Perhaps fact resistance is not least resistance to an inner fact. To a single inner fact. What fact? Our insecurity as human beings, which we do not permit ourselves. But if you allow yourself to slip on the icy road, then you do not have to deny that you did!

A more thoughtful way of being human should be possible. We shape the societies that shape us.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

We care about communication

How can neuroethics and AI ethics join their forces?

As I already wrote on this blog, there has been an explosion of AI in recent years. AI affects so many aspects of our lives that it is virtually impossible to avoid interacting with it. Since AI has such an impact, it must be examined from an ethical point of view, for the very basic reason that it can be developed and/or used for both good and evil.

In fact, AI ethics is becoming increasingly popular nowadays. As it is a fairly young discipline, even though it has roots in, for example, digital and computer ethics, questions about its status and methodology remain open. To simplify the debate, the main trend is to conceive of AI ethics in terms of practical ethics, for example, with a focus on the impact of AI on traditional practices in education, work, healthcare, and entertainment, among others. In addition to this practically oriented analysis, there is also attention to the impact of AI on the way we understand our society and ourselves as part of it.

In this debate about the identity of AI ethics, the need for a closer collaboration with neuroethics has been briefly pointed out, but so far no systematic reflection has been made on this need. In a new article, I propose, together with Kathinka Evers and Arleen Salles, an argument to justify the need for closer collaboration between neuroethics and AI ethics. In a nutshell, even though they both have specific identities and their topics do not completely overlap, we argue that neuroethics can complement AI ethics for both content-related and methodological reasons.

Some of the issues raised by AI are related to fundamental questions that neuroethics has explored since its inception. Think, for example, of topics such as intelligence: what does it mean to be intelligent? In what sense can a machine be qualified as an intelligent agent? Could this be a misleading use of words? And what ethical implications can this linguistic habit have, for example, on how we attribute responsibility to machines and to humans? Another issue that is increasingly gaining ground in AI ethics literature, as I wrote on this blog, is the conceivability and the possibility of artificial consciousness. Neuroethics has worked extensively on both intelligence and consciousness, combining applied and fundamental analyses, which can serve as a source of relevant information for AI ethics.

In addition to the above content-related reasons, neuroethics can also provide AI ethics with a methodological model. To illustrate, the kind of conceptual clarification performed in fundamental neuroethics can enrich the identification and assessment of the practical ethical issues raised by AI. More specifically, neuroethics can provide a three-step model of analysis to AI ethics: 1. Conceptual relevance: can specific notions, such as autonomy, be attributed to AI? 2. Ethical relevance: are these specific notions ethically salient (i.e., do they require ethical evaluation)? 3. Ethical value: what is the ethical significance and the related normative implications of these specific notions?

This three-step approach is a promising methodology for ethical reflection about AI which avoids the trap of anthropocentric self-projection, a risk that actually affects both the philosophical reflection on AI and its technical development.

In this way, neuroethics can contribute to avoiding both hypes and disproportionate worries about AI, which are among the biggest challenges facing AI ethics today.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, M., Evers, K. & Salles, A. On the Contribution of Neuroethics to the Ethics and Regulation of Artificial Intelligence. Neuroethics 15, 4 (2022). https://doi.org/10.1007/s12152-022-09484-0

We transcend disciplinary borders
