A blog from the Centre for Research Ethics & Bioethics (CRB)


Why should we try to build conscious AI?

In a recent post on this blog I summarized the main points of a pre-print where I analyzed the prospect of artificial consciousness from an evolutionary perspective. I took the brain and its architecture as a benchmark for addressing the technical feasibility and conceptual plausibility of engineering consciousness in artificial intelligence systems. The pre-print has been accepted and it is now available as a peer-reviewed article online.

In this post I want to focus on one particular point that I analyzed in the paper, and which I think is not always adequately accounted for in the debate about AI consciousness: what are the benefits of pursuing artificial consciousness in the first place, for science and for society at large? Why should we attempt to engineer subjective experience in AI systems? What can we realistically expect from such an endeavour?

There are several possible answers to these questions. At the epistemological level (with reference to what we can know), it is possible that developing artificial systems that replicate some features of our conscious experience could enable us to better understand biological consciousness, through similarities as well as through differences. At the technical level (with reference to what we can do), it is possible that the development of artificial consciousness would be a game-changer in AI, for instance giving AI the capacity for intentionality and theory of mind, and for anticipating the consequences not only of human decisions, but also of its own “actions.” At the societal and ethical level (with reference to our co-existence with others and to what is good and bad for us), especially the latter capabilities (intentionality, theory of mind, and anticipation) could arguably help AI to better inform humans about potential negative impacts of its functioning and use on society, and to help avoid them while favouring positive impacts. Of course, on the negative side, as shown by human history, both intentionality and theory of mind may be used by the AI for negative purposes, for instance for favouring the AI’s own interests or the interests of the limited groups that control it. Human intentionality has not always favoured out-group individuals or species, or indeed the planet as a whole. This point connects to one of the most debated issues in AI ethics, the so-called AI alignment problem: how can we be sure that AI systems conform to human values? How can we make AI aligned with our own interests? And whose values and interests should we take as reference? Cultural diversity is an important and challenging factor to take into account in these reflections.

I think there is also a question that precedes that of AI value alignment: can AI really have values? In other words, is the capacity for evaluation that possibly drives the elaboration of values in AI the same as in humans? And is AI capable of evaluating its own values, including its ethical values, in the reflective way that drives the self-critical elaboration of values in humans and makes us evaluative subjects? In fact, the capacity for evaluation (which may be defined as the sensitivity to reward signals and the ability to discriminate between good and bad things in the world on the basis of specific needs, motivations, and goals) is a defining feature of biological organisms, and of the brain in particular. AI may be programmed to discriminate between what humans consider to be good and bad things in the world, and it is also conceivable that AI will be less dependent on humans in applying this distinction. However, this does not entail that it “evaluates” in the sense that it autonomously performs an evaluation and subjectively experiences its evaluation.

It is possible that an AI system may approximate the diversity of cognitive processes that the brain has access to, for instance the processing of various sensory modalities, while AI remains unable to incorporate the values attributed to the processed information and to its representation, as the human brain can do. In other words, to date AI remains devoid of any experiential content, and for this reason it differs from the human brain in its inability to attribute experiential value to information. This is the fundamental reason why present AI systems lack subjective experience. If we want to refer to needs (which are a prerequisite for the capacity for evaluation), current AI appears limited to epistemic needs, without access to, for example, moral and aesthetic needs. Therefore, the values that AI has, at least so far, been able to develop or be sensitive to are limited to the epistemic level, while morality and aesthetics are beyond our present technological capabilities. I do not deny that overcoming this limitation may be a matter of further technological progress, but for the time being we should carefully consider this limitation in our reflections about whether it is wise to strive for conscious AI systems. If the form of consciousness that we can realistically aspire to engineer today is limited to the cognitive dimension, without any sensitivity to ethical deliberation and aesthetic appreciation, I am afraid that the risk of misusing or exploiting it for selfish purposes is quite high.

One could object that an AI system limited to epistemic values is not really conscious (at least not in a fully human sense). However, the fact remains that its capacity to interact with the world in pursuit of the goals it has been programmed to achieve would be greatly enhanced if it had this cognitive form of consciousness. This increases our responsibility to consider in advance whether conscious AI, even if limited and much more rudimentary than human consciousness, may be for the better or for the worse.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Michele Farisco, Kathinka Evers, Jean-Pierre Changeux. Is artificial consciousness achievable? Lessons from the human brain. Neural Networks, Volume 180, 2024. https://doi.org/10.1016/j.neunet.2024.106714

We like challenging questions

Artificial consciousness and the need for epistemic humility

As I wrote in previous posts on this blog, the discussion about the possibility of engineering an artificial form of consciousness is growing along with the impressive advances of artificial intelligence (AI). Indeed, there are many questions arising from the prospect of an artificial consciousness, including its conceivability and its possible ethical implications. We deal with these kinds of questions as part of an EU multidisciplinary project, which aims to advance towards the development of artificial awareness.

Here I want to describe the kind of approach to the issue of artificial consciousness that I am inclined to consider the most promising. In a nutshell, the research strategy I propose for clarifying the empirical and theoretical issues around the feasibility and conceivability of artificial consciousness consists in starting from the form of consciousness we are familiar with (biological consciousness) and from its correlation with the organ that science has revealed is crucial for it (the brain).

In a recent paper, available as a pre-print, I analysed the question of the possibility of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relationship to consciousness as a benchmark. In other words, to avoid vague and abstract speculations about artificial consciousness, I believe it is necessary to consider the correlation between brain and consciousness that resulted from biological evolution, and use this correlation as a reference model for the technical attempts to engineer consciousness.

In fact, there are several structural and functional features of the human brain that appear to be key for reaching human-like complex conscious experience, which current AI is still limited in emulating or accounting for. Among these are:

  • massive biochemical and neuronal diversity
  • long period of epigenetic development, that is, changes in the brain’s connectivity that, through interaction with the external environment, eventually alter the number of neurons and their connections in the brain network
  • embodied sensorimotor experience of the world
  • spontaneous brain activity, that is, an intrinsic ability to act which is independent of external stimulation
  • autopoiesis, that is, the capacity to constantly reproduce and maintain itself
  • emotion-based reward systems
  • clear distinction between conscious and non-conscious representations, and the consequent unitary and specific properties of conscious representations
  • semantic competence of the brain, expressed in the capacity for understanding
  • the principle of degeneracy, which means that the same neuronal networks may support different functions, leading to plasticity and creativity.

These are just some of the brain features that arguably play a key role for biological consciousness and that may inspire current research on artificial consciousness.

Note that I am not claiming that the way consciousness arises from the brain is in principle the only possible way for consciousness to exist: this would amount to a form of biological chauvinism or anthropocentric narcissism.  In fact, current AI is limited in its ability to emulate human consciousness. The reasons for these limitations are both intrinsic, that is, dependent on the structure and architecture of AI, and extrinsic, that is, dependent on the current stage of scientific and technological knowledge. Nevertheless, these limitations do not logically exclude that AI may achieve alternative forms of consciousness that are qualitatively different from human consciousness, and that these artificial forms of consciousness may be either more or less sophisticated, depending on the perspectives from which they are assessed.

In other words, we cannot exclude in advance that artificial systems are capable of achieving alien forms of consciousness, so different from ours that it may not even be appropriate to continue to call them consciousness, unless we clearly specify what is common and what is different in artificial and human consciousness. The problem is that we are limited in our language as well as in our thinking and imagination. We cannot avoid relying on what is within our epistemic horizon, but we should also avoid the fallacy of hasty generalization. Therefore, we should combine the need to start from the evolutionary correlation between brain and consciousness as a benchmark for artificial consciousness with the need to remain humble and acknowledge the possibility that artificial consciousness may be of its own kind, beyond our view.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Approaching future issues

A way out of the Babylonian confusion of tongues in the theorizing of consciousness?

There is today a wide range of competing theories, each in its own way trying to account for consciousness in neurobiological terms. Parallel to the “Babylonian confusion of tongues” and inability to collaborate that this entails in the theorizing of consciousness, progress has been made in the empirical study of the brain. Advanced methods for imaging and measuring the brain and its activities map structures and functions that are possibly relevant for consciousness. The problem is that these empirical data once again inspire a wide range of theories about the place of consciousness in the brain.

It has been pointed out that a fragmented intellectual state such as this, where competing schools of thought advocate their own theories based on their own starting points – with no common framework or paradigm within which the proposals can be compared and assessed – is typical of a pre-scientific stage of a possibly nascent science. Given that the divergent theories each claim scientific status, this is of course troubling. But maybe the theories are not as divergent as they seem?

It has been suggested that several of the theories, upon closer analysis, possibly share certain fundamental ideas about consciousness, which could form the basis of a future unified theory. Today I want to recommend an article that self-critically examines this hope for a way out of the Babylonian confusion. If the pursuit of a unified theory of consciousness is not to degenerate into a kind of “manufactured uniformity,” we must first establish that the theories being integrated are indeed comparable in relevant respects. But can we identify such common denominators among the competing theories, which could support the development of an overarching framework for scientific research? That is the question that Kathinka Evers, Michele Farisco and Cyriel Pennartz investigate for some of the most debated neuroscientifically oriented theories of consciousness.

What do the authors conclude? Something surprising! They come to the conclusion that it is actually quite possible to identify a number of common denominators, which show patterns of similarities and differences among the theories, but that this is still not the way to an overall theory of consciousness that supports hypotheses that can be tested experimentally. Why? Partly because the common denominators, such as “information,” are sometimes too general to function as core concepts in research specifically about consciousness. Partly because theories that have common denominators can, after all, be conceptually very different.

The authors therefore suggest, as I understand them, that a more practicable approach could be to develop a common methodological approach to testing hypotheses about relationships between consciousness and the brain. It is perhaps only in the empirical workshop, open to the unexpected, so to speak, that a scientific framework, or paradigm, can possibly begin to take shape. Not by deliberately formulating a unified theory based on the identification of common denominators among competing theories, which risks manufacturing a facade of uniformity.

The article is written in a philosophically open-minded spirit, without ties to specific theories. It can thereby stimulate the creative collaboration that has so far been inhibited by self-absorbed competition between schools of thought. Read the article here: Assessing the commensurability of theories of consciousness: On the usefulness of common denominators in differentiating, integrating and testing hypotheses.

I would like to conclude by mentioning an easily neglected aspect of how scientific paradigms work (according to Thomas Kuhn). A paradigm does not only generate possible explanations of phenomena. It also generates the problems that researchers try to solve within the paradigm. Quantum mechanics and evolutionary biology enabled new questions that made nature problematic in new explorable ways. A possible future paradigm for scientific consciousness research would, if this is correct, not answer the questions about consciousness that baffle us today (at least not without first reinterpreting them). Rather, it would create new, as yet unasked questions, which are explorable within the paradigm that generates them.

The authors of the article may therefore be right that the most fruitful thing at the moment is to ask probing questions that help us delineate what actually lends itself to investigation, rather than to start by manufacturing overall theoretical uniformity. The latter approach would possibly put the cart before the horse.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

K. Evers, M. Farisco, C.M.A. Pennartz, “Assessing the commensurability of theories of consciousness: On the usefulness of common denominators in differentiating, integrating and testing hypotheses,” Consciousness and Cognition, Volume 119, 2024.

This post in Swedish

Minding our language

A strategy for a balanced discussion of conscious AI

Science and technology advance so rapidly that it is hard to keep up with them. This is true not only for the general public, but also for the scientists themselves and for scholars from fields like ethics and regulation, who find it increasingly difficult to predict what will come next. Today AI is among the most advanced scientific endeavors, raising both significant expectations and more or less exaggerated worries. This is mainly due to the fact that AI is a concept so emotionally, socially, and politically charged as to make a balanced evaluation very difficult. It is even more so when capacities and features that are considered almost uniquely human, or at least shared with a limited number of other animals, are attributed to AI. This is the case with consciousness.

Recently, there has been a lively debate about the possibility of developing conscious AI. What are the reasons for this great interest? I think it has to do with the mentioned rapid advances in science and technology, as well as new intersections between different disciplines. Specifically, I think that three factors play an important role: the significant advancement in understanding the cerebral bases of conscious perception, the impressive achievements of AI technologies, and the increasing interaction between neuroscience and AI. The latter factor, in particular, resulted in so-called brain-inspired AI, a form of AI that is explicitly modeled on our brains.

This growing interest in conscious AI cannot ignore certain risks, of theoretical, practical, and ethical relevance. Theoretically, there is no shared, overarching theory or definition of consciousness. Discussions about what consciousness is, what the criteria for a good scientific theory should be, and how to compare the various proposed theories of consciousness are still open and difficult to resolve.

Practically, the challenge is how to identify conscious systems. In other words, what indicators reliably show whether a system, either biological or artificial, is conscious?

Finally, at the ethical level, several issues arise. Here the discussion is very lively, with some calling for an international moratorium on all attempts to build artificial consciousness. This extreme position is motivated by the need to avoid any form of suffering, including possibly undetectable artificial forms of suffering. Others question the very reason for working towards conscious AI: why should we open another, likely riskier box, when society cannot really handle the impact of AI, as illustrated by Large Language Models? For instance, chatbots like ChatGPT show an impressive capacity to interact with humans through natural language, which creates a strong feeling that these AI systems have features like consciousness, intentionality, and agency, among others. This attribution of human qualities to AI eventually impacts the way we think about it, including how much weight and value we give to the answers that these chatbots provide.

The two arguments above illustrate possible ethical concerns that can be raised against the development of conscious artificial systems. Yet are these concerns justified? In a recent chapter, I propose a change in the underlying approach to the issue of artificial consciousness. This is to avoid the risk of vague and insufficiently multidimensional analyses. My point is that consciousness is not a unified, abstract entity, but rather like a prism, which includes different dimensions that may each have different levels. Based on a multidimensional view of consciousness, in a previous paper I contributed a list of indicators that are also relevant for identifying consciousness in artificial systems. In principle, it is possible that AI can manifest some dimensions of consciousness (for instance, those related to sophisticated cognitive tasks) while lacking others (for instance, those related to emotional or social tasks). In this way, the indicators provide not only a practical tool for identifying conscious systems, but also an ethical tool to make the discussion on possible conscious AI more balanced and realistic. The question whether some AI is conscious or not cannot be treated as a yes/no question: there are several nuances that make the answer more complex.

Indeed, the indicators mentioned above are affected by a number of limitations, including the fact that they were developed for humans and animals, not specifically for AI. For this reason, research is still ongoing on how to adapt these indicators or possibly develop new indicators specific to AI. If you want to read more, you can find my chapter here: The ethical implications of indicators of consciousness in artificial systems.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Michele Farisco. The ethical implications of indicators of consciousness in artificial systems. Developments in Neuroethics and Bioethics. Available online 1 March 2024. https://doi.org/10.1016/bs.dnb.2024.02.009

We want solid foundations

A new project will explore the prospect of artificial awareness

The neuroethics group at CRB has just started its work as part of a new European research project about artificial awareness. The project is called “Counterfactual Assessment and Valuation for Awareness Architecture” (CAVAA), and is funded for a duration of four years. The consortium is composed of 10 institutions, coordinated by Radboud University in the Netherlands.

The goal of CAVAA is “to realize a theory of awareness instantiated as an integrated computational architecture…, to explain awareness in biological systems and engineer it in technological ones.” Different specific objectives derive from this general goal. First, CAVAA has a robust theoretical component: it relies on a strong theoretical framework. Conceptual reflection on awareness, including its definition and the identification of features that allow its attribution to either biological organisms or artificial systems, is an explicit task of the project. Second, CAVAA is interested in exploring the connection between awareness in biological organisms and its possible replication in artificial systems. The project thus gives much attention to the connection between neuroscience and AI. Third, against this background, CAVAA aims at replicating awareness in artificial settings. Importantly, the project also has a clear ethical responsibility, more specifically about anticipating the potential societal and ethical impact of aware artificial systems.

There are several reasons why a scientific project with a strong engineering and computer science component also has philosophers on board. We are asked to contribute to developing a strong and consistent theoretical account of awareness, including the conceptual conceivability and the technical feasibility of its artificial replication. This is not straightforward, not only because there are many content-related challenges, but also because there are logical traps to avoid. For instance, we should avoid the temptation to validate an empirical statement on the basis of our own theory: this would possibly be tautological or circular.

In addition to this theoretical contribution, we will also collaborate in identifying indicators of awareness and benchmarks for validating the cognitive architecture that will be developed. Finally, we will collaborate in the ethical analysis concerning potential future scenarios related to artificial awareness, such as the possibility of developing artificial moral agents or the need to extend moral rights also to artificial systems.

In the end, there are several potential contributions that philosophy can provide to the scientific attempt to replicate biological awareness in artificial systems. Part of this possible collaboration is the fundamental and provocative question: why should we try to develop artificial awareness at all? What is the expected benefit, should we succeed? This is definitely an open question, with possible arguments for and against attempting such a grand accomplishment.

There is also another question of equal importance, which may justify the effort to identify the necessary and sufficient conditions for artificial systems to become aware, and how to recognize them as such. What if we inadvertently create (or worse: have already created) forms of artificial awareness, but fail to recognize this and treat them as if they were unaware? Such scenarios also confront us with serious ethical issues. So, regardless of our background beliefs about artificial awareness, it is worth investing in thinking about it.

Stay tuned to hear more from CAVAA!

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Part of international collaborations

A charming idea about consciousness

Some ideas can have such a charm that you only need to hear them once to immediately feel that they are probably true: “there must be some grain of truth in it.” Conspiracy theories and urban myths probably spread in part because of how they manage to charm susceptible human minds by ringing true. It is said that even some states of illness are spread because the idea of the illness has such a strong impact on many of us. In some cases, we only need to hear about the diagnosis to start showing the symptoms, and maybe we even receive the treatment. But even the idea of diseases spread by ideas has charm, so we should be on our guard.

The temptation to fall for the charm of certain ideas naturally also exists in academia. At the same time, philosophy and science are characterized by self-critical examination of ideas that may sound so attractive that we do not notice the lack of examination. As long as the ideas are limited hypotheses that can in principle be tested, it is relatively easy to correct one’s hasty belief in them. But sometimes these charming ideas consist of grand hypotheses about elusive phenomena that no one knows how to test. People can be so convinced by such ideas that they predict that future science just needs to fill in the details. A dangerous rhetoric to get caught up in, which also has its charm.

Last year I wrote a blog post about a theory at the border between science and philosophy that I would like to characterize as both grand and charming. This is not to say that the theory must be false, just that in our time it may sound immediately convincing. The theory is an attempt to explain an elusive “phenomenon” that perplexes science, namely the nature of consciousness. Many feel that if we could explain consciousness on purely scientific grounds, it would be an enormously significant achievement.

The theory claims that consciousness is a certain mathematically defined form of information processing. Associating consciousness with information is timely, we are immediately inclined to listen. What type of information processing would consciousness be? The theory states that consciousness is integrated information. Integration here refers not only to information being stored as in computers, but to all this diversified information being interconnected and forming an organized whole, where all parts are effectively available globally. If I understand the matter correctly, you can say that the integrated information of a system is the amount of generated information that exceeds the information generated by the parts. The more information a system manages to integrate, the more consciousness the system has.

What, then, is so charming about the idea that consciousness is integrated information? Well, the idea might seem to fit with how we experience our conscious lives. At this moment you are experiencing multitudes of different sensory impressions, filled with details of various kinds. Visual impressions are mixed with impressions from the other senses. At the same time, however, these sensory impressions are integrated into a unified experience from a single viewpoint, your own. The mathematical theory of information processing where diversification is combined with integration of information may therefore sound attractive as a theory of consciousness. We may be inclined to think: Perhaps it is because the brain processes information in this integrative way that our conscious lives are characterized by a personal viewpoint and all impressions are organized as an ego-centred subjective whole. Consciousness is integrated information!

It becomes even more enticing when it turns out that the theory, called Integrated Information Theory (IIT), contains a calculable measure (Phi) of the amount of integrated information. If the theory is correct, then one would be able to quantify consciousness and give different systems different Phi for the amount of consciousness. Here the idea becomes charming in yet another way. Because if you want to explain consciousness scientifically, it sounds like a virtue if the theory enables the quantification of how much consciousness a system generates. The desire to explain consciousness scientifically can make us extra receptive to the idea, which is a bit deceptive.
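To make the idea of a calculable “whole beyond the parts” measure a little more concrete, here is a minimal sketch in Python of a toy quantity in that spirit: the sum of the entropies of a small system’s parts minus the entropy of the whole. This is only an illustration of the general intuition described above, not the Phi algorithm defined by Integrated Information Theory, and the example systems and function names are my own assumptions.

```python
import itertools
import math

def entropy(dist):
    """Shannon entropy (in bits) of a distribution given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, unit):
    """Marginal distribution of one binary unit in a joint distribution over states."""
    out = {0: 0.0, 1: 0.0}
    for state, p in joint.items():
        out[state[unit]] += p
    return out

def toy_integration(joint, n_units):
    """Sum of the parts' entropies minus the entropy of the whole (multi-information).
    A crude stand-in for 'information generated beyond the parts', not IIT's Phi."""
    return sum(entropy(marginal(joint, u)) for u in range(n_units)) - entropy(joint)

# Two perfectly correlated binary units: the whole carries 1 bit beyond its parts.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair coins: nothing is integrated, and the measure is 0.
independent = {s: 0.25 for s in itertools.product((0, 1), repeat=2)}

print(toy_integration(correlated, 2))   # 1.0
print(toy_integration(independent, 2))  # 0.0
```

Notice that even this toy calculation only tracks how strongly the parts of a system are coupled; nothing in the numbers refers to a subjective point of view, which is precisely the gap that the article discussed below presses on.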

In an article in Behavioral and Brain Sciences, Björn Merker, Kenneth Williford and David Rudrauf examine the theory of consciousness as integrated information. The review is detailed and comprehensive. It is followed up by comments from other researchers, and ends with the authors’ response. What the three authors try to show in the article is that even if the brain does integrate information in the sense of the theory, the identification of consciousness with integrated information is mistaken. What the theory describes is efficient network organization, rather than consciousness. Phi is a measure of network efficiency, not of consciousness. What the authors examine in particular is that charming feature I just mentioned: the theory can seem to “fit” with how we experience our conscious lives from a unified ego-centric viewpoint. It is true that integrated information constitutes a “unity” in the sense that many things are joined in a functionally organized way. But that “unity” is hardly the same “unity” that characterizes consciousness, where the unity is your own point of view on your experiences. Effective networks can hardly be said to have a “viewpoint” from a subjective “ego-centre” just because they integrate information. The identification of features of our conscious lives with the basic concepts of the theory is thus hasty, tempting though it may be.

The authors do not deny that the brain integrates information in accordance with the theory. The theory mathematically describes an efficient way to process information in networks with limited energy resources, something that characterizes the brain, the authors point out. But if consciousness is identified with integrated information, then many other systems that process information in the same efficient way would also be conscious. Not only other biological systems besides the brain, but also artifacts such as some large-scale electrical power grids and social networks. Proponents of the theory seem to accept this, but we have no independent reason to suppose that systems other than the brain would have consciousness. Why then insist that other systems are also conscious? Well, perhaps because one is already attracted by the association between the basic concepts of the theory and the organization of our conscious experiences, as well as by the possibility of quantifying consciousness in different systems. The latter may sound like a scientific virtue. But if the identification is false from the beginning, then the virtue appears rather as a departure from science. The theory might flood the universe with consciousness. At least that is how I understand the gist of the article.

Anyone who feels the allure of the theory that consciousness is integrated information should read the careful examination of the idea: The integrated information theory of consciousness: A case of mistaken identity.

The last word has certainly not been said and even charming ideas can turn out to be true. The problem is that the charm easily becomes the evidence when we are under the influence of the idea. Therefore, I believe that the careful discussion of the theory of consciousness as integrated information is urgent. The article is an excellent example of the importance of self-critical examination in philosophy and science.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Merker, B., Williford, K., & Rudrauf, D. (2022). The integrated information theory of consciousness: A case of mistaken identity. Behavioral and Brain Sciences, 45, E41. doi:10.1017/S0140525X21000881

This post in Swedish

We like critical thinking

How can we detect consciousness in brain-damaged patients?

Detecting consciousness in brain-damaged patients can be a huge challenge and the results are often uncertain or misinterpreted. In a previous post on this blog I described six indicators of consciousness that I introduced together with a neuroscientist and another philosopher. Those indicators were originally developed with animals and AI systems in mind. Our question was: what capacities (deducible from behavior and performance, or from relevant cerebral underpinnings) make it reasonable to attribute consciousness to these non-human agents? In the same post, I mentioned that we were engaged in a multidisciplinary exploration of the clinical relevance of selected indicators, specifically for testing them on patients with Disorders of Consciousness (DoCs, for instance, Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State, and Cognitive-Motor Dissociation). While this multidisciplinary work is still in progress, we recently published an ethical reflection on the clinical relevance of the indicators of consciousness, taking DoCs as a case study.

To recapitulate, indicators of consciousness are conceived as particular capacities that can be deduced from the behavior or cognitive performance of a subject and that serve as a basis for a reasonable inference about the level of consciousness of the subject in question. Importantly, the neural correlates of the relevant behavior or cognitive performance may also make it possible to deduce the indicators of consciousness. This makes the indicators relevant to patients with DoCs, who are often unable to behave or communicate overtly. Responses in the brain can be used to deduce the indicators of consciousness in these patients.

On the basis of this relevance, we illustrate how the different indicators of consciousness might be applied to patients with DoCs, with the final goal of helping to improve the assessment of their residual conscious activity. In fact, a still astonishing rate of misdiagnosis affects this clinical population. It is estimated that up to 40% of patients with DoCs are wrongly diagnosed as being in Vegetative State/Unresponsive Wakefulness Syndrome, while they are actually in a Minimally Conscious State. The difference between these diagnoses is not minimal, since they have importantly different prognostic implications, which raises a huge ethical problem.

We also argue for the need to recognize and explore the specific quality of the consciousness possibly retained by patients with DoCs. Because of the devastating damage to their brains, it is likely that their residual consciousness is very different from that of healthy subjects, who are usually taken as the reference standard in diagnostic classification. To illustrate, while consciousness in healthy subjects is characterized by several distinct sensory modalities (for example, seeing, hearing and smelling), it is possible that in patients with DoCs, conscious contents (if any) are very limited in sensory modalities. These limitations may be evaluated based on the extent of the brain damage and on the patients’ residual behaviors (for instance, sniffing for smelling). Also, consciousness in healthy subjects is characterized by both dynamics and stability: it includes both dynamic changes and short-term stabilization of contents. Again, in the case of patients with DoCs, it is likely that their residual consciousness is very unstable and flickering, without any capacity for stabilization. If we approach patients with DoCs without acknowledging that consciousness is like a spectrum that accommodates different possible shapes and grades, we exclude a priori the possibility of recognizing the peculiarity of the consciousness possibly retained by these patients.

The indicators of consciousness we introduced offer potential help in identifying the specific conscious abilities of these patients. While in this paper we argue for the rationale behind the clinical use of these indicators, and for their relevance to patients with DoCs, we also acknowledge that they open up new lines of research with concrete application to patients with DoCs. As already mentioned, this more applied work is in progress and we are confident of being able to present relevant results in the weeks to come.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, M., Pennartz, C., Annen, J. et al. Indicators and criteria of consciousness: ethical implications for the care of behaviourally unresponsive patients. BMC Med Ethics 23, 30 (2022). https://doi.org/10.1186/s12910-022-00770-3

We have a clinical perspective

Can subjectivity be explained objectively?

The notion of a conscious universe, animated by unobservable experiences, is today presented almost as a scientific hypothesis. How is that possible? Do cosmologists’ hypotheses that the universe is filled with dark matter and dark energy contribute to making the idea of a universe filled with “dark consciousness” almost credible?

I ask the question because I myself am amazed at how the notion that elementary particles have elementary experiences suddenly has become academically credible. The idea that consciousness permeates reality is usually called panpsychism and is considered to have been represented by several philosophers in history. The alleged scientific status of panpsychism is justified today by emphasizing two classic philosophical failures to explain consciousness. Materialism has not succeeded in explaining how consciousness can arise from non-conscious physical matter. Dualism has failed to explain how consciousness, if it is separate from matter, can interact with physical reality.

Against this discouraging background, panpsychism is presented as an attractive, even elegant solution to the problem of consciousness. The hypothesis is that consciousness is hidden in the universe as a fundamental non-observable property of matter. Proponents of this elegant solution suggest that this “dark consciousness,” which permeates the universe, is extremely modest. Consciousness is present in every elementary particle in the form of unimaginably simple elementary experiences. These insignificant experiences are united and strengthened in the brain’s nervous system, giving rise to what we are familiar with as our powerful human consciousness, with its stormy feelings and thoughts.

However, this justification of panpsychism as an elegant solution to a big scientific problem presupposes that there really is a big scientific problem to “explain consciousness.” Is not the starting point a bit peculiar, that even subjectivity must be explained as an objective phenomenon? Even dualism tends to objectify consciousness, since it presents consciousness as a parallel universe to our physical universe.

The alternative explanations are thus all equally objectifying. Either subjectivity is reduced to purely material processes, or subjectivity is explained as a mental parallel universe, or subjectivity is hypostasized as “dark consciousness” that pervades the universe: as elementary experiential qualities of matter. Can we not let subjectivity be subjectivity and objectivity be objectivity?

Once upon a time there was a philosopher named Immanuel Kant. He saw how our constantly objectifying subjectivity turns into an intellectual trap, when it tries to understand itself without limiting its own objectifying approach to all questions. We then resemble cats that hopelessly chase their own tails: either by spinning to the right or by spinning to the left. Both directions are equally frustrating. Is there an elegant solution to the spinning cat’s problem? Now, I do not want to claim that Kant definitely exposed the “hard problem” of consciousness as an intellectual trap, but he pointed out the importance of self-critically examining our projective, objectifying way of functioning. If we lived as expansively as we explain everything objectively, we would soon exhaust the entire planet… is not that exactly what we do?

During a philosophy lecture, I tried to show the students how we can be trapped by apparent problems, by pseudo-problems that of course are not scientific problems, since they make us resemble cats chasing their own tails without realizing the unrealizability of the task. One student did not like what she perceived as an arbitrary limitation of the enormous achievements of science, so she objected: “But if it is the task of science to explain all big problems, then it must attempt to explain these riddles as well.” The objection is similar to the motivation of panpsychism, where it is assumed that it is the task of science to explain everything objectively, even subjectivity, no matter how hopelessly the questions spin in our heads.

The spinning cat’s problem has a simple solution: stop chasing the tail. Humans, on the other hand, need to clearly see the hopelessness of their spinning in order to stop it. Therefore, humans need to philosophize in order to live well on this planet.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

If you want to read more about panpsychism, here are two links:

Does consciousness pervade the universe?

The idea that everything from spoons to stones is conscious is gaining academic credibility

This post in Swedish

We challenge habits of thought

Consciousness and complexity: theoretical challenges for a practically useful idea

Contemporary research on consciousness is ambiguous, like the double-faced god Janus. On the one hand, it has achieved impressive practical results. We can today detect conscious activity in the brain for a number of purposes, including better therapeutic approaches to people affected by disorders of consciousness such as coma, vegetative state and minimally conscious state. On the other hand, the field is marked by a deep controversy about methodology and basic definitions. As a result, we still lack an overarching theory of consciousness, that is to say, a theoretical account that scholars agree upon.

Developing a common theoretical framework is recognized as increasingly crucial to understanding consciousness and assessing related issues, such as emerging ethical issues. The challenge is to find a common ground among the various experimental and theoretical approaches. A strong candidate that is achieving increasing consensus is the notion of complexity. The basic idea is that consciousness can be explained as a particular kind of neural information processing. The idea of associating consciousness with complexity was originally suggested by Giulio Tononi and Gerald Edelman in a 1998 paper titled Consciousness and Complexity. Since then, several papers have explored its potential as a key to a common understanding of consciousness.

Despite the increasing popularity of the notion, there are some theoretical challenges that need to be faced, particularly concerning the supposed explanatory role of complexity. These challenges are not only philosophically relevant. They might also affect the scientific reliability of complexity and the legitimacy of invoking this concept in the interpretation of emerging data and in the elaboration of scientific explanations. In addition, the theoretical challenges have a direct ethical impact, because an unreliable conceptual assumption may lead to misplaced ethical choices. For example, we might wrongly assume that a patient with low complexity is not conscious, or vice-versa, eventually making medical decisions that are inappropriate to the actual clinical condition.

The claimed explanatory power of complexity is challenged in two main ways: semantically and logically. Let us take a quick look at both.

Semantic challenges arise from the fact that complexity is such a general and open-ended concept. It lacks a shared definition among different people and different disciplines. This open-ended generality and lack of definition can be a barrier to a common scientific use of the term, which may impact its explanatory value in relation to consciousness. In the landmark paper by Tononi and Edelman, complexity is defined as the sum of integration (conscious experience is unified) and differentiation (we can experience a large number of different states). It is important to recognise that this technical definition of complexity refers only to the state of consciousness, not to its contents. This means that complexity-related measures can give us relevant information about the level of consciousness, yet they remain silent about the corresponding contents and their phenomenology. This is an ethically salient point, since the dimensions of consciousness that appear most relevant to making ethical decisions are those related to subjective positive and negative experiences. For instance, while it is generally considered as ethically neutral how we treat a machine, it is considered ethically wrong to cause negative experiences to other humans or to animals.

Logical challenges arise about the justification for referring to complexity in explaining consciousness. This justification usually takes one of two alternative forms. The justification is either bottom-up (from data to theory) or top-down (from phenomenology to physical structure). Both raise specific issues.

Bottom-up: Starting from empirical data indicating that particular brain structures or functions correlate to particular conscious states, relevant theoretical conclusions are inferred. More specifically, since the brains of subjects that are manifestly conscious exhibit complex patterns (integrated and differentiated patterns), we are supposed to be justified to infer that complexity indexes consciousness. This conclusion is a sound inference to the best explanation, but the fact that a conscious state correlates with a complex brain pattern in healthy subjects does not justify its generalisation to all possible conditions (for example, disorders of consciousness), and it does not logically imply that complexity is a necessary and/or sufficient condition for consciousness.

Top-down: Starting from certain characteristics of personal experience, we are supposed to be justified to infer corresponding characteristics of the underlying physical brain structure. More specifically, if some conscious experience is complex in the technical sense of being both integrated and differentiated, we are supposed to be justified to infer that the correlated brain structures must be complex in the same technical sense. This conclusion does not seem logically justified unless we start from the assumption that consciousness and corresponding physical brain structures must be similarly structured. Otherwise it is logically possible that conscious experience is complex while the corresponding brain structure is not, and vice versa. In other words, it does not appear justified to infer that since our conscious experience is integrated and differentiated, the corresponding brain structure must be integrated and differentiated. This is a possibility, but not a necessity.

The abovementioned theoretical challenges do not deny the practical utility of complexity as a relevant measure in specific clinical contexts, for example, to quantify residual consciousness in patients with disorders of consciousness. What is at stake is the explanatory status of the notion. Even if we question complexity as a key factor in explaining consciousness, we can still acknowledge that complexity is practically relevant and useful, for example, in the clinic. In other words, while complexity as an explanatory category raises serious conceptual challenges that remain to be faced, complexity represents at the practical level one of the most promising tools that we have to date for improving the detection of consciousness and for implementing effective therapeutic strategies.

I assume that Giulio Tononi and Gerald Edelman were hoping that their theory about the connection between consciousness and complexity finally would erase the embarrassing ambiguity of consciousness research, but the deep theoretical challenges suggest that we have to live with the resemblance to the double-faced god Janus for a while longer.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Tononi, G. and G. M. Edelman. 1998. Consciousness and complexity. Science 282(5395): 1846-1851.

We like critical thinking

Can AI be conscious? Let us think about the question

Artificial Intelligence (AI) has achieved remarkable results in recent decades, especially thanks to the refinement of an old and long-neglected technology called Deep Learning (DL), a class of machine learning algorithms. Some achievements of DL had a significant impact on public opinion thanks to extensive media coverage, like the case of the program AlphaGo, which defeated the Go world champion Lee Sedol, and of its successor AlphaGo Zero.

This triumph of AlphaGo was a kind of profane consecration of AI’s operational superiority in an increasing number of tasks. This manifest superiority of AI gave rise to mixed feelings in human observers: the pride of being its creator; admiration for what it was able to do; fear of what it might eventually learn to do.

AI research has generated a linguistic and conceptual process of re-thinking traditionally human features, stretching their meaning or even reinventing their semantics in order to attribute these traits also to machines. Think of how learning, experience, training, prediction, to name just a few, are attributed to AI. Even if they have a specific technical meaning among AI specialists, lay people tend to interpret them within an anthropomorphic view of AI.

One human feature in particular is considered the Holy Grail when AI is interpreted according to an anthropomorphic pattern: consciousness. The question is: can AI be conscious? It seems to me that we can answer this question only after considering a number of preliminary issues.

First we should clarify what we mean by consciousness. In philosophy and in cognitive science, there is a useful distinction, originally introduced by Ned Block, between access consciousness and phenomenal consciousness. The first refers to the interaction between different mental states, particularly the availability of one state’s content for use in reasoning and rationally guiding speech and action. In other words, access consciousness refers to the possibility of using what I am conscious of. Phenomenal consciousness refers to the subjective feeling of a particular experience, “what it is like to be” in a particular state, to use the words of Thomas Nagel. So, in what sense of the word “consciousness” are we asking if AI can be conscious?

To illustrate how the sense in which we choose to talk about consciousness makes a difference in the assessment of the possibility of conscious AI, let us take a look at an interesting article written by Stanislas Dehaene, Hakwan Lau and Sid Kouider. They frame the question of AI consciousness within the Global Neuronal Workspace Theory, one of the leading contemporary theories of consciousness. As the authors write, according to this theory, conscious access corresponds to the selection, amplification, and global broadcasting of particular information, selected for its salience or relevance to current goals, to many distant areas. More specifically, Dehaene and colleagues explore the question of conscious AI along two lines within an overall computational framework:

  1. Global availability of information (the ability to select, access, and report information)
  2. Metacognition (the capacity for self-monitoring and confidence estimation).

Their conclusion is that AI might implement the first meaning of consciousness, while it currently lacks the necessary architecture for the second one.
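As a rough illustration of these two lines, here is a minimal sketch in Python. It is my own toy example, not the model proposed by Dehaene and colleagues: one selected piece of information is made globally available to several otherwise separate functions (line 1), and the system also returns a simple confidence estimate as a stand-in for self-monitoring (line 2). All names and the confidence heuristic are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    label: str
    evidence: float  # strength of the supporting signal, between 0 and 1

def select_for_broadcast(percepts):
    """Pick the most salient percept; its content becomes globally available."""
    return max(percepts, key=lambda p: p.evidence)

def verbal_report(p: Percept) -> str:
    """One consumer of the broadcast content: reporting."""
    return f"I see a {p.label}."

def plan_action(p: Percept) -> str:
    """Another consumer: action planning."""
    return f"reach for the {p.label}"

def confidence(p: Percept) -> float:
    """Toy self-monitoring: a confidence estimate about the selected content."""
    return p.evidence

percepts = [Percept("cup", 0.8), Percept("shadow", 0.3)]
winner = select_for_broadcast(percepts)          # global availability
print(verbal_report(winner))
print(plan_action(winner))
print(f"confidence: {confidence(winner):.2f}")   # metacognition
```

Whether such information-processing sketches capture anything of consciousness itself, rather than merely of function, is exactly what is at stake in what follows.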

As mentioned, the premise of their analysis is a computational view of consciousness. In other words, they choose to reduce consciousness to specific types of information-processing computations. We can legitimately ask whether such a choice covers the richness of consciousness, particularly whether a computational view can account for the experiential dimension of consciousness.

This shows how the main obstacle in assessing the question whether AI can be conscious is a lack of agreement about a theory of consciousness in the first place. For this reason, rather than asking whether AI can be conscious, maybe it is better to ask what might indicate that AI is conscious. This brings us back to the indicators of consciousness that I wrote about in a blog post some months ago.

Another important preliminary issue to consider, if we want to seriously address the possibility of conscious AI, is whether we can use the same term, “consciousness,” to refer to a different kind of entity: a machine instead of a living being. Should we expand our definition to include machines, or should we rather create a new term to denote it? I personally think that the term “consciousness” is too charged, from several different perspectives, including ethical, social, and legal perspectives, to be extended to machines. Using the term to qualify AI risks extending it so far that it eventually becomes meaningless.

If we create AI that manifests abilities that are similar to those that we see as expressions of consciousness in humans, I believe we need a new language to denote and think about it. Otherwise, important preliminary philosophical questions risk being dismissed or lost sight of behind a conceptual veil of possibly superficial linguistic analogies.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

We want solid foundations
