A blog from the Centre for Research Ethics & Bioethics (CRB)

Can AI be conscious? Let us think about the question

Artificial Intelligence (AI) has achieved remarkable results in recent decades, especially thanks to the refinement of an old and long-neglected technology called Deep Learning (DL), a class of machine learning algorithms. Some achievements of DL had a significant impact on public opinion thanks to extensive media coverage, like the case of the program AlphaGo, which defeated the Go champion Lee Sedol, and its successor AlphaGo Zero, which learned the game through self-play alone and surpassed the original AlphaGo.

This triumph of AlphaGo was a kind of secular consecration of AI's operational superiority in a growing number of tasks. This manifest superiority gave rise to mixed feelings in human observers: the pride of being its creator; admiration for what it was able to do; fear of what it might eventually learn to do.

AI research has generated a linguistic and conceptual process of re-thinking traditionally human features, stretching their meaning or even reinventing their semantics in order to attribute these traits to machines as well. Think of how learning, experience, training, and prediction, to name just a few, are attributed to AI. Even though these terms have a specific technical meaning among AI specialists, lay people tend to interpret them within an anthropomorphic view of AI.

One human feature in particular is considered the Holy Grail when AI is interpreted according to an anthropomorphic pattern: consciousness. The question is: can AI be conscious? It seems to me that we can answer this question only after considering a number of preliminary issues.

First we should clarify what we mean by consciousness. In philosophy and in cognitive science, there is a useful distinction, originally introduced by Ned Block, between access consciousness and phenomenal consciousness. The first refers to the interaction between different mental states, particularly the availability of one state’s content for use in reasoning and rationally guiding speech and action. In other words, access consciousness refers to the possibility of using what I am conscious of. Phenomenal consciousness refers to the subjective feeling of a particular experience, “what it is like to be” in a particular state, to use the words of Thomas Nagel. So, in what sense of the word “consciousness” are we asking if AI can be conscious?

To illustrate how the sense in which we choose to talk about consciousness makes a difference when assessing the possibility of conscious AI, let us take a look at an interesting article by Stanislas Dehaene, Hakwan Lau and Sid Kouider. They frame the question of AI consciousness within the Global Neuronal Workspace Theory, one of the leading contemporary theories of consciousness. According to this theory, the authors write, conscious access corresponds to the selection, amplification, and global broadcasting, to many distant brain areas, of particular information, selected for its salience or relevance to current goals. More specifically, Dehaene and colleagues explore the question of conscious AI along two lines within an overall computational framework:

  1. Global availability of information (the ability to select, access, and report information)
  2. Metacognition (the capacity for self-monitoring and confidence estimation)

Their conclusion is that AI might implement the first meaning of consciousness, while it currently lacks the necessary architecture for the second one.

As mentioned, the premise of their analysis is a computational view of consciousness. In other words, they choose to reduce consciousness to specific types of information-processing computations. We can legitimately ask whether such a choice covers the richness of consciousness, particularly whether a computational view can account for the experiential dimension of consciousness.

This shows that the main obstacle to assessing whether AI can be conscious is the lack of agreement on a theory of consciousness in the first place. For this reason, rather than asking whether AI can be conscious, it may be better to ask what might indicate that AI is conscious. This brings us back to the indicators of consciousness that I wrote about in a blog post some months ago.

Another important preliminary issue to consider, if we want to seriously address the possibility of conscious AI, is whether we can use the same term, “consciousness,” to refer to a different kind of entity: a machine instead of a living being. Should we expand our definition to include machines, or should we rather coin a new term for the machine case? I personally think that the term “consciousness” is too charged, ethically, socially, and legally, to be extended to machines. Using the term to qualify AI risks stretching it so far that it eventually becomes meaningless.

If we create AI that manifests abilities that are similar to those that we see as expressions of consciousness in humans, I believe we need a new language to denote and think about it. Otherwise, important preliminary philosophical questions risk being dismissed or lost sight of behind a conceptual veil of possibly superficial linguistic analogies.

Written by…

Michele Farisco, Postdoctoral Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.


3 Comments

  1. Ernst Mecke

    I have been interested in the question of consciousness for more than 20 years, but I tend to tackle it from my professional basis, which is biology (I am a staunch Darwinist myself). To me, the basic idea of consciousness is that it is a brain activity which keeps an impression “alive”, i.e. available for use in later activities (be it only for comparing a previous state of a situation with a later one, which would make it possible to estimate what will happen next). This would mean that having a functioning working memory already makes its possessor conscious, and that consciousness is a very valuable tool for survival (in ways far superior to simple reflexes). According to the above article this would be just “access consciousness”, which is certainly already realized in AI (the more impressive examples are found in military technology, e.g. the Aegis system, which is quite fit to handle very Darwinian fights for survival).

    As to “phenomenal consciousness”, there is the fact that, e.g., the central computer of the Aegis system also has to be conscious of the state and working order of the instruments which it has to use and steer in order to survive those Darwinian fights, and this information will be treated not as “information about the attacker to be fought” but rather as “information about myself” (just as our brains are conscious of the arms and legs whose use they have to control). After this hint at “access consciousness”, we have to see the difference between the information which describes the POSITION of an opponent or object to be dealt with (in us this includes the information about WHERE to scratch ourselves) and other information which DESCRIBES an object. These descriptions are delivered by our cognitive system in a language of qualia (e.g., is the apple red enough to be considered ripe for eating?). And all the signals which inform us about our mental/emotional/physiological states without localizing these states or their sources with any precision also appear in a language of qualia.

    Thus, qualia are functionally essential, and they may be so in AI systems too (or at least may be soon). Of course the corresponding “experiences” of the AI systems involved will be as inaccessible to us as the subjective experiences of Nagel’s bat. And considering the different ways in which AI systems and our brains handle information (AI in a very digital way, our brains by nerve-net mechanisms with rather “analogue” outcomes), human imagination will presumably have chronic difficulty accepting “phenomenal consciousness” in AI. But as long as AI is not built to be able to SUFFER, this is perhaps not that important (because it is SUFFERING that we consider very much when discussing ethics).
    Anyway, the point I should like to make is that we should consider our nervous system as something like a machine which has been developed by evolution to help our survival. I do like it that Christof Koch, in his “The Feeling of Life Itself”, describes a method by which one can assess whether a patient with whom communication is impossible is conscious or not. But I do NOT like it that, towards the end of his book, he drifts towards panpsychism (referring to authorities who, in spite of their undeniable intelligence, never had the opportunity to become familiar with the concept of AI). NOR do I think it helpful to leave the discussion of human consciousness to the philosophers while introducing new and different terms for the discussion of the forms of consciousness in AI. Because, as I wrote in an earlier comment on this blog, thinking in technical terms and through technical examples DOES help the understanding of human experiences as well, and medical research (which aims, after all, at the reduction of suffering) also has very good chances of profiting from it.

    • Michele Farisco

      Dear Ernst (if I may),
      Thank you for your thought-provoking comments.
      I agree on the starting point: in fact, I philosophically endorse a form of “biological naturalism”, to use Searle’s expression. I also agree that an evolutionary stance on both life and consciousness is necessary.
      I’m not sure your basic idea about consciousness (i.e., brain activity which keeps information alive/available for later use) actually expresses a necessary and sufficient feature of consciousness: it seems to me that such activity might also be implemented by non-conscious brain operations, unless we constrain or specify it further.
      This is the reason why I think that episodic memory (i.e., memory of events a subject experienced at a particular place and time), rather than working memory, indicates consciousness. Episodic or autobiographical memory is more explicitly related to the kind of multimodal situational survey which I identify with consciousness (an explicit representation of the surroundings that can be exploited for goal-directed behaviours, i.e. actions in which the subject has specific goals and knows that his own action is instrumental to achieving them).
      What you describe as phenomenal consciousness with reference to Aegis seems to me more like a kind of self-monitoring computational activity, which neither logically implies nor requires any form of experience (as expressed by the rough phrase “what it is like to be”, i.e. qualia).
      I would like to stress that while I am sympathetic to your tendency to stretch the meaning of consciousness by referring it, for instance, directly to the brain or even to AI, this is not unproblematic in philosophy, as the mereological fallacy shows. I am not against the logical possibility of artificial qualia, yet I am inclined to think that the difference in structure between AI and biological brains is such as to imply a difference in kind, unless we endorse a form of functionalism according to which what ultimately counts is the function, whatever the structure. So I do not want to play a philosophical game of creating new terms for the sake of self-indulgence; rather, I strive for conceptual clarity, and a preliminary condition for it is naming different things with different names.
      Concerning the point that thinking in technical terms helps the understanding of human experiences, I could not agree more, and that is exactly why I think it is so important to use a consistent technical language that avoids any form of anthropomorphic fallacy. Our thinking needs to be strengthened and our imagination enhanced if we want to get a more realistic view of actual and future AI.

  2. Grant Castillou

    It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

    The thing I find special about the TNGS is the Darwin series of automata, created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I have encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
