In a recent post on this blog I summarized the main points of a pre-print in which I analyzed the prospect of artificial consciousness from an evolutionary perspective. I took the brain and its architecture as a benchmark for addressing the technical feasibility and conceptual plausibility of engineering consciousness in artificial intelligence systems. The pre-print has since been accepted and is now available as a peer-reviewed article online.

In this post I want to focus on one particular point that I analyzed in the paper, and which I think is not always adequately accounted for in the debate about AI consciousness: what are the benefits of pursuing artificial consciousness in the first place, for science and for society at large? Why should we attempt to engineer subjective experience in AI systems? What can we realistically expect from such an endeavour?

There are several possible answers to these questions. At the epistemological level (concerning what we can know), developing artificial systems that replicate some features of our conscious experience could enable us to better understand biological consciousness, through similarities as well as through differences. At the technical level (concerning what we can do), the development of artificial consciousness could be a game-changer in AI, for instance giving AI the capacity for intentionality and theory of mind, and for anticipating the consequences not only of human decisions but also of its own “actions.” At the societal and ethical level (concerning our co-existence with others and what is good and bad for us), the latter capabilities in particular (intentionality, theory of mind, and anticipation) could arguably help AI to better inform humans about the potential negative impacts of its functioning and use on society, and to help avoid them while favouring positive impacts.

Of course, on the negative side, as human history shows, both intentionality and theory of mind may be used by AI for negative purposes, for instance to favour the AI’s own interests or the interests of the limited groups that control it. Human intentionality has not always favoured out-group individuals or species, or indeed the planet as a whole. This point connects to one of the most debated issues in AI ethics, the so-called AI alignment problem: how can we be sure that AI systems conform to human values? How can we make AI aligned with our own interests? And whose values and interests should we take as reference? Cultural diversity is an important and challenging factor to take into account in these reflections.

I think there is also a question that precedes that of AI value alignment: can AI really have values? In other words, is the capacity for evaluation that might drive the elaboration of values in AI the same as in humans? And is AI capable of evaluating its own values, including its ethical values, a reflective process that drives the self-critical elaboration of values in humans and makes us evaluative subjects? In fact, the capacity for evaluation (which may be defined as the sensitivity to reward signals and the ability to discriminate between good and bad things in the world on the basis of specific needs, motivations, and goals) is a defining feature of biological organisms, and specifically of the brain. AI may be programmed to discriminate between what humans consider to be good and bad things in the world, and it is also conceivable that AI will become less dependent on humans in applying this distinction. However, this does not entail that it “evaluates” in the sense that it autonomously performs an evaluation and subjectively experiences its evaluation.

It is possible that an AI system may approximate the diversity of cognitive processes that the brain has access to, for instance the processing of various sensory modalities, while remaining unable to incorporate the values attributed to the processed information and to its representation, as the human brain can do. In other words, to date AI remains devoid of any experiential content, and for this reason it differs from the human brain in its inability to attribute experiential value to information. This is the fundamental reason why present AI systems lack subjective experience. If we want to refer to needs (which are a prerequisite for the capacity for evaluation), current AI appears limited to epistemic needs, without access to, for example, moral and aesthetic needs. Therefore, the values that AI has so far been able to develop or be sensitive to are limited to the epistemic level, while morality and aesthetics are beyond our present technological capabilities. I do not deny that overcoming this limitation may be a matter of further technological progress, but for the time being we should carefully consider this limitation in our reflections about whether it is wise to strive for conscious AI systems. If the form of consciousness that we can realistically aspire to engineer today is limited to the cognitive dimension, without any sensibility to ethical deliberation and aesthetic appreciation, I am afraid that the risk of misusing or exploiting it for selfish purposes is quite high.

One could object that an AI system limited to epistemic values is not really conscious (at least not in a fully human sense). However, the fact remains that its capacity to interact with the world to achieve the goals it has been programmed to achieve would be greatly enhanced if it had this cognitive form of consciousness. This increases our responsibility to consider, even if only hypothetically, whether conscious AI, albeit limited and much more rudimentary than human consciousness, may be for the better or for the worse.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Michele Farisco, Kathinka Evers, Jean-Pierre Changeux. Is artificial consciousness achievable? Lessons from the human brain. Neural Networks, Volume 180, 2024. https://doi.org/10.1016/j.neunet.2024.106714

We like challenging questions