The possibility of conscious AI is increasingly perceived as a legitimate and important scientific question. This interest has arisen after a long history of scientific doubts about the possibility of consciousness not only in other animals, but sometimes even in humans. The very concept of consciousness was for a period considered scientifically suspect. But now the question of conscious AI is being raised within science.

For anyone interested in how such a mind-boggling question can be answered philosophically and scientifically, I would like to recommend an interesting exchange of views on the philosophy of AI in the French journal Intellectica. The exchange (which is in English) revolves around an article by two philosophers, Jonathan Birch and Kristin Andrews, who have for several years discussed consciousness not only in mammals, but also in birds, fish, cephalopods, crustaceans, reptiles, amphibians and insects. The two philosophers carefully distinguish between psychological questions about what might make us emotionally inclined to believe that an AI system is conscious, and logical questions about what can philosophically and scientifically count as evidence for conscious AI. It is to this logical perspective that they want to contribute. How can we determine whether an artificial system is truly conscious, rather than merely being seduced into believing so because the system mirrors, in emotionally convincing ways, the behavior of subjectively experiencing humans? Their basic idea is that we should first study consciousness in a wide range of animal species beyond mammals: partly because the human brain is too different from (today’s) artificial systems to serve as a suitable reference point, but above all because such a broad comparison can help us identify the essential features of consciousness, features that could be used as markers for consciousness in artificial systems. The two philosophers’ proposal is thus that by starting from different forms of animal consciousness, we can better understand how to seek philosophical and scientific evidence for or against conscious AI.

One of my colleagues at CRB, Kathinka Evers, also a philosopher, comments on the article. She appreciates Birch and Andrews’ discussion as philosophically clarifying and finds their proposal, to approach the question of conscious AI by studying forms of consciousness in a wide range of animal species, well argued. However, she believes that a number of issues deserve closer examination. Among other things, she asks whether the transition from carbon- to silicon-based substrates does not require more attention than Birch and Andrews give it.

Birch and Andrews propose a thought experiment in which a robot rat behaves exactly like a real rat: it passes the same cognitive and behavioral tests. They further assume that the rat brain is accurately replicated in the robot, neuron for neuron. In such a case, they argue, it would be inconsistent not to accept for the robot the same pain markers that apply to the rat. On their view the cases are similar: the transition from carbon to silicon does not provide sufficient reason to doubt that the robot rat can feel pain when it exhibits the same features that mark pain in the real rat. But the cases are not similar, Kathinka Evers points out, because the real rat, unlike the robot, is alive. If life is essential for consciousness, then it is not inconsistent to doubt that the robot can feel pain, even in this thought experiment. Someone could, of course, link life so closely to consciousness that a robot rat exhibiting the essential features of consciousness must also be considered alive. But if the purpose is to identify what can logically serve as evidence for conscious AI, the problem remains, says Kathinka Evers, because we then need to clarify how the relationship between life and consciousness should be investigated and how the concepts should be defined.

Kathinka Evers thus raises several questions of relevance to what can logically count as evidence for conscious AI. But she also asks a more fundamental question, which can be sensed throughout her commentary: why is the question of artificial consciousness even being raised in science today? As mentioned, one of Birch and Andrews’ aims was to prevent the answer from being influenced by our psychological tendency to interpret an AI that convincingly reflects human emotions as if it were conscious. But Kathinka Evers asks, as I read her, whether this logical purpose comes too late. Is not the question itself already a temptation? AI is trained on human-generated data to reflect human behavior, she points out. Are we perhaps seeking philosophical and scientific evidence regarding a question that seems significant simply because we have a psychological tendency to identify with our digital mirror images? For a question to be considered scientific and worth funding, some kind of initial empirical support is usually required, yet there is no evidence whatsoever for the possibility of consciousness in non-living entities such as AI systems. The question of whether an AI can be conscious has no more empirical support than the question of whether volcanoes can experience their eruptions, Kathinka Evers points out. There is a great risk that we will scientifically try to answer a question that lacks a scientific basis. No matter how carefully we seek the longed-for answer, the question itself seems imprudent.

I am reminded of the myth of Narcissus. After a long history of rejecting the love of others (the consciousness of others), he finally fell in love with his own (digital) reflection, tried hopelessly to hug it, and was then tormented by an eternal longing for the image. Are you there? Will the reflection respond? An AI will certainly generate a response that speaks to our human emotions.

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Birch, Jonathan & Andrews, Kristin (2024/2). To Understand AI Sentience, First Understand it in Animals. In Gefen, Alexandre & Huneman, Philippe (Eds.), Philosophies of AI: thinking and writing with LLMs, Intellectica, 81, pp. 213–226.

Evers, Kathinka (2024/2). To understand sentience in AI, first understand it in animals. Commentary on Jonathan Birch and Kristin Andrews. In Gefen, Alexandre & Huneman, Philippe (Eds.), Philosophies of AI: thinking and writing with LLMs, Intellectica, 81, pp. 229–232.

This post in Swedish

We challenge habits of thought