A blog from the Centre for Research Ethics & Bioethics (CRB)


Is this really true?

Why is the question of truth so marvelous? A common attitude is that the question can make us check that our opinions really are correct before we express them. By being as well-informed as possible, and by examining our opinions so that they form as large and coherent a system of well-considered opinions as possible, we can in good conscience do what we all have a tendency to do: give vent to our opinions.

Letting the question of truth raise the demands on how we form our opinions is, of course, important. But the stricter requirements also risk reinforcing our stance towards the opinions that we believe meet the requirements. We are no longer just right, so to speak, but right in the right way, according to the most rigorous requirements. If someone expresses opinions formed without such rigor, we immediately feel compelled to respond to their delusions by expressing our more rigorous views on the matter.

Responding to misconceptions is, of course, important. One risk, however, is that those who are often declared insufficiently rigorous soon learn how to present a rigorous facade. Or they simply ignore the more demanding requirements, because they are right anyway, and therefore feel entitled to ignore those who are wrong anyway!

Our noble attitude to the question of truth may not always end marvelously, but may lead to a harsher climate of opinion. So how can the question of truth be marvelous?

Most of us have a tendency to think that our views of the world are motivated by everything disturbing that happens in it. We may even think that it is our goodness that makes us have the opinions, that it is our sense of justice that makes us express them. These tendencies reinforce our opinions, tighten them like the springs of a mechanism. Just as we have a knee-jerk reflex that makes our leg kick, we seem to have a knowledge reflex that makes us run our mouths, if I may express myself drastically. As soon as an opinion has taken shape, we think we know it is so. We live in our heads and the world seems to be inundated by everything we think about it.

“Is this really true?” Suppose we asked that question a little more often, just when we feel compelled to express our opinion about the state of the world. What would happen? We would probably pause for a moment … and might unexpectedly realize that the only thing that makes us feel compelled to express the opinion is the opinion itself. If someone questions our opinion, we immediately feel the compulsion to express more opinions, which in our view prove the first opinion.

“Is this really true?” For a brief moment, the question of truth can take our breath away. The compulsion to express our opinions about the state of the world is released and we can ask ourselves: Why do I constantly feel the urge to express my opinions? The opinions are honest, I really think this way, I don’t just make up opinions. But the thinking of my opinions has a deceptive form, because when I think my opinions, I obviously think that it is so. The opinions take the form of being the reality to which I react. – Or as a Stoic thinker said:

“People are disturbed not by things themselves, but by the views they take of them.” (Epictetus)

“Is this really true?” Being silenced by that question can make a whole cloud of opinions condense into a drop of clarity. Because when we become silent, we can suddenly see how the knowledge reflex sets not only our mouths in motion, but the whole world. So, who takes truth seriously? Perhaps the one who does not take their opinions seriously.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

We challenge habits of thought

Why does science ask the question of artificial consciousness?

The possibility of conscious AI is increasingly perceived as a legitimate and important scientific question. This interest has arisen after a long history of scientific doubts about the possibility of consciousness not only in other animals, but sometimes even in humans. The very concept of consciousness was for a period considered scientifically suspect. But now the question of conscious AI is being raised within science.

For anyone interested in how such a mind-boggling question can be answered philosophically and scientifically, I would like to recommend an interesting AI-philosophical exchange of views in the French journal Intellectica. The exchange (which is in English) revolves around an article by two philosophers, Jonathan Birch and Kristin Andrews, who for several years have discussed consciousness not only among mammals, but also among birds, fish, cephalopods, crustaceans, reptiles, amphibians and insects. The two philosophers carefully distinguish between psychological questions about what might make us emotionally attracted to believe that an AI system is conscious, and logical questions about what can philosophically and scientifically count as evidence for conscious AI. It is to this logical perspective that they want to contribute. How can we determine whether an artificial system is truly conscious, and not just be seduced into believing it because the system convincingly mirrors the emotional behavior of subjectively experiencing humans? Their basic idea is that we should first study consciousness in a wide range of animal species beyond mammals: partly because the human brain is too different from (today’s) artificial systems to serve as a suitable reference point, but above all because such a broad comparison can help us identify the essential features of consciousness, features that could be used as markers for consciousness in artificial systems. The two philosophers’ proposal is thus that by starting from different forms of animal consciousness, we can better understand how we should philosophically and scientifically seek evidence for or against conscious AI.

One of my colleagues at CRB, Kathinka Evers, also a philosopher, comments on the article. She appreciates Birch and Andrews’ discussion as philosophically clarifying and sees the proposal to approach the question of conscious AI by studying forms of consciousness in a wide range of animal species as well argued. However, she believes that a number of issues require more attention. Among other things, she asks whether the transition from carbon- to silicon-based substrates does not require more attention than Birch and Andrews give it.

Birch and Andrews propose a thought experiment in which a robot rat behaves exactly like a real rat. It passes the same cognitive and behavioral tests. They further assume that the rat brain is accurately replicated in the robot, neuron for neuron. In such a case, they argue, it would be inconsistent not to accept the same pain markers that apply to the rat for the robot as well. The cases are similar, they argue: the transition from carbon to silicon does not provide sufficient reason to doubt that the robot rat can feel pain when it exhibits the same features that mark pain in the real rat. But the cases are not similar, Kathinka Evers points out, because the real rat, unlike the robot, is alive. If life is essential for consciousness, then it is not inconsistent to doubt that the robot can feel pain even in this thought experiment. Someone could of course associate life with consciousness and argue that a robot rat that exhibits the essential features of consciousness must also be considered alive. But if the purpose is to identify what can logically serve as evidence for conscious AI, the problem remains, says Kathinka Evers, because we then need to clarify how the relationship between life and consciousness should be investigated and how the concepts should be defined.

Kathinka Evers thus suggests several questions of relevance to what can logically be considered evidence for conscious AI. But she also asks a more fundamental question, which can be sensed throughout her commentary. She asks why the question of artificial consciousness is even being raised in science today. As mentioned, one of Birch and Andrews’ aims was to avoid the answer being influenced by psychological tendencies to interpret an AI that convincingly reflects human emotions as if it were conscious. But Kathinka Evers asks, as I read her, whether this logical purpose may not come too late. Is not the question already a temptation? AI is trained on human-generated data to reflect human behavior, she points out. Are we perhaps seeking philosophical and scientific evidence regarding a question that seems significant simply because we have a psychological tendency to identify with our digital mirror images? For a question to be considered scientific and worth funding, some kind of initial empirical support is usually required, but there is no evidence whatsoever for the possibility of consciousness in non-living entities such as AI systems. The question of whether an AI can be conscious has no more empirical support than the question of whether volcanoes can experience their eruptions, Kathinka Evers points out. There is a great risk that we will scientifically try to answer a question that lacks scientific basis. No matter how carefully we seek the longed-for answer, the question itself seems imprudent.

I am reminded of the myth of Narcissus. After a long history of rejecting the love of others (the consciousness of others), he finally fell in love with his own (digital) reflection, tried hopelessly to hug it, and was then tormented by an eternal longing for the image. Are you there? Will the reflection respond? An AI will certainly generate a response that speaks to our human emotions.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Birch, Jonathan & Andrews, Kristin (2024/2). To Understand AI Sentience, First Understand it in Animals. In Gefen, Alexandre & Huneman, Philippe (Eds.), Philosophies of AI: thinking and writing with LLMs, Intellectica, 81, pp. 213–226.

Evers, Kathinka (2024/2). To understand sentience in AI first understand it in animals. Commentary to Jonathan Birch and Kristin Andrews. In Gefen, Alexandre & Huneman, Philippe (Eds.), Philosophies of AI: thinking and writing with LLMs, Intellectica, 81, pp. 229–232.

This post in Swedish

We challenge habits of thought

Conceivability and feasibility of artificial consciousness

Can artificial consciousness be engineered? Is the endeavor even conceivable? In a number of previous posts, I have explored the possibility of developing AI consciousness from different perspectives, including ethical analysis, a comparative analysis of artificial and biological consciousness, and a reflection about the fundamental motivation behind the development of AI consciousness.

Together with Kathinka Evers from CRB, and with other colleagues from the CAVAA project, I recently published a new paper which aims to clarify the first preparatory steps that would need to be taken on the path to AI consciousness: Preliminaries to artificial consciousness: A multidimensional heuristic approach. These first requirements are above all logical and conceptual. We must understand and clarify the concepts that motivate the endeavor. In fact, the growing discussion about AI consciousness often lacks consistency and clarity, which risks creating confusion about what is logically possible, conceptually plausible, and technically feasible.

As a possible remedy to these risks, we propose an examination of the different meanings attributed to the term “consciousness,” a concept that is potentially ambiguous. For instance, we propose a basic distinction between the cognitive and the experiential dimensions of consciousness: awareness can be understood as the ability to process information, store it in memory, and possibly retrieve it if relevant to the execution of specific tasks, while phenomenal consciousness can be understood as subjective experience (“what it is like to be” in a particular state, such as being in pain).

This distinction between cognitive and experiential dimensions is just one illustration of how the multidimensional nature of consciousness is clarified in our model, and how the model can support a more balanced and realistic discussion of the replication of consciousness in AI systems. In our multidisciplinary article, we try to elaborate a model that serves both as a theoretical tool for clarifying key concepts and as an empirical guide for developing testable hypotheses. Developing concepts and models that can be tested empirically is crucial for bridging philosophy and science, eventually making philosophy more informed by empirical data and improving the conceptual architecture of science.

In the article we also illustrate how our multidimensional model of consciousness can be tested empirically. We focus on awareness as a case study. As we see it, awareness has two fundamental capacities: the capacity to select relevant information from the environment, and the capacity to intentionally use this information to achieve specific goals. Basically, in order to be considered aware, the information processing should be more sophisticated than simple input-output mapping. For example, the processing needs to evaluate the relevance of information on the basis of subjective priors, such as needs and expectations. Furthermore, in order to be considered aware, information processing should be combined with a capacity to model or virtualize the world, in order to predict more distant future states. To truly be markers of awareness, these capacities for modelling and virtualization should be combined with an ability to intentionally use them for goal-directed behavior.
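As a purely illustrative sketch of these criteria (my own toy example, not code from the article or the CAVAA project), one could imagine an agent that filters its input through need-based priors, consults a simple world model to predict outcomes, and selects actions with a goal in view. All names, the relevance threshold and the scoring scheme below are assumptions made for illustration:

```python
class ToyAwareAgent:
    """Toy illustration of the three markers discussed above:
    (1) need-based selection of relevant information,
    (2) a world model used to predict future states,
    (3) intentional use of predictions for goal-directed behavior."""

    def __init__(self, goal_state, needs):
        self.goal_state = goal_state  # what the agent tries to achieve
        self.needs = needs            # subjective priors: feature -> weight
        self.world_model = {}         # (state, action) -> predicted next state

    def select_relevant(self, observations):
        # Marker 1: filter input by need-weighted relevance,
        # rather than passing everything through as raw input-output.
        return {k: v for k, v in observations.items() if self.needs.get(k, 0) > 0.5}

    def predict(self, state, action):
        # Marker 2: virtualize the world to anticipate a future state.
        return self.world_model.get((state, action), state)

    def act(self, state, actions):
        # Marker 3: choose the action whose predicted outcome serves the goal.
        return max(actions, key=lambda a: self.predict(state, a) == self.goal_state)


agent = ToyAwareAgent(goal_state="fed", needs={"food_smell": 0.9, "noise": 0.1})
agent.world_model[("hungry", "approach")] = "fed"
print(agent.select_relevant({"food_smell": 1.0, "noise": 0.3}))  # {'food_smell': 1.0}
print(agent.act("hungry", ["approach", "ignore"]))               # approach
```

Real systems are of course vastly more sophisticated than this; the point of the sketch is only that each marker can in principle be given an operational, testable form.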

There are already some technical applications that exhibit capacities like these. For instance, researchers from the CAVAA project have developed a robot system which is able to adapt and correct its functioning and to learn “on the fly.” These capacities make the system able to dynamically and autonomously adapt its behavior to external circumstances to achieve its goals. This illustrates how awareness as a dimension of consciousness can already be engineered and reproduced.

Is this sufficient to conclude that AI consciousness is a fact? Yes and no. The full spectrum of consciousness has not yet been engineered, and perhaps its complete reproduction is not conceivable or feasible. In fact, the phenomenal dimension of consciousness appears to be a stumbling block on the way to “full” AI consciousness, among other things because subjective experience arises from the capacity of biological subjects to evaluate the world, that is, to assign specific values to it on the basis of subjective needs. These needs are not just cognitive needs, as in the case of awareness, but are emotionally charged and have a more comprehensive impact on the subjective state. Nevertheless, we cannot rule out this possibility a priori, and the fundamental question of whether there can be a “ghost in the machine” remains open for further investigation.

Written by…

Michele Farisco, Postdoctoral Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Evers, K., Farisco, M., Chatila, R., Earp, B.D., Freire, I.T., Hamker, F., Nemeth, E., Verschure, P.F.M.J. & Khamassi, M. (2025). Preliminaries to artificial consciousness: A multidimensional heuristic approach. Physics of Life Reviews, 52, 180–193. https://doi.org/10.1016/j.plrev.2025.01.002

We like challenging questions

The need for self-critical expertise in public policy making

Academics are often recruited as experts in committees tasked with developing guidelines for public services, such as healthcare. It is of course important that policy documents for public services are based on knowledge and understanding of the problems. At the same time, the role of an expert is far from self-evident, because the problems that need to be addressed are not purely academic and cannot be defined in the same way that researchers define their research questions. A competent academic who accepts the assignment as an expert therefore has reason to feel both confident and uncertain. It would be unfortunate otherwise. This also affects the expectations of those around them, not least the authority that commissions the experts to develop the guidelines. The expert should be given the opportunity to point out any ambiguities in the committee’s assignment and also to be uncertain about his or her role as an expert. Again, it would be unfortunate otherwise. But if the expert role is contradictory, if it contains both certainty and uncertainty, both knowledge and self-criticism, how are we to understand it?

A realistic starting point for discussing this question is an article in Politics & Policy, written by Erica Falkenström and Rebecca Selberg. They conducted an empirical case study of ethical problems related to the development of Swedish guidelines for intensive care during the COVID-19 pandemic: “National principles for prioritization in intensive care under extraordinary circumstances.” The expert group consisted of 11 men, all physicians or philosophers. The lack of diversity is obviously problematic. The professional group that most directly comes into contact with the organizational challenges in healthcare, nurses, mostly women, was not represented in the expert group. Nor did the expert group include any social scientists, who could have contributed knowledge about structural problems in Swedish healthcare even before the pandemic broke out, such as problems related to the fact that elderly care in Sweden is administered separately by the municipalities. Patients in municipal nursing homes were among the most severely affected groups during the pandemic. They were presented in the policy document as a frail group that should preferably be kept away from hospitals (where the most advanced medical care is provided), and instead be cared for on site in the nursing homes. A problematic aspect of this was that the group of elderly patients in municipal care did not have access to competent medical assessment of their individual ability to cope with intensive care, which could possibly be seen as discriminatory. This reduction in the number of patients requiring intensive care may in turn have given the regional authorities responsible for intensive care reason to claim that they had sufficient resources. Moreover, if one of the purposes of the guidelines was to reduce stress among healthcare staff, one might wonder what impact the guidelines had on the stress level of municipal employees in nursing homes.

The authors identify ethical issues concerning three aspects of the work to develop the national guidelines: the starting points, the content of the document, and the implementation of the guidelines. They also discuss an alternative political-philosophical way of approaching the role of being an expert, which could counteract the problems described in the case study. This alternative philosophical approach, “engaged political philosophy,” is contrasted with a more conventional philosophical expert role, which according to the alternative view overemphasizes the role of philosophy, among other things by letting philosophical theory define the problem without paying sufficient attention to the context. Instead, more open questions should be asked. Why did the problem become a public issue right now? What are the positions and what drives people apart? By starting from such open-ended questions about the context, the politically engaged philosopher can identify the values at stake, the facts of the current situation and its historical background, and possible contemporary alternatives, as well as include several different forms of relevant expertise. A broader understanding of the circumstances that created the problem can also help authorities and experts to understand when it would be better not to propose a new policy, the authors point out.

I personally think that the risk of experts overemphasizing the importance of their own forms of knowledge is possibly widespread and not unique to philosophy. An alternative approach to the role of being an expert probably requires openness to its basic contradiction: the expert both knows and does not know. No academic discipline can make exclusive claim to such self-critical awareness, although self-examination can be described as philosophical in a broad sense that takes us beyond academic boundaries.

I recommend the article in Politics & Policy as a fruitful case study for further research and reflection on challenges in the role of being an expert: Ethical Problems and the Role of Expertise in Health Policy: A Case Study of Public Policy Making in Sweden During COVID-19.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Falkenström, E. and Selberg, R. (2025), Ethical Problems and the Role of Expertise in Health Policy: A Case Study of Public Policy Making in Sweden During COVID-19. Politics & Policy, 53: e12646. https://doi.org/10.1111/polp.12646

This post in Swedish

We recommend readings

Columbo in Athens

One of the most timeless TV crime series is probably Columbo. Peter Falk plays an inquisitive police lieutenant who sometimes seems so far beyond ordinary time reckoning that he can make Los Angeles resemble ancient Athens, where an equally inquisitive philosopher cared just as little about his appearance.

I hope you have seen a few Columbo episodes. I also take the liberty of opening this post by revealing why I want to write about him: because he not only exposes the murderers but at the same time frees them from living entangled in their own brilliant plans. You might remember the unusual structure of the episodes: we immediately learn who the perpetrator is. The murderers in the series are distinguished not only by their high social and economic status, but also by their high intelligence (and their overconfidence in it). Before the murder takes place, we get to follow how ingeniously the killer plans the deed. The purpose is to give the appearance of having a watertight alibi, to avoid leaving unintended clues at the murder scene, and to plant clues that clearly point to someone else. Everything is perfectly thought out: BANG! In the next act, Columbo enters the scene of the murder in his worn coat and with a cigar that has usually gone out. In one episode he arrives with a boiled egg in his pocket, which he cracks against the murder weapon because he has not had time to eat breakfast.

The murder was just the prelude. Now the episode begins for real: the interaction between the absent-minded Columbo and the shrewd murderer who planned everything in detail and now feels invincible. Especially considering that the police lieutenant leading the investigation is clearly just a confused poor thing, constantly fumbling for his notepad and pencil and asking irrelevant questions. I will soon be done with this fellow, the killer thinks.

Columbo often immediately knows who the murderer is. He can reveal this in a final conversation with the murderer, where the two can unexpectedly find each other and speak openly, almost like old friends. Soon even the murderer begins to understand that Columbo knows, even though the lieutenant’s absent-minded demeanor at first made this unlikely. Usually, however, the murderer’s confidence is not shaken by knowing that Columbo knows, for everything is perfectly thought out: Columbo “knows” without being able to prove anything! Columbo spends many sleepless nights wondering about the murderer’s alibi and motive, or about seemingly irrelevant details at the murder scene: the “loose ends” that Columbo often talks about, without the murderer understanding why. They seem too trivial to touch the ingenious plan! The murderer almost seems to enjoy watching Columbo rack his brain over immaterial details that cannot possibly prove what both already “know.” Little does the killer know that Columbo’s uncertainty will soon bear fruit.

Finally, Columbo manages to tie up the loose ends that the murderer did not see the point of (they looked so plain compared to the elegant plan). When Columbo reveals how the alibi was only apparent, how the all-too-obvious clues were deliberately placed at the murder scene, and the murderer’s cheap selfish motive, the murderer expects to be arrested by Columbo. “No, others will come and arrest you later,” says Columbo, who suddenly seems uninterested in the whole matter. Columbo seems to have only wanted to expose the illusory reality the killer created to mislead everyone. The murderer is the one who walks into the trap first. To make everything look real, the murderer must live strictly according to the insidious plan from the very first act. Maybe that is why the murderer often seems to breathe a sigh of relief in the final act. Columbo not only exposes the criminal, but also frees the criminal mind from constantly living trapped in its own calculations.

In the conversation at the end, the otherwise active killer seems numbed by Columbo, calm and without a winning smile. Even the murderer is for the first time happily absent-minded.

How does Columbo manage to uncover the insidious plan? We like to think that Columbo succeeds in exposing the murderer because Columbo is even smarter. If Columbo switched sides and planned crimes, no one could expose him! He would be a super-intelligence that could satisfy every wish, like the genie in the lamp. Sometimes even the murderer seems to think along these lines and offers Columbo employment and a brilliant career. With Columbo as accomplice, the murderer would be invincible. But Columbo does not seem to care more about his future than about his appearance: “No, never, I couldn’t do that.” He loves his work, he explains, but he hardly gives the impression of being a police lieutenant and is sometimes mistaken for a vagrant who is kindly asked to remove himself from the scene of the murder. Nuns can offer him food and clothes. Is Columbo the one actually creating the false appearance? Is he the one with the most cunning plan? Is his absent-mindedness just a form of ironic pretense to lure the murderer into the trap?

Columbo probably benefits from his disarming simplicity and absent-minded demeanor. But although we sometimes see him setting traps for the killer, we never see him disguise himself as a vagrant. When his wife has given him a nicer coat, he seems genuinely bothered by it, as if he were dressed up. Is Columbo’s confusion sincere after all? Is it the confusion he loves about his work? Is it perhaps the confusion that eventually reveals the murderer’s watertight plan?

Columbo’s colleagues are not confused. They follow the rules of the game and soon have exactly the conviction the murderer planned for them according to the manual: the murderer has no motive, has a watertight alibi, and cannot be tied to the scene of the murder, while the technical evidence clearly points in a different direction. If the colleagues were leading the investigation, the murderer would have already been removed from the list of suspects. This is how a colleague complains when he feels that Columbo is slowing down the investigation by not following the plan of the criminal mastermind:

Sergeant Hoffman: Now what do you think Lieutenant, do you really think that Deschler didn’t shoot Galesko in the leg?

Columbo: I’ll tell you something, Sergeant, I don’t know what to think.

The injured Galesko is in fact the murderer. He shot himself in the leg after killing Deschler, to make the killing look like self-defense against “his wife’s kidnapper.” Galesko has already murdered his wife, having staged the kidnapping and planted the clues that point to Deschler. Why did Galesko murder his wife? Because he felt she was obscuring his bright future. The murderers in the TV series not only plan their deeds, but also their lives. Without ideas of bright futures, they would lack motive to plan murder.

Neither the killer nor the colleague suffers from uncertainty; they both sleep well. Only Columbo is awake: “I don’t know what to think.” Therefore, he tries to tie up loose ends. Like the philosopher Socrates in ancient Athens, Columbo knows that he does not know. Therefore, he torments the murderer (and the colleagues) with vexing questions that do not belong to the game, but rather revolve around it. Now you probably want to direct Columbo’s most famous line at me: “Oh, just one more thing!” For did I not say that Columbo immediately knows who the murderer is? Yes, I did. Columbo already “knows” who the murderer is. How? Does he know it through his superior intelligence that reveals the whole case in a flash? No, but because the murderer does not react like someone who does not know. When informed of the murder, the killer reacts strangely: like someone who already knows. Lack of confusion is the hallmark of the murderer.

When Columbo reveals the tangle of thoughts that already in the first act ensnared the murderer, the perpetrator goes to prison without complaint. Handcuffs are redundant when the self-made ones are finally unlocked. Columbo has calmed the criminal mind. The culprit is free from the murder plan that was meant to secure the plan for a bright future. Suddenly everything is real, just real.

Just one more thing: Merry Christmas and do not plan too much!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

The dialogue between Hoffman and Columbo is from the episode Negative Reaction (1974). Columbo’s response to the career offer is from The Bye-Bye Sky-High I.Q. Murder Case (1977).


This post in Swedish

Thinking about thinking

Why should we try to build conscious AI?

In a recent post on this blog I summarized the main points of a pre-print where I analyzed the prospect of artificial consciousness from an evolutionary perspective. I took the brain and its architecture as a benchmark for addressing the technical feasibility and conceptual plausibility of engineering consciousness in artificial intelligence systems. The pre-print has been accepted and it is now available as a peer-reviewed article online.

In this post I want to focus on one particular point that I analyzed in the paper, and which I think is not always adequately accounted for in the debate about AI consciousness: what are the benefits of pursuing artificial consciousness in the first place, for science and for society at large? Why should we attempt to engineer subjective experience in AI systems? What can we realistically expect from such an endeavour?

There are several possible answers to these questions. At the epistemological level (with reference to what we can know) it is possible that developing artificial systems that replicate some features of our conscious experience could enable us to better understand biological consciousness, through similarities as well as through differences. At the technical level (with reference to what we can do) it is possible that the development of artificial consciousness would be a game-changer in AI, for instance giving AI the capacity for intentionality and theory of mind, and for anticipating the consequences not only of human decisions, but also of its own “actions.” At the societal and ethical level (with reference to our co-existence with others and to what is good and bad for us) especially the latter capabilities (intentionality, theory of mind, and anticipation) could arguably help AI to better inform humans about potential negative impacts of its functioning and use on society, and to help avoid them while favouring positive impacts. Of course, on the negative side, as shown by human history, both intentionality and theory of mind may be used by the AI for negative purposes, for instance favouring the AI’s own interests or the interests of the limited groups that control it. Human intentionality has not always favoured out-group individuals or species, or indeed the planet as a whole. This point connects to one of the most debated issues in AI ethics, the so-called AI alignment problem: how can we be sure that AI systems conform to human values? How can we make AI aligned with our own interests? And whose values and interests should we take as reference? Cultural diversity is an important and challenging factor to take into account in these reflections.

I think there is also a question that precedes that of AI value alignment: can AI really have values? In other words, is the capacity for evaluation that possibly drives the elaboration of values in AI the same as in humans? And is AI capable of evaluating its own values, including its ethical values, a reflective process that drives the self-critical elaboration of values in humans, making us evaluative subjects? In fact, the capacity for evaluation (which may be defined as the sensitivity to reward signals and the ability to discriminate between good and bad things in the world on the basis of specific needs, motivations, and goals) is a defining feature of biological organisms, and specifically of the brain. AI may be programmed to discriminate between what humans consider to be good and bad things in the world, and it is also conceivable that AI will become less dependent on humans in applying this distinction. However, this does not entail that it “evaluates” in the sense that it autonomously performs an evaluation and subjectively experiences its evaluation.
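To make this distinction concrete, consider a deliberately trivial sketch (my own illustration, not from any cited work): a system that reliably discriminates “good” from “bad” world states against externally programmed values. It evaluates in the behavioral sense, yet nothing in it autonomously elaborates values or subjectively experiences the evaluation. The labels and weights are assumptions chosen for illustration:

```python
# Externally programmed values: the weights are stipulated by the
# programmer, not elaborated by the system itself.
PROGRAMMED_VALUES = {"helpful": 1.0, "neutral": 0.0, "harmful": -1.0}

def evaluate(world_state_labels):
    """Discriminate good from bad by summing programmed value weights.

    Evaluation here is purely behavioral: no need, motivation, or
    subjective experience lies behind the score.
    """
    return sum(PROGRAMMED_VALUES.get(label, 0.0) for label in world_state_labels)

print(evaluate(["helpful", "neutral"]))   # 1.0  -> "good" by stipulation
print(evaluate(["harmful", "harmful"]))   # -2.0 -> "bad" by stipulation
```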

It is possible that an AI system may approximate the diversity of cognitive processes that the brain has access to, for instance the processing of various sensory modalities, while AI remains unable to incorporate the values attributed to the processed information and to its representation, as the human brain can do. In other words, to date AI remains devoid of any experiential content, and for this reason, for the time being, AI is different from the human brain because of its inability to attribute experiential value to information. This is the fundamental reason why present AI systems lack subjective experience. If we want to refer to needs (which are a prerequisite for the capacity for evaluation), current AI appears limited to epistemic needs, without access to, for example, moral and aesthetic needs. Therefore, the values that AI has at least so far been able to develop or be sensitive to are limited to the epistemic level, while morality and aesthetics are beyond our present technological capabilities. I do not deny that overcoming this limitation may be a matter of further technological progress, but for the time being we should carefully consider this limitation in our reflections about whether it is wise to strive for conscious AI systems. If the form of consciousness that we can realistically aspire to engineer today is limited to the cognitive dimension, without any sensibility to ethical deliberation and aesthetic appreciation, I am afraid that the risk of misusing or exploiting it for selfish purposes is quite high.

One could object that an AI system limited to epistemic values is not really conscious (at least not in a fully human sense). However, the fact remains that its capacity to interact with the world to achieve the goals it has been programmed to achieve would be greatly enhanced if it had this cognitive form of consciousness. This increases our responsibility to hypothetically consider whether conscious AI, even if limited and much more rudimentary than human consciousness, may be for the better or for the worse.

Written by…

Michele Farisco, Postdoctoral Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Michele Farisco, Kathinka Evers, Jean-Pierre Changeux. Is artificial consciousness achievable? Lessons from the human brain. Neural Networks, Volume 180, 2024. https://doi.org/10.1016/j.neunet.2024.106714

We like challenging questions

Philosophy on a chair

Philosophy is an unusual activity, partly because it can be conducted to such a large extent while sitting still. Philosophers do not need research vessels, laboratories or archives to work on their questions. Just a chair to sit on. Why is it like that?

The answer is that philosophers examine our ways of thinking, and we are never anywhere but where we are. A chair takes us exactly as far as we need: to ourselves. Philosophizing on a chair can of course look self-absorbed. How can we learn anything significant from “thinkers” who neither seem to move nor look around the world? If we happen to see them sitting still in their chairs and thinking, they can undeniably appear to be cut off from the complex world in which the rest of us must live and navigate. Through its focus on human thought, philosophy can seem to ignore our human world and not be of any use to the rest of us.

What we overlook with such an objection to philosophy is that our complex human world already reflects to a large extent our human ways of thinking. To the extent that these ways of thinking are confused, limited, one-sided and unjust, our world will also be confused, limited, one-sided and unjust. When we live and move in this human world, which reflects our ways of thinking, can it not be said that we live somewhat inwardly, without noticing it? We act in a world that reflects ourselves, including the shortcomings in our ways of thinking.

If so, maybe it is not so introverted to sit down and examine these ways of thinking? On the contrary, this seems to enable us to free ourselves and the world from human thought patterns that sometimes limit and distort our perspectives without us realizing it. Of course, research vessels, laboratories and archives also broaden our perspectives on the world. But we already knew that. I just wanted to open our eyes to a more unexpected possibility: that even a chair can take us far, if we practice philosophy on it.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

We challenge habits of thought

Artificial consciousness and the need for epistemic humility

As I wrote in previous posts on this blog, the discussion about the possibility of engineering an artificial form of consciousness is growing along with the impressive advances of artificial intelligence (AI). Indeed, there are many questions arising from the prospect of an artificial consciousness, including its conceivability and its possible ethical implications. We deal with these kinds of questions as part of an EU multidisciplinary project, which aims to advance towards the development of artificial awareness.

Here I want to describe the kind of approach to the issue of artificial consciousness that I am inclined to consider the most promising. In a nutshell, the research strategy I propose for clarifying the empirical and theoretical issues of the feasibility and conceivability of artificial consciousness consists in starting from the form of consciousness we are familiar with (biological consciousness) and from its correlation with the organ that science has revealed is crucial for it (the brain).

In a recent paper, available as a pre-print, I analysed the question of the possibility of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relationship to consciousness as a benchmark. In other words, to avoid vague and abstract speculations about artificial consciousness, I believe it is necessary to consider the correlation between brain and consciousness that resulted from biological evolution, and use this correlation as a reference model for the technical attempts to engineer consciousness.

In fact, there are several structural and functional features of the human brain that appear to be key for reaching human-like complex conscious experience, which current AI is still limited in emulating or accounting for. Among these are:

  • massive biochemical and neuronal diversity
  • long period of epigenetic development, that is, changes that eventually alter the number of neurons and their connections in the brain network as a result of interaction with the external environment
  • embodied sensorimotor experience of the world
  • spontaneous brain activity, that is, an intrinsic ability to act which is independent of external stimulation
  • autopoiesis, that is, the capacity to constantly reproduce and maintain itself
  • emotion-based reward systems
  • clear distinction between conscious and non-conscious representations, and the consequent unitary and specific properties of conscious representations
  • semantic competence of the brain, expressed in the capacity for understanding
  • the principle of degeneracy, which means that the same neuronal networks may support different functions, leading to plasticity and creativity.

These are just some of the brain features that arguably play a key role for biological consciousness and that may inspire current research on artificial consciousness.

Note that I am not claiming that the way consciousness arises from the brain is in principle the only possible way for consciousness to exist: this would amount to a form of biological chauvinism or anthropocentric narcissism. In fact, current AI is limited in its ability to emulate human consciousness. The reasons for these limitations are both intrinsic, that is, dependent on the structure and architecture of AI, and extrinsic, that is, dependent on the current stage of scientific and technological knowledge. Nevertheless, these limitations do not logically exclude that AI may achieve alternative forms of consciousness that are qualitatively different from human consciousness, and that these artificial forms of consciousness may be either more or less sophisticated, depending on the perspectives from which they are assessed.

In other words, we cannot exclude in advance that artificial systems are capable of achieving alien forms of consciousness, so different from ours that it may not even be appropriate to continue to call them consciousness, unless we clearly specify what is common and what is different in artificial and human consciousness. The problem is that we are limited in our language as well as in our thinking and imagination. We cannot avoid relying on what is within our epistemic horizon, but we should also avoid the fallacy of hasty generalization. Therefore, we should combine the need to start from the evolutionary correlation between brain and consciousness as a benchmark for artificial consciousness with the need to remain humble and acknowledge the possibility that artificial consciousness may be of its own kind, beyond our view.

Written by…

Michele Farisco, Postdoctoral Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Approaching future issues

Finding the way when there is none

A difficulty for academic writers is managing the dual role of both knowing and not knowing, of both showing the way and not finding it. There is an expectation that such writers should already have the knowledge they are writing about, that they should know the way they show others right from the start. As readers, we are naturally delighted and grateful to share the authors’ knowledge and insight.

But academic writers usually write because something strikes them as puzzling. They write for the same reason that readers read: because they lack the knowledge and clarity required to find the way through the questions. This lack stimulates them to research and write. The way that did not exist takes shape when they tackle their questions.

This dual role as a writer often worries students who are writing an essay or dissertation for the first time. They can easily perceive themselves as insufficiently knowledgeable to have the right to tackle the work. Since they lack the expertise that they believe is required of academic writers from the outset, does it not follow that they are not yet mature enough to begin the work? Students are easily paralyzed by the knowledge demands they place on themselves. Therefore, they hide their questions instead of tackling them.

It always comes as a surprise that the way actually takes shape as soon as we ask for it. Who dares to believe that? Research is a dynamic interplay with our questions: with ignorance and lack of clarity. An academic writer is not primarily someone who knows a lot and who therefore can show others the way, but someone who dares and is even stimulated by this duality of both knowing and not knowing, of both finding and not finding the way.

If we have something important to learn from the exploratory writers, it is perhaps that living knowledge cannot be separated as pure knowledge and nothing but knowledge. Knowledge always interacts with its opposite. Therefore, essay writing students already have the most important asset to be able to write in an exploratory way, namely the questions they are wondering about. Do not hide the questions, but let them take center stage. Let the text revolve around what you do not know. Knowledge without contact with ignorance is dead. It solves no one’s problem, it answers no one’s question, it removes no one’s confusion. So let the questions sprout in the soil of the text, and the way will soon take shape.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Thinking about authorship

Objects that behave humanly

Many forms of artificial intelligence could be considered objects that behave humanly. However, it does not take much for us humans to personify non-living objects. We get angry at the car that does not start or the weather that does not let us have a picnic, as if they were against us. Children spontaneously personify simple toys and can describe the relationship between geometric shapes as: “the small circle is trying to escape from the big triangle.”

We are increasingly encountering artificial intelligence designed to give a human impression, for example in the form of chatbots for customer service when shopping online. Such AI can even be equipped with personal traits, a persona that becomes an important part of the customer experience. The chatbot can suggest even more products for you and effectively generate additional sales based on the data collected about you. No wonder the interest in developing human-like AI is huge. Part of it has to do with user-friendliness, of course, but at the same time, an AI that you find personally attractive will grab your attention. You might even like the chatbot or feel it would be impolite to turn it off. During the time that the chatbot has your attention, you are exposed to increasingly customized advertising and receive more and more package offers.

You can read about this and much more in an article about human relationships with AI designed to give a human impression: Human/AI relationships: challenges, downsides, and impacts on human/human relationships. The authors discuss a large number of examples of such AI, ranging from the chatbots above to care robots and AI that offers psychotherapy, or AI that people chat with to combat loneliness. The opportunities are great, but so are the challenges and possible drawbacks, which the article highlights.

Perhaps particularly interesting is the insight into how effectively AI can create confusion by exposing us to objects equipped with human response patterns. Our natural tendency to anthropomorphize non-human things meets high-tech efforts to produce objects that are engineered to behave humanly. Here it is no longer about imaginatively projecting social relations onto non-human objects, as in the geometric example above. In interaction with AI objects, we react to subtle social cues that the objects are equipped with. We may even feel a moral responsibility for such AI and grieve when companies terminate or modify it.

The authors urge caution so that we do not overinterpret AI objects as persons. At the same time, they warn of the risk that, by avoiding empathic responses, we become less sensitive to real people in need. Truly confusing!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Zimmerman, A., Janhonen, J. & Beer, E. Human/AI relationships: challenges, downsides, and impacts on human/human relationships. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00348-8

This post in Swedish

We recommend readings
