A blog from the Centre for Research Ethics & Bioethics (CRB)


Why does science ask the question of artificial consciousness?

The possibility of conscious AI is increasingly perceived as a legitimate and important scientific question. This interest has arisen after a long history of scientific doubts about the possibility of consciousness not only in other animals, but sometimes even in humans. The very concept of consciousness was for a period considered scientifically suspect. But now the question of conscious AI is being raised within science.

For anyone interested in how such a mind-boggling question can be answered philosophically and scientifically, I would like to recommend an interesting AI-philosophical exchange of views in the French journal Intellectica. The exchange (which is in English) revolves around an article by two philosophers, Jonathan Birch and Kristin Andrews, who for several years have discussed consciousness not only in mammals, but also in birds, fish, cephalopods, crustaceans, reptiles, amphibians and insects. The two philosophers carefully distinguish between psychological questions about what might make us emotionally inclined to believe that an AI system is conscious, and logical questions about what can philosophically and scientifically count as evidence for conscious AI. It is to this logical perspective that they want to contribute. How can we determine whether an artificial system is truly conscious, rather than just being seduced into believing it because the system so convincingly mirrors the behavior of subjectively experiencing humans? Their basic idea is that we should first study consciousness in a wide range of animal species beyond mammals: partly because the human brain is too different from (today’s) artificial systems to serve as a suitable reference point, but above all because such a broad comparison can help us identify the essential features of consciousness, features that could be used as markers for consciousness in artificial systems. The two philosophers’ proposal is thus that by starting from different forms of animal consciousness, we can better understand how we should philosophically and scientifically seek evidence for or against conscious AI.

One of my colleagues at CRB, Kathinka Evers, also a philosopher, comments on the article. She appreciates Birch and Andrews’ discussion as philosophically clarifying and sees the proposal to approach the question of conscious AI by studying forms of consciousness in a wide range of animal species as well argued. However, she believes that a number of issues require more attention. Among other things, she asks whether the transition from carbon- to silicon-based substrates does not require more attention than Birch and Andrews give it.

Birch and Andrews propose a thought experiment in which a robot rat behaves exactly like a real rat. It passes the same cognitive and behavioral tests. They further assume that the rat brain is accurately replicated in the robot, neuron for neuron. In such a case, they argue, it would be inconsistent not to accept the same pain markers that apply to the rat for the robot as well. The cases are similar, on their view: the transition from carbon to silicon does not provide sufficient reason to doubt that the robot rat can feel pain when it exhibits the same features that mark pain in the real rat. But the cases are not similar, Kathinka Evers points out, because the real rat, unlike the robot, is alive. If life is essential for consciousness, then it is not inconsistent to doubt that the robot can feel pain even in this thought experiment. Someone could of course associate life with consciousness and argue that a robot rat that exhibits the essential features of consciousness must also be considered alive. But if the purpose is to identify what can logically serve as evidence for conscious AI, the problem remains, says Kathinka Evers, because we then need to clarify how the relationship between life and consciousness should be investigated and how the concepts should be defined.

Kathinka Evers thus suggests several questions of relevance to what can logically be considered evidence for conscious AI. But she also asks a more fundamental question, which can be sensed throughout her commentary. She asks why the question of artificial consciousness is even being raised in science today. As mentioned, one of Birch and Andrews’ aims was to avoid the answer being influenced by psychological tendencies to interpret an AI that convincingly reflects human emotions as if it were conscious. But Kathinka Evers asks, as I read her, whether this logical purpose may not come too late. Is not the question already a temptation? AI is trained on human-generated data to reflect human behavior, she points out. Are we perhaps seeking philosophical and scientific evidence regarding a question that seems significant simply because we have a psychological tendency to identify with our digital mirror images? For a question to be considered scientific and worth funding, some kind of initial empirical support is usually required, but there is no evidence whatsoever for the possibility of consciousness in non-living entities such as AI systems. The question of whether an AI can be conscious has no more empirical support than the question of whether volcanoes can experience their eruptions, Kathinka Evers points out. There is a great risk that we will scientifically try to answer a question that lacks scientific basis. No matter how carefully we seek the longed-for answer, the question itself seems imprudent.

I am reminded of the myth of Narcissus. After a long history of rejecting the love of others (the consciousness of others), he finally fell in love with his own (digital) reflection, tried hopelessly to hug it, and was then tormented by an eternal longing for the image. Are you there? Will the reflection respond? An AI will certainly generate a response that speaks to our human emotions.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Birch Jonathan, Andrews Kristin (2024/2). To Understand AI Sentience, First Understand it in Animals. In Gefen Alexandre & Huneman Philippe (Eds), Philosophies of AI: thinking and writing with LLMs, Intellectica, 81, pp. 213-226.

Evers Kathinka (2024/2). To understand sentience in AI first understand it in animals. Commentary to Jonathan Birch and Kristin Andrews. In Gefen Alexandre & Huneman Philippe (Eds), Philosophies of AI: thinking and writing with LLMs, Intellectica, 81, pp. 229-232.

This post in Swedish

We challenge habits of thought

Ethics as an integral part of standard care

Healthcare professionals experience ethical dilemmas and ethically challenging situations on a daily basis. A child receiving important treatment may have difficulty sitting still. How should one think about physically restraining children in such situations? In order to provide good care, healthcare professionals may regularly need time and support to reflect on ethical dilemmas that may arise in their work.

Experiences from an attempt to introduce regular reflection on ethics cases are reported in an article with Pernilla Pergert as the main author. Staff in pediatric cancer care received training in conducting so-called ethics rounds, where healthcare professionals meet to discuss relevant ethics cases. The course participants were assigned to arrange ethics rounds at their respective workplaces both during and after the training. They were then interviewed about their experiences. Hopefully, the results can help others who are planning to introduce ethics rounds.

The experiences revolved around the challenge of positioning ethics in the workplace. How do you find time and space for regular ethical reflection in healthcare? Positioning ethics was not least about the status of ethics in a healthcare organization that prioritizes direct patient care. From such a perspective, ethics rounds may be seen as a luxury that does not really belong to the care work itself, even though ethical reflection is necessary for good care.

The interviewees also spoke about different strategies for positioning ethics. For example, it was considered important that several interested parties form alliances in which they collaborate and share responsibility for introducing ethics rounds. This also helps ensure that several different professional groups can be included in the ethics rounds, such as physicians, nurses, social workers and psychologists. It was also considered important to talk about the ethics rounds and their benefits at staff meetings, as well as to identify relevant patient cases with ethical dilemmas that may create concern, uncertainty and conflicts in the care work. These ethical dilemmas do not have to be big and difficult; more common, everyday ethical challenges also need to be discussed. Finally, the importance of scheduling the ethics rounds at fixed times was emphasized.

The authors conclude that their study highlights the need to position ethics in healthcare so that staff can practice ethics as part of their care work. The study also exemplifies strategies for achieving this. A major challenge, the authors emphasize, is the polarization between care and ethics, as if ethics were somehow outside the actual care work. But if ethical dilemmas are part of everyday healthcare, then ethics should be seen as an integral part of standard care, the authors argue.

Read the article here: Positioning Ethics When Direct Patient Care is Prioritized: Experiences from Implementing Ethics Case Reflection Rounds in Childhood Cancer Care.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Pergert, P., Molewijk, B. & Bartholdson, C. Positioning Ethics When Direct Patient Care is Prioritized: Experiences from Implementing Ethics Case Reflection Rounds in Childhood Cancer Care. HEC Forum (2024). https://doi.org/10.1007/s10730-024-09541-6

This post in Swedish

We like real-life ethics

Conceivability and feasibility of artificial consciousness

Can artificial consciousness be engineered? Is the endeavor even conceivable? In a number of previous posts, I have explored the possibility of developing AI consciousness from different perspectives, including ethical analysis, a comparative analysis of artificial and biological consciousness, and a reflection on the fundamental motivation behind the development of AI consciousness.

Together with Kathinka Evers from CRB, and with other colleagues from the CAVAA project, I recently published a new paper which aims to clarify the first preparatory steps that would need to be taken on the path to AI consciousness: Preliminaries to artificial consciousness: A multidimensional heuristic approach. These first requirements are above all logical and conceptual. We must understand and clarify the concepts that motivate the endeavor. In fact, the growing discussion about AI consciousness often lacks consistency and clarity, which risks creating confusion about what is logically possible, conceptually plausible, and technically feasible.

As a possible remedy to these risks, we propose an examination of the different meanings attributed to the term “consciousness,” as the concept has many meanings and is potentially ambiguous. For instance, we propose a basic distinction between the cognitive and the experiential dimensions of consciousness: awareness can be understood as the ability to process information, store it in memory, and possibly retrieve it if relevant to the execution of specific tasks, while phenomenal consciousness can be understood as subjective experience (“what it is like to be” in a particular state, such as being in pain).

This distinction between cognitive and experiential dimensions is just one illustration of how the multidimensional nature of consciousness is clarified in our model, and how the model can support a more balanced and realistic discussion of the replication of consciousness in AI systems. In our multidisciplinary article, we try to elaborate a model that serves both as a theoretical tool for clarifying key concepts and as an empirical guide for developing testable hypotheses. Developing concepts and models that can be tested empirically is crucial for bridging philosophy and science, eventually making philosophy more informed by empirical data and improving the conceptual architecture of science.

In the article we also illustrate how our multidimensional model of consciousness can be tested empirically. We focus on awareness as a case study. As we see it, awareness has two fundamental capacities: the capacity to select relevant information from the environment, and the capacity to intentionally use this information to achieve specific goals. Basically, in order to be considered aware, the information processing should be more sophisticated than simple input-output processing. For example, the processing needs to evaluate the relevance of information on the basis of subjective priors, such as needs and expectations. Furthermore, in order to be considered aware, information processing should be combined with a capacity to model or virtualize the world, in order to predict more distant future states. To truly be markers of awareness, these capacities for modelling and virtualization should be combined with an ability to intentionally use them for goal-directed behavior.
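To make this functional picture concrete, here is a minimal, purely illustrative sketch in Python of what such processing could look like in the most schematic terms. It is not taken from our article or from the CAVAA project; all names and the toy world model are hypothetical. The point is only to show the three ingredients just mentioned: weighting input against subjective priors, simulating future states with an internal model, and using the simulation for goal-directed action selection.

    # Illustrative sketch only; the agent, its needs and its world model
    # are hypothetical, not drawn from the article or the CAVAA project.
    from dataclasses import dataclass

    @dataclass
    class ToyAgent:
        needs: dict            # subjective priors: how much each input feature matters
        goal: str              # the state the agent intentionally tries to reach
        world_model: dict      # (state, action) -> predicted next state
        state: str = "start"

        def relevance(self, observation: dict) -> float:
            # Selection: weight incoming information by subjective needs,
            # rather than treating all input as equally important.
            return sum(self.needs.get(feature, 0.0) * value
                       for feature, value in observation.items())

        def choose_action(self, actions: list) -> str:
            # Virtualization: simulate each action with the internal world
            # model and prefer one predicted to reach the goal.
            for action in actions:
                predicted = self.world_model.get((self.state, action), self.state)
                if predicted == self.goal:
                    return action
            return actions[0]  # fallback: no simulated action reaches the goal

    agent = ToyAgent(
        needs={"food": 1.0, "noise": -0.5},
        goal="fed",
        world_model={("start", "approach"): "fed", ("start", "wait"): "start"},
    )
    observation = {"food": 0.8, "noise": 0.2}
    if agent.relevance(observation) > 0:                  # act only on need-relevant input
        print(agent.choose_action(["wait", "approach"]))  # prints "approach"

Of course, nothing in this toy loop settles whether such processing deserves to be called awareness; it only illustrates the kind of functional capacities that the model treats as empirically testable.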

There are already some technical applications that exhibit capacities like these. For instance, researchers from the CAVAA project have developed a robot system which is able to adapt and correct its functioning and to learn “on the fly.” These capacities make the system able to dynamically and autonomously adapt its behavior to external circumstances to achieve its goals. This illustrates how awareness as a dimension of consciousness can already be engineered and reproduced.

Is this sufficient to conclude that AI consciousness is a fact? Yes and no. The full spectrum of consciousness has not yet been engineered, and perhaps its complete reproduction is not conceivable or feasible. In fact, the phenomenal dimension of consciousness appears as a stumbling block to “full” AI consciousness, among other things because subjective experience arises from the capacity of biological subjects to evaluate the world, that is, to assign specific values to it on the basis of subjective needs. These needs are not just cognitive needs, as in the case of awareness, but are emotionally charged and have a more comprehensive impact on the subjective state. Nevertheless, we cannot rule out this possibility a priori, and the fundamental question of whether there can be a “ghost in the machine” remains open for further investigation.

Written by…

Michele Farisco, Postdoctoral Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

K. Evers, M. Farisco, R. Chatila, B.D. Earp, I.T. Freire, F. Hamker, E. Nemeth, P.F.M.J. Verschure, M. Khamassi, Preliminaries to artificial consciousness: A multidimensional heuristic approach, Physics of Life Reviews, Volume 52, 2025, Pages 180-193, ISSN 1571-0645, https://doi.org/10.1016/j.plrev.2025.01.002

We like challenging questions

The need for self-critical expertise in public policy making

Academics are often recruited as experts in committees tasked with developing guidelines for public services, such as healthcare. It is of course important that policy documents for public services are based on knowledge and understanding of the problems. At the same time, the role of an expert is far from self-evident, because the problems that need to be addressed are not purely academic and cannot be defined in the same way that researchers define their research questions. A competent academic who accepts the assignment as an expert therefore has reason to feel both confident and uncertain. It would be unfortunate otherwise. This also affects the expectations of those around them, not least the authority that commissions the experts to develop the guidelines. The expert should be given the opportunity to point out any ambiguities in the committee’s assignment and also to be uncertain about his or her role as an expert. Again, it would be unfortunate otherwise. But if the expert role is contradictory, if it contains both certainty and uncertainty, both knowledge and self-criticism, how are we to understand it?

A realistic starting point for discussing this question is an article in Politics & Policy, written by Erica Falkenström and Rebecca Selberg. They conducted an empirical case study of ethical problems related to the development of Swedish guidelines for intensive care during the COVID-19 pandemic: “National principles for prioritization in intensive care under extraordinary circumstances.” The expert group consisted of 11 men, all physicians or philosophers. The lack of diversity is obviously problematic. The professional group that most directly comes into contact with the organizational challenges in healthcare, nurses, mostly women, was not represented in the expert group. Nor did the expert group include any social scientists, who could have contributed knowledge about structural problems that existed in Swedish healthcare even before the pandemic broke out, such as problems related to the fact that elderly care in Sweden is administered separately by the municipalities. Patients in municipal nursing homes were among the most severely affected groups during the pandemic. They were presented in the policy document as a frail group that should preferably be kept away from hospitals (where the most advanced medical care is provided) and instead be cared for on site in the nursing homes. A problematic aspect of this was that the group of elderly patients in municipal care did not have access to competent medical assessment of their individual ability to cope with intensive care, which could possibly be seen as discriminatory. This reduction in the number of patients requiring intensive care may in turn have given the regional authorities responsible for intensive care reason to claim that they had sufficient resources. Moreover, if one of the purposes of the guidelines was to reduce stress among healthcare staff, one might wonder what impact the guidelines had on the stress levels of municipal employees in nursing homes.

The authors identify ethical issues concerning three aspects of the work to develop the national guidelines: the starting points, the content of the document, and the implementation of the guidelines. They also discuss an alternative political-philosophical way of approaching the role of being an expert, which could counteract the problems described in the case study. This alternative philosophical approach, “engaged political philosophy,” is contrasted with a more conventional philosophical expert role, which according to the alternative view overemphasizes the role of philosophy, among other things by letting philosophical theory define the problem without paying sufficient attention to the context. Instead, more open questions should be asked. Why did the problem become a public issue right now? What are the positions and what drives people apart? By starting from such open-ended questions about the context, the politically engaged philosopher can identify the values at stake, the facts of the current situation and its historical background, and possible contemporary alternatives, as well as include several different forms of relevant expertise. A broader understanding of the circumstances that created the problem can also help authorities and experts to understand when it would be better not to propose a new policy, the authors point out.

I personally think that the risk of experts overemphasizing the importance of their own forms of knowledge is possibly widespread and not unique to philosophy. An alternative approach to the role of being an expert probably requires openness to its basic contradiction: the expert both knows and does not know. No academic discipline can make exclusive claim to such self-critical awareness, although self-examination can be described as philosophical in a broad sense that takes us beyond academic boundaries.

I recommend the article in Politics & Policy as a fruitful case study for further research and reflection on challenges in the role of being an expert: Ethical Problems and the Role of Expertise in Health Policy: A Case Study of Public Policy Making in Sweden During COVID-19.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Falkenström, E. and Selberg, R. (2025), Ethical Problems and the Role of Expertise in Health Policy: A Case Study of Public Policy Making in Sweden During COVID-19. Politics & Policy, 53: e12646. https://doi.org/10.1111/polp.12646

This post in Swedish

We recommend readings

Do the goals of care reflect the elderly patient’s personal preferences?

Person-centered care is not only an ethical approach that values the patient’s personal preferences and decision-making. It is also a concrete way to improve care and the patient’s quality of life. This is especially important when caring for elderly patients, who may have multiple chronic conditions and various functional limitations. This requires sensitivity to the patient’s description of their situation and joint planning to adapt care to the patient’s individual needs and wishes. The care plan should be documented in the patient’s medical record in the form of evaluable goals.

A new Swedish study investigated the presence of person-centered, evaluable goals in the care plans for patients at a geriatric psychiatric outpatient clinic. It was found that the goals documented in the patients’ medical records had a biomedical focus on the disease: on recovery or on reduced symptoms. Although the analysis of the medical records revealed that the patients themselves also expressed other needs, such as existential needs and the need for support in carrying out everyday activities that they perceived as important for a better quality of life, these personal wishes were not reflected in the care plans in the form of evaluable goals.

A biomedical focus on disease treatment could also manifest itself in the form of decisions to reduce the prescription of addictive drugs, without the care plan indicating alternative measures or mentioning the effects that this medical goal could have on the patient.

The authors point out that the fact that the medical records nevertheless documented the patients’ personal wishes indicates that there was a certain degree of person-centered interaction with the patients. However, since the conversations did not result in documented goals of care, the person-centered process seems to have stopped halfway, the authors argue in their discussion of the results. The patients’ stories were included, but were not incorporated into the medical decision-making process and the planning of care.

An aim of the study was also to examine psychiatric care plans at the end of life. Although the proximity to death and the possibility of palliative care were sometimes mentioned in the medical records, the goals were rarely changed from curative to palliative care. Moreover, neither the healthcare professionals nor the patients seemed to view psychiatric care as part of palliative care. On the contrary, they seemed to view palliative care as a reason to end psychiatric care. None of the few decisions to change the focus of care led in practice to a clear palliative approach.

The absence of the concept of palliative care, despite the fact that the patients were close to death when the studied goals of care were established, is surprising, according to the authors. Conversations about goals and hopes at the end of life should be self-evident in geriatric psychiatry, and in their discussion, the authors suggest concrete tools that are already available to support such conversations. Given the complex combination of conditions and the proximity to death, there are strong reasons to formulate care plans with an increased focus on improved quality of life and not just on restored mental health, the authors argue.

In their conclusion, the authors point out the need for more research on how person-centered care interacts with the planning of evaluable goals. They also point out the importance of a palliative approach in geriatric psychiatric care, where patients may suffer from multiple concurrent conditions as well as more or less severe and long-term mental disorders.

Read the article here: Psychiatric Goals of Care at the End of Life: A Qualitative Analysis of Medical Records at a Geriatric Psychiatric Outpatient Clinic.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Kullenberg, Helena, Helgesson, Gert, Juth, Niklas, Lindblad, Anna. Psychiatric Goals of Care at the End of Life: A Qualitative Analysis of Medical Records at a Geriatric Psychiatric Outpatient Clinic, Journal of Aging Research, 2024, Article 2104985, 10 pages. https://doi.org/10.1155/jare/2104985

This post in Swedish

Ethics needs empirical input
