A blog from the Centre for Research Ethics & Bioethics (CRB)

Year: 2024

Philosophy on a chair

Philosophy is an unusual activity, partly because it can be conducted to such a large extent while sitting still. Philosophers do not need research vessels, laboratories or archives to work on their questions. Just a chair to sit on. Why is that?

The answer is that philosophers examine our ways of thinking, and we are never anywhere but where we are. A chair takes us exactly as far as we need: to ourselves. Philosophizing on a chair can of course look self-absorbed. How can we learn anything significant from “thinkers” who neither seem to move nor look around the world? If we happen to see them sitting still in their chairs and thinking, they can undeniably appear to be cut off from the complex world in which the rest of us must live and navigate. Through its focus on human thought, philosophy can seem to ignore our human world and not be of any use to the rest of us.

What we overlook with such an objection to philosophy is that our complex human world already reflects to a large extent our human ways of thinking. To the extent that these ways of thinking are confused, limited, one-sided and unjust, our world will also be confused, limited, one-sided and unjust. When we live and move in this human world, which reflects our ways of thinking, can it not be said that we live somewhat inwardly, without noticing it? We act in a world that reflects ourselves, including the shortcomings in our ways of thinking.

If so, maybe it is not so introverted to sit down and examine these ways of thinking? On the contrary, this seems to enable us to free ourselves and the world from human thought patterns that sometimes limit and distort our perspectives without us realizing it. Of course, research vessels, laboratories and archives also broaden our perspectives on the world. But we already knew that. I just wanted to open our eyes to a more unexpected possibility: that even a chair can take us far, if we practice philosophy on it.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

We challenge habits of thought

End-of-life care: ethical challenges experienced by critical care nurses

In an intensive care unit, seriously ill patients who need medical and technical support for central bodily functions, such as breathing and circulation, are monitored and treated. Usually it goes well, but not all patients survive, despite the advanced and specialized care. An intensive care unit can be a stressful environment for the patient, not least because of the technical equipment to which the patient is connected. When transitioning to end-of-life care, one therefore tries to create a calmer and more dignified environment for the patient, among other things by reducing the use of life-sustaining equipment and focusing on reducing pain and anxiety.

The transition to end-of-life care can create several ethically challenging situations for critical care nurses. What do these challenges look like in practice? The question is investigated in an interview study with nurses at intensive care units in a Swedish region. What did the interviewees say about the transition to end-of-life care?

A challenge that many interviewees mentioned was when life-sustaining treatment was continued at the initiative of the physician, despite the fact that the nurses saw no signs of improvement in the patient and judged that the probability of survival was very low. There was concern that the patient’s suffering was thus prolonged and that the patient was deprived of the right to a peaceful and dignified death. There was also concern that continued life-sustaining treatment could give relatives false hope that the patient would survive, and that this prevented the family from supporting the patient at the end of life. Other challenges had to do with the dosage of pain and anti-anxiety drugs. The nurses naturally sought a good effect, but at the same time were afraid that too high doses could harm the patient and risk hastening death. The critical care nurses also pointed out that family members could request higher doses for the patient, which increased the concern about the risk of possibly shortening the patient’s life.

Other challenges had to do with situations where the patient’s preferences are unknown, perhaps because the patient is unconscious. Another challenge that was mentioned is when conscious patients have preferences that conflict with the nurses’ professional judgments and values. A patient may request that life-sustaining treatment cease, while the assessment is that the patient’s life can be significantly extended by continued treatment. Additional challenging situations can arise when the family wants to protect the patient from information that death is imminent, which violates the patient’s right to information about diagnosis and prognosis.

Finally, various situations surrounding organ donation were mentioned as ethically challenging. For example, family members may oppose the patient’s decision to donate organs. It may also happen that the family does not understand that the patient has suffered total brain infarction, and believes that the patient died during the donation surgery.

The results provide a good insight into ethical challenges in end-of-life care that critical care nurses experience. Read the article here: Critical care nurses’ experiences of ethical challenges in end-of-life care.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Palmryd L, Rejnö Å, Alvariza A, Godskesen T. Critical care nurses’ experiences of ethical challenges in end-of-life care. Nursing Ethics. 2024;0(0). doi:10.1177/09697330241252975

This post in Swedish

Ethics needs empirical input

Artificial consciousness and the need for epistemic humility

As I wrote in previous posts on this blog, the discussion about the possibility of engineering an artificial form of consciousness is growing along with the impressive advances of artificial intelligence (AI). Indeed, there are many questions arising from the prospect of an artificial consciousness, including its conceivability and its possible ethical implications. We deal with these kinds of questions as part of an EU multidisciplinary project, which aims to advance towards the development of artificial awareness.

Here I want to describe the kind of approach to the issue of artificial consciousness that I am inclined to consider the most promising. In a nutshell, the research strategy I propose for clarifying the empirical and theoretical issues of the feasibility and conceivability of artificial consciousness consists in starting from the form of consciousness we are familiar with (biological consciousness) and from its correlation with the organ that science has revealed to be crucial for it (the brain).

In a recent paper, available as a pre-print, I analysed the question of the possibility of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relationship to consciousness as a benchmark. In other words, to avoid vague and abstract speculations about artificial consciousness, I believe it is necessary to consider the correlation between brain and consciousness that resulted from biological evolution, and use this correlation as a reference model for the technical attempts to engineer consciousness.

In fact, there are several structural and functional features of the human brain that appear to be key for reaching human-like complex conscious experience, which current AI is still limited in emulating or accounting for. Among these are:

  • massive biochemical and neuronal diversity
  • long period of epigenetic development, that is, changes in the brain’s connections that eventually change the number of neurons and their connections in the brain network as a result of the interaction with the external environment
  • embodied sensorimotor experience of the world
  • spontaneous brain activity, that is, an intrinsic ability to act which is independent of external stimulation
  • autopoiesis, that is, the capacity to constantly reproduce and maintain itself
  • emotion-based reward systems
  • clear distinction between conscious and non-conscious representations, and the consequent unitary and specific properties of conscious representations
  • semantic competence of the brain, expressed in the capacity for understanding
  • the principle of degeneracy, which means that the same neuronal networks may support different functions, leading to plasticity and creativity.

These are just some of the brain features that arguably play a key role for biological consciousness and that may inspire current research on artificial consciousness.

Note that I am not claiming that the way consciousness arises from the brain is in principle the only possible way for consciousness to exist: this would amount to a form of biological chauvinism or anthropocentric narcissism. In fact, current AI is limited in its ability to emulate human consciousness. The reasons for these limitations are both intrinsic, that is, dependent on the structure and architecture of AI, and extrinsic, that is, dependent on the current stage of scientific and technological knowledge. Nevertheless, these limitations do not logically exclude that AI may achieve alternative forms of consciousness that are qualitatively different from human consciousness, and that these artificial forms of consciousness may be either more or less sophisticated, depending on the perspectives from which they are assessed.

In other words, we cannot exclude in advance that artificial systems are capable of achieving alien forms of consciousness, so different from ours that it may not even be appropriate to continue to call them consciousness, unless we clearly specify what is common and what is different in artificial and human consciousness. The problem is that we are limited in our language as well as in our thinking and imagination. We cannot avoid relying on what is within our epistemic horizon, but we should also avoid the fallacy of hasty generalization. Therefore, we should combine the need to start from the evolutionary correlation between brain and consciousness as a benchmark for artificial consciousness, with the need to remain humble and acknowledge the possibility that artificial consciousness may be of its own kind, beyond our view.

Written by…

Michele Farisco, Postdoc Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Approaching future issues

Of course, but: ethics in palliative practice

What is obvious in principle may turn out to be less obvious in practice. That would be at least one possible interpretation of a new study on ethics in palliative care.

Palliative care is given to patients with life-threatening illnesses that cannot be cured. Although palliative care can sometimes contribute to extending life somewhat, the focus is on preventing and alleviating symptoms in the final stages of life. The patient can also receive support to deal with worries about death, as well as guidance on practical issues regarding finances and relationships with relatives.

As in all care, respect for the patient’s autonomy is central in palliative care. To the extent possible, the patient should be given the opportunity to participate in the medical decision-making and receive information that corresponds to the patient’s knowledge and wishes for information. This means that if a patient does not wish information about their health condition and future prospects, this should also be respected. How do palliative care professionals handle such a situation, where a patient does not want to know?

The question is investigated in an interview study by Joar Björk, who is a clinical ethicist and physician in palliative home care. He conducted six focus group interviews with staff in palliative care in Sweden, a total of 33 participants. Each interview began with an outline of an ethically challenging patient case. A man with disseminated prostate cancer is treated by a palliative care team. He has repeatedly said that it is important for him to gain complete knowledge of the illness and how his death may look. Because the team has had to deal with many physical symptoms, they have not yet had time to answer his questions. When they finally get time to talk to him, he suddenly says that he does not want more information and that the issue should not be raised again. He gives no reason for his changed position, but nothing else seems to have changed and he seems to be in his right mind.

What did the interviewees say about the made-up case? The initial reaction was that it goes without saying that the patient has the right not to be informed. If a patient does not want information, then you must not impose the information on him, but must “meet the patient where he is.” But the interviewees still began to wonder about the context. Why did the man suddenly change his mind? Although the case description states that the man is competent to make decisions, this began to be doubted. Could someone close to him have influenced him? What at first seemed obvious later appeared to be problematic.

The interviewees emphasized that in a case like this one must dig deeper and investigate whether it is really true that the patient does not want to be informed. Maybe he said that he does not want to know to appear brave, or to protect loved ones from disappointing information? Preferences can also change over time. Suddenly you do not want what you just wanted, or thought you wanted. Palliative care is a process, it was emphasized in the interviews. Thanks to the fact that the care team has continuous contact with the patient, it was felt that one could carefully probe what he really wants at regular intervals.

Other values were also at stake for the interviewees, which could further contribute to undermining what at first seemed obvious. For example, that the patient has the right to a dignified, peaceful and good death. If he is uninformed that he has a very short time left to live, he cannot prepare for death, say goodbye to loved ones, or finish certain practical tasks. It may also be more difficult to plan and provide good care to an uninformed patient, and it may feel dishonest to know something important but not tell the person concerned. The interviewees also considered the consequences for relatives of the patient’s reluctance to be informed.

The main result of the study is that the care teams found it difficult to handle a situation where a patient suddenly changes his mind and does not want to be informed. Should they not have experienced these difficulties? Should they accept what at first seemed self-evident in principle, namely that the patient has the right not to know? The interviewees themselves emphasized that care is a process, a gradually unfolding relationship, and that it is important to be flexible and continuously probe the changing will of the patient. Perhaps, after all, it is not so difficult to deal with the case in practice, even if it is not as simple as it first appeared?

The interviewees seemed unhappy about the patient’s decision, but at the same time seemed to feel that there were ways forward and that time was working in their favor. In the end, they seemed to think, the patient will probably want to know after all. Should they not have had such an attitude towards the patient’s decision?

Read the author’s interesting discussion of the study results here: “It is very hard to just accept this” – a qualitative study of palliative care teams’ ethical reasoning when patients do not want information.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Björk, J. “It is very hard to just accept this” – a qualitative study of palliative care teams’ ethical reasoning when patients do not want information. BMC Palliative Care 23, 91 (2024). https://doi.org/10.1186/s12904-024-01412-8

This post in Swedish

We like real-life ethics

What is hidden behind the concept of research integrity?

In order to counteract scientific misconduct and harmful research, one often talks about protecting and supporting research integrity. The term seems to cover three different aspects of research, although the differences may not always be kept clearly in mind. The term can refer to the character traits of individual researchers, for example, that the researcher values truth and precision and has good intentions. But the term can also refer to the research process, for example, that the method, data and results are correctly chosen, well executed and faithfully reproduced in scientific publications. Third, the term can refer to research-related institutions and systems, such as universities, ethical review, legislation and scientific journals. In the latter case, it is usually emphasized that research integrity presupposes institutional conditions beyond the moral character of individual researchers.

Does such a varied concept have to be problematic? Of course not, but the concept of research integrity may be less suitable than it appears, argue Gert Helgesson and William Bülow in an article that you can read here: Research Integrity and Hidden Value Conflicts.

In the article, they first discuss some ambiguities in the three uses of the concept of research integrity. Which personal traits are desirable in researchers and which values should they endorse? Does the integrity of the research process cover all ethically relevant aspects of research, including the application process, for example? Are research-related institutions actors with research integrity, or are they rather means that support research integrity?

Mentioning these ambiguities is not, as I understand it, intended as a decisive objection. Nor do the authors think that it is generally a shortcoming if concepts have a wide and varied use. But the concept of research integrity risks hiding value conflicts through its varying use, they argue. Suppose someone claims that, in order to protect and support research integrity, we should criminalize serious forms of scientific misconduct. This is perhaps true if by research integrity we refer to aspects of the research process, for example, that results are accurate and reliable. But the stricter regulation of research that this entails risks reducing the responsibility of individual researchers, which can undermine research integrity in the first sense. How should we compare the value of research integrity in the different senses? What does it mean to “increase research integrity”?

The concept of research integrity is not useless, the authors point out. But if we want to make value conflicts visible, if we want to clarify what we mean by research integrity and which forms of integrity are most important, as well as clear up the ambiguities mentioned above, then we are examining issues that are appropriately described as issues of research ethics.

If I understand the authors correctly, they mean that ethical questions about research should be characterized as research ethics. It is unfortunate that “research integrity” has come to function as an alternative designation for ethical questions about research. Everything becomes clearer if questions about “research integrity,” should we still want to use the concept, fall under research ethics.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Helgesson, G., Bülow, W. Research Integrity and Hidden Value Conflicts. Journal of Academic Ethics 21, 113–123 (2023). https://doi.org/10.1007/s10805-021-09442-0

This post in Swedish

We like ethics

Finding the way when there is none

A difficulty for academic writers is managing the dual role of both knowing and not knowing, of both showing the way and not finding it. There is an expectation that such writers should already have the knowledge they are writing about, that they should know the way they show others right from the start. As readers, we are naturally delighted and grateful to share the authors’ knowledge and insight.

But academic writers usually write because something strikes them as puzzling. They write for the same reason that readers read: because they lack the knowledge and clarity required to find the way through the questions. This lack stimulates them to research and write. The way that did not exist takes shape when they tackle their questions.

This dual role as a writer often worries students who are writing an essay or dissertation for the first time. They can easily perceive themselves as insufficiently knowledgeable to have the right to tackle the work. Since they lack the expertise that they believe is required of academic writers from the outset, does it not follow that they are not yet mature enough to begin the work? Students are easily paralyzed by the knowledge demands they place on themselves. Therefore, they hide their questions instead of tackling them.

It always comes as a surprise that the way actually takes shape as soon as we ask for it. Who dares to believe that? Research is a dynamic interplay with our questions: with ignorance and lack of clarity. An academic writer is not primarily someone who knows a lot and who therefore can show others the way, but someone who dares and is even stimulated by this duality of both knowing and not knowing, of both finding and not finding the way.

If we have something important to learn from the exploratory writers, it is perhaps that living knowledge cannot be separated as pure knowledge and nothing but knowledge. Knowledge always interacts with its opposite. Therefore, essay-writing students already have the most important asset to be able to write in an exploratory way, namely the questions they are wondering about. Do not hide the questions, but let them take center stage. Let the text revolve around what you do not know. Knowledge without contact with ignorance is dead. It solves no one’s problem, it answers no one’s question, it removes no one’s confusion. So let the questions sprout in the soil of the text, and the way will soon take shape.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Thinking about authorship

Objects that behave humanly

Many forms of artificial intelligence could be considered objects that behave humanly. However, it does not take much for us humans to personify non-living objects. We get angry at the car that does not start or the weather that does not let us have a picnic, as if they were against us. Children spontaneously personify simple toys and can describe the relationship between geometric shapes as, “the small circle is trying to escape from the big triangle.”

We are increasingly encountering artificial intelligence designed to give a human impression, for example in the form of chatbots for customer service when shopping online. Such AI can even be equipped with personal traits, a persona that becomes an important part of the customer experience. The chatbot can suggest even more products for you and effectively generate additional sales based on the data collected about you. No wonder the interest in developing human-like AI is huge. Part of it has to do with user-friendliness, of course, but at the same time, an AI that you find personally attractive will grab your attention. You might even like the chatbot or feel it would be impolite to turn it off. During the time that the chatbot has your attention, you are exposed to increasingly customized advertising and receive more and more package offers.

You can read about this and much more in an article about human relationships with AI designed to give a human impression: Human/AI relationships: challenges, downsides, and impacts on human/human relationships. The authors discuss a large number of examples of such AI, ranging from the chatbots above to care robots and AI that offers psychotherapy, or AI that people chat with to combat loneliness. The opportunities are great, but so are the challenges and possible drawbacks, which the article highlights.

Perhaps particularly interesting is the insight into how effectively AI can create confusion by exposing us to objects equipped with human response patterns. Our natural tendency to anthropomorphize non-human things meets high-tech efforts to produce objects that are engineered to behave humanly. Here it is no longer about imaginatively projecting social relations onto non-human objects, as in the geometric example above. In interaction with AI objects, we react to subtle social cues that the objects are equipped with. We may even feel a moral responsibility for such AI and grieve when companies terminate or modify it.

The authors urge caution so that we do not overinterpret AI objects as persons. At the same time, they warn of the risk that, by avoiding empathic responses, we become less sensitive to real people in need. Truly confusing!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Zimmerman, A., Janhonen, J. & Beer, E. Human/AI relationships: challenges, downsides, and impacts on human/human relationships. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00348-8

This post in Swedish

We recommend readings

A way out of the Babylonian confusion of tongues in the theorizing of consciousness?

There is today a wide range of competing theories, each in its own way trying to account for consciousness in neurobiological terms. Parallel to the “Babylonian confusion of tongues” and the inability to collaborate that this entails in the theorizing of consciousness, progress has been made in the empirical study of the brain. Advanced methods for imaging and measuring the brain and its activities map structures and functions that are possibly relevant for consciousness. The problem is that these empirical data once again inspire a wide range of theories about the place of consciousness in the brain.

It has been pointed out that a fragmented intellectual state such as this, where competing schools of thought advocate their own theories based on their own starting points – with no common framework or paradigm within which the proposals can be compared and assessed – is typical of a pre-scientific stage of a possibly nascent science. Given that the divergent theories each claim scientific status, this is of course troubling. But maybe the theories are not as divergent as they seem?

It has been suggested that several of the theories, upon closer analysis, possibly share certain fundamental ideas about consciousness, which could form the basis of a future unified theory. Today I want to recommend an article that self-critically examines this hope for a way out of the Babylonian confusion. If the pursuit of a unified theory of consciousness is not to degenerate into a kind of “manufactured uniformity,” we must first establish that the theories being integrated are indeed comparable in relevant respects. But can we identify such common denominators among the competing theories, which could support the development of an overarching framework for scientific research? That is the question that Kathinka Evers, Michele Farisco and Cyriel Pennartz investigate for some of the most debated neuroscientifically oriented theories of consciousness.

What do the authors conclude? Something surprising! They come to the conclusion that it is actually quite possible to identify a number of common denominators, which show patterns of similarities and differences among the theories, but that this is still not the way to an overall theory of consciousness that supports hypotheses that can be tested experimentally. Why? Partly because the common denominators, such as “information,” are sometimes too general to function as core concepts in research specifically about consciousness. Partly because theories that have common denominators can, after all, be conceptually very different.

The authors therefore suggest, as I understand them, that a more practicable approach could be to develop a common methodological approach to testing hypotheses about relationships between consciousness and the brain. It is perhaps only in the empirical workshop, open to the unexpected, so to speak, that a scientific framework, or paradigm, can possibly begin to take shape. Not by deliberately formulating a unified theory based on the identification of common denominators among competing theories, which risks manufacturing a facade of uniformity.

The article is written in a philosophically open-minded spirit, without ties to specific theories. It can thereby stimulate the creative collaboration that has so far been inhibited by self-absorbed competition between schools of thought. Read the article here: Assessing the commensurability of theories of consciousness: On the usefulness of common denominators in differentiating, integrating and testing hypotheses.

I would like to conclude by mentioning an easily neglected aspect of how scientific paradigms work (according to Thomas Kuhn). A paradigm does not only generate possible explanations of phenomena. It also generates the problems that researchers try to solve within the paradigm. Quantum mechanics and evolutionary biology enabled new questions that made nature problematic in new explorable ways. A possible future paradigm for scientific consciousness research would, if this is correct, not answer the questions about consciousness that baffle us today (at least not without first reinterpreting them). Rather, it would create new, as yet unasked questions, which are explorable within the paradigm that generates them.

The authors of the article may therefore be right that the most fruitful thing at the moment is to ask probing questions that help us delineate what actually lends itself to investigation, rather than to start by manufacturing overall theoretical uniformity. The latter approach would possibly put the cart before the horse.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Evers, K., Farisco, M., Pennartz, C.M.A. Assessing the commensurability of theories of consciousness: On the usefulness of common denominators in differentiating, integrating and testing hypotheses. Consciousness and Cognition, Volume 119, 2024.

This post in Swedish

Minding our language

A strategy for a balanced discussion of conscious AI

Science and technology advance so rapidly that it is hard to keep up with them. This is true not only for the general public, but also for the scientists themselves and for scholars from fields like ethics and regulation, who find it increasingly difficult to predict what will come next. Today AI is among the most advanced scientific endeavors, raising both significant expectations and more or less exaggerated worries. This is mainly due to the fact that AI is a concept so emotionally, socially, and politically charged as to make a balanced evaluation very difficult. It is even more so when capacities and features that are considered almost uniquely human, or at least shared with a limited number of other animals, are attributed to AI. This is the case with consciousness.

Recently, there has been a lively debate about the possibility of developing conscious AI. What are the reasons for this great interest? I think it has to do with the mentioned rapid advances in science and technology, as well as new intersections between different disciplines. Specifically, I think that three factors play an important role: the significant advancement in understanding the cerebral bases of conscious perception, the impressive achievements of AI technologies, and the increasing interaction between neuroscience and AI. The latter factor, in particular, resulted in so-called brain-inspired AI, a form of AI that is explicitly modeled on our brains.

This growing interest in conscious AI cannot ignore certain risks at the theoretical, practical, and ethical levels. Theoretically, there is no shared, overarching theory or definition of consciousness. Discussions about what consciousness is, what the criteria for a good scientific theory should be, and how to compare the various proposed theories of consciousness are still open and difficult to resolve.

Practically, the challenge is how to identify conscious systems. In other words, what indicators reliably show whether a system, either biological or artificial, is conscious?

Finally, at the ethical level several issues arise. Here the discussion is very lively, with some calling for an international moratorium on all attempts to build artificial consciousness. This extreme position is motivated by the need to avoid any form of suffering, including possibly undetectable artificial forms of suffering. Others question the very rationale for working towards conscious AI: why should we open another, likely riskier box, when society cannot really handle the impact of AI as it already exists, as illustrated by Large Language Models? For instance, chatbots like ChatGPT show an impressive capacity to interact with humans through natural language, which creates a strong feeling that these AI systems have features like consciousness, intentionality, and agency, among others. This attribution of human qualities to AI eventually affects the way we think about it, including how much weight and value we give to the answers that these chatbots provide.

The two arguments above illustrate possible ethical concerns that can be raised against the development of conscious artificial systems. Yet are the concerns justified? In a recent chapter, I propose a change in the underlying approach to the issue of artificial consciousness, in order to avoid the risk of vague and insufficiently multidimensional analyses. My point is that consciousness is not a unified, abstract entity, but rather like a prism, which includes different dimensions that could possibly have different levels. Based on a multidimensional view of consciousness, in a previous paper I contributed a list of indicators that are relevant also for identifying consciousness in artificial systems. In principle, it is possible that AI can manifest some dimensions of consciousness (for instance, those related to sophisticated cognitive tasks) while lacking others (for instance, those related to emotional or social tasks). In this way, the indicators provide not only a practical tool for identifying conscious systems, but also an ethical tool for making the discussion of possible conscious AI more balanced and realistic. The question of whether an AI is conscious cannot be treated as a simple yes/no question: there are several nuances that make the answer more complex.

Admittedly, the indicators mentioned above are subject to a number of limitations, including the fact that they were developed for humans and animals, not specifically for AI. For this reason, research is still ongoing on how to adapt these indicators, or possibly develop new indicators, specific to AI. If you want to read more, you can find my chapter here: The ethical implications of indicators of consciousness in artificial systems.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Michele Farisco. The ethical implications of indicators of consciousness in artificial systems. Developments in Neuroethics and Bioethics. Available online 1 March 2024. https://doi.org/10.1016/bs.dnb.2024.02.009

We want solid foundations

The doubtful beginnings of philosophy

Philosophy begins with doubt; this has been emphasized by many philosophers. But what does it mean to doubt? To harbor suspicions? To criticize accepted beliefs? In that case, doubt is based on thinking we know better. We believe that we have good reason to doubt.

Is that doubting? Thinking that you know? It sounds paradoxical, but it is probably the most common form of doubt. We doubt, and think we can easily explain why. But this is hardly the doubt of philosophy. For in that case philosophy would not begin with doubt, but with belief or knowledge. If a philosopher doubts, and readily justifies the doubt, the philosopher will soon doubt her own grounds for doubting. To doubt, as a philosopher doubts, is to doubt one's own thought. It is to admit: I don't know.

Perhaps I have already quoted Socrates’ famous self-description too many times, but there is a treasure buried in these simple words:

“when I don’t know things, I don’t think that I do either.”

The oracle at Delphi had said of Socrates that he was the wisest of all. Since Socrates did not consider himself more knowledgeable than others, he found the statement puzzling. What could the oracle mean? The self-description above was Socrates’ solution to the riddle. If I am wiser than others, he thought, then my wisdom cannot consist in knowing more than others, because I do not. But I have a peculiar trait, and that is that when I do not know, I do not think I know either. Everyone I question here in Athens, on the other hand, seems to have the default attitude that they know, even when I can demonstrate that they do not. Whatever I ask them, they think they know the answer! I am not like that. If I do not know, I do not react as if I knew either. Perhaps this was what the oracle meant by my superior wisdom?

So, what did Socrates' wisdom consist in? In beginning with doubt. But must he not have had reason to doubt? Surely, he must have known something, had some intuition at least, which gave him reason to doubt! Curiously, Socrates seems to have doubted without good reason. He said that he heard an inner voice urging him to stop and be silent, just as he was about to speak verbosely as if he knew something: Socrates' demon. But how could an "inner voice" make Socrates wise? Is that not rather a sure sign of madness?

I do not think we should make too much of the fact that Socrates chose to describe the situation in terms of an inner voice. The important thing is that he does not react, when he does not know. Imagine someone who has become clearly aware of her own reflex to get angry. The moment she notices that she is about to get angry, she becomes completely calm instead. The drama is over before it begins. Likewise, Socrates became completely calm the moment he noted his own reflex to start talking as if he knew something. He was clearly aware of his own knowledge reflex.

What is the knowledge reflex? We have already felt its activity in this post. It struck us when we thought we knew that a wise person cannot doubt without reason. It almost drove us mad! If Socrates doubted, he must have had good reason! If an "inner voice" inspired doubt, it would not be wisdom, but a sure sign of madness! This is the knowledge reflex: suddenly not being able to stop talking, as if we had particularly good reason to assert ourselves. Socrates never reacted that way. In those situations, he noted the knowledge reflex and immediately became perfectly calm.

The value of becoming completely calm just when the knowledge reflex wants to set us in motion is that it makes us free to examine ourselves. If we let the knowledge reflex drive our doubts – "this is highly dubious, because…" – we would not question ourselves, but assert ourselves. We would doubt the way we humans generally doubt, because we think we have reason to doubt. Of course, Socrates does not doubt arbitrarily, like a madman, but the source of his doubt becomes apparent only in retrospect. Philosophy is love for the clarity we lack when philosophizing begins. Without this loving attitude towards what we do not know, our collective human knowledge risks becoming a colossus with feet of clay – is it already wobbly?

When the knowledge reflex no longer controls us, but is numbed by philosophical self-doubt, we are free to think independently and clearly. Therefore, philosophy begins with doubt and not with belief or knowledge.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Plato. “The Apology of Socrates.” In The Last Days of Socrates, translated by Christopher Rowe, 32-62. Penguin Books, 2010.

This post in Swedish

Thinking about thinking
