A blog from the Centre for Research Ethics & Bioethics (CRB)


Why should we try to build conscious AI?

In a recent post on this blog I summarized the main points of a pre-print where I analyzed the prospect of artificial consciousness from an evolutionary perspective. I took the brain and its architecture as a benchmark for addressing the technical feasibility and conceptual plausibility of engineering consciousness in artificial intelligence systems. The pre-print has been accepted and it is now available as a peer-reviewed article online.

In this post I want to focus on one particular point that I analyzed in the paper, and which I think is not always adequately accounted for in the debate about AI consciousness: what are the benefits of pursuing artificial consciousness in the first place, for science and for society at large? Why should we attempt to engineer subjective experience in AI systems? What can we realistically expect from such an endeavour?

There are several possible answers to these questions. At the epistemological level (with reference to what we can know) it is possible that developing artificial systems that replicate some features of our conscious experience could enable us to better understand biological consciousness, through similarities as well as through differences. At the technical level (with reference to what we can do) it is possible that the development of artificial consciousness would be a game-changer in AI, for instance giving AI the capacity for intentionality and theory of mind, and for anticipating the consequences not only of human decisions, but also of its own “actions.” At the societal and ethical level (with reference to our co-existence with others and to what is good and bad for us) especially the latter capabilities (intentionality, theory of mind, and anticipation) could arguably help AI to better inform humans about potential negative impacts of its functioning and use on society, and to help avoid them while favouring positive impacts. Of course, on the negative side, as shown by human history, both intentionality and theory of mind may be used by the AI for negative purposes, for instance to favour the AI’s own interests or the interests of the limited groups that control it. Human intentionality has not always favoured out-group individuals or species, or indeed the planet as a whole. This point connects to one of the most debated issues in AI ethics, the so-called AI alignment problem: how can we be sure that AI systems conform to human values? How can we make AI aligned with our own interests? And whose values and interests should we take as reference? Cultural diversity is an important and challenging factor to take into account in these reflections.

I think there is also a question that precedes that of AI value alignment: can AI really have values? In other words, is the capacity for evaluation that possibly drives the elaboration of values in AI the same as in humans? And is AI capable of evaluating its own values, including its ethical values, a reflective process that drives the self-critical elaboration of values in humans, making us evaluative subjects? In fact, the capacity for evaluation (which may be defined as the sensitivity to reward signals and the ability to discriminate between good and bad things in the world on the basis of specific needs, motivations, and goals) is a defining feature of biological organisms, and specifically of the brain. AI may be programmed to discriminate between what humans consider to be good and bad things in the world, and it is also conceivable that AI will be less dependent on humans in applying this distinction. However, this does not entail that it “evaluates” in the sense that it autonomously performs an evaluation and subjectively experiences its evaluation.

It is possible that an AI system may approximate the diversity of cognitive processes that the brain has access to, for instance the processing of various sensory modalities, while AI remains unable to incorporate the values attributed to the processed information and to its representation, as the human brain can do. In other words, to date AI remains devoid of any experiential content, and for this reason, for the time being, AI is different from the human brain because of its inability to attribute experiential value to information. This is the fundamental reason why present AI systems lack subjective experience. If we want to refer to needs (which are a prerequisite for the capacity for evaluation), current AI appears limited to epistemic needs, without access to, for example, moral and aesthetic needs. Therefore, the values that AI has at least so far been able to develop or be sensitive to are limited to the epistemic level, while morality and aesthetics are beyond our present technological capabilities. I do not deny that overcoming this limitation may be a matter of further technological progress, but for the time being we should carefully consider this limitation in our reflections about whether it is wise to strive for conscious AI systems. If the form of consciousness that we can realistically aspire to engineer today is limited to the cognitive dimension, without any sensibility to ethical deliberation and aesthetic appreciation, I am afraid that the risk of misusing or exploiting it for selfish purposes is quite high.

One could object that an AI system limited to epistemic values is not really conscious (at least not in a fully human sense). However, the fact remains that its capacity to interact with the world to achieve the goals it has been programmed to achieve would be greatly enhanced if it had this cognitive form of consciousness. This increases our responsibility to consider, even if only hypothetically, whether conscious AI, even if limited and much more rudimentary than human consciousness, may be for the better or for the worse.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Michele Farisco, Kathinka Evers, Jean-Pierre Changeux. Is artificial consciousness achievable? Lessons from the human brain. Neural Networks, Volume 180, 2024. https://doi.org/10.1016/j.neunet.2024.106714

We like challenging questions

Philosophy on a chair

Philosophy is an unusual activity, partly because it can be conducted to such a large extent while sitting still. Philosophers do not need research vessels, laboratories or archives to work on their questions. Just a chair to sit on. Why is it like that?

The answer is that philosophers examine our ways of thinking, and we are never anywhere but where we are. A chair takes us exactly as far as we need: to ourselves. Philosophizing on a chair can of course look self-absorbed. How can we learn anything significant from “thinkers” who neither seem to move nor look around the world? If we happen to see them sitting still in their chairs and thinking, they can undeniably appear to be cut off from the complex world in which the rest of us must live and navigate. Through its focus on human thought, philosophy can seem to ignore our human world and not be of any use to the rest of us.

What we overlook with such an objection to philosophy is that our complex human world already reflects to a large extent our human ways of thinking. To the extent that these ways of thinking are confused, limited, one-sided and unjust, our world will also be confused, limited, one-sided and unjust. When we live and move in this human world, which reflects our ways of thinking, can it not be said that we live somewhat inwardly, without noticing it? We act in a world that reflects ourselves, including the shortcomings in our ways of thinking.

If so, maybe it is not so introverted to sit down and examine these ways of thinking? On the contrary, this seems to enable us to free ourselves and the world from human thought patterns that sometimes limit and distort our perspectives without us realizing it. Of course, research vessels, laboratories and archives also broaden our perspectives on the world. But we already knew that. I just wanted to open our eyes to a more unexpected possibility: that even a chair can take us far, if we practice philosophy on it.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

We challenge habits of thought

Artificial consciousness and the need for epistemic humility

As I wrote in previous posts on this blog, the discussion about the possibility of engineering an artificial form of consciousness is growing along with the impressive advances of artificial intelligence (AI). Indeed, there are many questions arising from the prospect of an artificial consciousness, including its conceivability and its possible ethical implications. We deal with these kinds of questions as part of an EU multidisciplinary project, which aims to advance towards the development of artificial awareness.

Here I want to describe the kind of approach to the issue of artificial consciousness that I am inclined to consider the most promising. In a nutshell, the research strategy I propose for clarifying the empirical and theoretical issues of the feasibility and conceivability of artificial consciousness consists in starting from the form of consciousness we are familiar with (biological consciousness) and from its correlation with the organ that science has revealed to be crucial for it (the brain).

In a recent paper, available as a pre-print, I analysed the question of the possibility of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relationship to consciousness as a benchmark. In other words, to avoid vague and abstract speculations about artificial consciousness, I believe it is necessary to consider the correlation between brain and consciousness that resulted from biological evolution, and use this correlation as a reference model for the technical attempts to engineer consciousness.

In fact, there are several structural and functional features of the human brain that appear to be key for reaching human-like complex conscious experience, which current AI is still limited in emulating or accounting for. Among these are:

  • massive biochemical and neuronal diversity
  • long period of epigenetic development, that is, changes that gradually reshape the number of neurons and their connections in the brain network as a result of interaction with the external environment
  • embodied sensorimotor experience of the world
  • spontaneous brain activity, that is, an intrinsic ability to act which is independent of external stimulation
  • autopoiesis, that is, the capacity to constantly reproduce and maintain itself
  • emotion-based reward systems
  • clear distinction between conscious and non-conscious representations, and the consequent unitary and specific properties of conscious representations
  • semantic competence of the brain, expressed in the capacity for understanding
  • the principle of degeneracy, which means that the same neuronal networks may support different functions, leading to plasticity and creativity.

These are just some of the brain features that arguably play a key role for biological consciousness and that may inspire current research on artificial consciousness.

Note that I am not claiming that the way consciousness arises from the brain is in principle the only possible way for consciousness to exist: this would amount to a form of biological chauvinism or anthropocentric narcissism. In fact, current AI is limited in its ability to emulate human consciousness. The reasons for these limitations are both intrinsic, that is, dependent on the structure and architecture of AI, and extrinsic, that is, dependent on the current stage of scientific and technological knowledge. Nevertheless, these limitations do not logically exclude that AI may achieve alternative forms of consciousness that are qualitatively different from human consciousness, and that these artificial forms of consciousness may be either more or less sophisticated, depending on the perspectives from which they are assessed.

In other words, we cannot exclude in advance that artificial systems are capable of achieving alien forms of consciousness, so different from ours that it may not even be appropriate to continue to call it consciousness, unless we clearly specify what is common and what is different in artificial and human consciousness. The problem is that we are limited in our language as well as in our thinking and imagination. We cannot avoid relying on what is within our epistemic horizon, but we should also avoid the fallacy of hasty generalization. Therefore, we should combine the need to start from the evolutionary correlation between brain and consciousness as a benchmark for artificial consciousness, with the need to remain humble and acknowledge the possibility that artificial consciousness may be of its own kind, beyond our view.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Approaching future issues

Finding the way when there is none

A difficulty for academic writers is managing the dual role of both knowing and not knowing, of both showing the way and not finding it. There is an expectation that such writers should already have the knowledge they are writing about, that they should know the way they show others right from the start. As readers, we are naturally delighted and grateful to share the authors’ knowledge and insight.

But academic writers usually write because something strikes them as puzzling. They write for the same reason that readers read: because they lack the knowledge and clarity required to find the way through the questions. This lack stimulates them to research and write. The way that did not exist takes shape as they tackle their questions.

This dual role as a writer often worries students who are writing an essay or dissertation for the first time. They can easily perceive themselves as insufficiently knowledgeable to have the right to tackle the work. Since they lack the expertise that they believe is required of academic writers from the outset, does it not follow that they are not yet mature enough to begin the work? Students are easily paralyzed by the knowledge demands they place on themselves. Therefore, they hide their questions instead of tackling them.

It always comes as a surprise that the way actually takes shape as soon as we ask for it. Who dares to believe that? Research is a dynamic interplay with our questions: with ignorance and lack of clarity. An academic writer is not primarily someone who knows a lot and who therefore can show others the way, but someone who dares and is even stimulated by this duality of both knowing and not knowing, of both finding and not finding the way.

If we have something important to learn from the exploratory writers, it is perhaps that living knowledge cannot be isolated as pure knowledge and nothing but knowledge. Knowledge always interacts with its opposite. Therefore, essay writing students already have the most important asset to be able to write in an exploratory way, namely the questions they are wondering about. Do not hide the questions, but let them take center stage. Let the text revolve around what you do not know. Knowledge without contact with ignorance is dead. It solves no one’s problem, it answers no one’s question, it removes no one’s confusion. So let the questions sprout in the soil of the text, and the way will soon take shape.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Thinking about authorship

Objects that behave humanly

Many forms of artificial intelligence could be considered objects that behave humanly. However, it does not take much for us humans to personify non-living objects. We get angry at the car that does not start or the weather that does not let us have a picnic, as if they were against us. Children spontaneously personify simple toys and can describe the relationship between geometric shapes as, “the small circle is trying to escape from the big triangle.”

We are increasingly encountering artificial intelligence designed to give a human impression, for example in the form of chatbots for customer service when shopping online. Such AI can even be equipped with personal traits, a persona that becomes an important part of the customer experience. The chatbot can suggest even more products for you and effectively generate additional sales based on the data collected about you. No wonder the interest in developing human-like AI is huge. Part of it has to do with user-friendliness, of course, but at the same time, an AI that you find personally attractive will grab your attention. You might even like the chatbot or feel it would be impolite to turn it off. During the time that the chatbot has your attention, you are exposed to increasingly customized advertising and receive more and more package offers.

You can read about this and much more in an article about human relationships with AI designed to give a human impression: Human/AI relationships: challenges, downsides, and impacts on human/human relationships. The authors discuss a large number of examples of such AI, ranging from the chatbots above to care robots and AI that offers psychotherapy, or AI that people chat with to combat loneliness. The opportunities are great, but so are the challenges and possible drawbacks, which the article highlights.

Perhaps particularly interesting is the insight into how effectively AI can create confusion by exposing us to objects equipped with human response patterns. Our natural tendency to anthropomorphize non-human things meets high-tech efforts to produce objects that are engineered to behave humanly. Here it is no longer about imaginatively projecting social relations onto non-human objects, as in the geometric example above. In interaction with AI objects, we react to subtle social cues that the objects are equipped with. We may even feel a moral responsibility for such AI and grieve when companies terminate or modify it.

The authors urge caution so that we do not overinterpret AI objects as persons. At the same time, they warn of the risk that, by avoiding empathic responses, we become less sensitive to real people in need. Truly confusing!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Zimmerman, A., Janhonen, J. & Beer, E. Human/AI relationships: challenges, downsides, and impacts on human/human relationships. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00348-8

This post in Swedish

We recommend readings

A way out of the Babylonian confusion of tongues in the theorizing of consciousness?

There is today a wide range of competing theories, each in its own way trying to account for consciousness in neurobiological terms. Parallel to the “Babylonian confusion of tongues” and inability to collaborate that this entails in the theorizing of consciousness, progress has been made in the empirical study of the brain. Advanced methods for imaging and measuring the brain and its activities map structures and functions that are possibly relevant for consciousness. The problem is that these empirical data once again inspire a wide range of theories about the place of consciousness in the brain.

It has been pointed out that a fragmented intellectual state such as this, where competing schools of thought advocate their own theories based on their own starting points – with no common framework or paradigm within which the proposals can be compared and assessed – is typical of a pre-scientific stage of a possibly nascent science. Given that the divergent theories each claim scientific status, this is of course troubling. But maybe the theories are not as divergent as they seem?

It has been suggested that several of the theories, upon closer analysis, possibly share certain fundamental ideas about consciousness, which could form the basis of a future unified theory. Today I want to recommend an article that self-critically examines this hope for a way out of the Babylonian confusion. If the pursuit of a unified theory of consciousness is not to degenerate into a kind of “manufactured uniformity,” we must first establish that the theories being integrated are indeed comparable in relevant respects. But can we identify such common denominators among the competing theories, which could support the development of an overarching framework for scientific research? That is the question that Kathinka Evers, Michele Farisco and Cyriel Pennartz investigate for some of the most debated neuroscientifically oriented theories of consciousness.

What do the authors conclude? Something surprising! They come to the conclusion that it is actually quite possible to identify a number of common denominators, which show patterns of similarities and differences among the theories, but that this is still not the way to an overall theory of consciousness that supports hypotheses that can be tested experimentally. Why? Partly because the common denominators, such as “information,” are sometimes too general to function as core concepts in research specifically about consciousness. Partly because theories that have common denominators can, after all, be conceptually very different.

The authors therefore suggest, as I understand them, that a more practicable approach could be to develop a common methodological approach to testing hypotheses about relationships between consciousness and the brain. It is perhaps only in the empirical workshop, open to the unexpected, so to speak, that a scientific framework, or paradigm, can possibly begin to take shape. Not by deliberately formulating a unified theory based on the identification of common denominators among competing theories, which risks manufacturing a facade of uniformity.

The article is written in a philosophically open-minded spirit, without ties to specific theories. It can thereby stimulate the creative collaboration that has so far been inhibited by self-absorbed competition between schools of thought. Read the article here: Assessing the commensurability of theories of consciousness: On the usefulness of common denominators in differentiating, integrating and testing hypotheses.

I would like to conclude by mentioning an easily neglected aspect of how scientific paradigms work (according to Thomas Kuhn). A paradigm does not only generate possible explanations of phenomena. It also generates the problems that researchers try to solve within the paradigm. Quantum mechanics and evolutionary biology enabled new questions that made nature problematic in new explorable ways. A possible future paradigm for scientific consciousness research would, if this is correct, not answer the questions about consciousness that baffle us today (at least not without first reinterpreting them). Rather, it would create new, as yet unasked questions, which are explorable within the paradigm that generates them.

The authors of the article may therefore be right that the most fruitful thing at the moment is to ask probing questions that help us delineate what actually lends itself to investigation, rather than to start by manufacturing overall theoretical uniformity. The latter approach would possibly put the cart before the horse.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

K. Evers, M. Farisco, C.M.A. Pennartz. Assessing the commensurability of theories of consciousness: On the usefulness of common denominators in differentiating, integrating and testing hypotheses. Consciousness and Cognition, Volume 119, 2024.

This post in Swedish

Minding our language

A strategy for a balanced discussion of conscious AI

Science and technology advance so rapidly that it is hard to keep up with them. This is true not only for the general public, but also for the scientists themselves and for scholars from fields like ethics and regulation, who find it increasingly difficult to predict what will come next. Today AI is among the most advanced scientific endeavors, raising both significant expectations and more or less exaggerated worries. This is mainly due to the fact that AI is a concept so emotionally, socially, and politically charged as to make a balanced evaluation very difficult. It is even more so when capacities and features that are considered almost uniquely human, or at least shared with a limited number of other animals, are attributed to AI. This is the case with consciousness.

Recently, there has been a lively debate about the possibility of developing conscious AI. What are the reasons for this great interest? I think it has to do with the mentioned rapid advances in science and technology, as well as new intersections between different disciplines. Specifically, I think that three factors play an important role: the significant advancement in understanding the cerebral bases of conscious perception, the impressive achievements of AI technologies, and the increasing interaction between neuroscience and AI. The latter factor, in particular, resulted in so-called brain-inspired AI, a form of AI that is explicitly modeled on our brains.

This growing interest in conscious AI should not blind us to certain risks, which are of theoretical, practical, and ethical relevance. Theoretically, there is not a shared, overarching theory or definition of consciousness. Discussions about what consciousness is, what the criteria for a good scientific theory should be, and how to compare the various proposed theories of consciousness are still open and difficult to resolve.

Practically, the challenge is how to identify conscious systems. In other words, what indicators can reliably show whether a system, either biological or artificial, is conscious?

Finally, at the ethical level several issues arise. Here the discussion is very lively, with some calling for an international moratorium on all attempts to build artificial consciousness. This extreme position is motivated by the need to avoid any form of suffering, including possibly undetectable artificial forms of suffering. Others question the very reason for working towards conscious AI: why should we open another, likely riskier box, when society cannot really handle the impact of AI, as illustrated by Large Language Models? For instance, chatbots like ChatGPT show an impressive capacity to interact with humans through natural language, which creates a strong feeling that these AI systems have features like consciousness, intentionality, and agency, among others. This attribution of human qualities to AI eventually impacts the way we think about it, including how much weight and value we give to the answers that these chatbots provide.

The two arguments above illustrate possible ethical concerns that can be raised against the development of conscious artificial systems. Yet are the concerns justified? In a recent chapter, I propose a change in the underlying approach to the issue of artificial consciousness. This is to avoid the risk of analyses that are vague and insufficiently multidimensional. My point is that consciousness is not a unified, abstract entity, but rather like a prism, which includes different dimensions that could possibly have different levels. Based on a multidimensional view of consciousness, in a previous paper I contributed a list of indicators that are relevant also for identifying consciousness in artificial systems. In principle, it is possible that AI can manifest some dimensions of consciousness (for instance, those related to sophisticated cognitive tasks) while lacking others (for instance, those related to emotional or social tasks). In this way, the indicators provide not only a practical tool for identifying conscious systems, but also an ethical tool to make the discussion on possible conscious AI more balanced and realistic. The question whether some AI is conscious or not cannot be considered a yes/no question: there are several nuances that make the answer more complex.

Indeed, the indicators mentioned above are affected by a number of limitations, including the fact that they are developed for humans and animals, not specifically for AI. For this reason, research is still ongoing on how to adapt these indicators or possibly develop new indicators specific for AI. If you want to read more, you can find my chapter here: The ethical implications of indicators of consciousness in artificial systems.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Michele Farisco. The ethical implications of indicators of consciousness in artificial systems. Developments in Neuroethics and Bioethics. Available online 1 March 2024. https://doi.org/10.1016/bs.dnb.2024.02.009

We want solid foundations

The doubtful beginnings of philosophy

Philosophy begins with doubt, this has been emphasized by many philosophers. But what does it mean to doubt? To harbor suspicions? To criticize accepted beliefs? In that case, doubt is based on thinking we know better. We believe that we have good reason to doubt.

Is that doubting? Thinking that you know? It sounds paradoxical, but it is probably the most common form of doubt. We doubt, and think we can easily explain why. But this is hardly the doubt of philosophy. For in that case philosophy would not begin with doubt, but with belief or knowledge. If a philosopher doubts, and readily justifies that doubt, the philosopher will soon doubt her own motive for doubting. To doubt, as a philosopher doubts, is to doubt one’s own thought. It is to admit: I don’t know.

Perhaps I have already quoted Socrates’ famous self-description too many times, but there is a treasure buried in these simple words:

“when I don’t know things, I don’t think that I do either.”

The oracle at Delphi had said of Socrates that he was the wisest of all. Since Socrates did not consider himself more knowledgeable than others, he found the statement puzzling. What could the oracle mean? The self-description above was Socrates’ solution to the riddle. If I am wiser than others, he thought, then my wisdom cannot consist in knowing more than others, because I do not. But I have a peculiar trait, and that is that when I do not know, I do not think I know either. Everyone I question here in Athens, on the other hand, seems to have the default attitude that they know, even when I can demonstrate that they do not. Whatever I ask them, they think they know the answer! I am not like that. If I do not know, I do not react as if I knew either. Perhaps this was what the oracle meant by my superior wisdom?

So, what did Socrates’ wisdom consist in? In beginning with doubt. But must he not have had reason to doubt? Surely, he must have known something, some intuition at least, which gave him reason to doubt! Curiously, Socrates seems to have doubted without good reason. He said that he heard an inner voice urging him to stop and be silent, just as he was about to speak verbosely as if he knew something: Socrates’ demon. But how could an “inner voice” make Socrates wise? Is that not rather a sure sign of madness?

I do not think we should make too much of the fact that Socrates chose to describe the situation in terms of an inner voice. The important thing is that he does not react, when he does not know. Imagine someone who has become clearly aware of her own reflex to get angry. The moment she notices that she is about to get angry, she becomes completely calm instead. The drama is over before it begins. Likewise, Socrates became completely calm the moment he noted his own reflex to start talking as if he knew something. He was clearly aware of his own knowledge reflex.

What is the knowledge reflex? We have already felt its activity in this post. It struck us when we thought we knew that a wise person cannot doubt without reason. It almost drove us mad! If Socrates doubted, he must have had good reason! If an “inner voice” inspired doubt, it would not be wisdom, but a sure sign of madness! This is the knowledge reflex. To suddenly not be able to stop talking, as if we had particularly good reason to assert ourselves. Socrates never reacted that way. In those situations, he noted the knowledge reflex and immediately became perfectly calm.

The value of becoming completely calm just when the knowledge reflex wants to set us in motion is that it makes us free to examine ourselves. If we let the knowledge reflex drive our doubts – “this is highly dubious, because…” – we would not question ourselves, but assert ourselves. We would doubt the way we humans generally doubt, because we think we have reason to doubt. Of course, Socrates does not doubt arbitrarily, like a madman, but the source of his doubt becomes apparent only in retrospect. Philosophy is love for the clarity we lack when philosophizing begins. Without this loving attitude towards what we do not know, our collective human knowledge risks becoming a colossus with feet of clay – is it already wobbly?

When the knowledge reflex no longer controls us, but is numbed by philosophical self-doubt, we are free to think independently and clearly. Therefore, philosophy begins with doubt and not with belief or knowledge.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Plato. “The Apology of Socrates.” In The Last Days of Socrates, translated by Christopher Rowe, 32-62. Penguin Books, 2010.

This post in Swedish

Thinking about thinking

Time to forget time

A theme in recent blog posts has been our need for time. Patients need time to be listened to; time to ask questions; time to decide whether they want to be included in clinical studies, and time for much more. Healthcare workers need time to understand the patients’ situation; time to find solutions to the individual problems of patients suffering from rheumatoid arthritis, and time for much more. This theme, our need for time, got me thinking about what is so great about time.

It could be tempting to conduct time and motion studies of our need for time. How much time does the patient need to spend with the doctor to feel listened to? How much time does the nurse need to spend with the patient to get the experience of providing good care? The problem with such studies is that they destroy the greatness of time. To give the patient or the nurse the measured time, prescribed by the time study, is to glance at the clock. Would you feel listened to if the person you were talking to had a stopwatch hanging around their neck? Would you be a good listener yourself if you waited for the alarm signal from the stopwatch hanging around your neck?

Time studies do not answer our question of what we need, when we need time. If it were really a certain amount of time we needed, say fifteen minutes, then it should make no difference if a ticking stopwatch hung around the neck. But it makes a difference! The stopwatch steals our time. So, what is so great about time?

I think the answer is well on its way to revealing itself, precisely because we give it time to come at its own pace. What we need when we need time, is to forget time! That is the great thing about having time. That we no longer think about it.

Again, it can be tempting to conduct time studies. How much time do the patient and the doctor need to forget time? Again, time studies ruin the greatness of time. How? They frame everything in time. They force us to think about time, even when the point is to forget it.

Our need for time is not about measured quantities of time, but about the timeless quality of not thinking about time. Thinking about time steals time from us. Since it is not really about time, it does not have to take that long.

Pär Segerdahl


This post in Swedish

We challenge habits of thought

Moral stress: what does the COVID-19 pandemic teach us about the concept?

Newly formed concepts can sometimes satisfy such urgent linguistic needs that they immediately seem completely self-evident. Moral stress is probably such a concept. It is not many decades old. Nevertheless, the concept probably appeared from the beginning as an all-too-familiar reality for many healthcare workers.

An interesting aspect of these immediately self-evident concepts is that they effortlessly find their own paths through language, despite our efforts to define the right path. They are simply too striking in living spoken language to be captured in the more rigid written language of definitions. However, the first definition of moral stress was fairly straightforward. This is how Andrew Jameton defined the concept:

“Moral distress arises when one knows the right thing to do, but institutional constraints make it nearly impossible to pursue the right course of action.”

Although the definition is not complicated in the written language, it still prevents the concept from speaking freely, as it wants to. For, do we not spontaneously want to talk about moral stress in other situations as well? For example, in situations where two different actions can be perceived as the right ones, but if we choose one action it excludes the other? Or in situations where something other than “institutional constraints” prevents the right course of action? Perhaps a sudden increase in the number of patients.

Here is a later definition of moral stress, which leaves more open (by Kälvemark, Höglund and Hansson):

“Traditional negative stress symptoms that occur due to situations that involve an ethical dimension where the health care provider feels he/she is not able to preserve all interests at stake.”

This definition allows the concept to speak more freely, in more situations than the first, although it is possibly slightly more complicated in the written language. That is of course no objection. A definition has other functions than the concept being defined, it does not have to be catchy like a song chorus. But if we compare the definitions, we can notice how both express the authors’ ideas about morality, and thus about moral stress. In the first definition, the author has the idea that morality is a matter of conscience and that moral stress occurs when institutional constraints of the profession prevent the practitioner from acting as conscience demands. Roughly. In the second definition, the authors have the idea that morality is rather a kind of balancing of different ethical values and interests and that moral stress arises in situations that prevent the trade-offs from being realized. Roughly.

Why do I dwell on the written and intellectual aspects of the definitions, even though it is hardly an objection to a definition? It has to do with the relationship between our words and our ideas about our words. Successful words find their own paths in language despite our ideas about the path. In other words: despite our definitions. Jameton both coined and defined moral (di)stress, but the concept almost immediately stood, and walked, on its own feet. I simply want to remind you that spoken-language spontaneity can have its own authority, its own grounding in reality, even when it comes to newly formed concepts introduced through definitions.

An important reason why the newly formed concept of moral stress caught on so immediately is probably that it put into words pressing problems for healthcare workers. Issues that needed to be noticed, discussed and dealt with. One way to develop the definition of moral stress can therefore be to listen to how healthcare workers spontaneously use the concept about situations they themselves have experienced.

A study in BMC Medical Ethics does just this. Together with three co-authors, Martina E. Gustavsson investigated how Swedish healthcare workers (assistants, nurses, doctors, etc.) described moral stress during the COVID-19 pandemic. After answering a number of questions, the participants were asked to describe, in a free text response, situations during the pandemic in which they experienced moral stress. These free text answers were conceptually analyzed with the aim of formulating a refined definition of moral stress.

An overarching theme in the free text responses turned out to be: being prevented from providing good care to needy patients. The healthcare workers spoke of a large number of obstacles. They perceived problems that needed to be solved, but felt that they were not taken seriously, that they were inadequate or forced to act outside their areas of expertise. What stood in the way of good care? The participants in the study spoke, among other things, about unusual conditions for decision-making during the pandemic, about tensions in the work team (such as colleagues who did not dare to go to work for fear of being infected), and about substandard communication with the organizational management. All this created moral stress.

But they also talked about the pandemic itself as an obstacle. The prioritization of COVID-19 patients meant that other patients received worse care and were exposed to the risk of infection. The work was also hindered by a lack of resources, such as personal protective equipment, while the protective equipment prevented staff from comforting worried patients. The visiting restrictions also forced staff to act as guards against patients’ relatives and isolate infected patients from their children and partners. Finally, the pandemic prevented good end-of-life care. This too was morally stressful.

How can the healthcare workers’ free text responses justify a refined definition of moral stress? Martina E. Gustavsson and co-authors regard the definition above by Kälvemark, Höglund and Hansson as a good one to start from. But one type of situation that the participants in the study described probably falls outside that definition, namely the situation of not being taken seriously, of feeling inadequate and powerless. The study therefore proposes the following definition, which includes these situations:

“Moral stress is the kind of stress that arises when confronted with a moral challenge, a situation in which it is difficult to resolve a moral problem and in which it is difficult to act, or feeling insufficient when you act, in accordance with your own moral values.”

Here, too, one can sense an idea of morality, and thus of moral stress. The authors think of morality as being about solving moral problems, and that moral stress arises when this endeavor encounters challenges, or when one feels inadequate in the attempts to solve the problems. The definition can be considered a refined idea of what moral stress is. It describes more precisely the relevant situations where healthcare workers spontaneously want to talk about moral stress.

Obviously, we can learn a lot about the concept of moral stress from the experience of the COVID-19 pandemic. Read the study here, which contains poignant descriptions of morally stressful situations during the pandemic: “Being prevented from providing good care: a conceptual analysis of moral stress among health care workers during the COVID-19 pandemic.”

Finally, I would like to mention two general lessons about language, which in my view the study highlights. The first is that we can learn a lot about our concepts through the difficulties of defining them. The study took this “definition resistance” seriously by listening to how healthcare workers spontaneously talk about moral stress. This created friction that helped refine the definition. The second lesson is that we often use words despite our ideas about what the words mean or should mean. Spoken language spontaneity has a natural weight and authority that we easily overlook, but from which we have much to learn – as in this empirical study.

Pär Segerdahl


Gustavsson, M.E., von Schreeb, J., Arnberg, F.K. et al. “Being prevented from providing good care: a conceptual analysis of moral stress among health care workers during the COVID-19 pandemic”. BMC Med Ethics 24, 110 (2023). https://doi.org/10.1186/s12910-023-00993-y

This post in Swedish

Minding our language
