A blog from the Centre for Research Ethics & Bioethics (CRB)

Category: In the research debate

Does knowing the patient make a moral difference?

Several ethical concepts and principles govern how patients should be treated in healthcare. For example, healthcare professionals should respect patients’ autonomy. Moreover, they should act in the patients’ best interest and avoid actions that can cause harm. Patients must also be treated fairly. However, exactly how such ethical concepts and principles should be applied can vary in different situations.

A new article examines whether the application may depend on whether the healthcare personnel know the patient (in the sense of having knowledge about the patient). Some healthcare situations are characterized by the fact that the patient is unknown to the personnel: they have never met the patient before. Other situations are characterized by familiarity: the personnel have had continuous contact with the patient for a long time. In the latter situations, the personnel know the patient’s personality, living conditions, preferences and needs. Does such familiarity with the patient make any difference to how patients should be treated ethically by the healthcare staff, ask the authors of the article, Joar Björk and Anna Hirsch.

It may be tempting to reply that knowing the patient should not play any role: that it follows from the principle of justice that familiarity should make no difference. The principle of justice certainly places limits on the importance of familiarity with the patient. But in healthcare there remains this difference between situations marked by unfamiliarity and situations marked by familiarity. Consider the difference between screening and palliative home care. Should not this difference sometimes make a moral difference?

Presumably familiarity can sometimes make a moral difference, the authors argue. They give examples of how, not least, autonomy can take different forms depending on whether the situation is characterized by familiarity or unfamiliarity. Take the question of when and how patients should be allowed to delegate their decision-making to the healthcare personnel. If the personnel do not know the patient at all, it seems to be at odds with autonomy to take over the patient’s decision-making, even if the patient wishes it. However, if the personnel are well acquainted with the patient, it may be more consistent with autonomy to take over parts of the decision-making, if the patient so wishes. The authors provide additional examples. Suppose a patient has asked not to be informed prior to treatment, but the staff know the patient well and know that a certain part of the information could make this particular patient want to change certain decisions about the treatment. Would it then not be ethically correct to give the patient at least that part of the information and problematic not to do so? Or suppose a patient begins to change their preferences back and forth. If the patient is unfamiliar to the staff, it may be correct to always let the most recent preference apply. (One may not even be aware that the patient had other preferences before.) If, on the other hand, the patient is well known, the staff may have to take into account both past and present preferences and make a more global assessment of the changes and of autonomy.

The authors also exemplify how the application of other moral concepts and principles can take different forms, depending on whether the relationship with the patient is characterized by familiarity or unfamiliarity. Even the principle of justice could in some cases take a different form, depending on whether the personnel know the patient or not, they suggest. If you want to see a possible example of this, read the article here: An “ethics of strangers”? On knowing the patient in clinical ethics.

The authors finally argue that care decisions regarding autonomy, justice and acting in the best interest of the patient are probably made with greater precision if the personnel know the patient well. They argue that healthcare professionals therefore should strive to get to know their patients. They also argue that healthcare systems where a greater proportion of the staff know a greater proportion of the patients are preferable from an ethical point of view, for example systems that promote therapeutic continuity.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Björk, J., Hirsch, A. An “ethics of strangers”? On knowing the patient in clinical ethics. Med Health Care and Philosophy 27, 389–397 (2024). https://doi.org/10.1007/s11019-024-10213-y

This post in Swedish

We have a clinical perspective

End-of-life care: ethical challenges experienced by critical care nurses

In an intensive care unit, seriously ill patients who need medical and technical support for central bodily functions, such as breathing and circulation, are monitored and treated. Usually it goes well, but not all patients survive, despite the advanced and specialized care. An intensive care unit can be a stressful environment for the patient, not least because of the technical equipment to which the patient is connected. When transitioning to end-of-life care, the staff therefore try to create a calmer and more dignified environment for the patient, among other things by reducing the use of life-sustaining equipment and focusing on relieving pain and anxiety.

The transition to end-of-life care can create several ethically challenging situations for critical care nurses. What do these challenges look like in practice? The question is investigated in an interview study with nurses at intensive care units in a Swedish region. What did the interviewees say about the transition to end-of-life care?

A challenge that many interviewees mentioned was when life-sustaining treatment was continued at the initiative of the physician, despite the fact that the nurses saw no signs of improvement in the patient and judged that the probability of survival was very low. There was concern that the patient’s suffering was thus prolonged and that the patient was deprived of the right to a peaceful and dignified death. There was also concern that continued life-sustaining treatment could give relatives false hope that the patient would survive, and that this prevented the family from supporting the patient at the end of life. Other challenges had to do with the dosage of pain and anti-anxiety drugs. The nurses naturally sought a good effect, but at the same time were afraid that too high doses could harm the patient and risk hastening death. The critical care nurses also pointed out that family members could request higher doses for the patient, which heightened their concern about possibly shortening the patient’s life.

Other challenges had to do with situations where the patient’s preferences are unknown, perhaps because the patient is unconscious. Another challenge that was mentioned is when conscious patients have preferences that conflict with the nurses’ professional judgments and values. A patient may request that life-sustaining treatment cease, while the assessment is that the patient’s life can be significantly extended by continued treatment. Additional challenging situations can arise when the family wants to protect the patient from information that death is imminent, which violates the patient’s right to information about diagnosis and prognosis.

Finally, various situations surrounding organ donation were mentioned as ethically challenging. For example, family members may oppose the patient’s decision to donate organs. It may also happen that the family does not understand that the patient has already died from a total cerebral infarction, and therefore believes that the patient died during the donation surgery.

The results provide good insight into the ethical challenges that critical care nurses experience in end-of-life care. Read the article here: Critical care nurses’ experiences of ethical challenges in end-of-life care.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Palmryd L, Rejnö Å, Alvariza A, Godskesen T. Critical care nurses’ experiences of ethical challenges in end-of-life care. Nursing Ethics. 2024;0(0). doi:10.1177/09697330241252975

This post in Swedish

Ethics needs empirical input

Artificial consciousness and the need for epistemic humility

As I wrote in previous posts on this blog, the discussion about the possibility of engineering an artificial form of consciousness is growing along with the impressive advances of artificial intelligence (AI). Indeed, there are many questions arising from the prospect of an artificial consciousness, including its conceivability and its possible ethical implications. We deal with these kinds of questions as part of an EU multidisciplinary project, which aims to advance towards the development of artificial awareness.

Here I want to describe the kind of approach to the issue of artificial consciousness that I am inclined to consider the most promising. In a nutshell, the research strategy I propose for clarifying the empirical and theoretical issues about the feasibility and conceivability of artificial consciousness consists in starting from the form of consciousness we are familiar with (biological consciousness) and from its correlation with the organ that science has revealed to be crucial for it (the brain).

In a recent paper, available as a pre-print, I analysed the question of the possibility of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relationship to consciousness as a benchmark. In other words, to avoid vague and abstract speculations about artificial consciousness, I believe it is necessary to consider the correlation between brain and consciousness that resulted from biological evolution, and use this correlation as a reference model for the technical attempts to engineer consciousness.

In fact, there are several structural and functional features of the human brain that appear to be key for reaching human-like complex conscious experience, which current AI is still limited in emulating or accounting for. Among these are:

  • massive biochemical and neuronal diversity
  • a long period of epigenetic development, that is, changes in the number of neurons and in their connections within the brain network, resulting from interaction with the external environment
  • embodied sensorimotor experience of the world
  • spontaneous brain activity, that is, an intrinsic ability to act which is independent of external stimulation
  • autopoiesis, that is, the capacity to constantly reproduce and maintain itself
  • emotion-based reward systems
  • clear distinction between conscious and non-conscious representations, and the consequent unitary and specific properties of conscious representations
  • semantic competence of the brain, expressed in the capacity for understanding
  • the principle of degeneracy, which means that the same neuronal networks may support different functions, leading to plasticity and creativity.

These are just some of the brain features that arguably play a key role for biological consciousness and that may inspire current research on artificial consciousness.

Note that I am not claiming that the way consciousness arises from the brain is in principle the only possible way for consciousness to exist: this would amount to a form of biological chauvinism or anthropocentric narcissism.  In fact, current AI is limited in its ability to emulate human consciousness. The reasons for these limitations are both intrinsic, that is, dependent on the structure and architecture of AI, and extrinsic, that is, dependent on the current stage of scientific and technological knowledge. Nevertheless, these limitations do not logically exclude that AI may achieve alternative forms of consciousness that are qualitatively different from human consciousness, and that these artificial forms of consciousness may be either more or less sophisticated, depending on the perspectives from which they are assessed.

In other words, we cannot exclude in advance that artificial systems are capable of achieving alien forms of consciousness, so different from ours that it may not even be appropriate to continue calling them consciousness, unless we clearly specify what is common and what is different in artificial and human consciousness. The problem is that we are limited in our language as well as in our thinking and imagination. We cannot avoid relying on what is within our epistemic horizon, but we should also avoid the fallacy of hasty generalization. Therefore, we should combine the need to start from the evolutionary correlation between brain and consciousness as a benchmark for artificial consciousness with the need to remain humble and acknowledge the possibility that artificial consciousness may be of its own kind, beyond our view.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Approaching future issues

Of course, but: ethics in palliative practice

What is obvious in principle may turn out to be less obvious in practice. That would be at least one possible interpretation of a new study on ethics in palliative care.

Palliative care is given to patients with life-threatening illnesses that cannot be cured. Although palliative care can sometimes contribute to extending life somewhat, the focus is on preventing and alleviating symptoms in the final stages of life. The patient can also receive support to deal with worries about death, as well as guidance on practical issues regarding finances and relationships with relatives.

As in all care, respect for the patient’s autonomy is central in palliative care. To the extent possible, the patient should be given the opportunity to participate in the medical decision-making and receive information that corresponds to their level of knowledge and their wish for information. This means that if a patient does not wish to receive information about their health condition and future prospects, this should also be respected. How do palliative care professionals handle such a situation, where a patient does not want to know?

The question is investigated in an interview study by Joar Björk, who is a clinical ethicist and physician in palliative home care. He conducted six focus group interviews with staff in palliative care in Sweden, a total of 33 participants. Each interview began with an outline of an ethically challenging patient case. A man with disseminated prostate cancer is treated by a palliative care team. He has previously reiterated that it is important for him to gain complete knowledge of the illness and how his death may look. Because the team had to deal with many physical symptoms, they have not yet had time to answer his questions. When they finally get time to talk to him, he suddenly says that he does not want more information and that the issue should not be raised again. He gives no reason for his changed position, but nothing else seems to have changed and he seems to be in his right mind.

What did the interviewees say about the made-up case? The initial reaction was that it goes without saying that the patient has the right not to be informed. If a patient does not want information, then you must not impose the information on him, but must “meet the patient where he is.” But the interviewees still began to wonder about the context. Why did the man suddenly change his mind? Although the case description states that the man is competent to make decisions, this began to be doubted. Could someone close to him have influenced him? What at first seemed obvious later appeared to be problematic.

The interviewees emphasized that in a case like this one must dig deeper and investigate whether it is really true that the patient does not want to be informed. Maybe he said that he does not want to know in order to appear brave, or to protect loved ones from distressing information? Preferences can also change over time. Suddenly you do not want what you just wanted, or thought you wanted. Palliative care is a process, it was emphasized in the interviews. Because the care team has continuous contact with the patient, the interviewees felt that they could carefully probe at regular intervals what he really wants.

Other values were also at stake for the interviewees, which could further contribute to undermining what at first seemed obvious. One example is the patient’s right to a dignified, peaceful and good death. If he does not know that he has a very short time left to live, he cannot prepare for death, say goodbye to loved ones, or finish certain practical tasks. It may also be more difficult to plan and provide good care for an uninformed patient, and it may feel dishonest to know something important but not tell the person concerned. The interviewees also considered the consequences for relatives of the patient’s reluctance to be informed.

The main result of the study is that the care teams found it difficult to handle a situation where a patient suddenly changes his mind and does not want to be informed. Should they not have experienced these difficulties? Should they accept what at first seemed self-evident in principle, namely that the patient has the right not to know? The interviewees themselves emphasized that care is a process, a gradually unfolding relationship, and that it is important to be flexible and continuously probe the changing will of the patient. Perhaps, after all, it is not so difficult to deal with the case in practice, even if it is not as simple as it first appeared?

The interviewees seemed unhappy about the patient’s decision, but at the same time seemed to feel that there were ways forward and that time worked in their favor. In the end, the patient probably wants to know, after all, they seemed to think. Should they not have had such an attitude towards the patient’s decision?

Read the author’s interesting discussion of the study results here: “It is very hard to just accept this” – a qualitative study of palliative care teams’ ethical reasoning when patients do not want information.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Björk, J. “It is very hard to just accept this” – a qualitative study of palliative care teams’ ethical reasoning when patients do not want information. BMC Palliative Care 23, 91 (2024). https://doi.org/10.1186/s12904-024-01412-8

This post in Swedish

We like real-life ethics

What is hidden behind the concept of research integrity?

In order to counteract scientific misconduct and harmful research, one often talks about protecting and supporting research integrity. The term seems to cover three different aspects of research, although the differences may not always be clearly kept in mind. The term can refer to the character traits of individual researchers, for example, that the researcher values truth and precision and has good intentions. But the term can also refer to the research process, for example, that the method, data and results are correctly chosen, well executed and faithfully reported in scientific publications. Third, the term can refer to research-related institutions and systems, such as universities, ethical review, legislation and scientific journals. In the latter case, it is usually emphasized that research integrity presupposes institutional conditions beyond the moral character of individual researchers.

Does such a varied concept have to be problematic? Of course not, but the concept of research integrity may be less suitable than it seems, argue Gert Helgesson and William Bülow in an article that you can read here: Research Integrity and Hidden Value Conflicts.

In the article, they first discuss some ambiguities in the three uses of the concept of research integrity. Which personal traits are desirable in researchers and which values should they endorse? Does the integrity of the research process cover all ethically relevant aspects of research, including the application process, for example? Are research-related institutions actors with research integrity, or are they rather means that support research integrity?

Mentioning these ambiguities is not, as I understand it, intended as a decisive objection. Nor do the authors think that it is generally a shortcoming if concepts have a wide and varied use. But the concept of research integrity risks hiding value conflicts through its varying use, they argue. Suppose someone claims that, in order to protect and support research integrity, we should criminalize serious forms of scientific misconduct. This is perhaps true if by research integrity we refer to aspects of the research process, for example, that results are accurate and reliable. But the stricter regulation of research that this entails risks reducing the responsibility of individual researchers, which can undermine research integrity in the first sense. How should we compare the value of research integrity in the different senses? What does it mean to “increase research integrity”?

The concept of research integrity is not useless, the authors point out. But if we want to make value conflicts visible, if we want to clarify what we mean by research integrity and which forms of integrity are most important, as well as clear up the ambiguities mentioned above, then we need to examine issues that are appropriately described as issues of research ethics.

If I understand the authors correctly, they mean that ethical questions about research should be characterized as research ethics. It is unfortunate that “research integrity” has come to function as an alternative designation for ethical questions about research. Everything becomes clearer if questions about “research integrity,” should we want to use the concept, are treated as falling under research ethics.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Helgesson, G., Bülow, W. Research Integrity and Hidden Value Conflicts. Journal of Academic Ethics 21, 113–123 (2023). https://doi.org/10.1007/s10805-021-09442-0

This post in Swedish

We like ethics

Objects that behave humanly

Many forms of artificial intelligence could be considered objects that behave humanly. However, it does not take much for us humans to personify non-living objects. We get angry at the car that does not start or the weather that does not let us have a picnic, as if they were against us. Children spontaneously personify simple toys and can describe the relationship between geometric shapes as, “the small circle is trying to escape from the big triangle.”

We are increasingly encountering artificial intelligence designed to give a human impression, for example in the form of chatbots for customer service when shopping online. Such AI can even be equipped with personal traits, a persona that becomes an important part of the customer experience. The chatbot can suggest even more products for you and effectively generate additional sales based on the data collected about you. No wonder the interest in developing human-like AI is huge. Part of it has to do with user-friendliness, of course, but at the same time, an AI that you find personally attractive will grab your attention. You might even like the chatbot or feel it would be impolite to turn it off. During the time that the chatbot has your attention, you are exposed to increasingly customized advertising and receive more and more package offers.

You can read about this and much more in an article about human relationships with AI designed to give a human impression: Human/AI relationships: challenges, downsides, and impacts on human/human relationships. The authors discuss a large number of examples of such AI, ranging from the chatbots above to care robots and AI that offers psychotherapy, or AI that people chat with to combat loneliness. The opportunities are great, but so are the challenges and possible drawbacks, which the article highlights.

Perhaps particularly interesting is the insight into how effectively AI can create confusion by exposing us to objects equipped with human response patterns. Our natural tendency to anthropomorphize non-human things meets high-tech efforts to produce objects that are engineered to behave humanly. Here it is no longer about imaginatively projecting social relations onto non-human objects, as in the geometric example above. In interaction with AI objects, we react to subtle social cues that the objects are equipped with. We may even feel a moral responsibility for such AI and grieve when companies terminate or modify it.

The authors urge caution so that we do not overinterpret AI objects as persons. At the same time, they warn of the risk that, by avoiding empathic responses, we become less sensitive to real people in need. Truly confusing!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Zimmerman, A., Janhonen, J. & Beer, E. Human/AI relationships: challenges, downsides, and impacts on human/human relationships. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00348-8

This post in Swedish

We recommend readings

A way out of the Babylonian confusion of tongues in the theorizing of consciousness?

There is today a wide range of competing theories, each in its own way trying to account for consciousness in neurobiological terms. Parallel to the “Babylonian confusion of tongues” that this entails in the theorizing of consciousness, and the inability to collaborate that comes with it, progress has been made in the empirical study of the brain. Advanced methods for imaging and measuring the brain and its activities map structures and functions that are possibly relevant for consciousness. The problem is that these empirical data once again inspire a wide range of theories about the place of consciousness in the brain.

It has been pointed out that a fragmented intellectual state such as this, where competing schools of thought advocate their own theories based on their own starting points – with no common framework or paradigm within which the proposals can be compared and assessed – is typical of a pre-scientific stage of a possibly nascent science. Given that the divergent theories each claim scientific status, this is of course troubling. But maybe the theories are not as divergent as they seem?

It has been suggested that several of the theories, upon closer analysis, possibly share certain fundamental ideas about consciousness, which could form the basis of a future unified theory. Today I want to recommend an article that self-critically examines this hope for a way out of the Babylonian confusion. If the pursuit of a unified theory of consciousness is not to degenerate into a kind of “manufactured uniformity,” we must first establish that the theories being integrated are indeed comparable in relevant respects. But can we identify such common denominators among the competing theories, which could support the development of an overarching framework for scientific research? That is the question that Kathinka Evers, Michele Farisco and Cyriel Pennartz investigate for some of the most debated neuroscientifically oriented theories of consciousness.

What do the authors conclude? Something surprising! They come to the conclusion that it is actually quite possible to identify a number of common denominators, which show patterns of similarities and differences among the theories, but that this is still not the way to an overall theory of consciousness that supports hypotheses that can be tested experimentally. Why? Partly because the common denominators, such as “information,” are sometimes too general to function as core concepts in research specifically about consciousness. Partly because theories that have common denominators can, after all, be conceptually very different.

The authors therefore suggest, as I understand them, that a more practicable approach could be to develop a common methodological approach to testing hypotheses about relationships between consciousness and the brain. It is perhaps only in the empirical workshop, open to the unexpected, so to speak, that a scientific framework, or paradigm, can possibly begin to take shape. Not by deliberately formulating a unified theory based on the identification of common denominators among competing theories, which risks manufacturing a facade of uniformity.

The article is written in a philosophically open-minded spirit, without ties to specific theories. It can thereby stimulate the creative collaboration that has so far been inhibited by self-absorbed competition between schools of thought. Read the article here: Assessing the commensurability of theories of consciousness: On the usefulness of common denominators in differentiating, integrating and testing hypotheses.

I would like to conclude by mentioning an easily neglected aspect of how scientific paradigms work (according to Thomas Kuhn). A paradigm does not only generate possible explanations of phenomena. It also generates the problems that researchers try to solve within the paradigm. Quantum mechanics and evolutionary biology enabled new questions that made nature problematic in new explorable ways. A possible future paradigm for scientific consciousness research would, if this is correct, not answer the questions about consciousness that baffle us today (at least not without first reinterpreting them). Rather, it would create new, as yet unasked questions, which are explorable within the paradigm that generates them.

The authors of the article may therefore be right that the most fruitful thing at the moment is to ask probing questions that help us delineate what actually lends itself to investigation, rather than to start by manufacturing overall theoretical uniformity. The latter approach would possibly put the cart before the horse.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

K. Evers, M. Farisco, C.M.A. Pennartz. Assessing the commensurability of theories of consciousness: On the usefulness of common denominators in differentiating, integrating and testing hypotheses. Consciousness and Cognition, Volume 119, 2024.

This post in Swedish

Minding our language

A strategy for a balanced discussion of conscious AI

Science and technology advance so rapidly that it is hard to keep up with them. This is true not only for the general public, but also for the scientists themselves and for scholars from fields like ethics and regulation, who find it increasingly difficult to predict what will come next. Today AI is among the most advanced scientific endeavors, raising both significant expectations and more or less exaggerated worries. This is mainly due to the fact that AI is a concept so emotionally, socially, and politically charged as to make a balanced evaluation very difficult. It is even more so when capacities and features that are considered almost uniquely human, or at least shared with a limited number of other animals, are attributed to AI. This is the case with consciousness.

Recently, there has been a lively debate about the possibility of developing conscious AI. What are the reasons for this great interest? I think it has to do with the mentioned rapid advances in science and technology, as well as new intersections between different disciplines. Specifically, I think that three factors play an important role: the significant advancement in understanding the cerebral bases of conscious perception, the impressive achievements of AI technologies, and the increasing interaction between neuroscience and AI. The latter factor, in particular, resulted in so-called brain-inspired AI, a form of AI that is explicitly modeled on our brains.

This growing interest in conscious AI should not blind us to certain risks of theoretical, practical, and ethical relevance. Theoretically, there is no shared, overarching theory or definition of consciousness. Discussions about what consciousness is, what the criteria for a good scientific theory should be, and how to compare the various proposed theories of consciousness are still open and difficult to resolve.

Practically, the challenge is how to identify conscious systems. In other words, what are the indicators that reliably indicate whether a system, either biological or artificial, is conscious?

Finally, at the ethical level several issues arise. Here the discussion is very lively, with some calling for an international moratorium on all attempts to build artificial consciousness. This extreme position is motivated by the need to avoid any form of suffering, including possibly undetectable artificial forms of suffering. Others question the very reason for working towards conscious AI: why should we open another, likely riskier box, when society cannot really handle the impact of AI, as illustrated by Large Language Models? For instance, chatbots like ChatGPT show an impressive capacity to interact with humans through natural language, which creates a strong feeling that these AI systems have features like consciousness, intentionality, and agency, among others. This attribution of human qualities to AI eventually impacts the way we think about it, including how much weight and value we give to the answers that these chatbots provide.

The two arguments above illustrate possible ethical concerns that can be raised against the development of conscious artificial systems. Yet are the concerns justified? In a recent chapter, I propose a change in the underlying approach to the issue of artificial consciousness. This is to avoid the risk of vague and not sufficiently multidimensional analyses. My point is that consciousness is not a unified, abstract entity, but rather like a prism, which includes different dimensions that could possibly have different levels. Based on a multidimensional view of consciousness, in a previous paper I contributed a list of indicators that are relevant also for identifying consciousness in artificial systems. In principle, it is possible that AI can manifest some dimensions of consciousness (for instance, those related to sophisticated cognitive tasks) while lacking others (for instance, those related to emotional or social tasks). In this way, the indicators provide not only a practical tool for identifying conscious systems, but also an ethical tool to make the discussion on possible conscious AI more balanced and realistic. The question whether some AI is conscious or not cannot be considered a yes/no question: there are several nuances that make the answer more complex.

Admittedly, the indicators mentioned above are affected by a number of limitations, including the fact that they were developed for humans and animals, not specifically for AI. For this reason, research is still ongoing on how to adapt these indicators or possibly develop new indicators specific to AI. If you want to read more, you can find my chapter here: The ethical implications of indicators of consciousness in artificial systems.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Michele Farisco. The ethical implications of indicators of consciousness in artificial systems. Developments in Neuroethics and Bioethics. Available online 1 March 2024. https://doi.org/10.1016/bs.dnb.2024.02.009

We want solid foundations

Better evidence may solve a moral dilemma

More than 5 million women become pregnant in the EU every year and a majority take at least one medication during pregnancy. A problem today is that as few as 5% of available medications have been adequately monitored, tested and labelled with safety information for use in pregnant and breastfeeding women. The field is difficult to study and has suffered from a lack of systematically gathered insights that could lead to more effective data generation methodologies. Fragmentation and misinformation result in confusing and contradictory communication and perception of risks among both health professionals and women and their families. For the doctor who prescribes the medicine, a genuine moral dilemma arises. The lack of good scientific evidence in many cases means that, as a precaution against exposing the child to risks, the drug treatment is discontinued or the mother is advised not to breastfeed. At the same time, the mother benefits most from the prescribed medicine and we know that breastfeeding is good for both the newborn and the mother.

Within the project ConcePTION, several studies are underway to investigate the effect of drugs both during pregnancy and during breastfeeding. Based on the need to meet regulatory requirements, procedures have been established for breast milk collection, informed consent, shipping, storage and analysis of pharmacokinetic properties (how drugs are metabolized in the body). Five demonstration studies are being conducted. The University of Oslo is doing such a study on the drug levocetirizine, the University Hospital of Toulouse is studying amoxicillin, and the University Hospital of Lausanne is studying venlafaxine.

In Sweden, in two demonstration studies, we will collect breast milk and blood samples from the mother and the child for two drugs: metformin, which is used in the treatment of type 2 diabetes, and prednisolone, which is used in the treatment of, for example, rheumatoid arthritis. In both cases, the existing data are limited, partly old (from the 1970s) and partly analyzed with outdated methods. Both studies are approved by the Swedish Medical Products Agency (MPA) as low-intervention clinical trials (see below).

The studies are a collaboration between Uppsala University and several clinical centers: Sahlgrenska University Hospital/East in Gothenburg, Örebro University Hospital, Center for Clinical Children’s Studies, Astrid Lindgren Children’s Hospital in Stockholm, Södra Älvsborgs Hospital in Borås and Umeå University Hospital, with adjacent biobanks. Breast milk from the woman and blood samples from both woman and child will be transported to Uppsala Biobank for storage and analyzed with mass spectrometric methods at the Department of Pharmacy at Uppsala University. Informed consent is obtained both for the sampling and for the possibility of conducting future research on the stored samples. Collaborating biobanks are: Uppsala Biobank, Biobank West in Gothenburg, Örebro Biobank, Stockholm Medical Biobank and Biobank North in Umeå. 

Through these two studies, research biobanks with breast milk and associated blood samples are being established for the first time in Sweden. In the long run, doctors and women who become pregnant will be able to get better information for their recommendations and decisions regarding the use of medicines.

ConcePTION is funded by the Innovative Medicines Initiative (IMI), which is a collaboration between the European Commission and the European Federation of Pharmaceutical Industries and Associations (EFPIA).

Approvals by the Swedish Medical Products Agency (MPA): Dnr 5.1.1-2023-090592 and 5.1.1-2023-104170.

Mats G. Hansson

Written by…

Mats G. Hansson, senior professor of biomedical ethics at Uppsala University’s Centre for Research Ethics & Bioethics.

This post in Swedish

Part of international collaborations

Women on AI-assisted mammography

The use of AI tools in healthcare has become a recurring theme on this blog. So far, the posts have mainly been about mobile and online apps for use by patients and the general public. Today, the theme is more advanced AI tools which are used professionally by healthcare staff.

Within the Swedish program for breast cancer screening, radiologists interpret large amounts of X-ray images to detect breast cancer at an early stage. The workload is great and most of the time the images show no signs of cancer or pre-cancers. Today, AI tools are being tested that could improve mammography in several ways. AI could be used as an assisting resource for the radiologists to detect additional tumors. It could also be used as an independent reader of images to relieve radiologists, as well as to support assessments of which patients should receive care more immediately.

For AI-assisted mammography to work, it is not only the technology that needs to be developed. Researchers also need to investigate how women perceive AI-assisted breast cancer screening. Four researchers, including Jennifer Viberg Johansson and Åsa Grauman at CRB, interviewed sixteen women who underwent mammography at a Swedish hospital where an AI tool was tested as a third reviewer of the X-ray images, along with the two radiologists.

Several of the interviewees emphasized that AI is only a tool: AI cannot replace the doctor because humans have abilities beyond image recognition, such as intuition, empathy and holistic thinking. Another finding was that some of the interviewees had a greater tolerance for human error than for failures of the AI tool, which were considered unacceptable. Some argued that if the AI tool makes a mistake, the mistake will be repeated systematically, while human errors are occasional. Some believed that the responsibility when the technology fails lies with the humans and not with the technology.

Personally, I cannot help but speculate that the sharp distinction between human error, which is easier to come to terms with, and unacceptable technological failure is connected to the fact that we can say of humans who fail: “After all, the radiologists surely did their best.” On the other hand, we hardly say about failing AI: “After all, the technology surely did its best.” Technology is not subject to the same kind of conciliatory considerations.

The authors themselves emphasize that the participants in the study saw AI as a valuable tool in mammography, but held that the tool cannot replace humans in the process. The authors also emphasize that the interviewees preferred that the AI tool identify possible tumors with high sensitivity, even if this leads to many false positive results and thus to unnecessary worry and fear. In order for patients to understand AI-assisted healthcare, effective communication efforts are required, the authors conclude.

It is difficult to summarize the rich material from interview studies. For more results, read the study here: Women’s perceptions and attitudes towards the use of AI in mammography in Sweden: a qualitative interview study.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Viberg Johansson J, Dembrower K, Strand F, et al. Women’s perceptions and attitudes towards the use of AI in mammography in Sweden: a qualitative interview study. BMJ Open 2024;14:e084014. doi: 10.1136/bmjopen-2024-084014

This post in Swedish

Approaching future issues
