A blog from the Centre for Research Ethics & Bioethics (CRB)

Category: In the research debate

Objects that behave humanly

Many forms of artificial intelligence could be considered objects that behave humanly. However, it does not take much for us humans to personify non-living objects. We get angry at the car that does not start or the weather that does not let us have a picnic, as if they were against us. Children spontaneously personify simple toys and can describe the relationship between geometric shapes as, “the small circle is trying to escape from the big triangle.”

We are increasingly encountering artificial intelligence designed to give a human impression, for example in the form of chatbots for customer service when shopping online. Such AI can even be equipped with personal traits, a persona that becomes an important part of the customer experience. The chatbot can suggest even more products for you and effectively generate additional sales based on the data collected about you. No wonder the interest in developing human-like AI is huge. Part of it has to do with user-friendliness, of course, but at the same time, an AI that you find personally attractive will grab your attention. You might even like the chatbot or feel it would be impolite to turn it off. During the time that the chatbot has your attention, you are exposed to increasingly customized advertising and receive more and more package offers.

You can read about this and much more in an article about human relationships with AI designed to give a human impression: Human/AI relationships: challenges, downsides, and impacts on human/human relationships. The authors discuss a large number of examples of such AI, ranging from the chatbots above to care robots, AI that offers psychotherapy, and AI that people chat with to combat loneliness. The opportunities are great, but so are the challenges and possible drawbacks, which the article highlights.

Perhaps particularly interesting is the insight into how effectively AI can create confusion by exposing us to objects equipped with human response patterns. Our natural tendency to anthropomorphize non-human things meets high-tech efforts to produce objects that are engineered to behave humanly. Here it is no longer about imaginatively projecting social relations onto non-human objects, as in the geometric example above. In interaction with AI objects, we react to subtle social cues that the objects are equipped with. We may even feel a moral responsibility for such AI and grieve when companies terminate or modify it.

The authors urge caution so that we do not overinterpret AI objects as persons. At the same time, they warn of the risk that, by avoiding empathic responses, we become less sensitive to real people in need. Truly confusing!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Zimmerman, A., Janhonen, J. & Beer, E. Human/AI relationships: challenges, downsides, and impacts on human/human relationships. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00348-8

This post in Swedish

We recommend readings

A way out of the Babylonian confusion of tongues in the theorizing of consciousness?

There is today a wide range of competing theories, each in its own way trying to account for consciousness in neurobiological terms. Parallel to the “Babylonian confusion of tongues” in the theorizing of consciousness, and the inability to collaborate that it entails, progress has been made in the empirical study of the brain. Advanced methods for imaging and measuring the brain and its activities map structures and functions that are possibly relevant for consciousness. The problem is that these empirical data once again inspire a wide range of theories about the place of consciousness in the brain.

It has been pointed out that a fragmented intellectual state such as this, where competing schools of thought advocate their own theories based on their own starting points – with no common framework or paradigm within which the proposals can be compared and assessed – is typical of a pre-scientific stage of a possibly nascent science. Given that the divergent theories each claim scientific status, this is of course troubling. But maybe the theories are not as divergent as they seem?

It has been suggested that several of the theories, upon closer analysis, possibly share certain fundamental ideas about consciousness, which could form the basis of a future unified theory. Today I want to recommend an article that self-critically examines this hope for a way out of the Babylonian confusion. If the pursuit of a unified theory of consciousness is not to degenerate into a kind of “manufactured uniformity,” we must first establish that the theories being integrated are indeed comparable in relevant respects. But can we identify such common denominators among the competing theories, which could support the development of an overarching framework for scientific research? That is the question that Kathinka Evers, Michele Farisco and Cyriel Pennartz investigate for some of the most debated neuroscientifically oriented theories of consciousness.

What do the authors conclude? Something surprising! They find that it is actually quite possible to identify a number of common denominators, which reveal patterns of similarities and differences among the theories, but that this is still not the way to an overall theory of consciousness that supports experimentally testable hypotheses. Why? Partly because the common denominators, such as “information,” are sometimes too general to function as core concepts in research specifically about consciousness. And partly because theories that share common denominators can, after all, be conceptually very different.

The authors therefore suggest, as I understand them, that a more practicable approach could be to develop a common methodological approach to testing hypotheses about relationships between consciousness and the brain. It is perhaps only in the empirical workshop, open to the unexpected, so to speak, that a scientific framework, or paradigm, can possibly begin to take shape. Not by deliberately formulating a unified theory based on the identification of common denominators among competing theories, which risks manufacturing a facade of uniformity.

The article is written in a philosophically open-minded spirit, without ties to specific theories. It can thereby stimulate the creative collaboration that has so far been inhibited by self-absorbed competition between schools of thought. Read the article here: Assessing the commensurability of theories of consciousness: On the usefulness of common denominators in differentiating, integrating and testing hypotheses.

I would like to conclude by mentioning an easily neglected aspect of how scientific paradigms work (according to Thomas Kuhn). A paradigm does not only generate possible explanations of phenomena. It also generates the problems that researchers try to solve within the paradigm. Quantum mechanics and evolutionary biology enabled new questions that made nature problematic in new explorable ways. A possible future paradigm for scientific consciousness research would, if this is correct, not answer the questions about consciousness that baffle us today (at least not without first reinterpreting them). Rather, it would create new, as yet unasked questions, which are explorable within the paradigm that generates them.

The authors of the article may therefore be right that the most fruitful thing at the moment is to ask probing questions that help us delineate what actually lends itself to investigation, rather than to start by manufacturing overall theoretical uniformity. The latter approach would possibly put the cart before the horse.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

K. Evers, M. Farisco, C.M.A. Pennartz, “Assessing the commensurability of theories of consciousness: On the usefulness of common denominators in differentiating, integrating and testing hypotheses,” Consciousness and Cognition, Volume 119, 2024.

This post in Swedish

Minding our language

A strategy for a balanced discussion of conscious AI

Science and technology advance so rapidly that it is hard to keep up with them. This is true not only for the general public, but also for the scientists themselves and for scholars from fields like ethics and regulation, who find it increasingly difficult to predict what will come next. Today AI is among the most advanced scientific endeavors, raising both significant expectations and more or less exaggerated worries. This is mainly due to the fact that AI is a concept so emotionally, socially, and politically charged as to make a balanced evaluation very difficult. It is even more so when capacities and features that are considered almost uniquely human, or at least shared with a limited number of other animals, are attributed to AI. This is the case with consciousness.

Recently, there has been a lively debate about the possibility of developing conscious AI. What are the reasons for this great interest? I think it has to do with the mentioned rapid advances in science and technology, as well as new intersections between different disciplines. Specifically, I think that three factors play an important role: the significant advancement in understanding the cerebral bases of conscious perception, the impressive achievements of AI technologies, and the increasing interaction between neuroscience and AI. The latter factor, in particular, has resulted in so-called brain-inspired AI, a form of AI that is explicitly modeled on our brains.

This growing interest in conscious AI cannot ignore certain risks: theoretical, practical, and ethical. Theoretically, there is no shared, overarching theory or definition of consciousness. Discussions about what consciousness is, what the criteria for a good scientific theory should be, and how to compare the various proposed theories of consciousness are still open and difficult to resolve.

Practically, the challenge is how to identify conscious systems. In other words, what are the indicators that reliably indicate whether a system, either biological or artificial, is conscious?

Finally, at the ethical level several issues arise. Here the discussion is very lively, with some calling for an international moratorium on all attempts to build artificial consciousness. This extreme position is motivated by the need to avoid any form of suffering, including possibly undetectable artificial forms of suffering. Others question the very reason for working towards conscious AI: why should we open another, likely riskier box, when society cannot really handle the impact of AI, as illustrated by Large Language Models? For instance, chatbots like ChatGPT show an impressive capacity to interact with humans through natural language, which creates a strong feeling that these AI systems have features like consciousness, intentionality, and agency, among others. This attribution of human qualities to AI eventually impacts the way we think about it, including how much weight and value we give to the answers that these chatbots provide.

The two arguments above illustrate possible ethical concerns that can be raised against the development of conscious artificial systems. Yet are the concerns justified? In a recent chapter, I propose a change in the underlying approach to the issue of artificial consciousness. This is to avoid the risk of vague and insufficiently multidimensional analyses. My point is that consciousness is not a unified, abstract entity, but rather like a prism, which includes different dimensions that could possibly have different levels. Based on a multidimensional view of consciousness, in a previous paper I contributed a list of indicators that are also relevant for identifying consciousness in artificial systems. In principle, it is possible that AI can manifest some dimensions of consciousness (for instance, those related to sophisticated cognitive tasks) while lacking others (for instance, those related to emotional or social tasks). In this way, the indicators provide not only a practical tool for identifying conscious systems, but also an ethical tool for making the discussion on possible conscious AI more balanced and realistic. Whether some AI is conscious cannot be treated as a simple yes/no question: there are several nuances that make the answer more complex.
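To make the multidimensional picture concrete, here is a minimal sketch in Python. It is my own toy illustration, not from the chapter: the dimension names and all the numbers are invented for the purpose of the example. The point it illustrates is simply that each hypothetical dimension of consciousness gets its own grade, instead of the system receiving a single yes/no verdict.

    from dataclasses import dataclass

    @dataclass
    class ConsciousnessProfile:
        # Toy illustration: grade separate (hypothetical) dimensions on a
        # 0.0-1.0 scale instead of asking a single yes/no question.
        cognitive: float   # e.g., indicators tied to sophisticated cognitive tasks
        emotional: float   # e.g., indicators tied to emotional processing
        social: float      # e.g., indicators tied to social interaction

        def summary(self) -> str:
            dims = {"cognitive": self.cognitive,
                    "emotional": self.emotional,
                    "social": self.social}
            graded = ", ".join(f"{name}: {level:.1f}" for name, level in dims.items())
            return f"graded profile ({graded}), not a yes/no verdict"

    # A hypothetical AI might score high on cognitive indicators
    # while lacking the emotional and social ones:
    print(ConsciousnessProfile(cognitive=0.8, emotional=0.1, social=0.2).summary())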

Admittedly, the indicators mentioned above are affected by a number of limitations, including the fact that they were developed for humans and animals, not specifically for AI. For this reason, research is still ongoing on how to adapt these indicators, or possibly develop new indicators specific to AI. If you want to read more, you can find my chapter here: The ethical implications of indicators of consciousness in artificial systems.

Written by…

Michele Farisco, Postdoctoral Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Michele Farisco. The ethical implications of indicators of consciousness in artificial systems. Developments in Neuroethics and Bioethics. Available online 1 March 2024. https://doi.org/10.1016/bs.dnb.2024.02.009

We want solid foundations

Better evidence may solve a moral dilemma

More than 5 million women become pregnant in the EU every year and a majority take at least one medication during pregnancy. A problem today is that as few as 5% of available medications have been adequately monitored, tested and labelled with safety information for use in pregnant and breastfeeding women. The field is difficult to study and has suffered from a lack of systematically gathered insights that could lead to more effective data generation methodologies. Fragmentation and misinformation result in confusing and contradictory communication and perception of risks among both health professionals and women and their families. For the doctor who prescribes the medicine, a genuine moral dilemma arises. In order not to expose the child to risks, the lack of good scientific evidence in many cases means that, for precautionary reasons, the drug treatment is discontinued or the mother is advised not to breastfeed. At the same time, the mother benefits most from the prescribed medicine, and we know that breastfeeding is good for both the newborn and the mother.

Within the project ConcePTION, several studies are underway to investigate the effects of drugs both during pregnancy and during breastfeeding. Based on the need to meet regulatory requirements, procedures have been established for breast milk collection, informed consent, shipping, storage and analysis of pharmacokinetic properties (how drugs are metabolized in the body). Five demonstration studies are being conducted: the University of Oslo is doing such a study on the drug levocetirizine, the University Hospital of Toulouse is studying amoxicillin, and the University Hospital of Lausanne is studying venlafaxine.

In Sweden, in two demonstration studies, we will collect breast milk and blood samples from the mother and the child for two drugs: metformin, which is used in the treatment of type 2 diabetes, and prednisolone, which is used in the treatment of, for example, rheumatoid arthritis. In both cases, existing data is limited: partly old, from the 1970s, and partly analyzed with outdated methods. Both studies are approved by the Swedish Medical Products Agency (MPA) as low-intervention clinical trials (see below).

The studies are a collaboration between Uppsala University and several clinical centers: Sahlgrenska University Hospital/East in Gothenburg, Örebro University Hospital, Center for Clinical Children’s Studies, Astrid Lindgren Children’s Hospital in Stockholm, Södra Älvsborgs Hospital in Borås and Umeå University Hospital, with adjacent biobanks. Breast milk from the woman and blood samples from both woman and child will be transported to Uppsala Biobank for storage and analyzed with mass spectrometric methods at the Department of Pharmacy at Uppsala University. Informed consent is obtained both for the sampling and for the possibility of conducting future research on the stored samples. Collaborating biobanks are: Uppsala Biobank, Biobank West in Gothenburg, Örebro Biobank, Stockholm Medical Biobank and Biobank North in Umeå. 

Through these two studies, research biobanks with breast milk and associated blood samples are established for the first time in Sweden. In the long run, doctors and women who become pregnant can get better information for their recommendations and decisions regarding the use of medicines. 

ConcePTION is funded by the Innovative Medicines Initiative (IMI), which is a collaboration between the European Commission and the European Federation of Pharmaceutical Industries and Associations (EFPIA).

Approvals by the Swedish Medical Products Agency (MPA): Dnr 5.1.1-2023-090592 and 5.1.1-2023-104170.

Mats G. Hansson

Written by…

Mats G. Hansson, senior professor of biomedical ethics at Uppsala University’s Centre for Research Ethics & Bioethics.

This post in Swedish

Part of international collaborations

Women on AI-assisted mammography

The use of AI tools in healthcare has become a recurring theme on this blog. So far, the posts have mainly been about mobile and online apps for use by patients and the general public. Today, the theme is more advanced AI tools which are used professionally by healthcare staff.

Within the Swedish program for breast cancer screening, radiologists interpret large amounts of X-ray images to detect breast cancer at an early stage. The workload is heavy, and most of the time the images show no signs of cancer or precancerous changes. Today, AI tools are being tested that could improve mammography in several ways. AI could be used as an assisting resource for the radiologists to detect additional tumors. It could also be used as an independent reader of images to relieve radiologists, as well as to support assessments of which patients should receive care most urgently.

For AI-assisted mammography to work, it is not only the technology that needs to be developed. Researchers also need to investigate how women perceive AI-assisted breast cancer screening. Four researchers, including Jennifer Viberg Johansson and Åsa Grauman at CRB, interviewed sixteen women who underwent mammography at a Swedish hospital where an AI tool was tested as a third reviewer of the X-ray images, alongside the two radiologists.

Several of the interviewees emphasized that AI is only a tool: AI cannot replace the doctor, because humans have abilities beyond image recognition, such as intuition, empathy and holistic thinking. Another finding was that some of the interviewees were more tolerant of human error than of failures of the AI tool, which were considered unacceptable. Some argued that if the AI tool makes a mistake, the mistake will be repeated systematically, while human errors are occasional. Some believed that the responsibility when the technology fails lies with the humans and not with the technology.

Personally, I cannot help but speculate that the sharp distinction between human error, which is easier to reconcile oneself with, and unacceptably failing technology is connected to the fact that we can say of humans who fail: “After all, the radiologists surely did their best.” We hardly say of failing AI: “After all, the technology surely did its best.” Technology is not subject to such conciliatory considerations.

The authors themselves emphasize that the participants in the study saw AI as a valuable tool in mammography, but held that the tool cannot replace humans in the process. The authors also emphasize that the interviewees preferred that the AI tool identify possible tumors with high sensitivity, even if this leads to many false positive results and thus to unnecessary worry and fear. In order for patients to understand AI-assisted healthcare, effective communication efforts are required, the authors conclude.
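As a side note on the sensitivity point, a small worked example may help readers unfamiliar with the terminology (all numbers here are invented by me, not taken from the study). Sensitivity is the share of actual cancers the tool flags; pushing it up usually means lowering the decision threshold, which also flags more healthy women (lower specificity) and thus produces more false positives.

    # Toy screening numbers (invented): 10,000 women, ~0.5% cancer prevalence.
    women = 10_000
    cancers = int(women * 0.005)   # 50 actual cancers
    healthy = women - cancers

    # Lowering the decision threshold typically raises sensitivity
    # but lowers specificity, producing more false positives.
    for sensitivity, specificity in [(0.85, 0.97), (0.99, 0.90)]:
        detected = sensitivity * cancers
        false_positives = (1 - specificity) * healthy
        print(f"sensitivity {sensitivity:.0%}: ~{detected:.0f} of {cancers} cancers found, "
              f"~{false_positives:.0f} healthy women recalled")

With these invented numbers, the high-sensitivity setting finds nearly all cancers but recalls roughly a thousand healthy women, which is the unnecessary worry the interviewees were nevertheless prepared to accept.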

It is difficult to summarize the rich material from interview studies. For more results, read the study here: Women’s perceptions and attitudes towards the use of AI in mammography in Sweden: a qualitative interview study.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Viberg Johansson J, Dembrower K, Strand F, et al. Women’s perceptions and attitudes towards the use of AI in mammography in Sweden: a qualitative interview study. BMJ Open 2024;14:e084014. doi: 10.1136/bmjopen-2024-084014

This post in Swedish

Approaching future issues

Mobile apps to check symptoms and get recommendations: what do users say?

What will you do if you feel sick or discover a rash and wonder what it is? Is it something serious? If you do not immediately contact healthcare, a common first step is to search for information on the internet. But there are also mobile and online applications where users can check their symptoms. A chatbot asks for information about the symptoms. The user then receives a list of possible causes as well as a recommendation, for example to see a doctor.

Because the interaction with the chatbot can bring to mind a visit to the doctor who makes a diagnosis and recommends action, these apps raise questions that may have more to do with these tempting associations than with reality. Will the apps in the future make visiting the doctor redundant and lead to the devaluing of medical professions? Or will they, on the contrary, cause more visits to healthcare because the apps often make such recommendations? Do they contribute to better diagnostic processes with fewer misdiagnoses, or do they, on the contrary, interfere with the procedure of making a diagnosis?

The questions are important, provided they are grounded in reality. Are they? What do users really expect from these symptom checker apps? What are their experiences as users of such digital aids? There are hardly any studies on this yet. German researchers therefore conducted an interview study with participants who themselves used apps to check their symptoms. What did they say when they were interviewed?

The participants’ experiences were not unequivocal but highly variable and sometimes contradictory. But there was agreement on one important point. Participants trusted their own and the doctor’s judgments more than they trusted the app. Although opinions differed on whether the app could be said to provide “diagnoses,” and regardless of whether or not the recommendations were followed, the information provided by the app was considered indicative only, not authoritative. The fear that these apps would replace healthcare professionals and contribute to a devaluation of medical professions is therefore not supported by the study. The interviewees did not consider the apps a substitute for consulting healthcare. Many saw them rather as decision support before a possible medical consultation.

Some participants used the apps to prepare for medical appointments. Others used them afterwards to reflect on the outcome of the visit. However, most wanted more collaboration with healthcare professionals about using the apps, and some used the apps because healthcare professionals recommended them. This has an interesting connection to a Swedish study that I recently blogged about, where the participants were patients with rheumatoid arthritis. Some participants in that study had prepared their visits to the doctor very carefully by using a similar app, where they kept a logbook of their symptoms. They felt all the more disappointed when they experienced that the doctor showed no interest in their observations. Maybe better planning and collaboration between patients and healthcare providers is needed regarding the use of such apps?

Interview studies can provide valuable support for ethical reasoning. By giving us insights into a reality that we otherwise risk simplifying in our thinking, they help us ask better questions and discuss them in a more nuanced way. That the results are varied and sometimes even contradictory is therefore not a weakness. On the contrary, we get a more faithful picture of a whole spectrum of experiences, which do not always correspond to our usually more one-sided expectations. The participants in the German study did not discuss algorithmic bias, which is otherwise a common theme in the ethical debate about AI. However, some were concerned that they themselves might accidentally lead the app astray by giving biased input that expressed their own assumptions about the symptoms. Read the study here: “That’s just Future Medicine” – a qualitative study on users’ experiences of symptom checker apps.

Another unexpected result of the interview study was that several participants discussed using these symptom checker apps not only for themselves, but also for friends, partners, children and parents. They raised concerns about this, as they perceived health information from family and friends as private. They were also concerned about the responsibility they assumed by communicating the analyses and recommendations produced by the app to others. The authors argue that this unexpected finding raises new questions about responsibility, and that the debate about digital aids related to health and care should be more attentive to relational ethical issues.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Müller, R., Klemmt, M., Koch, R. et al. “That’s just Future Medicine” – a qualitative study on users’ experiences of symptom checker apps. BMC Med Ethics 25, 17 (2024). https://doi.org/10.1186/s12910-024-01011-5

This post in Swedish

We recommend readings

Can positive action improve a meritocracy?

Despite political efforts to change the situation, gender imbalance is still evident in European universities and research institutions. A powerful tool for change is positive action. The tool may seem to be at odds with the meritocratic values that distinguish academia. Resistance to such measures may seem particularly well-founded in science, which is supposed to be value-neutral and let only academic merit be the decisive factor behind researchers’ success in the competition for employment and research grants.

However, merits can be assessed and measured in different ways, and merit systems may, for historical reasons, favor men over women. There are still societal expectations that women should take the main responsibility for children and aging parents, as well as for other household tasks. This pattern is reflected in working life, where female researchers can also be expected to take care of the academic housework. This double burden of household work reasonably gives women worse conditions in a competitive work environment that rewards productivity and quantity. Can the merit system then be said to be value-neutral? Or does it prevent important changes not only to the gender distribution, but also to the system itself, which possibly favors quantity over quality, certain types of research questions over others, and self-absorbed competition over good collaboration?

Meritocracies, like everything else in this world, are changeable. They can change without ceasing to be meritocracies. Positive action could give the academic merit system a push in a possibly better direction, with better ways of assessing scholarly merit that might eventually render the tool itself redundant. We therefore need to approach the question of positive action with our eyes open to both opportunities and risks.

The European project MINDtheGEPs (gender equality in research) recently published a policy brief, intended to support thoughtful implementation of positive action in European research. The tool can be used in three important areas: when awarding research grants and fellowships, when hiring full professors, and in the composition of evaluation committees. The policy brief provides an overview of common arguments for and against in the debate about positive action in European research organizations, divided into these three important areas. It is instructive to see the arguments side by side, as well as the counterarguments against the counterarguments. For is it really self-evident that positive action must undermine a meritocracy?

Read MINDtheGEPs’ policy brief here: Gender quotas & positive action: An attack on meritocracy? There you will also find case studies of positive action at two Italian universities.

MINDtheGEPs hosts a series of Open Forums to discuss gender equality in the academic and research & innovation sectors, facilitating knowledge exchange and mutual learning among scholars, practitioners and professionals supporting gender equality policies and measures. At their next Open Forum, on 20 March 2024, they will share and discuss the contents of their latest policy brief, exploring the contentious topic of positive action, assessing arguments for and against, and drawing insights from MINDtheGEPs’ Gender Equality Plan development.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Solera C, Cipriani N, Holm Bodin A. (2023) Gender quotas & positive action: An attack on meritocracy? Zenodo. DOI: 10.5281/zenodo.1002437

This post in Swedish

We challenge habits of thought

Living with rheumatoid arthritis: how do patients perceive their interaction with healthcare and a self-care app?

Not all diseases can be cured, but medication along with other measures can alleviate the symptoms. Rheumatoid arthritis is one such disease. Medicines for symptoms such as swelling and stiffness have become very effective. As a patient, you can find good ways to live with the disease, even if it can mean more or less regular contact with healthcare (depending on how you are affected). Not only with the doctor who prescribes medication, but often with an entire healthcare team: doctor, nurse, physiotherapist, occupational therapist and counselor. There are aids that make everyday life easier, such as orthopedic shoes, splints and easier-to-grip faucets at home, and many hospitals also offer patients education about the disease and how to live and function with it, at home as well as at work.

The symptoms vary, not only between individuals but also for the same individual over time. The need for care and support is thus individual and changing. Therefore, it is important that the interaction between patient and healthcare works efficiently and with sensitivity to the patient’s unique situation at the moment. Since patients to a great extent have to deal with their illness on their own, and over time become increasingly knowledgeable about their own disease, it is important to listen to the patient. Not only to improve the patient’s experience of healthcare, but also to ensure that individual patients receive the care and support they need at the right moment. The patient may not be part of the healthcare team, but is still one of the most important team players.

There are digital self-care applications for rheumatoid arthritis, where the patients who choose to use the tools can get advice and information about the disease, prepare for contacts with healthcare, and keep a digital logbook about their symptoms, experiences and lifestyle. Such digital self-care apps can be assumed to make patients even more knowledgeable about their own disease. The logbook contains relevant observations, which the patient can describe in the meetings with the healthcare provider. What an asset to the care team!

Given the importance of good continuous team play between patient and healthcare in diseases such as rheumatoid arthritis, it is important that researchers regularly examine how patients experience the interaction. Jennifer Viberg Johansson, Hanna Blyckert and Karin Schölin Bywall recently conducted an interview study with patients at various hospitals in Sweden. The aim was to investigate not only the patients’ experiences of the interaction with healthcare, but also their experiences of a digital self-care app, and how the app affected the communication between patient and doctor.

The patients’ perception of their interaction with healthcare varied greatly. About half felt prioritized and excellently supported by the healthcare team and half felt neglected, some even dehumanized. This may reflect how different hospitals have different resources and competencies for rheumatoid arthritis, but also unclear communication about what the patients can expect. Many patients found the self-care app both useful and fun to use, and a good support when preparing for healthcare visits. At the same time, these detailed preparations could lead to even greater disappointment when it was felt that the doctor was not listening and barely looking at the patient.

Collaborative teamwork and clear communication are identified in the study as important contributing factors to patients’ well-being and ability to manage their illness. The patients valued time for dialogue with the rheumatologist and appreciated when their personal observations of life with the disease were listened to. Because some of the interviewed patients had the negative experience that the doctor did not listen to the observations they had compiled in the app, the authors believe that the use of digital tools should be promoted by the healthcare system and that there should be an agreement on how the tool is to be used at meetings to plan care and support.

For more details about the patients’ experiences, read the article here: Experiences of individuals with rheumatoid arthritis interacting with health care and the use of a digital self-care application: a qualitative interview study.

The study emphasizes the importance of patient-centered care for individuals with rheumatoid arthritis, as well as the importance of considering patients’ psychological well-being alongside their physical health. An important point in the study could perhaps be summarized as follows: appreciate the patient as a skilled team player.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Viberg Johansson J, Blyckert H, Schölin Bywall K. Experiences of individuals with rheumatoid arthritis interacting with health care and the use of a digital self-care application: a qualitative interview study. BMJ Open 2023;13:e072274. doi: 10.1136/bmjopen-2023-072274

This post in Swedish

In dialogue with patients

Moral stress: what does the COVID-19 pandemic teach us about the concept?

Newly formed concepts can sometimes satisfy such urgent linguistic needs that they immediately seem completely self-evident. Moral stress is probably such a concept. It is not many decades old. Nevertheless, the concept probably appeared from the beginning as an all-too-familiar reality for many healthcare workers.

An interesting aspect of these immediately self-evident concepts is that they effortlessly find their own paths through language, despite our efforts to define the right path. They are simply too striking in living spoken language to be captured in the more rigid written language of definitions. However, the first definition of moral stress was fairly straightforward. This is how Andrew Jameton defined the concept:

“Moral distress arises when one knows the right thing to do, but institutional constraints make it nearly impossible to pursue the right course of action.”

Although the definition is not complicated in the written language, it still prevents the concept from speaking freely, as it wants to. For, do we not spontaneously want to talk about moral stress in other situations as well? For example, in situations where two different actions can each be perceived as right, but choosing one excludes the other? Or in situations where something other than “institutional constraints” prevents the right course of action? Perhaps a sudden increase in the number of patients.

Here is a later definition of moral stress, which leaves more open (by Kälvemark, Höglund and Hansson):

“Traditional negative stress symptoms that occur due to situations that involve an ethical dimension where the health care provider feels he/she is not able to preserve all interests at stake.”

This definition allows the concept to speak more freely, in more situations than the first, although it is possibly slightly more complicated in the written language. That is of course no objection. A definition has other functions than the concept being defined; it does not have to be catchy like a song chorus. But if we compare the definitions, we can notice how both express the authors’ ideas about morality, and thus about moral stress. In the first definition, the author has the idea that morality is a matter of conscience and that moral stress occurs when institutional constraints of the profession prevent the practitioner from acting as conscience demands. Roughly. In the second definition, the authors have the idea that morality is rather a kind of balancing of different ethical values and interests and that moral stress arises in situations that prevent the trade-offs from being realized. Roughly.

Why do I dwell on the written and intellectual aspects of the definitions, even though it is hardly an objection to a definition? It has to do with the relationship between our words and our ideas about our words. Successful words find their own paths in language despite our ideas about the path. In other words: despite our definitions. Jameton both coined and defined moral (di)stress, but the concept almost immediately stood, and walked, on its own feet. I simply want to remind you that spoken-language spontaneity can have its own authority, its own grounding in reality, even when it comes to newly formed concepts introduced through definitions.

An important reason why the newly formed concept of moral stress caught on so immediately is probably that it put into words pressing problems for healthcare workers. Issues that needed to be noticed, discussed and dealt with. One way to develop the definition of moral stress can therefore be to listen to how healthcare workers spontaneously use the concept about situations they themselves have experienced.

A study in BMC Medical Ethics does just this. Together with three co-authors, Martina E. Gustavsson investigated how Swedish healthcare workers (assistants, nurses, doctors, etc.) described moral stress during the COVID-19 pandemic. After answering a number of questions, the participants were requested to describe, in a free text response, situations during the pandemic in which they experienced moral stress. These free text answers were conceptually analyzed with the aim of formulating a refined definition of moral stress.

An overarching theme in the free text responses turned out to be: being prevented from providing good care to needy patients. The healthcare workers spoke of a large number of obstacles. They perceived problems that needed to be solved, but felt that they were not taken seriously, that they were inadequate or forced to act outside their areas of expertise. What stood in the way of good care? The participants in the study spoke, among other things, about unusual conditions for decision-making during the pandemic, about tensions in the work team (such as colleagues who did not dare to go to work for fear of being infected), about substandard communication with the organizational management. All this created moral stress.

But they also talked about the pandemic itself as an obstacle. The prioritization of COVID-19 patients meant that other patients received worse care and were exposed to the risk of infection. The work was also hindered by a lack of resources, such as personal protective equipment, while the protective equipment prevented staff from comforting worried patients. The visiting restrictions also forced staff to act as guards against patients’ relatives and isolate infected patients from their children and partners. Finally, the pandemic prevented good end-of-life care. This too was morally stressful.

How can the healthcare workers’ free text responses justify a refined definition of moral stress? Martina E. Gustavsson and co-authors consider the definition above by Kälvemark, Höglund and Hansson a good definition to start from. But one type of situation that the participants in the study described probably falls outside that definition, namely the situation of not being taken seriously, of feeling inadequate and powerless. The study therefore proposes the following definition, which includes these situations:

“Moral stress is the kind of stress that arises when confronted with a moral challenge, a situation in which it is difficult to resolve a moral problem and in which it is difficult to act, or feeling insufficient when you act, in accordance with your own moral values.”

Here, too, one can sense an idea of morality, and thus of moral stress. The authors think of morality as being about solving moral problems, and that moral stress arises when this endeavor encounters challenges, or when one feels inadequate in the attempts to solve the problems. The definition can be considered a refined idea of what moral stress is. It describes more precisely the relevant situations where healthcare workers spontaneously want to talk about moral stress.

Obviously, we can learn a lot about the concept of moral stress from the experience of the COVID-19 pandemic. Read the study here, which contains poignant descriptions of morally stressful situations during the pandemic: “Being prevented from providing good care: a conceptual analysis of moral stress among health care workers during the COVID-19 pandemic.”

Finally, I would like to mention two general lessons about language, which in my view the study highlights. The first is that we can learn a lot about our concepts through the difficulties of defining them. The study took this “definition resistance” seriously by listening to how healthcare workers spontaneously talk about moral stress. This created friction that helped refine the definition. The second lesson is that we often use words despite our ideas about what the words mean or should mean. Spoken language spontaneity has a natural weight and authority that we easily overlook, but from which we have much to learn – as in this empirical study.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Gustavsson, M.E., von Schreeb, J., Arnberg, F.K. et al. “Being prevented from providing good care: a conceptual analysis of moral stress among health care workers during the COVID-19 pandemic”. BMC Med Ethics 24, 110 (2023). https://doi.org/10.1186/s12910-023-00993-y

This post in Swedish

Minding our language

Research nurses on ethical challenges in recruiting participants for clinical research

In clinical research with participating patients, research nurses play a central role. On a daily basis, they balance the values of care and the needs of research. For these nurses, it is clear that patients’ informed consent for research participation is more than just a one-time event completed by signing the form. The written consent is the beginning of a long relationship with the patients. The process requires effective communication throughout the course of the study, from obtaining consent to subsequent interactions with patients related to their consent. The research nurses must continuously ensure that participating patients are well informed about how the study is progressing and that they understand any changes to the set-up or to the risks and benefits. If conditions change too much, a new consent may need to be obtained.

Despite research nurses being so deeply involved in the entire consent process, there is a lack of research on this professional group’s experiences of and views on informed consent. What problems and opportunities do they experience? In an interview study, Tove Godskesen, Joar Björk and Niklas Juth studied the issue. They interviewed 14 Swedish research nurses about ethical challenges related to the consent process and how the challenges were handled.

The challenges were mainly about factors that could threaten voluntariness. Informed consent must be given voluntarily, but several factors can threaten this ethically important requirement. The nurses mentioned a number of factors, such as rushed decision-making in stressful situations, excessively detailed information to patients, doctors’ influence over patients, and disagreement within the family. An elusive threat to voluntariness is patients’ own sometimes unrealistic hopes for therapeutic benefit from research participation. Why is this elusive? Because the hopes can make the patients themselves motivated to participate. However, if the hopes are unrealistic, voluntariness can be said to be undermined even if the patients want to participate.

How do the research nurses deal with the challenges? An important measure is to give patients time in a calm environment to thoughtfully consider their participation and discuss it. This also reduces the risk of participants dropping out of the study, reasoned the nurses. Time with the patients also helps the research nurses to understand the patients’ situation, so that the recruitment does not take place hastily and perhaps on the basis of unrealistic expectations, they emphasized. The interviewees also said that they have an important role as advocates for the patients. In this role, the nurses may need time to understand and more closely examine the patients’ perspectives and reasons for potentially withdrawing from the study, and to find suitable solutions. It can also happen that patients say no to participation even though they really want to, perhaps because they are overwhelmed by all the information that made participation sound complicated. Again, the research nurses may need to give themselves and the patients time for in-depth conversations, so that patients who want to participate have the opportunity to do so. Maybe it is not as complicated as it seemed?

Read the important interview study here: Challenges regarding informed consent in recruitment to clinical research: a qualitative study of clinical research nurses’ experiences.

The study also highlights another possible problem that the research nurses raised, namely the questionable exclusion of certain groups from research participation (such as people who have difficulty understanding Swedish or have reduced cognitive ability). Such exclusion can mean that patients who want to participate in research are not allowed to do so, that certain groups have less access to new treatments, and that the scientific quality of the studies is hampered.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Godskesen, T., Björk, J. & Juth, N. Challenges regarding informed consent in recruitment to clinical research: a qualitative study of clinical research nurses’ experiences. Trials 24, 801 (2023). https://doi.org/10.1186/s13063-023-07844-6

This post in Swedish

Ethics needs empirical input
