A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: interview studies

Women on AI-assisted mammography

The use of AI tools in healthcare has become a recurring theme on this blog. So far, the posts have mainly been about mobile and online apps for patients and the general public. Today, the theme is more advanced AI tools that are used professionally by healthcare staff.

Within the Swedish breast cancer screening program, radiologists interpret large numbers of X-ray images to detect breast cancer at an early stage. The workload is heavy, and most of the time the images show no signs of cancer or precancerous changes. Today, AI tools are being tested that could improve mammography in several ways. AI could be used as an assisting resource for radiologists, helping them detect additional tumors. It could also be used as an independent reader of images to relieve radiologists, and to support assessments of which patients should receive care most urgently.

For AI-assisted mammography to work, it is not only the technology that needs to be developed. Researchers also need to investigate how women perceive AI-assisted breast cancer screening. Four researchers, including Jennifer Viberg Johansson and Åsa Grauman at CRB, interviewed sixteen women who underwent mammography at a Swedish hospital where an AI tool was tested as a third reviewer of the X-ray images, alongside the two radiologists.

Several of the interviewees emphasized that AI is only a tool: AI cannot replace the doctor, because humans have abilities beyond image recognition, such as intuition, empathy and holistic thinking. Another finding was that some of the interviewees had a greater tolerance for human error than for failures of the AI tool, which were considered unacceptable. Some argued that if the AI tool makes a mistake, the mistake will be repeated systematically, while human errors are occasional. Some believed that the responsibility when the technology fails lies with the humans and not with the technology.

Personally, I cannot help but speculate that the sharp distinction between human error, which we find easier to come to terms with, and unacceptably failing technology is connected to the fact that we can say of humans who fail: “After all, the radiologists surely did their best.” We hardly say of failing AI: “After all, the technology surely did its best.” Technology is not subject to the same conciliatory considerations.

The authors themselves emphasize that the participants in the study saw AI as a valuable tool in mammography, but held that the tool cannot replace humans in the process. The authors also emphasize that the interviewees preferred that the AI tool identify possible tumors with high sensitivity, even if this leads to many false positive results and thus to unnecessary worry and fear. In order for patients to understand AI-assisted healthcare, effective communication efforts are required, the authors conclude.

It is difficult to summarize the rich material from interview studies. For more results, read the study here: Women’s perceptions and attitudes towards the use of AI in mammography in Sweden: a qualitative interview study.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Viberg Johansson J, Dembrower K, Strand F, et al. Women’s perceptions and attitudes towards the use of AI in mammography in Sweden: a qualitative interview study. BMJ Open 2024;14:e084014. doi: 10.1136/bmjopen-2024-084014

This post in Swedish

Approaching future issues

Mobile apps to check symptoms and get recommendations: what do users say?

What will you do if you feel sick or discover a rash and wonder what it is? Is it something serious? If you do not immediately contact healthcare, a common first step is to search for information on the internet. But there are also mobile and online applications where users can check their symptoms. A chatbot asks for information about the symptoms. The user then receives a list of possible causes as well as a recommendation, for example to see a doctor.

Because the interaction with the chatbot can bring to mind a visit to a doctor who makes a diagnosis and recommends action, these apps raise questions that may have more to do with these tempting associations than with reality. Will the apps in the future make visits to the doctor redundant and lead to a devaluation of the medical professions? Or will they, on the contrary, cause more healthcare visits because the apps often recommend seeing a doctor? Do they contribute to better diagnostic processes with fewer misdiagnoses, or do they, on the contrary, interfere with the procedure of making a diagnosis?

The questions are important, provided they are grounded in reality. Are they? What do users really expect from these symptom checker apps? What are their experiences as users of such digital aids? There are hardly any studies on this yet. German researchers therefore conducted an interview study with participants who themselves used apps to check their symptoms. What did they say when they were interviewed?

The participants’ experiences were not unequivocal but highly variable and sometimes contradictory. But there was agreement on one important point. Participants trusted their own and the doctor’s judgments more than they trusted the app. Although opinions differed on whether the app could be said to provide “diagnoses,” and regardless of whether or not the recommendations were followed, the information provided by the app was considered to be indicative only, not authoritative. The fear that these apps would replace healthcare professionals and contribute to a devaluation of medical professions is therefore not supported in the study. The interviewees did not consider the apps as a substitute for consulting healthcare. Many saw them rather as decision support before possible medical consultation.

Some participants used the apps to prepare for medical appointments. Others used them afterwards to reflect on the outcome of the visit. However, most wanted more collaboration with healthcare professionals about using the apps, and some used the apps because healthcare professionals recommended them. This has an interesting connection to a Swedish study that I recently blogged about, where the participants were patients with rheumatoid arthritis. Some participants in that study had prepared their doctor’s visits very carefully by using a similar app, in which they kept a logbook of their symptoms. They felt all the more disappointed when they felt that the doctor showed no interest in their observations. Perhaps better planning and collaboration between patients and healthcare providers is needed regarding the use of such apps?

Interview studies can provide valuable support for ethical reasoning. By giving us insights into a reality that we otherwise risk simplifying in our thinking, they help us ask better questions and discuss them in a more nuanced way. That the results are varied and sometimes even contradictory is therefore not a weakness. On the contrary, we get a more faithful picture of a whole spectrum of experiences, which do not always correspond to our usually more one-sided expectations. The participants in the German study did not discuss algorithmic bias, which is otherwise a common theme in the ethical debate about AI. However, some were concerned that they themselves might accidentally lead the app astray by giving biased input that expressed their own assumptions about the symptoms. Read the study here: “That’s just Future Medicine” – a qualitative study on users’ experiences of symptom checker apps.

Another unexpected result of the interview study was that several participants discussed using these symptom checker apps not only for themselves, but also for friends, partners, children and parents. They raised concerns about this, since they perceived health information from family and friends as private. They were also concerned about the responsibility they assumed by communicating the analyses and recommendations produced by the app to others. The authors argue that this unexpected finding raises new questions about responsibility, and that the debate about digital aids related to health and care should pay more attention to relational ethical issues.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Müller, R., Klemmt, M., Koch, R. et al. “That’s just Future Medicine” – a qualitative study on users’ experiences of symptom checker apps. BMC Med Ethics 25, 17 (2024). https://doi.org/10.1186/s12910-024-01011-5

This post in Swedish

We recommend readings

Living with rheumatoid arthritis: how do patients perceive their interaction with healthcare and a self-care app?

Not all diseases can be cured, but medication along with other measures can alleviate the symptoms. Rheumatoid arthritis is one such disease. Medicines for symptoms such as swelling and stiffness have become very effective. As a patient, you can find good ways to live with the disease, even if it can mean more or less regular contact with healthcare (depending on how you are affected). Not only with the doctor who prescribes medication, but often with an entire healthcare team: doctor, nurse, physiotherapist, occupational therapist and counselor. There are aids that make everyday life easier, such as orthopedic shoes, splints and easier-to-grip faucets at home, and many hospitals also offer patients education about the disease and how to live and function with it, at home as well as at work.

The symptoms vary, not only between individuals but also for the same individual over time. The need for care and support is thus individual and changing. Therefore, it is important that the interaction between patient and healthcare works efficiently and with sensitivity to the patient’s unique situation at the moment. Since patients to a great extent have to deal with their illness on their own, and over time become increasingly knowledgeable about their own disease, it is important to listen to the patient. Not only to improve the patient’s experience of healthcare, but also to ensure that individual patients receive the care and support they need at the right moment. The patient may not be part of the healthcare team, but is still one of the most important team players.

There are digital self-care applications for rheumatoid arthritis, where the patients who choose to use the tools can get advice and information about the disease, prepare for contacts with healthcare, and keep a digital logbook about their symptoms, experiences and lifestyle. Such digital self-care apps can be assumed to make patients even more knowledgeable about their own disease. The logbook contains relevant observations, which the patient can describe in the meetings with the healthcare provider. What an asset to the care team!

Given the importance of good continuous team play between patient and healthcare in diseases such as rheumatoid arthritis, it is important that researchers regularly examine how patients experience the interaction. Jennifer Viberg Johansson, Hanna Blyckert and Karin Schölin Bywall recently conducted an interview study with patients at various hospitals in Sweden. The aim was to investigate not only the patients’ experiences of the interaction with healthcare, but also their experiences of a digital self-care app, and how the app affected the communication between patient and doctor.

The patients’ perceptions of their interaction with healthcare varied greatly. About half felt prioritized and excellently supported by the healthcare team, while the other half felt neglected, some even dehumanized. This may reflect differences in hospitals’ resources and expertise for rheumatoid arthritis, but also unclear communication about what patients can expect. Many patients found the self-care app both useful and fun to use, and a good support when preparing for healthcare visits. At the same time, these detailed preparations could lead to even greater disappointment when the patients felt that the doctor was not listening and barely looking at them.

Collaborative teamwork and clear communication are identified in the study as important contributing factors to patients’ well-being and ability to manage their illness. The patients valued time for dialogue with the rheumatologist and appreciated when their personal observations of life with the disease were listened to. Because some of the interviewed patients had the negative experience that the doctor did not listen to the observations they had compiled in the app, the authors argue that healthcare should promote the use of such digital tools and agree with patients on how the tool is to be used in meetings to plan care and support.

For more details about the patients’ experiences, read the article here: Experiences of individuals with rheumatoid arthritis interacting with health care and the use of a digital self-care application: a qualitative interview study.

The study emphasizes the importance of patient-centered care for individuals with rheumatoid arthritis, as well as the importance of considering patients’ psychological well-being alongside their physical health. An important point in the study could perhaps be summarized as follows: appreciate the patient as a skilled team player.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Viberg Johansson J, Blyckert H, Schölin Bywall K. Experiences of individuals with rheumatoid arthritis interacting with health care and the use of a digital self-care application: a qualitative interview study. BMJ Open 2023;13:e072274. doi: 10.1136/bmjopen-2023-072274

This post in Swedish

In dialogue with patients

Research nurses on ethical challenges in recruiting participants for clinical research

In clinical research with participating patients, research nurses play a central role. On a daily basis, they balance the values of care and the needs of research. For these nurses, it is clear that patients’ informed consent for research participation is more than just a one-time event completed by signing the form. The written consent is the beginning of a long relationship with the patients. The process requires effective communication throughout the course of the study, from obtaining consent to subsequent interactions with patients related to their consent. The research nurses must continuously ensure that participating patients are well informed about how the study is progressing, that they understand any changes to the set-up or to the risks and benefits. If conditions change too much, a new consent may need to be obtained.

Despite research nurses being so deeply involved in the entire consent process, there is a lack of research on this professional group’s experiences of and views on informed consent. What problems and opportunities do they experience? In an interview study, Tove Godskesen, Joar Björk and Niklas Juth studied the issue. They interviewed 14 Swedish research nurses about ethical challenges related to the consent process and how the challenges were handled.

The challenges were mainly about factors that could threaten voluntariness. Informed consent must be given voluntarily, but several factors can threaten this ethically important requirement. The nurses mentioned a number of factors, such as rushed decision-making in stressful situations, excessively detailed information to patients, doctors’ influence over patients, and disagreement within the family. An elusive threat to voluntariness is patients’ own sometimes unrealistic hopes for therapeutic benefit from research participation. Why is this elusive? Because the hopes can make the patients themselves motivated to participate. However, if the hopes are unrealistic, voluntariness can be said to be undermined even if the patients want to participate.

How do the research nurses deal with the challenges? An important measure is to give patients time in a calm environment to consider their participation thoughtfully and discuss it. This also reduces the risk of participants dropping out of the study, the nurses reasoned. Time with the patients also helps the research nurses understand the patients’ situation, so that recruitment does not take place hastily, perhaps on the basis of unrealistic expectations, they emphasized. The interviewees also said that they have an important role as advocates for the patients. In this role, the nurses may need time to understand and examine more closely the patients’ perspectives and reasons for potentially withdrawing from the study, and to find suitable solutions. It can also happen that patients decline participation even though they really want to take part, perhaps because they were overwhelmed by all the information, which made participation sound complicated. Again, the research nurses may need to give themselves and the patients time for in-depth conversations, so that patients who want to participate have the opportunity to do so. Maybe it is not as complicated as it seemed?

Read the important interview study here: Challenges regarding informed consent in recruitment to clinical research: a qualitative study of clinical research nurses’ experiences.

The study also highlights another possible problem that the research nurses raised, namely the questionable exclusion of certain groups from research participation (such as people who have difficulty understanding Swedish or have reduced cognitive ability). Such exclusion can mean that patients who want to participate in research are not allowed to do so, that certain groups have less access to new treatments, and that the scientific quality of the studies is hampered.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Godskesen, T., Björk, J. & Juth, N. Challenges regarding informed consent in recruitment to clinical research: a qualitative study of clinical research nurses’ experiences. Trials 24, 801 (2023). https://doi.org/10.1186/s13063-023-07844-6

This post in Swedish

Ethics needs empirical input