A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: patient perceptions

Women on AI-assisted mammography

The use of AI tools in healthcare has become a recurring theme on this blog. So far, the posts have mainly been about mobile and online apps for use by patients and the general public. Today, the theme is more advanced AI tools that are used professionally by healthcare staff.

Within the Swedish program for breast cancer screening, radiologists interpret large amounts of X-ray images to detect breast cancer at an early stage. The workload is heavy, and most of the time the images show no signs of cancer or precancerous changes. Today, AI tools are being tested that could improve mammography in several ways. AI could assist radiologists in detecting additional tumors. It could also read images independently to relieve radiologists, and support assessments of which patients need care most urgently.

For AI-assisted mammography to work, it is not only the technology that needs to be developed. Researchers also need to investigate how women think about AI-assisted mammography. How do they perceive AI-assisted breast cancer screening? Four researchers, including Jennifer Viberg Johansson and Åsa Grauman at CRB, interviewed sixteen women who underwent mammography at a Swedish hospital where an AI tool was tested as a third reviewer of the X-ray images, alongside the two radiologists.

Several of the interviewees emphasized that AI is only a tool: AI cannot replace the doctor, because humans have abilities beyond image recognition, such as intuition, empathy and holistic thinking. Another finding was that some of the interviewees were more tolerant of human error than of failures of the AI tool, which they considered unacceptable. Some argued that if the AI tool makes a mistake, the mistake will be repeated systematically, while human errors are occasional. Some believed that the responsibility when the technology fails lies with the humans and not with the technology.

Personally, I cannot help but speculate that the sharp distinction between human error, which is easier to reconcile with, and unacceptably failing technology is connected to the fact that we can say of humans who fail: “After all, the radiologists surely did their best.” We hardly say of failing AI: “After all, the technology surely did its best.” Technology is not subject to the same conciliatory considerations.

The authors themselves emphasize that the participants in the study saw AI as a valuable tool in mammography, but held that the tool cannot replace humans in the process. The authors also emphasize that the interviewees preferred that the AI tool identify possible tumors with high sensitivity, even if this leads to many false positive results and thus to unnecessary worry and fear. In order for patients to understand AI-assisted healthcare, effective communication efforts are required, the authors conclude.

It is difficult to summarize the rich material from interview studies. For more results, read the study here: Women’s perceptions and attitudes towards the use of AI in mammography in Sweden: a qualitative interview study.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Viberg Johansson J, Dembrower K, Strand F, et al. Women’s perceptions and attitudes towards the use of AI in mammography in Sweden: a qualitative interview study. BMJ Open 2024;14:e084014. doi: 10.1136/bmjopen-2024-084014

This post in Swedish


Precision medicine algorithms and personal encounters

The characters in Franz Kafka’s novels go astray in the corridors of bureaucracy. Impersonal officials handle never-defined cases as if they were robots controlled by algorithms as obscure as they are relentless. Judgments are passed without the convicted receiving any comprehensible information about possible charges.

Please excuse this dramatic introduction, which, in a perhaps slightly extreme way, is only intended to highlight a point in an article about precision medicine. Namely, the importance of placing the methods of precision medicine within the framework of the meeting between patient and physician: the importance of “personalizing” precision medicine.

Precision medicine is the name for methods of optimizing disease management on the basis of the patient’s individual genetic profile. A bit like a dating app that is meant to identify the best potential partner for you. Algorithms are used to calculate how patients with different genetic variants are likely to respond to drug treatments for some disease. There are advantages to this. The most effective and safe treatment for the patient in question can be identified. It also means that you can avoid treatments from which a patient with a certain genetic profile would have very serious side effects, or from which the patient is unlikely to get any positive effect and would only suffer the side effects.

Together with several co-authors, Åsa Grauman at CRB recently published an interview study on precision medicine. Patients with a form of blood cancer (AML) in Finland, Italy and Germany were interviewed about how they viewed precision medicine, and about their preferences for being involved in this new way of making treatment decisions. Something I found interesting was that several (not all) participants wanted and valued information, but not for the purpose of making decisions. They wanted information to prepare themselves mentally, to know what to expect and to understand why different measures were being taken. They wanted information to be able to make the transition to being patients, I would like to say.

Almost all participants were unfamiliar with precision medicine. When the interviewer described the concept to them, most of them felt that precision medicine made sense, and they were hopeful that the methods could be useful in the future. For example, to avoid unnecessary treatments with severe side effects in patients with a certain genetic profile. But even if the participants had faith in the algorithms that may be used in precision medicine, they emphasized that the algorithms are only a tool for the physician. They said that the physician can see the human side of the patient and the disease, and that the physician should be able to go against the algorithm depending on factors in the patient other than those included in the algorithm. The algorithm must not replace the physician or override the patient. Many participants thus seemed to hold the view that difficult treatment decisions can be left to the physician, if the physician has listened to both the algorithm and the patient. Participants also highlighted the problem of not fitting into the algorithm: being denied treatment because the algorithm does not consider one to be the right patient for the available treatment options.

In their discussion, the authors highlighted a particularly interesting aspect of the situation of making treatment decisions. Namely, that the patient can weigh benefits and risks differently than both the physician and the algorithm. Incorporating the patient’s own trade-offs is therefore fundamental, they write, for precision medicine to be considered personalized care. Read the thought-provoking interview study here: Personalizing precision medicine: Patients with AML perceptions about treatment decisions.

To summarize, one could say that patients need to meet not only their algorithmically optimized treatment. In order to understand and influence their situation as patients, they above all need to meet their physician. Even if patients feel that the decisions are too difficult, and even if they are positive about the possibilities of precision medicine, they want to talk to the physician, and they want their meeting to influence the decisions. Perhaps treatment in an important sense begins even before the treatment decision is made, when the patient first meets the physician and they begin to find their way together through the hospital corridors. The characters in Kafka’s novels never experienced such meaningful encounters.

Pär Segerdahl


Åsa Grauman, Mika Kontro, Karl Haller, et al. Personalizing precision medicine: Patients with AML perceptions about treatment decisions. Patient Education and Counseling 115, 2023, https://doi.org/10.1016/j.pec.2023.107883

This post in Swedish
