A blog from the Centre for Research Ethics & Bioethics (CRB)


Digitization of healthcare requires a national strategy to increase individuals’ ability to handle information digitally

There is consensus that the digitization of healthcare can make it easier to stay in touch with healthcare services and to access information that supports individual decision-making about one’s own health. However, the ability to understand and use health information digitally varies. The promise of digitization therefore risks creating unequal care and health.

In this context, one usually speaks of digital health literacy. The term refers to the ability to retrieve, understand and use health information digitally to maintain or improve one’s health. This ability varies not only between individuals, but also within the same individual. Illness can, for example, reduce the ability to use a computer or a smartphone to maintain contact with healthcare and to understand and manage health information digitally. Your digital health literacy is dependent on your health.

How do Swedish policy makers view the need for strategies to increase digital health literacy in Sweden? An article with Karin Schölin Bywall as lead author examines this question. The material was collected during three recorded focus group discussions (or workshops) with a total of 10 participants. The study is part of a European project to increase digital health literacy in Europe. So what did Swedish decision-makers think of the need for a national strategy?

The participants in the study said that the issue of digital health literacy was not as much on the agenda in Sweden as in many other countries in Europe and that governmental agencies have limited knowledge of the problem. Digital services in healthcare also usually require that you identify yourself digitally, but a large group of adults in Sweden lack e-identification. The need for a national strategy is therefore great.

Participants further discussed how digital health literacy manifests itself in individuals’ ability to find the right website and reliable information on the internet. People with lower digital health literacy may not be able to identify appropriate keywords or may have difficulty assessing the credibility of the information source. The problem is not lessened by the fact that algorithms control where we end up when we search for information. Often the algorithms make companies more visible than government organizations.

The policy makers in the study also identified specific groups that are at risk of digital exclusion (digital divide) and that need different types of support. Among others, they mentioned people with intellectual disabilities and young people who do not sufficiently master source criticism (even though they are skilled users of the internet and various apps). Specific measures to counteract the digital divide in healthcare were discussed, such as regular mailings with information about good websites, adaptation of website content for people with special needs, and teaching in source criticism. It was also emphasized that individuals may have different combinations of conditions that affect the ability to manage health information digitally in different ways, and that a strategy to increase digital health literacy must therefore be nuanced.

In summary, the study emphasizes that the need for a national strategy for increased digital health literacy is great. While digital technologies have huge potential to improve public health, they also risk reinforcing already existing inequalities, the authors conclude. Read the study here: Calling for allied efforts to strengthen digital health literacy in Sweden: perspectives of policy makers.

Something that struck me was that the policy makers in the study, as far as I could see, did not emphasize the growing group of elderly people in the population. Elderly people may have a particularly broad combination of conditions that affect digital health literacy in many different ways. In addition, the elderly’s ability to handle information digitally not only varies from day to day; it can also be expected to deteriorate steadily over time, probably at the same rate as the need to use it increases.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Bywall, K.S., Norgren, T., Avagnina, B. et al. Calling for allied efforts to strengthen digital health literacy in Sweden: perspectives of policy makers. BMC Public Health 24, 2666 (2024). https://doi.org/10.1186/s12889-024-20174-9



Mobile apps to check symptoms and get recommendations: what do users say?

What do you do if you feel sick or discover a rash and wonder what it is? Is it something serious? If you do not immediately contact healthcare, a common first step is to search for information on the internet. But there are also mobile and web applications where users can check their symptoms. A chatbot asks for information about the symptoms. The user then receives a list of possible causes as well as a recommendation, for example to see a doctor.

Because the interaction with the chatbot can bring to mind a visit to the doctor who makes a diagnosis and recommends action, these apps raise questions that may have more to do with these tempting associations than with reality. Will the apps in the future make visiting the doctor redundant and lead to the devaluing of medical professions? Or will they, on the contrary, cause more visits to healthcare because the apps often make such recommendations? Do they contribute to better diagnostic processes with fewer misdiagnoses, or do they, on the contrary, interfere with the procedure of making a diagnosis?

The questions are important, provided they are grounded in reality. Are they? What do users really expect from these symptom checker apps? What are their experiences as users of such digital aids? There are hardly any studies on this yet. German researchers therefore conducted an interview study with participants who themselves used apps to check their symptoms. What did they say when they were interviewed?

The participants’ experiences were not unequivocal but highly variable and sometimes contradictory. But there was agreement on one important point. Participants trusted their own and the doctor’s judgments more than they trusted the app. Although opinions differed on whether the app could be said to provide “diagnoses,” and regardless of whether or not the recommendations were followed, the information provided by the app was considered to be indicative only, not authoritative. The fear that these apps would replace healthcare professionals and contribute to a devaluation of medical professions is therefore not supported in the study. The interviewees did not consider the apps as a substitute for consulting healthcare. Many saw them rather as decision support before possible medical consultation.

Some participants used the apps to prepare for medical appointments. Others used them afterwards to reflect on the outcome of the visit. However, most wanted more collaboration with healthcare professionals about using the apps, and some used the apps because healthcare professionals recommended them. This has an interesting connection to a Swedish study that I recently blogged about, where the participants were patients with rheumatoid arthritis. Some participants in that study had prepared their visits to the doctor very carefully by using a similar app, in which they kept a logbook of their symptoms. They felt all the more disappointed when the doctor showed no interest in their observations. Perhaps better planning and collaboration between patient and healthcare is needed regarding the use of such apps?

Interview studies can provide valuable support for ethical reasoning. By giving us insights into a reality that we otherwise risk simplifying in our thinking, they help us ask better questions and discuss them in a more nuanced way. That the results are varied and sometimes even contradictory is therefore not a weakness. On the contrary, we get a more faithful picture of a whole spectrum of experiences, which do not always correspond to our usually more one-sided expectations. The participants in the German study did not discuss algorithmic bias, which is otherwise a common theme in the ethical debate about AI. However, some were concerned that they themselves might accidentally lead the app astray by giving biased input that expressed their own assumptions about the symptoms. Read the study here: “That’s just Future Medicine” – a qualitative study on users’ experiences of symptom checker apps.

Another unexpected result of the interview study was that several participants discussed using these symptom checker apps not only for themselves, but also for friends, partners, children and parents. They raised concerns about this, as they perceived health information from family and friends as private. They were also concerned about the responsibility they assumed by communicating the analyses and recommendations produced by the app to others. The authors argue that this unexpected finding raises new questions about responsibility and that the debate about digital aids related to health and care should be more attentive to relational ethical issues.

Pär Segerdahl


Müller, R., Klemmt, M., Koch, R. et al. “That’s just Future Medicine” – a qualitative study on users’ experiences of symptom checker apps. BMC Med Ethics 25, 17 (2024). https://doi.org/10.1186/s12910-024-01011-5

