A blog from the Centre for Research Ethics & Bioethics (CRB)

Author: Jennifer Viberg Johansson

AI is the answer! But what is the question?

Many projects involving AI systems in healthcare are underway in Sweden. The testing of AI solutions is in full swing. Yet many systems never seem to reach implementation and use. Why? Often it is a matter of poor preparatory work. Without a carefully considered strategy and clear goals, we risk scaling up AI systems that cannot cope with the complexity of healthcare.

The atmosphere around many AI ventures can be almost religious. You must not be negative or ask critical questions; then you are quickly branded as a cynic who slows down development and does not understand the signs of the times. You almost have to blind yourself to potential pitfalls and speak and act like a true believer. Many justify the eager testing of AI by saying that we must dare to try and then see which solutions turn out to be successful. It is fascinating how willingly we apply AI to all sorts of tasks. But are we doing it the right way, or do we risk rushing on without giving ourselves time to think?

There are indeed economic and practical challenges in healthcare. It is not only a matter of scarce financial resources, but also of a shortage of personnel and specialists. Before we can allow technologies like AI to become part of our everyday lives, we need to ask ourselves some important questions: What problems are we trying to solve? How do our solutions affect the people involved? We may also need to clarify whether the purpose of the AI system is to take over an entire work task almost completely, or rather to facilitate our work in certain well-defined respects. The development of AI products should also pay extra attention to socially created categories such as ethnicity and gender, to avoid reinforcing existing inequalities through biased data selection.

Ethically well-considered AI implementations probably lead to better clinical outcomes and more efficient care. It is easy to make hasty decisions that soon turn out to be wrong: accuracy should always be a priority. It is better to think right and slow than fast and wrong. Clinical studies should be conducted even on seemingly simple AI products. In radiology, this tradition is well established, but it is not as common in primary care. If a way of working is to be changed with the help of AI, the effects of the change should be evaluated.

We must therefore not neglect three things: we must first of all define the need for an AI solution; we must then ensure that the AI tool is not trained on biased data; finally, we need to evaluate the AI solution before implementing it.

With the rapid data collection that apps and digital tools allow today, it is important not to get carried away, but to carefully consider the ethics of designing and implementing AI. Unfortunately, the mantra has become: “If we have data, we should develop an AI.” And that mantra makes anyone who asks “Why?” seem suspicious. But the question must be asked. It does not hinder the development of AI solutions, but contributes to it. Careful ethical considerations improve the quality of the AI product and strengthen the credibility of the implementation.

I therefore want to warn against being seduced by the idea of AI solutions for all sorts of tasks. Before we say AI is the answer, we need to ask ourselves: What is the question? Only if we can define a real issue or challenge can we ensure that the technology becomes a helping hand instead of a burden. We do not want to end up, time and again, in the situation where we suddenly have to pull the emergency brake, as in a recent major Swedish investment in AI in healthcare, called Millennium. We must not get stuck in the mindset that everything can be done faster and easier with AI. Nor must we be driven by the fear of falling behind if we do not immediately introduce AI. Only a carefully considered evaluation of the need and the design of an AI solution can ensure appropriate care that is also effective. To get correct answers quickly, we must first give ourselves time to think.

Written by…

Jennifer Viberg Johansson, who is an Associate Professor in Medical Ethics at the Centre for Research Ethics & Bioethics.

This post in Swedish

We challenge habits of thought

Questions about evidence and guidelines in healthcare

Finding your way through the complex web of guidelines and requirements for evidence in healthcare can be challenging. It is easy to imagine that these guidelines are handed down from above, like a collection of commandments, but the truth is that they are shaped and changed in a complex process of negotiation and deliberation.

My colleagues and I in prosthetics and orthotics in Region Uppsala in Sweden are involved in the procurement of orthopedic devices for patients, such as prostheses, orthoses, splints, sitting frames, medical corsets, orthopedic shoes and insoles. We often ask ourselves an important question: Who should receive tax-funded prosthetics and orthotics devices and how expensive should they be? Where do we find guidelines for our decisions? An example of a guiding document is the general guidelines for the prescription of assistive devices in the County Council of Uppsala (from 2015). This document is based on the laws and guidelines of the parliament, UN conventions and the Council’s own plans. It becomes clear that guidelines are not isolated rules, but rather an interweaving of different norms and values that guide healthcare decisions.

Despite clear priority levels and demands for individual assessment of health effects, we find that patients today are denied orthopedic devices with the argument that there is a lack of evidence that the aid works for the type of diagnosis in question. Is this argument as strong when it comes to orthopedic devices as it is when it comes to drug treatments? In the search for evidence in healthcare, randomized controlled trials (RCTs) are often required. But must all treatments be measured by the same yardstick? Applying an arm cast or using an assistive device that enables walking does not necessarily require the same level of evidence as more complex internal medicine treatments. Sometimes it should be enough to see with your own eyes and observe improvements, such as a better gait or reduced pain.

In addition to this possibly unfair situation, where a small patient group has to suffer under requirements that are reasonable for the majority but not for all patients, the availability and scope of assistive device prescription varies between different regions in Sweden. This variation raises questions about how guidelines and principles for prioritization in healthcare are interpreted in different regions. Although the overarching principles for priority setting are the same (the principle that all humans have equal value and the same right to care, the principle of need and solidarity, and the principle of cost-effectiveness), the interpretation and application of these principles can apparently differ. Why is that? In some regions, a more comprehensive and individually adapted prescription of devices is given, while other regions are more restrictive.

This variation raises important questions about fair and equal care. Providing fair and equal care does not just require following rules. It also requires that we deepen our understanding of how these rules are interpreted and applied in different parts of the country, and that we assess which requirements are reasonable in different practices. It is a complex balancing act between ensuring people’s equal value and right to health while managing resources efficiently. The guidelines in Uppsala emphasize the prescription of assistive devices as a tool to support health and participation, but it is important to reflect on how this tool is implemented in practice and what impact it has on people’s quality of life. A common basis in the WHO’s International Classification of Functioning, Disability and Health is a good starting point (as in the National Board of Health and Welfare’s support for prescribing assistive devices). But continued discussion and reflection are required to ensure that the patient’s individual health condition is taken into account (not just the patient group), and that devices are prescribed fairly across the country.

In my work, I reflect daily on guidelines and requirements for evidence. I think it is valuable if we who work with the prescription of orthopedic devices reflect on the origin of the guidelines and the requirements for evidence that we use in healthcare. Understanding the context around why the guidelines look the way they do is crucial for us to be able to understand and apply them in our practices. For example, how should we interpret the requirement for evidence when working with prosthetics and orthotics?

I will return to discuss possible answers to these questions in future blog posts. With this post I just wanted to raise the questions.

Written by…

Jennifer Viberg Johansson, Associate Professor in Medical Ethics at Uppsala University’s Centre for Research Ethics & Bioethics.


We want to be just

Empirical ethics nuances ethical issues

A few years ago, my colleague Pär Segerdahl published a blog post on why bioethicists do empirical studies. He pointed out that surveys and interview studies on what people think hardly provide evidence that can decide controversial ethical issues, for example whether euthanasia should be allowed. Empirical studies rather give us a better grasp of the problem itself. They help us see what is actually at stake for people. I agree with him that ethical issues are not decided by surveys and interview studies and that such studies rather help us to see more clearly the meaning of the issues.

In this post, I want to further exemplify how empirical methods can nuance ethical questions and help us see what is at stake for people: help us see what we need to consider in the ethical discussion. I have in mind how, through a well-considered choice of empirical method, one can better describe the relative importance of ethical difficulties, values and preferences among stakeholders, as well as conflicts between ethical views. How? I am thinking of methods where respondents do not just answer what they think on certain individual issues, but are faced with complex scenarios where several factors are simultaneously at stake. Even if you have the firm opinion that drugs should not have side effects, are you perhaps still prepared to choose such a drug if it is more effective against your symptoms than other drugs, or is cheaper, or easier to use? In such studies, we create a multidimensional world with nuances for respondents to make complex decisions in.

Here is my example: Soon, therapies based on human embryonic stem cells may become a reality for patients with Parkinson’s disease. But is it morally acceptable to use human embryonic stem cells (hESC) for drug therapy? This has long been a controversial issue, partly because the embryo is destroyed when the stem cells are harvested. Perhaps the question is about to become even more topical now, when countries are changing legislation in a direction that gives the embryo a higher status and more legal protection. It is therefore particularly important that research provides a nuanced picture of the issues. In light of the political landscape and the new possibilities for treating patients with Parkinson’s, a more complex empirical method can support a better contemporary discussion about what types of research and therapies are within the scope of what can be allowed to be done with an embryo. The discussion concerns both ethics and law and must also include scientific challenges to ensure that stem cell research and therapies are carried out in ethically acceptable ways.

A common way to empirically examine the ethical issue is to look at the ethical arguments for and against the destruction of human embryos: to examine how different actors think and feel about this. Undoubtedly, such studies help us see what is at stake. But they can also easily steer respondents towards a yes-or-no answer, a pro-or-against attitude. Therefore, it is important to choose an empirical method that elicits perceived benefits and risks and explores multiple dimensions of the problem. How do patients feel about taking a medicine based on leftover embryos that not only relieves their symptoms but also repairs the damage, while the level of knowledge about the treatment is still low? It is not easy to answer such a question, but reality often has this complexity.

One method that can stage such complex considerations is a choice-based survey called Discrete Choice Experiments (DCE). With that method, we can investigate ethically sensitive issues and use the results to describe more fully the relative importance of ethical difficulties, values and preferences among stakeholders, as well as conflicts between ethical views. DCE provides an understanding of the balance between factors involved in different situations. In a new article in BMC Medical Ethics, my colleagues and I have investigated which factors are associated with the preferences of patients with Parkinson’s disease regarding embryonic stem cell-based treatments for the disease in the future. We invited patients to participate in a web-based choice-based experiment to assess the importance of the following factors: (1) type of treatment, (2) purpose of the treatment, (3) available knowledge about different types of treatment, (4) effect on symptoms and (5) the risk of serious side effects. The results showed that the fourth factor, “effect on symptoms,” was the most important factor in the choice of treatment option. Patients’ previous experience with treatment, side effects and advanced treatment therapy, as well as religious beliefs were associated with what they thought was most important, but not their view of what an embryo is. If you want to read more, you can find the article here: Patients accept therapy using embryonic stem cells for Parkinson’s disease: a discrete choice experiment.
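To make the logic of a choice-based survey like this concrete, here is a small, purely illustrative simulation. The attribute names, levels, utility weights and the toy respondent model are my own simplified assumptions for the sketch, not the coding used in the BMC Medical Ethics study.

```python
import itertools
import random

# Hypothetical attributes and levels, loosely inspired by the five factors
# described above (illustrative assumptions, not the study's actual design).
ATTRIBUTES = {
    "treatment_type": [0, 1],         # e.g. conventional vs. cell-based
    "effect_on_symptoms": [0, 1, 2],  # none / moderate / large
    "risk_of_side_effects": [0, 1, 2],
}

def all_profiles():
    """Enumerate every combination of attribute levels (full factorial design)."""
    keys = list(ATTRIBUTES)
    for levels in itertools.product(*(ATTRIBUTES[k] for k in keys)):
        yield dict(zip(keys, levels))

def make_choice_tasks(n_tasks, rng):
    """Pair up distinct treatment profiles into binary choice tasks."""
    profiles = list(all_profiles())
    return [rng.sample(profiles, 2) for _ in range(n_tasks)]

def simulated_choice(task, rng):
    """A toy respondent who weighs symptom relief heavily and risk negatively."""
    def utility(p):
        return (2.0 * p["effect_on_symptoms"]
                - 1.0 * p["risk_of_side_effects"]
                + rng.gauss(0, 0.5))  # unobserved taste variation
    a, b = task
    return a if utility(a) > utility(b) else b

rng = random.Random(1)
tasks = make_choice_tasks(500, rng)
chosen = [simulated_choice(t, rng) for t in tasks]

def win_rate(attr):
    """How often the chosen alternative had the higher level of an attribute
    (tasks where both alternatives share the same level are excluded)."""
    wins = total = 0
    for (a, b), c in zip(tasks, chosen):
        if a[attr] != b[attr]:
            total += 1
            better = a if a[attr] > b[attr] else b
            wins += (c is better)
    return wins / total

print({k: round(win_rate(k), 2) for k in ATTRIBUTES})
```

A real DCE would use an efficient experimental design rather than random pairing, and would estimate preference weights with a conditional logit or similar model; but even this crude win-rate tally illustrates how the relative importance of attributes can be read off from patterns of choices, rather than from yes-or-no answers to isolated questions.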

These kinds of results from DCE studies can, in my opinion, help us to understand and frame ethical questions in ways that reflect how people think when multiple factors are at stake simultaneously. I believe that the more realistic complexity of such studies can contribute to more informed ethical considerations. I believe that they could also strengthen democratic processes by giving public conversation a background of more nuanced empirical findings.

Written by…

Jennifer Viberg Johansson, Associate Professor in Medical Ethics at Uppsala University’s Centre for Research Ethics & Bioethics.

Bywall, K.S., Drevin, J., Groothuis-Oudshoorn, C. et al. Patients accept therapy using embryonic stem cells for Parkinson’s disease: a discrete choice experiment. BMC Med Ethics 24, 83 (2023). https://doi.org/10.1186/s12910-023-00966-1


Ethics needs empirical input

Research for responsible governance of our health data

Do you use your smartphone to collect and analyse your performance at the gym? This is one example of how new health-related technologies are being integrated into our lives. This development leads to a growing need to collect, use and share health data electronically. Healthcare, medical research, as well as technological and pharmaceutical companies are increasingly dependent on collecting and sharing electronic health data, to develop healthcare and new medical and technical products.

This trend towards more and more sharing of personal health information raises several privacy issues. Previous studies suggest that people are willing to share their health information if the overall purpose is improved health. However, they are less willing to share their information with commercial enterprises and insurance companies, whose purposes may be unclear or do not meet people’s expectations. It is therefore important to investigate how individuals’ perceptions and attitudes change depending on the context in which their health data is used, what type of information is collected and which control mechanisms are in place to govern data sharing. In addition, there is a difference between what people say is important and what is revealed in their actual behaviour. In surveys, individuals often indicate that they value their personal information. At the same time, individuals share their personal information online despite little or no benefit to them or society.

Do you recognise yourself: do you just click the “I agree” button when installing a health app that you want to use? This behaviour may at first glance suggest that people do not value their personal information very much. Is that a correct conclusion? Previous studies may not have taken into account the complexity of decisions about privacy, where context-specific factors play a major role. For example, people may value sharing health data via a physical activity app on the phone differently than sharing the same data in other contexts. We have therefore chosen to conduct a study that uses a sophisticated multi-method approach that takes context-specific factors into account. It is an advantage in cybersecurity and privacy research, we believe, to combine qualitative methods with a quantitative stated preference method, such as a discrete choice experiment (DCE). Such a mixed-methods approach can contribute to ethically improved practices and governance mechanisms in the digital world, where people’s health data are shared for multiple purposes.

You can read more about our research if you visit the website of our research team. Currently, we are analysing survey data from 2,000 participants from Sweden, Norway, Iceland, and the UK. The research group has expertise in law, philosophy, ethics and social sciences. On this broad basis, we explore people’s expectations and preferences, while identifying possible gaps within the ethical and legal frameworks. In this way, we want to contribute to making the growing use and sharing of electronic health data ethically informed, socially acceptable and in line with people’s expectations.

Written by…

Jennifer Viberg Johansson, Postdoc researcher at the Centre for Research Ethics & Bioethics, working in the projects Governance of health data in cyberspace and PREFER.


Part of international collaborations

Letting people choose isn’t always the same as respecting them

Jennifer Viberg, PhD Student, Centre for Research Ethics & Bioethics (CRB)

Sequencing the entire genome is cheaper and faster than ever. But when researchers look at people’s genetic code, they also find unexpected information in the process. Shouldn’t research participants have access to this incidental information? Especially if it is important information that could save a life if there is treatment to offer?

The personal benefits of knowing genetic information can vary from individual to individual. For one person, knowledge might just cause anxiety. For another, genetic risk information could create a sense of control in life. Since different people have different experiences, it could seem tempting to leave it for them to decide for themselves whether they want the information or not.

Offering participants in genetic research a choice to know or not to know is becoming more common. Another reason for giving a “freedom of choice” has to do with respecting people by allowing them to make choices in matters that concern them. By letting the participant choose, you acknowledge that he or she is a person with an ability to make his or her own choices.

But when researchers hand over the decision to participants they also transfer responsibility: A responsibility that could have consequences that we cannot determine today. I recently wrote an article together with colleagues at CRB about this in Bioethics. We argue that this freedom of choice could be problematic.

Previous psychological research on how people respond to probabilities makes clear that what they choose depends on how the choice situation is presented. People choose the “safe” outcome over taking a risk when the outcome is phrased in a positive way, but are more prone to taking a risk when the result is phrased in a negative way, despite the fact that the outcomes are identical. If a participant is asked whether he or she wants information that could save their life, there is a risk that they are steered towards answering “yes” without considering other important aspects, such as having to live with anxiety or subjecting themselves to medical procedures that might be unnecessary.

The benefit of incidental findings for individual participants is hard to estimate, even for experienced and knowledgeable genetic researchers. If the choice situations are difficult even for them, and if we know how psychological processes will probably steer the participants’ choices, then it hardly seems respectful to give the participants this choice.

There are good intentions behind giving participants freedom to choose, but it isn’t respectful if we can predict that the choices won’t be free and well grounded.

If you want to learn more, you can find further reading on CRB’s website, and here is a link to our article: Freedom of choice about incidental findings can frustrate participants’ true preferences.

Jennifer Viberg

We like real-life ethics: www.ethicsblog.crb.uu.se