Many projects are underway in Sweden regarding AI systems in healthcare. The testing of AI solutions is in full swing. But many systems do not seem to be implemented and used. Why? Often it is a matter of poor preparatory work. Without a carefully considered strategy and clear goals, we risk scaling up AI systems that cannot cope with the complexity of healthcare.
The atmosphere around many AI ventures can be almost religious. You must not be negative or ask critical questions; if you do, you are quickly branded as a cynic who slows down development and does not understand the signs of the times. You almost have to blind yourself to potential pitfalls and speak and act like a true believer. Many justify the eager testing of AI by saying that we must dare to try and then see which solutions turn out to be successful. It is fascinating how willingly we apply AI to all sorts of tasks. But are we doing it the right way, or do we risk rushing on without giving ourselves time to think?
There are indeed economic and practical challenges in healthcare. It is not only a matter of scarce financial resources, but also of a shortage of personnel and specialists. Before we can allow technologies like AI to become part of our everyday lives, we need to ask ourselves some important questions: What problems are we trying to solve? How do our solutions affect the people involved? We may also need to clarify whether the purpose of the AI system is to take over an entire work task or rather to support our work in certain well-defined respects. The development of AI products should also pay extra attention to socially constructed categories such as ethnicity and gender, to avoid reinforcing existing inequalities through biased data selection. Ethically well-considered AI implementations probably lead to better clinical outcomes and more efficient care. It is easy to make hasty decisions that soon turn out to be wrong: accuracy should always be a priority. It is better to think right and slow than fast and wrong. Clinical studies should be conducted even on seemingly less advanced AI products. In radiology, this tradition is well established, but it is not as common in primary care. If a way of working is to be changed with the help of AI, we should evaluate what effects the change can have.
We must therefore not neglect three things: first, to define the need for an AI solution; second, to ensure that the AI tool is not trained on biased data; and finally, to evaluate the AI solution before implementing it.
With the rapid data collection that apps and digital tools allow today, it is important not to get carried away, but to carefully consider the ethics of designing and implementing AI. Unfortunately, the mantra has become: “If we have data, we should develop an AI.” And that mantra makes anyone who asks “Why?” seem suspicious. But the question must be asked. It does not hinder the development of AI solutions, but contributes to it. Careful ethical considerations improve the quality of the AI product and strengthen the credibility of the implementation.
I therefore want to warn against being seduced by the idea of AI solutions for all sorts of tasks. Before we say AI is the answer, we need to ask ourselves: What is the question? Only if we can define a real issue or challenge can we ensure that the technology becomes a helping hand instead of a burden. We do not want to repeatedly end up in situations where we suddenly have to pull the emergency brake, as happened in a recent major Swedish investment in healthcare AI, called Millennium. We must not get stuck in the mindset that everything can be done faster and more easily with AI. Nor should we be driven by the fear of falling behind if we do not immediately introduce AI. Only a carefully considered evaluation of the need for, and the design of, an AI solution can ensure appropriate care that is also effective. To get correct answers quickly, we must first give ourselves time to think.
Written by…
Jennifer Viberg Johansson, who is an Associate Professor in Medical Ethics at the Centre for Research Ethics & Bioethics.