A blog from the Centre for Research Ethics & Bioethics (CRB)


Were many clinical trials during the COVID-19 pandemic unethical?

It is understandable that the COVID-19 pandemic spurred many researchers to conduct their own studies on patients with the disease. They wanted to help in a difficult situation by doing what they were competent to do, namely research. The question is whether this goodwill sometimes had problematic consequences from a research ethics point of view.

For a clinical trial to have scientific and social value, a large number of participants is required: only then can differently treated groups be compared and real connections between treatment and outcome be demonstrated with sufficiently high probability. Twenty years ago, small, so-called underpowered trials were common, and the pandemic made them flourish again. Some COVID-19 studies had fewer than 50 participants.
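To make concrete what "sufficiently high probability" demands, here is a minimal sketch of a standard sample-size calculation for comparing two proportions (normal approximation). The assumed effect, mortality falling from 20% to 15%, is a hypothetical illustration, not a figure from the commentary:

```python
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate participants needed per arm to detect a difference
    between two proportions with a two-sided z-test (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for significance level alpha
    z_beta = norm.ppf(power)           # quantile corresponding to the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Hypothetical effect: mortality reduced from 20% to 15%
print(round(n_per_arm(0.20, 0.15)))  # about 903 per arm, over 1800 in total
```

Under these assumptions, roughly 900 participants per arm are needed for 80% power at the usual 5% significance level. A trial with fewer than 50 participants in total would have almost no chance of detecting an effect of this size.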

Is it not good, then, that researchers do what they can in a difficult situation, even if it means doing research on the smaller patient groups they manage to recruit? The problem is that underpowered clinical trials do not provide valid scientific knowledge. They therefore have hardly any value for society, and it becomes doubtful whether the researchers are really doing what they believe they are doing, namely helping in a difficult situation.

You can read about this in a commentary in the Journal of the Royal Society of Medicine, written by Rafael Dal-Ré, Stefan Eriksson and Stephen Latham. They point out that researchers sometimes defend underpowered clinical trials with the argument that smaller studies are easier to complete and that data from small trials around the world can be pooled to achieve the required statistical power. This is correct if the studies used sufficiently similar research methods to make the data comparable, the authors comment. That is often not the case; it requires that researchers plan from the outset to pool data from their respective studies. Another problem is that underpowered clinical trials more often have negative results, and such studies are less often published. Pooled data from the underpowered studies that do appear in journals are therefore not representative. Data from such studies would consequently need to be posted on freely accessible platforms, the authors argue.
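The pooling argument itself is statistically sound, which is worth seeing concretely. Below is a minimal sketch of fixed-effect inverse-variance pooling, with made-up numbers, assuming each trial estimates the same effect with comparable methods:

```python
import numpy as np

def pool_fixed_effect(estimates, standard_errors):
    """Fixed-effect inverse-variance pooling: trials with smaller standard
    errors (i.e. more information) receive proportionally more weight."""
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(standard_errors, dtype=float) ** 2
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled_se

# Three hypothetical small trials of the same treatment effect
est, se = pool_fixed_effect([0.10, 0.25, 0.15], [0.12, 0.15, 0.10])
print(est, se)  # the pooled estimate is more precise than any single trial
```

The sketch also shows where the argument fails in practice: the weighting is only meaningful if every trial measures the same thing in the same way, and if no trials are missing from the pool because their negative results went unpublished.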

Exposing patients to the risks and inconveniences involved in participating in a clinical trial is unethical if the study cannot be expected to provide scientifically valid knowledge with social value. The authors’ conclusion is therefore that research ethics committees reviewing planned research must assess very carefully whether the studies have a sufficiently large number of participants to achieve valid and useful knowledge. If underpowered studies are nevertheless planned, participants must be informed that the results may not be scientifically valid in themselves, but will be pooled with results from similar studies in order to achieve statistical power. If there is no agreement with other researchers to pool results, underpowered studies should not be approved by research ethics committees, the three authors conclude. Not even during a pandemic.

Read the commentary here: Underpowered trials at trial start and informed consent: action is needed beyond the COVID-19 pandemic.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Dal-Ré R, Eriksson S, Latham SR. Underpowered trials at trial start and informed consent: action is needed beyond the COVID-19 pandemic. Journal of the Royal Society of Medicine. 2024;0(0). doi:10.1177/01410768241290075

This post in Swedish

We want solid foundations

AI is the answer! But what is the question?

Many projects are underway in Sweden regarding AI systems in healthcare. The testing of AI solutions is in full swing. Yet many systems never seem to reach implementation and use. Why? Often it is a matter of poor preparatory work. Without a carefully considered strategy and clear goals, we risk scaling up AI systems that cannot cope with the complexity of healthcare.

The atmosphere around many AI ventures can be almost religious. You must not be negative or ask critical questions; then you are quickly branded as a cynic who slows down development and does not understand the signs of the times. You almost have to blind yourself to potential pitfalls and speak and act like a true believer. Many justify the eager testing of AI by saying that we must dare to try and then see which solutions turn out to be successful. It is fascinating how willingly we apply AI to all sorts of tasks. But are we doing it the right way, or do we risk rushing on without giving ourselves time to think?

There are indeed economic and practical challenges in healthcare. It is not only a matter of scarce financial resources, but also of a shortage of personnel and specialists. Before we can allow technologies like AI to become part of our everyday lives, we need to ask ourselves some important questions: What problems are we trying to solve? How do our solutions affect the people involved? We may also need to clarify whether the purpose of the AI system is to take over an entire work task almost completely, or rather to facilitate our work in certain well-defined respects. The development of AI products should also pay extra attention to socially created categories such as ethnicity and gender, to avoid reinforcing existing inequalities through biased data selection. Ethically well-considered AI implementations probably lead to better clinical outcomes and more efficient care. It is easy to make hasty decisions that soon turn out to be wrong: accuracy should always be a priority. It is better to think right and slow than fast and wrong. Clinical studies should be conducted even on seemingly unsophisticated AI products. In radiology this tradition is well established, but it is not as common in primary care. If a way of working is to be changed with the help of AI, the possible effects should be evaluated.

We must therefore not neglect three things: We must first of all define the need for an AI solution. We must then ensure that the AI tool is not trained on biased data. Finally, we need to evaluate the AI solution before implementing it.
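As an illustration of the second point, one common (though not the only) way to look for bias is to compare a model’s performance across the social categories mentioned above. The grouping variable and the data below are entirely hypothetical:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_by_group(y_true, y_score, groups):
    """Model discrimination (AUC) computed separately per subgroup;
    large gaps suggest the training data under-served some group."""
    return {
        g: roc_auc_score(y_true[groups == g], y_score[groups == g])
        for g in np.unique(groups)
    }

# Hypothetical validation set: true labels, model scores, and a sex variable
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)
y_score = np.clip(0.3 * y_true + 0.7 * rng.random(200), 0.0, 1.0)
sex = rng.choice(["female", "male"], 200)
print(auc_by_group(y_true, y_score, sex))
```

A clearly lower AUC for one subgroup would be a signal to revisit the training data before any implementation.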

With the rapid data collection that apps and digital tools allow today, it is important not to get carried away, but to carefully consider the ethics of designing and implementing AI. Unfortunately, the mantra has become: “If we have data, we should develop an AI.” And that mantra makes anyone who asks “Why?” seem suspicious. But the question must be asked. It does not hinder the development of AI solutions; it contributes to it. Careful ethical considerations improve the quality of the AI product and strengthen the credibility of the implementation.

I therefore want to warn against being seduced by the idea of AI solutions for all sorts of tasks. Before we say that AI is the answer, we need to ask ourselves: What is the question? Only if we can define a real issue or challenge can we ensure that the technology becomes a helping hand instead of a burden. We do not want to end up, time and again, in the situation where we suddenly have to pull the emergency brake, as happened with Millennium, a recent major Swedish investment in AI in healthcare. We must not get stuck in the mindset that everything can be done faster and easier with AI. Nor must we be driven by the fear of falling behind if we do not immediately introduce AI. Only a carefully considered evaluation of the need for, and the design of, an AI solution can ensure appropriate care that is also effective. To get correct answers quickly, we must first give ourselves time to think.

Written by…

Jennifer Viberg Johansson, Associate Professor in Medical Ethics at the Centre for Research Ethics & Bioethics.

This post in Swedish

We challenge habits of thought