It is understandable that the COVID-19 pandemic spurred many researchers to conduct their own studies on patients with the disease. They wanted to help in a difficult situation by doing what they were competent to do, namely research. The question is whether this goodwill sometimes had problematic consequences in terms of research ethics.
For a clinical trial to have scientific and social value, a large number of participants is required. Only then can differently treated groups be compared, and real connections between treatment and outcome be demonstrated with sufficiently high probability. Twenty years ago, small, so-called underpowered trials were common, but the pandemic made them flourish again. Some COVID-19 studies had fewer than 50 participants.
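To make the numbers concrete, here is a rough power calculation using the standard normal-approximation sample-size formula for comparing two proportions. The event rates below are hypothetical, not taken from any particular COVID-19 trial; the point is only to show how far 50 participants can fall short:

```python
import math
from statistics import NormalDist

def n_per_group(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate participants per arm for a two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)          # desired power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return math.ceil((z_alpha + z_power) ** 2 * variance
                     / (p_control - p_treatment) ** 2)

# Hypothetical example: detecting a drop in mortality from 20% to 10%
# with 80% power requires about 200 patients per arm, roughly 400 in
# total, far more than the fewer-than-50 of some COVID-19 studies.
print(n_per_group(0.20, 0.10))  # 197
```

Even under this fairly generous assumed effect, an underpowered trial of 50 participants would have little chance of demonstrating a real connection between treatment and outcome.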
Is it not good, then, that researchers do what they can in a difficult situation, even if it means doing research on the smaller patient groups they manage to recruit? The problem is that underpowered clinical trials do not provide valid scientific knowledge. They therefore have hardly any value for society, and it becomes doubtful whether the researchers are really doing what they believe they are doing, namely helping in a difficult situation.
You can read about this in a commentary in the Journal of the Royal Society of Medicine, written by Rafael Dal-Ré, Stefan Eriksson and Stephen Latham. They point out that researchers sometimes defend underpowered clinical trials with the argument that smaller studies are easier to complete and that data from small trials around the world can be pooled to achieve the required statistical power. This is correct only if the studies used sufficiently similar research methods to make the data comparable, the authors comment. This is often not the case; comparability requires that researchers plan from the outset to pool data from their respective studies. Another problem is that underpowered clinical trials more often have negative results, and such studies are less often published. Pooled data from underpowered studies published in journals are therefore not representative. Data from such studies would instead need to be posted on freely accessible platforms, the authors argue.
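The pooling the authors mention is, in essence, a meta-analysis. A minimal sketch of fixed-effect inverse-variance pooling (the trial figures below are invented for illustration, not taken from the commentary) shows both the mechanics and why comparable outcome measures are a precondition:

```python
import math

def pool_fixed_effect(trials):
    """Inverse-variance pooling of log odds ratios from 2x2 trial results.
    Each trial is (events_treated, n_treated, events_control, n_control);
    all trials must measure the same outcome for pooling to be meaningful."""
    weighted_sum = total_weight = 0.0
    for a, n1, c, n2 in trials:
        b, d = n1 - a, n2 - c
        log_or = math.log((a * d) / (b * c))
        weight = 1 / (1/a + 1/b + 1/c + 1/d)  # inverse of the variance
        weighted_sum += weight * log_or
        total_weight += weight
    return weighted_sum / total_weight, math.sqrt(1 / total_weight)

# Three invented underpowered trials, each with 25 patients per arm:
estimate, se = pool_fixed_effect([(3, 25, 6, 25)] * 3)
# The pooled standard error is smaller than any single trial's, which is
# exactly the gain in statistical power that pooling promises.
```

Note that the sketch only makes sense if the trials report the same outcome in the same way, which is precisely the comparability requirement the authors stress; and publication bias against negative results would still skew which trials enter the pool.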
Exposing patients to the risks and inconveniences involved in participating in a clinical trial is unethical if the study cannot be judged to provide scientifically valid knowledge with social value. The authors’ conclusion is therefore that research ethics committees reviewing planned research must very carefully assess whether the studies have a sufficiently large number of participants to achieve valid and useful knowledge. If underpowered studies are nevertheless planned, participants must be informed that the results may not be scientifically valid in themselves, but will be pooled with results from similar studies in order to achieve statistical power. If there is no agreement with other researchers to pool results, underpowered studies should not be approved by research ethics committees, the three authors conclude. Not even during a pandemic.
Dal-Ré R, Eriksson S, Latham SR. Underpowered trials at trial start and informed consent: action is needed beyond the COVID-19 pandemic. Journal of the Royal Society of Medicine. 2024;0(0). doi:10.1177/01410768241290075
Human genomics has the potential to improve the health of individuals and populations for generations to come. It also requires the collection, use and sharing of data from people all over the world. There is therefore an accompanying need for a globally fair distribution of genomic technology, data and results. As the databases and infrastructures will be in operation for a long time, ethical, legal, social and cultural issues need to be taken into account from the outset, considering the entire life cycle of the data.
To promote such an ethical, equitable and responsible use of genomic data, the World Health Organization (WHO) recently issued globally applicable guidelines for human genome data collection, access, use and sharing. The guidelines are formulated as eight principles with associated practical recommendations. The principles were developed step by step, first through review of existing documents and virtual consultation with experts from different parts of the world, then through a workshop in Geneva where experts met on site. Finally, the draft was discussed through public consultations.
The purpose of the WHO document is to create globally applicable principles that can complement local legislation. This is to promote, among other things, social and cultural inclusiveness as well as justice in the use of human genome data.
A recurring theme on this blog is the question of who can be counted as an author of a research article. You might be thinking: how difficult can it be to determine if someone is the author of an article? But the criteria for academic authorship are challenged on several fronts and therefore need to be discussed. I recently blogged about a debate about two of these challenges: huge research projects where a large number of researchers and experts in different fields contribute to the studies, and the use of AI in research and academic writing (for example ChatGPT).
Today I want to recommend an article on publication ethics that discusses a third challenge to the authorship criteria. The challenge is called citizen science. Similar to the big research collaborations I mentioned above, a very large number of individuals often contribute to citizen science. The difference is that the professional researchers here collaborate with voluntary participants from the general public and not just with other researchers or experts. It may involve ordinary citizens reporting their observations of plant and animal life, helping astronomers categorize large amounts of photographed astronomical objects, contributing to solutions to mathematical problems or perhaps even discussing the design of research projects. Citizen science is important, for example, when data collection requires the efforts of so many observers in so many places that the observations would otherwise be too expensive or time-consuming. Citizen science is also important because it gives citizens insight into research, increases trust in science and creates contacts between research and society.
The so-called Vancouver rules for authorship have been criticized for allegedly excluding citizen scientists from authorship, even though the voluntary contributions are sometimes so significant that they could merit such recognition. The rules state (slightly simplified) that to count as an author you must have made significant contributions to the research study (e.g., design, data collection, analysis, interpretation). But you must also have participated in the writing process, approved the final version of the article, and accepted responsibility for the research being carried out correctly.
An important point in the article that I recommend is that it is not necessarily the Vancouver rules that exclude citizen scientists from authorship. On the contrary, it may be that the researchers leading the projects do not follow the rules. In addition to the four criteria above, the Vancouver rules say that individuals who meet the first criterion should be given the opportunity to meet the other three as well. Citizen scientists who have made significant contributions to the study should therefore be given the opportunity to write or revise relevant sections of the text, approve the final version and accept responsibility for the accuracy of at least their own contribution to the study. In citizen science, it is also often the case that a small number of “superusers” account for the bulk of the work effort. It should be possible to treat these individuals in the same way as one treats professional researchers who have made significant contributions, that is, give them the opportunity to qualify for authorship.
A more difficult issue discussed in the article is group authorship. In citizen science, the collective contribution of the whole group is often significant, while the individual contributions are not. Would it be possible to give the group collective credit in the form of group authorship? Not doing so could give a false impression that the professional researchers made a greater effort in the study than they actually did, the four publication ethicists argue in the article. It can also be unfair. If individual researchers who fulfill the first criterion should be given the opportunity to fulfill all criteria, then groups should also be given this opportunity. In such cases, the group should (in some way) be given the opportunity to participate in the critical revision of the article and to approve the final version. But can a group of 2,000 volunteer bird watchers take responsibility for a research study being carried out properly? Perhaps the group can at least answer for the accuracy of its own observation efforts. Being credited for one’s contribution to a study through authorship and taking responsibility for the contribution are two sides of the same coin, according to the publication ethicists. That citizen scientists must accept responsibility in order to be counted as co-authors is perhaps also an opportunity to convey something about the nature of science, one could add.
The article concludes by proposing seven heuristic rules regarding who can be included as an author. For example, one should, as far as possible, respect existing guidelines (such as the Vancouver rules), apply a wide conception of contributions, and be open to new forms of authorship. Perhaps a group can sometimes be credited through authorship? The seventh and final heuristic rule is to be generous to citizen scientists in unclear cases by including rather than excluding.
The eating disorder anorexia (anorexia nervosa) is a mental disorder that can be life-threatening if left untreated. It is characterized by a fear of gaining weight: you starve yourself to lose weight and do not understand that being underweight is dangerous. Even though most recover, the disease is associated with increased mortality, and the most severely ill may need to be hospitalized.
Hospital care can involve both psychotherapy and drug treatment, but not everyone wants or is able to participate in the treatment, which of course also involves eating. They may lack motivation to change or refuse to see that they need treatment. If the malnutrition becomes life-threatening, it may be necessary to decide on tube feeding as a compulsory measure. Liquid nutrition is then given via a thin tube that is inserted through one nostril and down into the stomach.
Tube-feeding an adult who does not want to eat is understandably a challenge for the nurses who have to perform the procedure. What are their experiences of the measure? One study investigated the issue by interviewing nurses at a Norwegian inpatient ward caring for adult patients with severe anorexia. What did the nurses have to say?
An important theme was that the nurses strove to provide good care even during the coercive measure. The care must be so good that the patient voluntarily wants to stay in the ward after tube feeding. For example, the measure is never taken until the nurses have gradually tried to encourage the patient to eat, asked the patient about the situation and discussed whether to use the tube instead. If tube feeding becomes necessary, they still try to give the patient options and to respect the patient’s autonomy as much as possible, even though it is a coercive measure. The nurses also described difficulties in balancing kindness and firmness during the procedure, and in combining the roles of helper and controller.
Another theme was ethical concerns when the doctor decided on tube feeding even though the patient’s BMI was not so low that the condition was life-threatening. One nurse stated that she sometimes found such situations so problematic that she refused to take part in the procedure.
The third theme was concerns about calling in staff from another ward to help restrain the patient while the nurse performed the tube feeding. Some nurses were concerned about how this might be experienced by patients with a history of abuse. Others saw the tube feeding as a life-saving measure and experienced no ethical concerns. However, participants in the study emphasized that tube feeding affects the relationship with the patient and that restraint can disrupt the relationship. A nurse told how she once performed tube feeding on a patient she had never met before, and with whom she had therefore not established a relationship, and how this then prevented a good relationship with that patient.
Interview studies that capture human experience through the participants’ own stories often yield unexpectedly meaningful insights. Subtle details of human life that you would not otherwise have thought of appear in the interview material. One such insight from this study was how the nurses made great efforts so that tube feeding could be perceived as good care with respect for the patient’s autonomy and dignity, despite the fact that it is a coercive measure. It also became clear that there were tensions in the situation that the nurses had difficulty dealing with, such as first performing the coercive measure and then comforting the patient and re-establishing the relationship that had been disrupted. One of the conclusions in the article is therefore that even the nurses who perform tube feeding are vulnerable.
Brinchmann, B.S., Ludvigsen, M.S. & Godskesen, T. Nurses’ experience of nasogastric tube feeding under restraint for Anorexia Nervosa in a psychiatric hospital. BMC Medical Ethics 25, 111 (2024). https://doi.org/10.1186/s12910-024-01108-x
Psychological distress experienced by ethnic minorities is an often overlooked problem. In France, the mental well-being of ethnic minorities, particularly those with North African immigrant backgrounds, is an important issue to study. Both first- and second-generation immigrants face unique challenges that may make them more vulnerable to psychological disorders and to more general mental health issues. A recent report from the European Union Agency for Fundamental Rights on being Muslim in the EU (published on October 24, 2024) sheds some light on issues related to health and racial harassment and violence. The report did not study psychological issues specifically, but it is worth noting that race-related violence had a psychological impact on 55 percent of the respondents (p. 21).
Vulnerability is frequently linked to ethnic minority status, leading to recurring experiences of discrimination and difficulties in reconciling cultural identity with a society that often prioritizes assimilation. In this context, assimilation tends to erase or disregard the original cultural heritage in favor of integration into the dominant culture. Such dynamics can lead to feelings of isolation, invalidation, and psychological distress among affected individuals.
Research on the mental health of French populations of North African descent remains largely neglected. In other regions, for example North America, mental health and immigration are much better studied. While the topic of discrimination has been explored in some areas, few studies have focused on the psychological effects of these experiences and the coping strategies adopted by these populations in France. Some research does indicate a rise in discrimination, but the lack of comprehensive studies creates both a scientific and a social void, keeping these topics largely invisible.
In other southern European countries such as Italy and Spain, the mental health problems of ethnic minorities are recognized, but do not yet receive the same attention as in North America. In Italy, studies on the mental health of minorities are mainly focused on recent migrants and refugees, not least because of the importance of migratory flows in the Mediterranean. Researchers are mainly interested in the traumas associated with exile and the precarious conditions of migrants, but issues of discrimination or systemic racism are less well explored.
In Spain, there is also research on the mental health of migrants, particularly from Latin America and North Africa. However, the framework remains focused on social integration and economic issues, and less on the dynamics of discrimination and ethnicity. Both countries are beginning to recognize the importance of these issues, but in-depth studies on the impact of racial discrimination on the mental health of ethnic minorities, as in all parts of Europe, are still limited.
One psychological phenomenon that is still underexplored in this context is “racial battle fatigue.” Introduced in the early 2000s by William A. Smith, this concept refers to the emotional and psychological stress accumulated by individuals who repeatedly face racism. It represents the emotional burden that ethnic minorities carry as a result of racial discrimination and societal expectations. This burden can drive individuals to minimize or suppress their own suffering to avoid being perceived as “weak” or “complaining.” These coping mechanisms can exacerbate psychological issues, creating a vicious cycle of untreated distress.
In academic and professional settings, there is often reluctance to openly discuss these challenges. Some individuals may regard these topics as taboo or controversial, limiting the opportunities for open dialogue and scientific advancement. This reflects a broader trend in the mental health field, where the specific needs of ethnic minorities, particularly in terms of tailored psychological care, are not adequately addressed.
If we are to provide concrete answers to these questions, we need to study this phenomenon and shed light on the mechanisms underlying the psychological suffering of ethnic minorities. Research on the psychological distress experienced by ethnic minorities could also help develop therapeutic interventions better suited to these populations. A recent French pilot study can lead the way: in Rania Driouach’s sample of people of North African descent, 226 out of a total of 387 participants indicated heightened psychological distress on a transgenerational level. Her study is a first step towards a scientific framework that acknowledges the specific needs of these groups while promoting an inclusive and rigorous therapeutic approach. Perhaps such a framework can help pave the way for a better understanding of the effects of migration on psychological distress across generations, and provide better tools for the (mental) health care providers that provide both first- and second-line care.
This post is written by Rania Driouach (Nîmes University) and:
There is consensus that the digitization of healthcare can make it easier to keep in touch with care providers and to access information that supports individual decision-making about one’s own health. However, the ability to understand and use health information digitally varies. The promising digitization therefore risks creating unequal care and health.
In this context, one usually speaks of digital health literacy. The term refers to the ability to retrieve, understand and use health information digitally to maintain or improve one’s health. This ability varies not only between individuals, but also within the same individual. Illness can, for example, reduce the ability to use a computer or a smartphone to maintain contact with healthcare and to understand and manage health information digitally. Your digital health literacy is dependent on your health.
How do Swedish policy makers think about the need for strategies to increase digital health literacy in Sweden? An article with Karin Schölin Bywall as the main author examines the question. Material was collected during three recorded focus group discussions (or workshops) with a total of 10 participants. The study is part of a European project to increase digital health literacy in Europe. What did Swedish policy makers think of the need for a national strategy?
The participants in the study said that the issue of digital health literacy was not as much on the agenda in Sweden as in many other countries in Europe and that governmental agencies have limited knowledge of the problem. Digital services in healthcare also usually require that you identify yourself digitally, but a large group of adults in Sweden lack e-identification. The need for a national strategy is therefore great.
Participants further discussed how digital health literacy manifests itself in individuals’ ability to find the right website and reliable information on the internet. People with lower digital health literacy may not be able to identify appropriate keywords or may have difficulty assessing the credibility of the information source. The problem is not lessened by the fact that algorithms control where we end up when we search for information. Often the algorithms make companies more visible than government organizations.
The policy makers in the study also identified specific groups that are at risk of digital exclusion (digital divide) and that need different types of support. Among others, they mentioned people with intellectual disabilities and young people who do not sufficiently master source criticism (even though they are skilled users of the internet and various apps). Specific measures to counteract the digital divide in healthcare were discussed, such as regular mailings with information about good websites, adaptation of website content for people with special needs, and teaching in source criticism. It was also emphasized that individuals may have different combinations of conditions that affect the ability to manage health information digitally in different ways, and that a strategy to increase digital health literacy must therefore be nuanced.
Something that struck me was that the policy makers in the study, as far as I could see, did not emphasize the growing group of elderly people in the population. Elderly people may have a particularly broad combination of conditions that affect digital health literacy in many different ways. In addition, their ability to handle information digitally not only varies from day to day; it can also be expected to deteriorate steadily, probably at the same rate as the need to use it increases.
Bywall, K.S., Norgren, T., Avagnina, B. et al. Calling for allied efforts to strengthen digital health literacy in Sweden: perspectives of policy makers. BMC Public Health 24, 2666 (2024). https://doi.org/10.1186/s12889-024-20174-9
Who can be listed as an author of a research paper? There seems to be some confusion about the so-called Vancouver rules for academic authorship, which serve as publication ethics guidelines primarily in medicine and the natural sciences (but sometimes also in the humanities and social sciences). According to these rules, an academic author must have contributed intellectually to the study, participated in the writing process, and approved the final version of the paper. However, the deepest confusion seems to concern the fourth rule, which requires that an academic author must take responsibility for the accuracy and integrity of the published research. The confusion is not lessened by the fact that artificial intelligences such as ChatGPT have started to be used in the research and writing process. Researchers sometimes ask the AI to generate objections to the researchers’ reasoning, which of course can make a significant contribution to the research process. The AI can also generate text that contributes to the process of writing the article. Should such an AI count as a co-author?
No, says the Committee on Publication Ethics (COPE) with reference to the last requirement of the Vancouver rules: an AI cannot be an author of an academic publication, because it cannot take responsibility for the published research. The committee’s dismissal of AI authorship has sparked a small but instructive debate in the Journal of Medical Ethics. The first to write was Neil Levy who argued that responsibility (for entire studies) is not a reasonable requirement for academic authorship, and that an AI could already count as an author (if the requirement is dropped). This prompted a response from Gert Helgesson and William Bülow, who argued that responsibility (realistically interpreted) is a reasonable requirement, and that an AI cannot be counted as an author, as it cannot take responsibility.
What is this debate about? What does the rule that gave rise to it say? It states that, to be considered an author of a scientific article, you must agree to be accountable for all aspects of the work. You must ensure that questions about the accuracy and integrity of the published research are satisfactorily investigated and resolved. In short, an academic author must be able to answer for the work. According to Neil Levy, this requirement is too strong. In medicine and the natural sciences, it is often the case that almost none of the researchers listed as co-authors can answer for the entire published study. The collaborations can be huge and the researchers are specialists in their own narrow fields. They lack the overview and competence to assess and answer for the study in its entirety. In many cases, not even the first author can do this, says Neil Levy. If we do not want to make it almost impossible to be listed as an author in many scientific disciplines, responsibility must be abolished as a requirement for authorship, he argues. Then we have to accept that AI can already today be counted as co-author of many scientific studies, if the AI made a significant intellectual contribution to the research.
However, Neil Levy opens up a third possibility. The responsibility criterion could be reinterpreted so that it can be fulfilled by the researchers who are today usually listed as authors. What is the alternative interpretation? A researcher who has made a significant intellectual contribution to a research article must, in order to be listed as an author, accept responsibility for their “local” contribution to the study, not for the study as a whole. On this interpretation, an AI cannot count as an academic author, because it cannot answer or be held responsible even for its “local” contribution to the study.
According to Gert Helgesson and William Bülow, this third possibility is the obviously correct interpretation of the fourth Vancouver rule. The reasonable interpretation, they argue, is that anyone listed as an author of an academic publication has a responsibility to facilitate an investigation, if irregularities or mistakes can be suspected in the study. Not only after the study is published, but throughout the research process. However, no one can be held responsible for an entire study, sometimes not even the first author. You can only be held responsible for your own contribution, for the part of the study that you have insight into and competence to judge. However, if you suspect irregularities in other parts of the study, then as an academic author you still have a responsibility to call attention to this, and to act so that the suspicions are investigated if they cannot be immediately dismissed.
The confusion about the fourth criterion of academic authorship is natural: the criterion is actually not that easy to understand and should therefore be specified. The debate in the Journal of Medical Ethics provides an instructive picture of how differently the criterion can be interpreted, and it can thus motivate proposals on how the criterion should be specified. You can read Neil Levy’s article here: Responsibility is not required for authorship. The response from Gert Helgesson and William Bülow can be found here: Responsibility is an adequate requirement for authorship: a reply to Levy.
Personally, I want to ask whether an AI, which cannot take responsibility for research work, can be said to make significant intellectual contributions to scientific studies. In academia, we are expected to be open to criticism from others and not least from ourselves. We are expected to be able to critically assess our ideas, theories, and methods: judge whether objections are valid and then defend ourselves or change our minds. This is an important part of doctoral education and the research seminar. We cannot therefore be said to contribute intellectually to research, I suppose, if we lack the ability to self-critically assess the accuracy of our contributions. ChatGPT can therefore hardly be said to make significant intellectual contributions to research, I am inclined to say. Not even when it generates self-critical or self-defending text on the basis of statistical calculations in huge language databases. It is the researchers who judge whether generated text inspires good reasons to either change their mind or defend themselves. If this is right, it would be a misunderstanding to acknowledge the contribution of ChatGPT in a research paper, as is usually done with research colleagues who contributed intellectually to the study without meeting the other requirements for academic authorship. Rather, the authors should indicate how ChatGPT was used as a tool in the study, similar to how they describe the use of other tools and methods. How should this be done? In the debate, it is argued that this also needs to be specified.
Levy N. Responsibility is not required for authorship. Journal of Medical Ethics. Published Online First: 15 May 2024. doi: 10.1136/jme-2024-109912
Helgesson G, Bülow W. Responsibility is an adequate requirement for authorship: a reply to Levy. Journal of Medical Ethics. Published Online First: 04 July 2024. doi: 10.1136/jme-2024-110245
In a recent post on this blog I summarized the main points of a pre-print where I analyzed the prospect of artificial consciousness from an evolutionary perspective. I took the brain and its architecture as a benchmark for addressing the technical feasibility and conceptual plausibility of engineering consciousness in artificial intelligence systems. The pre-print has been accepted and it is now available as a peer-reviewed article online.
In this post I want to focus on one particular point that I analyzed in the paper, and which I think is not always adequately accounted for in the debate about AI consciousness: what are the benefits of pursuing artificial consciousness in the first place, for science and for society at large? Why should we attempt to engineer subjective experience in AI systems? What can we realistically expect from such an endeavour?
There are several possible answers to these questions. At the epistemological level (with reference to what we can know) it is possible that developing artificial systems that replicate some features of our conscious experience could enable us to better understand biological consciousness, through similarities as well as through differences. At the technical level (with reference to what we can do) it is possible that the development of artificial consciousness would be a game-changer in AI, for instance giving AI the capacity for intentionality and theory of mind, and for anticipating the consequences not only of human decisions, but also of its own “actions.” At the societal and ethical level (with reference to our co-existence with others and to what is good and bad for us) especially the latter capabilities (intentionality, theory of mind, and anticipation) could arguably help AI to better inform humans about potential negative impacts of its functioning and use on society, and to help avoid them while favouring positive impacts. Of course, on the negative side, as shown by human history, both intentionality and theory of mind may be used by the AI for negative purposes, for instance favouring the AI’s own interests or the interests of the limited groups that control it. Human intentionality has not always favoured out-group individuals or species, or indeed the planet as a whole. This point connects to one of the most debated issues in AI ethics, the so-called AI alignment problem: how can we be sure that AI systems conform to human values? How can we make AI aligned with our own interests? And whose values and interests should we take as reference? Cultural diversity is an important and challenging factor to take into account in these reflections.
I think there is also a question that precedes that of AI value alignment: can AI really have values? In other words, is the capacity for evaluation that possibly drives the elaboration of values in AI the same as in humans? And is AI capable of evaluating its own values, including its ethical values, a reflective process that drives the self-critical elaboration of values in humans, making us evaluative subjects? In fact, the capacity for evaluation (which may be defined as the sensitivity to reward signals and the ability to discriminate between good and bad things in the world on the basis of specific needs, motivations, and goals) is a defining feature of biological organisms, or more precisely of the brain. AI may be programmed to discriminate between what humans consider to be good and bad things in the world, and it is also conceivable that AI will become less dependent on humans in applying this distinction. However, this does not entail that it “evaluates” in the sense that it autonomously performs an evaluation and subjectively experiences its evaluation.
It is possible that an AI system may approximate the diversity of cognitive processes that the brain has access to, for instance the processing of various sensory modalities, while AI remains unable to incorporate the values attributed to the processed information and to its representation, as the human brain can do. In other words, to date AI remains devoid of any experiential content, and for this reason, for the time being, AI is different from the human brain because of its inability to attribute experiential value to information. This is the fundamental reason why present AI systems lack subjective experience. If we want to refer to needs (which are a prerequisite for the capacity for evaluation), current AI appears limited to epistemic needs, without access to, for example, moral and aesthetic needs. Therefore, the values that AI has at least so far been able to develop or be sensitive to are limited to the epistemic level, while morality and aesthetics are beyond our present technological capabilities. I do not deny that overcoming this limitation may be a matter of further technological progress, but for the time being we should carefully consider this limitation in our reflections about whether it is wise to strive for conscious AI systems. If the form of consciousness that we can realistically aspire to engineer today is limited to the cognitive dimension, without any sensitivity to ethical deliberation and aesthetic appreciation, I am afraid that the risk of misusing or exploiting it for selfish purposes is quite high.
One could object that an AI system limited to epistemic values is not really conscious (at least not in a fully human sense). However, the fact remains that its capacity to interact with the world to achieve the goals it has been programmed to achieve would be greatly enhanced if it had this cognitive form of consciousness. This increases our responsibility to hypothetically consider whether conscious AI, even if limited and much more rudimentary than human consciousness, may be for the better or for the worse.
Antibiotic resistance is a growing global challenge, particularly for modern healthcare, which relies on antibiotics to prevent and treat infectious diseases. Multi-resistant bacteria are already present across the globe and without effective antibiotics, simple medical interventions will become risky in the future. Each year, several million deaths globally are associated with antibiotic resistance. With more and more drug-resistant microorganisms, one could expect an increase in research and development of new antibiotics or vaccines. However, in parallel with the growing global threat from antimicrobial resistance, or AMR as it is often called, the development rate of new antibiotics is instead decreasing. Reduced R&D also reduces the number of experts in the field, which in turn affects our society’s ability to develop new antibiotics.
Why is that so? One reason is that the return on investment is so low that many large pharmaceutical companies have scaled back or abandoned their development programs, resulting in a loss of expertise. The effort to slow down the development rate of antibiotic resistance requires us to save the most effective medicines for the most difficult cases, and this “stewardship” contributes to inhibiting the will to invest, as the companies are unable to count on any new “blockbuster” drugs.
The problem of access to effective treatment is global, and on September 26 this year, the UN General Assembly is organizing a high-level meeting on AMR. The political declaration published ahead of the meeting highlights, among other things, the need for mechanisms for funding research and development, the need for functioning collaborations between private and public actors, and the need for measures to deal with the growing lack of competence in the area.
However, the picture is not only dark. During the last decade, several investments have been made in collaborations to meet the challenges for research and development in the field. One such investment is the European AMR Accelerator program, running since 2019 with funding from the Innovative Medicines Initiative (IMI). The program consists of nine projects that bring different stakeholders together to collaborate on the development of new treatments, for example against multi-resistant tuberculosis.
In a short article recently published in Nature Reviews Drug Discovery, representatives of the program discuss some of the important values and challenges associated with collaborations between academia and industry. Antibiotic development is expensive, and many drug candidates are discontinued already in the early stages of development. By sharing risks and costs between several organizations, the AMR Accelerator has so far been able to contribute to the development of a large portfolio of different antibiotics. In addition, the nine projects have developed research infrastructures for, among other things, modelling, data management, and clinical studies that can benefit the entire AMR research community. Moreover, the critical mass created when 98 organizations collaborate can spark new ideas and synergies in the work against AMR.
There are also challenges. Among them is balancing the perspectives and needs of different actors in the program, not least in the collaborations between academia and industry, where cooperation agreements and regular meetings have been needed to manage differences in culture and approach. The AMR Accelerator program has also served as neutral ground for competing companies, which have been able to collaborate within the framework of the projects.
According to the authors, the biggest challenge remains: what happens after the projects end? The Innovative Medicines Initiative has invested €479 million in the program. The question now is how the nine projects and partners will find long-term sustainability for the assets and infrastructures they have put in place. Some form of continued funding is needed so that the resources created within the AMR Accelerator can be used in the next phase of the work, where the end goal is providing access to drugs that can treat antibiotic-resistant infections.
The article concludes with a call to governments, research funders, pharmaceutical companies and other actors to invest in the research and development of new medicines, and in research that supports the fight against antibiotic resistance. To ensure that we can benefit from investments such as the AMR Accelerator in the long term, regular funding calls are needed to maintain expertise, infrastructures, data and networks.
Fernow J, Olliver M, Couet W, Lagrange S, Lamers MH, Olesen OF, Orrling K, Pieren M, Sloan DJ, Vaquero JJ, Miles TJ & Karlén A, The AMR Accelerator: from individual organizations to efficient antibiotic development partnerships, Nature Reviews Drug Discovery, first online 23 September 2024, DOI: https://doi.org/10.1038/d41573-024-00138-9
During a clinical trial, large amounts of health data are generated that can be useful not only within the current study. If the trial data are made available for sharing, they can be reused within other research projects. Moreover, if the research participants’ individual health data are returned to them, this may benefit the patients in the study.
The opportunities to increase the usefulness of data from clinical trials in these two ways are not being exploited as well as today’s technology allows. The European project FACILITATE therefore aims to improve the availability of data from clinical trials for other research purposes, and to strengthen the position of participating patients and their opportunity to gain access to their individual health data.
A policy brief article in Frontiers in Medicine presents the project’s work and recommendations regarding the position of patients in clinical studies and the possibility of communicating their health data back to them. The project develops an ethical framework that will put patients more at the center and increase their influence over the studies they participate in. For example, it tries to make it easier for patients to dynamically design and modify their consent, access information about the study and retrieve individual health data.
An extended set of ethical principles is identified within the project as essential for clinical trials. For example, one should not only respect the patients’ autonomy, but also strengthen their opportunities to make informed decisions about their own care on the basis of returned health data. Returned data must be judged to be of some kind of benefit to the individuals, and the data must be communicated in a way that as effectively as possible strengthens the patients’ ability to make informed decisions about their care.
Ciara Staunton, Johanna M. C. Blom and Deborah Mascalzoni on behalf of the IMI FACILITATE Consortium. Ethical framework for FACILITATE: a foundation for the return of clinical trial data to participants. Frontiers in Medicine, 17 July 2024. https://doi.org/10.3389/fmed.2024.1408600
During the last phase of the Human Brain Project, the activities on this blog received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. HBP SGA3 - Human Brain Project Specific Grant Agreement 3 (945539). The views and opinions expressed on this blog are the sole responsibility of the author(s) and do not necessarily reflect the views of the European Commission.