A blog from the Centre for Research Ethics & Bioethics (CRB)

Year: 2024

Columbo in Athens

One of the most timeless TV crime series is probably Columbo. Peter Falk plays an inquisitive police lieutenant who sometimes seems so far outside ordinary chronology that he can make Los Angeles resemble ancient Athens, where an equally inquisitive philosopher cared just as little about his appearance.

I hope you have seen a few Columbo episodes. I also take the liberty of opening this post by revealing why I want to write about him: he not only exposes the murderers but at the same time frees them from living entangled in their own brilliant plans. You might remember the unusual structure of the episodes: we immediately learn who the perpetrator is. The murderers in the series are distinguished not only by their high social and economic status, but also by their high intelligence (and their overconfidence in it). Before the murder takes place, we get to follow how ingeniously the killer plans the deed. The purpose is to give the appearance of a watertight alibi, to avoid leaving unintended clues at the murder scene, and to leave those clues that clearly point to someone else. Everything is perfectly thought out: BANG! In the next act, Columbo enters the scene of the murder in his worn coat and with a cigar that has usually gone out. In one episode, not having had time to eat breakfast, he arrives with a boiled egg in his pocket and cracks it against the murder weapon.

The murder was just the prelude. Now the episode begins in earnest: the interaction between the absent-minded Columbo and the shrewd murderer who planned everything in detail and now feels invincible. Especially considering that the police lieutenant leading the investigation is clearly just a confused poor thing, constantly fumbling for his notepad and pencil and asking irrelevant questions. I will soon have dealt with this fellow, the killer thinks.

Columbo often immediately knows who the murderer is. He can reveal this in a final conversation with the murderer, where the two unexpectedly find common ground and speak openly, almost like old friends. Soon even the murderer begins to understand that Columbo knows, even though the lieutenant’s absent-minded demeanor at first made this seem unlikely. Usually, however, the murderer’s confidence is not shaken by knowing that Columbo knows, for everything is perfectly thought out: Columbo “knows” without being able to prove anything! Columbo spends many sleepless nights wondering about the murderer’s alibi and motive, or about seemingly irrelevant details at the murder scene: the “loose ends” that Columbo often talks about, without the murderer understanding why. They seem too trivial to touch the ingenious plan! The murderer almost seems to enjoy watching Columbo rack his brain over immaterial details that cannot possibly prove what both already “know.” Little does the killer know that Columbo’s uncertainty will soon bear fruit.

Finally, Columbo manages to tie up the loose ends that the murderer did not see the point of (they looked so plain compared to the elegant plan). When Columbo reveals how the alibi was only apparent, how the all-too-obvious clues were deliberately placed at the murder scene, and the murderer’s cheap selfish motive, the murderer expects to be arrested by Columbo. “No, others will come and arrest you later,” says Columbo, who suddenly seems uninterested in the whole matter. Columbo seems to have only wanted to expose the illusory reality the killer created to mislead everyone. The murderer is the one who walks into the trap first. To make everything look real, the murderer must live strictly according to the insidious plan from the very first act. Maybe that is why the murderer often seems to breathe a sigh of relief in the final act. Columbo not only exposes the criminal, but also frees the criminal mind from constantly living trapped in its own calculations.

In the conversation at the end, the otherwise active killer seems numbed by Columbo, calm and without a winning smile. Even the murderer is for the first time happily absent-minded.

How does Columbo manage to uncover the insidious plan? We like to think that Columbo succeeds in exposing the murderer because Columbo is even smarter. If Columbo switched sides and planned crimes, no one could expose him! He would be a super-intelligence that could satisfy every wish, like the genie in the lamp. Sometimes even the murderer seems to think along these lines and offers Columbo employment and a brilliant career. With Columbo as an accomplice, the murderer would be invincible. But Columbo does not seem to care more about his future than about his appearance: “No, never, I couldn’t do that.” He loves his work, he explains, yet hardly gives the impression of being a police lieutenant; he is sometimes mistaken for a vagrant who is kindly asked to remove himself from the scene of the murder, and nuns offer him food and clothes. Is Columbo the one actually creating the false appearance? Is he the one with the most cunning plan? Is his absent-mindedness just a form of ironic pretense to lure the murderer into the trap?

Columbo probably benefits from his disarming simplicity and absent-minded demeanor. But although we sometimes see him setting traps for the killer, we never see him disguise himself as a vagrant. When his wife has given him a nicer coat, he seems genuinely bothered by it, as if he were dressed up. Is Columbo’s confusion sincere after all? Is it the confusion he loves about his work? Is it perhaps the confusion that eventually reveals the murderer’s watertight plan?

Columbo’s colleagues are not confused. They follow the rules of the game and soon have exactly the conviction the murderer planned for them according to the manual: the murderer has no motive, has a watertight alibi, and cannot be tied to the scene of the murder, while the technical evidence clearly points in a different direction. If the colleagues were leading the investigation, the murderer would already have been removed from the list of suspects. This is how a colleague complains when he feels that Columbo is slowing down the investigation by not following the plan of the criminal mastermind:

Sergeant Hoffman: Now what do you think Lieutenant, do you really think that Deschler didn’t shoot Galesko in the leg?

Columbo: I’ll tell you something, Sergeant, I don’t know what to think.

The injured Galesko is in fact the murderer. He shot himself in the leg after killing Deschler, to make the killing look like self-defense against “his wife’s kidnapper.” Galesko had already murdered his wife, staged the kidnapping, and planted the clues pointing to Deschler. Why did Galesko murder his wife? Because he felt she was obscuring his bright future. The murderers in the TV series not only plan their deeds, but also their lives. Without ideas of bright futures, they would lack motive to plan murder.

Neither the killer nor the colleague suffers from uncertainty, they both sleep well. Only Columbo is awake: “I don’t know what to think.” Therefore, he tries to tie up loose ends. Like the philosopher Socrates in ancient Athens, Columbo knows that he does not know. Therefore, he torments the murderer (and the colleagues) with vexing questions that do not belong to the game, but rather revolve around it. Now you probably want to direct Columbo’s most famous line at me: “Oh, just one more thing!” For did I not say that Columbo immediately knows who the murderer is? Yes, I did. Columbo already “knows” who the murderer is. How? Does he know it through his superior intelligence that reveals the whole case in a flash? No, but because the murderer does not react like someone who does not know. When informed of the murder, the killer reacts strangely: like someone who already knows. Lack of confusion is the hallmark of the murderer.

When Columbo reveals the tangle of thoughts that already in the first act ensnared the murderer, the perpetrator goes to prison without complaint. Handcuffs are redundant when the self-made ones are finally unlocked. Columbo has calmed the criminal mind. The culprit is freed from the murder plan that was meant to secure the plan for the future. Suddenly everything is real, just real.

Just one more thing: Merry Christmas and do not plan too much!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

The dialogue between Hoffman and Columbo is from the episode Negative Reaction (1974). Columbo’s response to the career offer is from The Bye-Bye Sky-High I.Q. Murder Case (1977).

The image is AI-generated in Microsoft Designer by Ashkan Atry.

This post in Swedish

Thinking about thinking

Were many clinical trials during the COVID-19 pandemic unethical?

It is understandable if the COVID-19 pandemic spurred many researchers to conduct their own studies on patients with the disease. They wanted to help in a difficult situation by doing what they were competent to do, namely research. The question is whether the good will sometimes had problematic consequences in terms of research ethics.

For a clinical trial to have scientific and social value, a large number of participants is required. This is necessary in order to compare groups that are treated differently and to demonstrate, with sufficiently high probability, real connections between treatment and outcome. Twenty years ago, small so-called underpowered trials were common, and the pandemic made them flourish again. Some COVID-19 studies had fewer than 50 participants.
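To give a sense of the numbers involved, here is a minimal sketch of a textbook sample size calculation for a two-arm trial, using the normal approximation; the effect size of 0.5 is an assumption chosen purely for illustration and is not taken from the commentary discussed below:

```python
import math
from scipy.stats import norm

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate number of participants per arm needed to detect a
    standardized effect (Cohen's d) in a two-arm trial, two-sided test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the significance level
    z_beta = norm.ppf(power)           # quantile corresponding to the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Even a moderate hypothetical effect (d = 0.5) requires about 63 participants
# per arm, i.e. roughly 126 in total -- far more than the fewer-than-50-participant
# COVID-19 studies mentioned above could offer.
print(n_per_arm(0.5))  # 63
```

On this rough calculation, a 50-participant trial could only reliably detect implausibly large treatment effects, which is what “underpowered” means in practice.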

Is it then not good that researchers do what they can in a difficult situation, even if it means doing research on the smaller patient groups that they manage to recruit? The problem is that underpowered clinical trials do not provide valid scientific knowledge. Thus, they have hardly any value for society, and it becomes doubtful whether the researchers are really doing what they believe they are doing, namely helping in a difficult situation.

You can read about this in a commentary in the Journal of the Royal Society of Medicine, written by Rafael Dal-Ré, Stefan Eriksson and Stephen Latham. They point out that researchers sometimes defend underpowered clinical trials with the argument that smaller studies are easier to complete and that data from small trials around the world can be pooled to achieve the required statistical power. This is correct only if the studies used sufficiently similar research methods to make the data comparable, the authors comment. This is often not the case; pooling requires that researchers plan from the outset to combine data from their respective studies. Another problem is that underpowered clinical trials more often have negative results and that such studies are less often published. Pooled data from underpowered studies published in journals are therefore not representative. Data from such studies would thus need to be posted on freely accessible platforms, the authors argue.
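The comparability requirement becomes vivid if one looks at how pooling is typically done. In a standard fixed-effect meta-analysis, each trial’s estimate is weighted by the inverse of its variance, which only makes sense if the trials measured the same outcome in comparable ways. Below is a minimal sketch; the effect estimates and standard errors are invented for illustration:

```python
import numpy as np

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooling of per-trial effect estimates.
    Meaningful only if the trials used comparable methods and outcomes."""
    w = 1.0 / np.asarray(std_errors) ** 2   # precision weights
    pooled = np.sum(w * np.asarray(estimates)) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))    # smaller than any single trial's SE
    return pooled, pooled_se

# Three small, individually underpowered trials (hypothetical numbers):
effects = [0.40, 0.25, 0.55]   # per-trial effect estimates
ses = [0.30, 0.35, 0.25]       # per-trial standard errors
print(fixed_effect_pool(effects, ses))  # (approx. 0.43, 0.17)
```

The sketch also shows why publication bias matters: if the small trials with negative results never enter the pool, the weighted average is computed over a skewed sample, which is precisely the authors’ argument for posting data on freely accessible platforms.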

Exposing patients to the risks and inconveniences involved in participating in a clinical trial is unethical if the study cannot be judged to provide scientifically valid knowledge with social value. The authors’ conclusion is therefore that research ethics committees that review planned research must assess very carefully whether the studies have a sufficiently large number of participants to achieve valid and useful knowledge. If underpowered studies are nevertheless planned, participants must be informed that the results may not be scientifically valid in themselves, but that they will be pooled with results from similar studies in order to achieve statistical power. If there is no agreement with other researchers to pool results, underpowered studies should not be approved by research ethics committees, the three authors conclude. Not even during a pandemic.

Read the commentary here: Underpowered trials at trial start and informed consent: action is needed beyond the COVID-19 pandemic.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Dal-Ré R, Eriksson S, Latham SR. Underpowered trials at trial start and informed consent: action is needed beyond the COVID-19 pandemic. Journal of the Royal Society of Medicine. 2024;0(0). doi:10.1177/01410768241290075

This post in Swedish

We want solid foundations

AI is the answer! But what is the question?

Many projects are underway in Sweden regarding AI systems in healthcare. The testing of AI solutions is in full swing. But many systems do not seem to be implemented and used. Why? Often it is a matter of poor preparatory work. Without a carefully considered strategy and clear goals, we risk scaling up AI systems that cannot cope with the complexity of healthcare.

The atmosphere around many AI ventures can feel almost religious. You must not be negative or ask critical questions; if you do, you are quickly branded as a cynic who slows down development and does not understand the signs of the times. You almost have to blind yourself to potential pitfalls and speak and act like a true believer. Many justify the eager testing of AI by saying that we must dare to try and then see which solutions turn out to be successful. It is fascinating how willingly we apply AI to all sorts of tasks. But are we doing it the right way, or do we risk rushing on without giving ourselves time to think?

There are indeed economic and practical challenges in healthcare. It is not only a matter of limited financial resources, but also of a lack of personnel and specialists. Before we can allow technologies like AI to become part of our everyday lives, we need to ask ourselves some important questions: What problems are we trying to solve? How do our solutions affect the people involved? We may also need to clarify whether the purpose of the AI system is to more or less take over an entire work task, or rather to facilitate our work in certain well-defined respects. The development of AI products should also pay extra attention to socially created categories of ethnicity and gender, to avoid reinforcing existing inequalities through biased data selection. Ethically well-considered AI implementations probably lead to better clinical outcomes and more efficient care. It is easy to make hasty decisions that soon turn out to be wrong: accuracy should always be a priority. It is better to think right and slow than fast and wrong. Clinical studies should be conducted even on seemingly less advanced AI products. In radiology this tradition is well established, but it is not as common in primary care. If a way of working is to be changed with the help of AI, one should evaluate what effects the change can have.

We must therefore not neglect three things: We must first of all define the need for an AI solution. We must then ensure that the AI tool is not trained on biased data. Finally, we need to evaluate the AI solution before implementing it.

With the rapid data collection that apps and digital tools allow today, it is important not to get carried away, but to carefully consider the ethics of designing and implementing AI. Unfortunately, the mantra has become: “If we have data, we should develop an AI.” And that mantra makes anyone who asks “Why?” seem suspicious. But the question must be asked. It does not hinder the development of AI solutions, but contributes to it. Careful ethical considerations improve the quality of the AI product and strengthen the credibility of the implementation.

I therefore want to warn against being seduced by the idea of AI solutions for all sorts of tasks. Before we say that AI is the answer, we need to ask ourselves: what is the question? Only if we can define a real issue or challenge can we ensure that the technology becomes a helping hand instead of a burden. We do not want to end up, time and again, in situations where we suddenly have to pull the emergency brake, as in Millennium, a recent major Swedish investment in AI in healthcare. We must not get stuck in the mindset that everything can be done faster and easier with AI. Nor must we be driven by the fear of falling behind if we do not immediately introduce AI. Only a carefully considered evaluation of the need for, and the design of, an AI solution can ensure appropriate care that is also effective. To get correct answers quickly, we must first give ourselves time to think.

Written by…

Jennifer Viberg Johansson, who is an Associate Professor in Medical Ethics at the Centre for Research Ethics & Bioethics.

This post in Swedish

We challenge habits of thought

World Health Organization outlines guidelines for the use of genomic data

Human genomics has the potential to improve the health of individuals and populations for generations to come. It also requires the collection, use and sharing of data from people all over the world. There is therefore an accompanying need for a globally fair distribution of genomic technology, data and results. As the databases and infrastructures will be in operation for a long time, ethical, legal, social and cultural issues need to be taken into account from the outset, considering the entire life cycle of the data.

To promote such an ethical, equitable and responsible use of genomic data, the World Health Organization (WHO) recently issued globally applicable guidelines for human genome data collection, access, use and sharing. The guidelines are formulated as eight principles with associated practical recommendations. The principles were developed step by step: first through a review of existing documents and virtual consultations with experts from different parts of the world, then through a workshop in Geneva where experts met in person. Finally, the draft was discussed in public consultations.

The purpose of the WHO document is to create globally applicable principles that can complement local legislation. This is to promote, among other things, social and cultural inclusiveness as well as justice in the use of human genome data.

Read the important document here: Guidance for human genome data collection, access, use and sharing.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Approaching future issues

Citizen scientists as co-authors

A recurring theme on this blog is the question of who can be counted as an author of a research article. You might be thinking: how difficult can it be to determine if someone is the author of an article? But the criteria for academic authorship are challenged on several fronts and therefore need to be discussed. I recently blogged about a debate concerning two of these challenges: huge research projects where a large number of researchers and experts in different fields contribute to the studies, and the use of AI in research and academic writing (for example ChatGPT).

Today I want to recommend an article on publication ethics that discusses a third challenge to the authorship criteria: citizen science. As in the big research collaborations I mentioned above, a very large number of individuals often contribute to citizen science. The difference is that the professional researchers here collaborate with voluntary participants from the general public and not just with other researchers or experts. It may involve ordinary citizens reporting their observations of plant and animal life, helping astronomers categorize large numbers of photographed astronomical objects, contributing to solutions of mathematical problems, or perhaps even discussing the design of research projects. Citizen science is important, for example, when data collection requires the efforts of so many observers in so many places that the observations would otherwise be too expensive or time-consuming. Citizen science is also important because it gives citizens insight into research, increases trust in science and creates contacts between research and society.

The so-called Vancouver rules for authorship have been criticized for allegedly excluding citizen scientists from authorship, even though the voluntary contributions are sometimes so significant that they could merit such recognition. The rules state (slightly simplified) that to count as an author you must have made significant contributions to the research study (e.g., design, data collection, analysis, interpretation). But you must also have participated in the writing process, approved the final version of the article, and accepted responsibility for the research being carried out correctly.

An important point in the article that I recommend is that it is not necessarily the Vancouver rules that exclude citizen scientists from authorship. On the contrary, it may be that the researchers leading the projects do not follow the rules. In addition to the four criteria above, the Vancouver rules say that individuals who meet the first criterion should be given the opportunity to meet the other three as well. Citizen scientists who have made significant contributions to the study should therefore be given the opportunity to write or revise relevant sections of the text, approve the final version and accept responsibility for the accuracy of at least their own contribution to the study. In citizen science, it is also often the case that a small number of “superusers” account for the bulk of the work effort. It should be possible to treat these individuals in the same way as one treats professional researchers who have made significant contributions, that is, give them the opportunity to qualify for authorship.

A more difficult issue discussed in the article is group authorship. In citizen science, the collective contribution of the whole group is often significant, while the individual contributions are not. Would it be possible to give the group collective credit in the form of group authorship? Not doing so could give a false impression that the professional researchers made a greater effort in the study than they actually did, the four publication ethicists argue in the article. It can also be unfair. If individual researchers who fulfill the first criterion should be given the opportunity to fulfill all criteria, then groups should also be given this opportunity. In such cases, the group should (in some way) be given the opportunity to participate in the critical revision of the article and to approve the final version. But can a group of 2,000 volunteer bird watchers take responsibility for a research study being carried out properly? Perhaps the group can at least answer for the accuracy of its own observation efforts. Being credited for one’s contribution to a study through authorship and taking responsibility for the contribution are two sides of the same coin, according to the publication ethicists. That citizen scientists must accept responsibility in order to be counted as co-authors is perhaps also an opportunity to convey something about the nature of science, one could add.

The article concludes by proposing seven heuristic rules regarding who can be included as an author. For example, one should, as far as possible, respect existing guidelines (such as the Vancouver rules), apply a wide conception of contributions, and be open to new forms of authorship. Perhaps a group can sometimes be credited through authorship? The seventh and final heuristic rule is to be generous to citizen scientists in unclear cases by including rather than excluding.

Read the article on citizen scientists as authors here: Authorship and Citizen Science: Seven Heuristic Rules.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Sandin, P., Baard, P., Bülow, W., Helgesson, G. Authorship and Citizen Science: Seven Heuristic Rules. Science and Engineering Ethics 30, 53 (2024). https://doi.org/10.1007/s11948-024-00516-x

This post in Swedish

We recommend readings

Nurses’ experiences of tube feeding under restraint for anorexia

The eating disorder anorexia (anorexia nervosa) is a mental disorder that can be life-threatening if it is not treated. It is characterized by a fear of gaining weight: you starve yourself to lose weight and do not understand that being underweight is dangerous. Although most recover, the disease is associated with increased mortality, and the most severely ill may need to be hospitalized.

Hospital care can involve both psychotherapy and drug treatment, but not everyone wants or is able to participate in the treatment, which of course also involves eating. They may lack motivation to change or refuse to see that they need treatment. If the malnutrition becomes life-threatening, it may be necessary to decide on tube feeding as a compulsory measure. Liquid nutrition is then given via a thin tube that is inserted through one nostril and down into the stomach.

Tube-feeding an adult who does not want to eat is understandably a challenge for the nurses who have to perform the procedure. What are their experiences of the measure? One study investigated the issue by interviewing nurses at a Norwegian inpatient ward where adult patients with severe anorexia were cared for. What did the nurses have to say?

An important theme was that the nurses strove to provide good care even during the coercive measure: it must be so good that the patient voluntarily wants to stay in the ward after tube feeding. For example, the measure is never taken until one has gradually tried to encourage the patient to eat, asked the patient about the situation, and discussed whether to use the tube instead. If tube feeding becomes necessary, one still tries to give the patient options and to respect the patient’s autonomy as much as possible, even though it is a coercive measure. The nurses also described difficulties in balancing kindness and firmness during the procedure, and in combining the roles of helper and controller.

Another theme was ethical concerns when the doctor decided on tube feeding even though the patient’s BMI was not so low that the condition was life-threatening. One nurse stated that she sometimes found such situations so problematic that she refused to take part in the procedure.

The third theme was concerns about calling in staff from another ward to help restrain the patient while the nurse performed the tube feeding. Some nurses were concerned about how this might be experienced by patients with a history of abuse. Others saw the tube feeding as a life-saving measure and experienced no ethical concerns. However, participants in the study emphasized that tube feeding affects the relationship with the patient and that restraint can disrupt the relationship. A nurse told how she once performed tube feeding on a patient she had never met before, and with whom she had therefore not established a relationship, and how this then prevented a good relationship with that patient.

If you want to read for yourself what the nurses said and how the authors discussed their findings, read the study here: Nurses’ experience of nasogastric tube feeding under restraint for Anorexia Nervosa in a psychiatric hospital.

Interview studies that capture human experience through the participants’ own stories often yield unexpectedly meaningful insights. Subtle details of human life that you would not otherwise have thought of appear in the interview material. One such insight from this study was how the nurses made great efforts so that tube feeding could be perceived as good care with respect for the patient’s autonomy and dignity, despite the fact that it is a coercive measure. It also became clear that there were tensions in the situation that the nurses had difficulty dealing with, such as first performing the coercive measure and then comforting the patient and re-establishing the relationship that had been disrupted. One of the conclusions in the article is therefore that even the nurses who perform tube feeding are vulnerable.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Brinchmann, B.S., Ludvigsen, M.S. & Godskesen, T. Nurses’ experience of nasogastric tube feeding under restraint for Anorexia Nervosa in a psychiatric hospital. BMC Medical Ethics 25, 111 (2024). https://doi.org/10.1186/s12910-024-01108-x

This post in Swedish

Ethics needs empirical input

Psychological distress: an overlooked issue in immigrants

Psychological distress experienced by ethnic minorities is an often overlooked problem. In France, the mental well-being of ethnic minorities, particularly those with North African immigrant backgrounds, is an important issue to study. Both first- and second-generation immigrants face unique challenges that may make them more vulnerable to general mental health issues and psychological disorders. A recent report from the European Union Agency for Fundamental Rights on being a Muslim in the EU (published on October 24, 2024) sheds some light on issues related to health and racial harassment and violence. The report did not study psychological issues specifically, but it is worth noting that race-related violence had a psychological impact on 55 percent of the respondents (p. 21).

Vulnerability is frequently linked to ethnic minority status, leading to recurring experiences of discrimination and difficulties in reconciling cultural identity with a society that often prioritizes assimilation. In this context, assimilation tends to erase or disregard the original cultural heritage in favor of integration into the dominant culture. Such dynamics can lead to feelings of isolation, invalidation, and psychological distress among affected individuals.

Research on the mental health of French populations of North African descent remains largely neglected. In other regions, for example North America, mental health and immigration are much better studied. While the topic of discrimination has been explored in some areas, few studies have focused on the psychological effects of these experiences and the coping strategies adopted by these populations in France. Some research does indicate a rise in discrimination, but the lack of comprehensive studies creates both a scientific and a social void, keeping these topics largely invisible.

In other southern European countries such as Italy and Spain, the mental health problems of ethnic minorities are recognized, but do not yet receive the same attention as in North America. In Italy, studies on the mental health of minorities are mainly focused on recent migrants and refugees, not least because of the importance of migratory flows in the Mediterranean. Researchers are mainly interested in the traumas associated with exile and the precarious conditions of migrants, but issues of discrimination or systemic racism are less well explored.

In Spain, there is also research on the mental health of migrants, particularly from Latin America and North Africa. However, the framework remains focused on social integration and economic issues, and less on the dynamics of discrimination and ethnicity. Both countries are beginning to recognize the importance of these issues, but in-depth studies on the impact of racial discrimination on the mental health of ethnic minorities, as in all parts of Europe, are still limited.

One psychological phenomenon that is still underexplored in this context is “racial battle fatigue.” Introduced in the early 2000s by William A. Smith, this concept refers to the emotional and psychological stress accumulated by individuals who repeatedly face racism. It represents the emotional burden that ethnic minorities carry as a result of racial discrimination and societal expectations. This burden can drive individuals to minimize or suppress their own suffering to avoid being perceived as “weak” or “complaining.” These coping mechanisms can exacerbate psychological issues, creating a vicious cycle of untreated distress.

In academic and professional settings, there is often reluctance to openly discuss these challenges. Some individuals may regard these topics as taboo or controversial, limiting the opportunities for open dialogue and scientific advancement. This reflects a broader trend in the mental health field, where the specific needs of ethnic minorities, particularly in terms of tailored psychological care, are not adequately addressed.

If we are going to be able to provide concrete answers to these questions, we need to study this phenomenon and shed some light on the mechanisms underlying the psychological suffering of ethnic minorities. Research on the psychological distress experienced by ethnic minorities could also help develop therapeutic interventions better suited to these populations. A recent French pilot study can lead the way: in Rania Driouach’s sample of people of North African descent, 226 out of a total of 387 participants indicated heightened psychological distress on a transgenerational level. Her study is the first step towards a scientific framework that acknowledges the specific needs of these groups while promoting an inclusive and rigorous therapeutic approach. Perhaps such a framework can help pave the way for a better understanding of the effects of migration on psychological distress across generations, and provide better tools for the (mental) health care providers that deliver both first- and second-line care.

This post is written by Rania Driouach (Nîmes University) and:

Sylvia Martin

Sylvia Martin, Clinical Psychologist and Senior Researcher at the Centre for Research Ethics & Bioethics (CRB)

We transcend disciplinary borders

Digitization of healthcare requires a national strategy to increase individuals’ ability to handle information digitally

There is consensus that the digitization of healthcare can make it easier to stay in touch with care providers and to get information that supports individual decision-making about one’s own health. However, the ability to understand and use health information digitally varies. The promising digitization therefore risks creating unequal care and health.

In this context, one usually speaks of digital health literacy. The term refers to the ability to retrieve, understand and use health information digitally to maintain or improve one’s health. This ability varies not only between individuals, but also within the same individual. Illness can, for example, reduce the ability to use a computer or a smartphone to maintain contact with healthcare and to understand and manage health information digitally. Your digital health literacy is dependent on your health.

How do Swedish policy makers think about the need for strategies to increase digital health literacy in Sweden? An article with Karin Schölin Bywall as the main author examines the question. Material was collected during three recorded focus group discussions (or workshops) with a total of 10 participants. The study is part of a European project to increase digital health literacy in Europe. What did Swedish decision-makers think of the need for a national strategy?

The participants in the study said that the issue of digital health literacy was not as much on the agenda in Sweden as in many other countries in Europe and that governmental agencies have limited knowledge of the problem. Digital services in healthcare also usually require that you identify yourself digitally, but a large group of adults in Sweden lack e-identification. The need for a national strategy is therefore great.

Participants further discussed how digital health literacy manifests itself in individuals’ ability to find the right website and reliable information on the internet. People with lower digital health literacy may not be able to identify appropriate keywords or may have difficulty assessing the credibility of the information source. The problem is not lessened by the fact that algorithms control where we end up when we search for information. Often the algorithms make companies more visible than government organizations.

The policy makers in the study also identified specific groups that are at risk of digital exclusion (digital divide) and that need different types of support. Among others, they mentioned people with intellectual disabilities and young people who do not sufficiently master source criticism (even though they are skilled users of the internet and various apps). Specific measures to counteract the digital divide in healthcare were discussed, such as regular mailings with information about good websites, adaptation of website content for people with special needs, and teaching in source criticism. It was also emphasized that individuals may have different combinations of conditions that affect the ability to manage health information digitally in different ways, and that a strategy to increase digital health literacy must therefore be nuanced.

In summary, the study emphasizes that the need for a national strategy for increased digital health literacy is great. While digital technologies have huge potential to improve public health, they also risk reinforcing already existing inequalities, the authors conclude. Read the study here: Calling for allied efforts to strengthen digital health literacy in Sweden: perspectives of policy makers.

Something that struck me was that the policy makers in the study, as far as I could see, did not emphasize the growing group of elderly people in the population. Elderly people may have a particularly broad combination of conditions that affect digital health literacy in many different ways. In addition, the ability of the elderly to handle information digitally not only varies from day to day; it can also be expected to deteriorate steadily over time, probably at the same rate as the need to use it increases.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Bywall, K.S., Norgren, T., Avagnina, B. et al. Calling for allied efforts to strengthen digital health literacy in Sweden: perspectives of policy makers. BMC Public Health 24, 2666 (2024). https://doi.org/10.1186/s12889-024-20174-9

This post in Swedish

Ethics needs empirical input

Debate on responsibility and academic authorship

Who can be listed as an author of a research paper? There seems to be some confusion about the so-called Vancouver rules for academic authorship, which serve as publication ethics guidelines primarily in medicine and the natural sciences (but sometimes also in the humanities and social sciences). According to these rules, an academic author must have contributed intellectually to the study, participated in the writing process, and approved the final version of the paper. However, the deepest confusion seems to concern the fourth rule, which requires that an academic author take responsibility for the accuracy and integrity of the published research. The confusion is not lessened by the fact that artificial intelligences such as ChatGPT have started to be used in the research and writing process. Researchers sometimes ask the AI to generate objections to their reasoning, which of course can make a significant contribution to the research process. The AI can also generate text that contributes to the writing of the article. Should such an AI count as a co-author?

No, says the Committee on Publication Ethics (COPE), with reference to the last requirement of the Vancouver rules: an AI cannot be an author of an academic publication, because it cannot take responsibility for the published research. The committee’s dismissal of AI authorship has sparked a small but instructive debate in the Journal of Medical Ethics. The first to write was Neil Levy, who argued that responsibility (for entire studies) is not a reasonable requirement for academic authorship, and that an AI could already count as an author (if the requirement is dropped). This prompted a response from Gert Helgesson and William Bülow, who argued that responsibility (realistically interpreted) is a reasonable requirement, and that an AI cannot be counted as an author, as it cannot take responsibility.

What is this debate about? What does the rule that gave rise to it say? It states that, to be considered an author of a scientific article, you must agree to be accountable for all aspects of the work: you must ensure that questions about the accuracy and integrity of the published research are satisfactorily investigated and resolved. In short, an academic author must be able to answer for the work. According to Neil Levy, this requirement is too strong. In medicine and the natural sciences, it is often the case that almost none of the researchers listed as co-authors can answer for the entire published study. The collaborations can be huge and the researchers are specialists in their own narrow fields. They lack the overview and competence to assess and answer for the study in its entirety. In many cases, not even the first author can do this, says Neil Levy. If we do not want to make it almost impossible to be listed as an author in many scientific disciplines, responsibility must be abolished as a requirement for authorship, he argues. Then we have to accept that AI can already be counted as a co-author of many scientific studies, if the AI made a significant intellectual contribution to the research.

However, Neil Levy opens the door to a third possibility. The responsibility criterion could be reinterpreted so that it can be fulfilled by the researchers who today are usually listed as authors. What is the alternative interpretation? A researcher who has made a significant intellectual contribution to a research article must, in order to be listed as an author, accept responsibility for their “local” contribution to the study, not for the study as a whole. On this interpretation, an AI cannot count as an academic author, because it cannot answer or be held responsible even for its “local” contribution to the study.

According to Gert Helgesson and William Bülow, this third possibility is the obviously correct interpretation of the fourth Vancouver rule. The reasonable interpretation, they argue, is that anyone listed as an author of an academic publication has a responsibility to facilitate an investigation, if irregularities or mistakes can be suspected in the study. Not only after the study is published, but throughout the research process. However, no one can be held responsible for an entire study, sometimes not even the first author. You can only be held responsible for your own contribution, for the part of the study that you have insight into and competence to judge. However, if you suspect irregularities in other parts of the study, then as an academic author you still have a responsibility to call attention to this, and to act so that the suspicions are investigated if they cannot be immediately dismissed.

The confusion about the fourth criterion of academic authorship is natural, it is actually not that easy to understand, and should therefore be specified. The debate in the Journal of Medical Ethics provides an instructive picture of how differently the criterion can be interpreted, and it can thus motivate proposals on how the criterion should be specified. You can read Neil Levy’s article here: Responsibility is not required for authorship. The response from Gert Helgesson and William Bülow can be found here: Responsibility is an adequate requirement for authorship: a reply to Levy.

Personally, I want to ask whether an AI, which cannot take responsibility for research work, can be said to make significant intellectual contributions to scientific studies. In academia, we are expected to be open to criticism from others and not least from ourselves. We are expected to be able to critically assess our ideas, theories, and methods: to judge whether objections are valid and then defend ourselves or change our minds. This is an important part of doctoral education and the research seminar. We can hardly be said to contribute intellectually to research, I suppose, if we do not have the ability to self-critically assess the accuracy of our contributions. ChatGPT can therefore hardly be said to make significant intellectual contributions to research, I am inclined to say. Not even when it generates self-critical or self-defending text on the basis of statistical calculations in huge language databases. It is the researchers who judge whether generated text provides good reasons to either change their minds or defend themselves. If so, it would be a misunderstanding to acknowledge the contribution of ChatGPT in a research paper, as is usually done with research colleagues who contributed intellectually to the study without meeting the other requirements for academic authorship. Rather, the authors of the study should indicate how ChatGPT was used as a tool in the study, similar to how they describe the use of other tools and methods. How should this be done? In the debate, it is argued that this also needs to be specified.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Levy N. Responsibility is not required for authorship. Journal of Medical Ethics. Published Online First: 15 May 2024. doi: 10.1136/jme-2024-109912

Helgesson G, Bülow W. Responsibility is an adequate requirement for authorship: a reply to Levy. Journal of Medical Ethics. Published Online First: 04 July 2024. doi: 10.1136/jme-2024-110245

This post in Swedish

We participate in debates

Why should we try to build conscious AI?

In a recent post on this blog I summarized the main points of a pre-print in which I analyzed the prospect of artificial consciousness from an evolutionary perspective. I took the brain and its architecture as a benchmark for addressing the technical feasibility and conceptual plausibility of engineering consciousness in artificial intelligence systems. The pre-print has since been accepted and is now available as a peer-reviewed article online.

In this post I want to focus on one particular point that I analyzed in the paper, and which I think is not always adequately accounted for in the debate about AI consciousness: what are the benefits of pursuing artificial consciousness in the first place, for science and for society at large? Why should we attempt to engineer subjective experience in AI systems? What can we realistically expect from such an endeavour?

There are several possible answers to these questions. At the epistemological level (with reference to what we can know), it is possible that developing artificial systems that replicate some features of our conscious experience could enable us to better understand biological consciousness, through similarities as well as through differences. At the technical level (with reference to what we can do), it is possible that the development of artificial consciousness would be a game-changer in AI, for instance giving AI the capacity for intentionality and theory of mind, and for anticipating the consequences not only of human decisions, but also of its own “actions.” At the societal and ethical level (with reference to our co-existence with others and to what is good and bad for us), especially the latter capabilities (intentionality, theory of mind, and anticipation) could arguably help AI to better inform humans about potential negative impacts of its functioning and use on society, and to help avoid them while favouring positive impacts. Of course, on the negative side, as human history shows, both intentionality and theory of mind may be used by the AI for negative purposes, for instance to favour the AI’s own interests or the interests of the limited groups that control it. Human intentionality has not always favoured out-group individuals or species, or indeed the planet as a whole. This point connects to one of the most debated issues in AI ethics, the so-called AI alignment problem: how can we be sure that AI systems conform to human values? How can we make AI aligned with our own interests? And whose values and interests should we take as reference? Cultural diversity is an important and challenging factor to take into account in these reflections.

I think there is also a question that precedes that of AI value alignment: can AI really have values? In other words, is the capacity for evaluation that possibly drives the elaboration of values in AI the same as in humans? And is AI capable of evaluating its own values, including its ethical values, a reflective process that drives the self-critical elaboration of values in humans, making us evaluative subjects? In fact, the capacity for evaluation (which may be defined as the sensitivity to reward signals and the ability to discriminate between good and bad things in the world on the basis of specific needs, motivations, and goals) is a defining feature of biological organisms, specifically of the brain. AI may be programmed to discriminate between what humans consider to be good and bad things in the world, and it is also conceivable that AI will become less dependent on humans in applying this distinction. However, this does not entail that it “evaluates” in the sense that it autonomously performs an evaluation and subjectively experiences its evaluation.

It is possible that an AI system may approximate the diversity of cognitive processes that the brain has access to, for instance the processing of various sensory modalities, while remaining unable to incorporate the values attributed to the processed information and to its representation, as the human brain can do. In other words, to date AI remains devoid of any experiential content, and for this reason, for the time being, AI differs from the human brain in its inability to attribute experiential value to information. This is the fundamental reason why present AI systems lack subjective experience. If we want to refer to needs (which are a prerequisite for the capacity for evaluation), current AI appears limited to epistemic needs, without access to, for example, moral and aesthetic needs. Therefore, the values that AI has so far been able to develop or be sensitive to are limited to the epistemic level, while morality and aesthetics are beyond our present technological capabilities. I do not deny that overcoming this limitation may be a matter of further technological progress, but for the time being we should carefully consider this limitation in our reflections on whether it is wise to strive for conscious AI systems. If the form of consciousness that we can realistically aspire to engineer today is limited to the cognitive dimension, without any sensitivity to ethical deliberation and aesthetic appreciation, I am afraid that the risk of misusing or exploiting it for selfish purposes is quite high.

One could object that an AI system limited to epistemic values is not really conscious (at least not in a fully human sense). However, the fact remains that its capacity to interact with the world to achieve the goals it has been programmed to achieve would be greatly enhanced if it had this cognitive form of consciousness. This increases our responsibility to hypothetically consider whether conscious AI, even if limited and much more rudimentary than human consciousness, may be for the better or for the worse.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Michele Farisco, Kathinka Evers, Jean-Pierre Changeux. Is artificial consciousness achievable? Lessons from the human brain. Neural Networks, Volume 180, 2024. https://doi.org/10.1016/j.neunet.2024.106714

We like challenging questions
