A blog from the Centre for Research Ethics & Bioethics (CRB)

Category: In the research debate

An ethical strategy for improving the healthcare of brain-damaged patients

How can we improve the clinical care of brain-damaged patients? Individual clinicians, professional and patient associations, and other relevant stakeholders are struggling with this huge challenge.

A crucial step towards better treatment of these very fragile patients is the development and adoption of agreed-upon recommendations for their clinical treatment, in both emergency and intensive care settings. These recommendations should cover different aspects, from diagnosis to prognosis and rehabilitation planning. Both Europe and the US have issued relevant guidelines on Disorders of Consciousness (DoCs) in order to make clinical practice consistent and ultimately more beneficial to patients.

Nevertheless, these documents risk becoming ineffective or not having sufficient impact if they are not complemented with a clear strategy for operationalizing them. In other words, it is necessary to develop an adequate translation of the guidelines into actual clinical practice.

In a recent article that I wrote with Arleen Salles, we argue that ethics plays a crucial role in elaborating and implementing this strategy. The application of the guidelines is ethically very relevant, as it can directly impact the patients’ well-being, their right to the best possible care, communication between clinicians and family members, and overall shared decision-making. Failure to apply the guidelines in an ethically sound manner may inadvertently lead to unequal and unfair treatment of certain patients.

To illustrate, both documents recommend integrating behavioural and instrumental approaches to improve the diagnostic accuracy of DoCs (such as vegetative state/unresponsive wakefulness syndrome, minimally conscious state, and cognitive-motor dissociation). This recommendation is commendable, but not easy to follow because of a number of shortcomings and limitations in the actual clinical settings where patients with DoCs are diagnosed and treated. For instance, not all “ordinary,” non-research oriented hospitals have the necessary financial, human, and technical resources to afford the dual approach recommended by the guidelines. The implementation of the guidelines is arguably a complex process, involving several actors at different levels of action (from the administration to the clinical staff, from the finances to the therapy, etc.). Therefore, it is crucial to clearly identify “who is responsible for what” at each level of the implementation process.

For this reason, we propose building a strategy for operationalizing the guidelines based on a clarification of the notion of responsibility. We introduce a Distributed Responsibility Model (DRM), which frames responsibility as multi-level and multi-dimensional. The main tenet of DRM is a shift from an individualistic to a modular understanding of responsibility, where several agents share professional and/or moral obligations across time. Moreover, specific responsibilities are assigned depending on the different areas of activity. In this way, each agent is granted specific autonomy within their field of activity, and the mutual interaction between different agents is clearly defined. As a result, DRM promotes trust between the various agents.

Neither the European nor the US guidelines explicitly address the issue of implementation in terms of responsibility. We argue that this is a problem, because in situations of scarce resources and financial and technological constraints, it is important to explicitly conceptualize responsibility as a distributed ethical imperative that involves several actors. This will make it easier to identify possible failures at different levels and to implement adequate corrective action.

In short, we identify three main levels of responsibility: institutional, clinical, and interpersonal. At the institutional level, responsibility refers to the obligations of the relevant institution or organization (such as the hospital or the research centre). At the clinical level, responsibility refers to the obligations of the clinical staff. At the interpersonal level, responsibility refers to the involvement of the different stakeholders (institutions, clinicians, and families/surrogates) with individual patients.

Our proposal in the article is thus to combine these three levels, as formalized in DRM, in order to operationalize the guidelines. This can help reduce the gap between the recommendations and actual clinical practice.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, Michele; Salles, Arleen. American and European Guidelines on Disorders of Consciousness: Ethical Challenges of Implementation, Journal of Head Trauma Rehabilitation: April 13, 2022. doi: 10.1097/HTR.0000000000000776

We want solid foundations

Safeguards when biobank research complies with the General Data Protection Regulation

The General Data Protection Regulation (GDPR) entails a tightening of EU data protection rules. These rules do not only apply to the processing of personal data by companies. They apply in general, also to scientific research, where in many cases they could seriously restrict research. However, the GDPR allows several derogations and exemptions for research that would otherwise probably be made impossible or considerably more difficult.

Such derogations are allowed only if appropriate safeguards, which are in accordance with the regulation, are in place. But what safeguards may be required? Article 89 of the regulation mentions technical and organizational measures to ensure compliance with the principle of data minimization: personal data shall be adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed. Otherwise, Article 89 does not specify what safeguards are required, or what it means that the safeguards must be in accordance with the GDPR.

Biobank and genetic research require large amounts of biological samples and health-related data. Personal data may need to be stored for a long time and reused by new research groups for new research purposes. This would not be possible if the regulation did not grant an exemption from the rule that personal data may not be stored longer than necessary and for purposes not specified at data collection. But the question remains, what safeguards may be required to grant exemption?

The issue is raised by Ciara Staunton and three co-authors in an article in Frontiers in Genetics. The article begins by discussing the regulation and how to interpret the requirement that the safeguards should be “in accordance with the GDPR.” Then six possible safeguards are proposed for biobank and genetic research. The proposal is based on a thorough review of a number of documents that regulate health research.

Here, I merely want to recommend the article to anyone working on the issue of appropriate safeguards in biobank and genetic research. Therefore, I mention only briefly that the proposed safeguards concern (1) consent, (2) independent review and oversight, (3) accountable processes, (4) clear and transparent policies and processes, (5) security, and (6) training and education.

If you want to know more about the proposed safeguards, you will find the article here: Appropriate Safeguards and Article 89 of the GDPR: Considerations for Biobank, Databank and Genetic Research.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Ciara Staunton, Santa Slokenberga, Andrea Parziale and Deborah Mascalzoni. Appropriate Safeguards and Article 89 of the GDPR: Considerations for Biobank, Databank and Genetic Research. Frontiers in Genetics, 18 February 2022. doi: 10.3389/fgene.2022.719317

This post in Swedish

We recommend readings

Using surplus embryos to treat Parkinson’s disease: perceptions among the Swedish public

The use of human embryos in stem cell research can create moral unease, as embryos are usually destroyed when researchers extract stem cells from them. For those who consider the embryo a potential life, this can be perceived as extinguishing an opportunity for human life.

At the same time, stem cell research aims to support human life through the development of treatments for diseases that today lack effective treatment. Moreover, not everyone sees the embryo as a potential life. When stem cell research is regulated, policymakers can therefore benefit from current knowledge about the public’s attitudes to this research.

Åsa Grauman and Jennifer Drevin recently published an interview study of perceptions among the Swedish public about the use of donated embryos for the treatment of Parkinson’s disease. The interviews’ focus on a specific disease is interesting, as it emphasizes the human horizon of stem cell research. This can nuance the issues and invite more diverse reasoning.

The interviewees were generally positive about using donated surplus embryos from IVF treatment to develop stem cell treatment for Parkinson’s disease. This also applied to participants who saw the embryo as a potential life. However, this positive attitude presupposed a number of conditions. The participants emphasized, among other things, that informed consent must be obtained from both partners in the couple, and that the researchers must show respect and sensitivity in their work with embryos. The latter requirement was also made by participants who did not see the embryo as a potential life. They emphasized that people have different values and that researchers and the pharmaceutical industry should take note of this.

Many participants also considered that the use of embryos in research on Parkinson’s disease is justified because the surplus embryos would otherwise be discarded without benefit. Several also expressed a priority order, where surplus embryos should primarily be donated to other couples, secondarily to drug development, and lastly discarded.

If you want to see more results, read the study: Perceptions on using surplus embryos for the treatment of Parkinson’s disease among the Swedish population: a qualitative study.

I would like to mention that the complexity of the questions was also expressed in such a way that one and the same person could express different perceptions in different parts of the interview, and switch back and forth between different perspectives. This is not a defect, I would say, but a form of wisdom that is essential when difficult ethical issues are discussed.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Grauman, Å., Drevin, J. Perceptions on using surplus embryos for the treatment of Parkinson’s disease among the Swedish population: a qualitative study. BMC Med Ethics 23, 15 (2022). https://doi.org/10.1186/s12910-022-00759-y

This post in Swedish

Ethics needs empirical input

How can we detect consciousness in brain-damaged patients?

Detecting consciousness in brain-damaged patients can be a huge challenge and the results are often uncertain or misinterpreted. In a previous post on this blog I described six indicators of consciousness that I introduced together with a neuroscientist and another philosopher. Those indicators were originally developed for animals and AI systems. Our question was: what capacities (deducible from behavior and performance, or from relevant cerebral underpinnings) make it reasonable to attribute consciousness to these non-human agents? In the same post, I mentioned that we were engaged in a multidisciplinary exploration of the clinical relevance of selected indicators, specifically by testing them on patients with Disorders of Consciousness (DoCs, for instance, Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State, Cognitive-Motor Dissociation). While this multidisciplinary work is still in progress, we recently published an ethical reflection on the clinical relevance of the indicators of consciousness, taking DoCs as a case study.

To recapitulate, indicators of consciousness are conceived as particular capacities that can be deduced from the behavior or cognitive performance of a subject and that serve as a basis for a reasonable inference about the level of consciousness of the subject in question. Importantly, the indicators may also be deduced from the neural correlates of the relevant behavior or cognitive performance. This makes the indicators relevant to patients with DoCs, who are often unable to behave or communicate overtly: responses in the brain can be used to deduce the indicators of consciousness in these patients.

On the basis of this relevance, we illustrate how the different indicators of consciousness might be applied to patients with DoCs, with the final goal of helping to improve the assessment of their residual conscious activity. In fact, a still astonishing rate of misdiagnosis affects this clinical population. It is estimated that up to 40% of patients with DoCs are wrongly diagnosed as being in Vegetative State/Unresponsive Wakefulness Syndrome when they are actually in a Minimally Conscious State. The difference between these diagnoses is not minor, since they have importantly different prognostic implications, which raises a huge ethical problem.

We also argue for the need to recognize and explore the specific quality of the consciousness possibly retained by patients with DoCs. Because of the devastating damage to their brains, it is likely that their residual consciousness is very different from that of healthy subjects, who are usually taken as the reference standard in diagnostic classification. To illustrate, while consciousness in healthy subjects is characterized by several distinct sensory modalities (for example, seeing, hearing and smelling), it is possible that in patients with DoCs, conscious contents (if any) are very limited in sensory modalities. These limitations may be evaluated based on the extent of the brain damage and on the patients’ residual behaviors (for instance, sniffing for smelling). Also, consciousness in healthy subjects is characterized by both dynamics and stability: it includes both dynamic changes and short-term stabilization of contents. Again, in the case of patients with DoCs, it is likely that their residual consciousness is very unstable and flickering, without any capacity for stabilization. If we approach patients with DoCs without acknowledging that consciousness is a spectrum that accommodates different possible shapes and grades, we exclude a priori the possibility of recognizing the peculiar form of consciousness possibly retained by these patients.

The indicators of consciousness we introduced offer potential help in identifying the specific conscious abilities of these patients. While in this paper we argue for the rationale behind the clinical use of these indicators, and for their relevance to patients with DoCs, we also acknowledge that they open up new lines of research with concrete application to patients with DoCs. As already mentioned, this more applied work is in progress and we are confident that we will be able to present relevant results in the weeks to come.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, M., Pennartz, C., Annen, J. et al. Indicators and criteria of consciousness: ethical implications for the care of behaviourally unresponsive patients. BMC Med Ethics 23, 30 (2022). https://doi.org/10.1186/s12910-022-00770-3

We have a clinical perspective

Can consumers help counteract antimicrobial resistance?

Antimicrobial resistance (AMR) occurs when microorganisms (bacteria, viruses, etc.) survive treatments with antimicrobial drugs, such as antibiotics. However, the problem is not only caused by unwise use of such drugs in humans. Such drugs are also used on a large scale in animals in food production, which is a significant cause of AMR.

In an article in the journal Frontiers in Sustainable Food Systems, Mirko Ancillotti and three co-authors discuss the possibility that food consumers can contribute to counteracting AMR. This is a specific possibility that they argue is often overlooked when addressing the general public.

A difficulty that arises when AMR needs to be handled by several actors, such as authorities, food producers, consumers and retailers, is that each actor shifts responsibility onto the others. Consumers can claim that they would buy antibiotic-smart goods if they were offered in stores, while retailers can claim that they would sell such goods if consumers demanded them. Both parties can also point to how, for example, the market or legislation constrains them. Another problem is that if one actor, for example the authorities, takes on great responsibility, the other actors feel less or no responsibility.

The authors of the article propose that one way out of the difficulty could be to influence consumers to take individual responsibility for AMR. Mirko Ancillotti has previously found evidence that people care about antibiotic resistance. Perhaps a combination of social pressure and empowerment could engage consumers to individually act more wisely from an AMR perspective?

The authors make comparisons with the climate movement and suggest digital innovations in stores and online, which can inform, exert pressure and support AMR-smarter food choices. One example could be apps that help consumers see their purchasing pattern, suggest product alternatives, and inform about what is gained from an AMR perspective by choosing the alternative.

Read the article with its constructive proposal to engage consumers against antimicrobial resistance: The Status Quo Problem and the Role of Consumers Against Antimicrobial Resistance.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Ancillotti, Mirko; Nilsson, Elin; Nordvall, Anna-Carin; Oljans, Emma. The Status Quo Problem and the Role of Consumers Against Antimicrobial Resistance. Frontiers in Sustainable Food Systems, 2022.

This post in Swedish

Approaching future issues

How can neuroethics and AI ethics join their forces?

As I already wrote on this blog, there has been an explosion of AI in recent years. AI affects so many aspects of our lives that it is virtually impossible to avoid interacting with it. Since AI has such an impact, it must be examined from an ethical point of view, for the very basic reason that it can be developed and/or used for both good and evil.

In fact, AI ethics is becoming increasingly popular nowadays. As it is a fairly young discipline, even though it has roots in, for example, digital and computer ethics, its status and methodology are still open questions. Simplifying somewhat, the main trend is to conceive of AI ethics as practical ethics, focusing, for example, on the impact of AI on traditional practices in education, work, healthcare and entertainment, among other areas. In addition to this practically oriented analysis, there is also attention to the impact of AI on the way we understand our society and ourselves as part of it.

In this debate about the identity of AI ethics, the need for closer collaboration with neuroethics has been briefly pointed out, but so far there has been no systematic reflection on this need. In a new article, I propose, together with Kathinka Evers and Arleen Salles, an argument to justify the need for closer collaboration between neuroethics and AI ethics. In a nutshell, even though they both have specific identities and their topics do not completely overlap, we argue that neuroethics can complement AI ethics for both content-related and methodological reasons.

Some of the issues raised by AI are related to fundamental questions that neuroethics has explored since its inception. Think, for example, of topics such as intelligence: what does it mean to be intelligent? In what sense can a machine be qualified as an intelligent agent? Could this be a misleading use of words? And what ethical implications can this linguistic habit have, for example, on how we attribute responsibility to machines and to humans? Another issue that is increasingly gaining ground in AI ethics literature, as I wrote on this blog, is the conceivability and the possibility of artificial consciousness. Neuroethics has worked extensively on both intelligence and consciousness, combining applied and fundamental analyses, which can serve as a source of relevant information for AI ethics.

In addition to the above content-related reasons, neuroethics can also provide AI ethics with a methodological model. To illustrate, the kind of conceptual clarification performed in fundamental neuroethics can enrich the identification and assessment of the practical ethical issues raised by AI. More specifically, neuroethics can provide a three-step model of analysis to AI ethics: 1. Conceptual relevance: can specific notions, such as autonomy, be attributed to AI? 2. Ethical relevance: are these specific notions ethically salient (i.e., do they require ethical evaluation)? 3. Ethical value: what is the ethical significance and the related normative implications of these specific notions?

This three-step approach is a promising methodology for ethical reflection about AI which avoids the trap of anthropocentric self-projection, a risk that affects both the philosophical reflection on AI and its technical development.

In this way, neuroethics can contribute to avoiding both hypes and disproportionate worries about AI, which are among the biggest challenges facing AI ethics today.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, M., Evers, K. & Salles, A. On the Contribution of Neuroethics to the Ethics and Regulation of Artificial Intelligence. Neuroethics 15, 4 (2022). https://doi.org/10.1007/s12152-022-09484-0

We transcend disciplinary borders

Illness prevention needs to be adapted to people’s illness perceptions

Several factors increase the risk of cardiovascular disease. Many of these we can influence ourselves through changes in lifestyle or preventive drug treatment. But people’s attitudes to prevention vary with their perceptions of cardiovascular disease. Health communication to support preventive measures therefore needs to take into account people’s illness perceptions.

Åsa Grauman and three colleagues conducted an online survey with 423 randomly selected Swedes aged 40 to 70 years. Participants were asked to answer questions about themselves and about how they view cardiovascular disease. They then participated in an experiment designed to capture how they weighted their preferences regarding health check results.

The results showed a wide variety of perceptions about cardiovascular disease. Women more often cited stress as their most important risk factor, while men more often cited overweight and obesity. An interesting result is that people who stated that they smoked, had hypertension, were overweight or led sedentary lives tended to downplay that factor as less risky for cardiovascular disease. On the other hand, people who stated that they experienced stress tended to emphasize stress as a major risk factor for cardiovascular disease. People who reported family history as a personal risk factor showed a greater reluctance to participate in health examinations.

Regarding preferences about health check results, it was found that the participants preferred to have their results presented in everyday words and with an overall assessment (rather than, for example, in numbers). They also preferred to get the results in a letter (rather than by logging in to a website) that included lifestyle recommendations, and they preferred 30 minutes of consultation (over no or only 15 minutes of consultation).

It is important to reach out with the message that the risk of cardiovascular disease can be affected by lifestyle changes, and that health checks can identify risk factors in people who are otherwise asymptomatic. Especially people with a family history of cardiovascular disease, who in the study were more reluctant to undergo health examinations, may need to be aware of this.

To reach out with the message, it needs to be adapted to how people perceive cardiovascular disease, and we need to become better at supporting correct perceptions, the authors conclude. I have mentioned only a small selection of results from the study. If you want to see the richness of results, read the article: Public perceptions of myocardial infarction: Do illness perceptions predict preferences for health check results.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Åsa Grauman, Jennifer Viberg Johansson, Marie Falahee, Jorien Veldwijk. Public perceptions of myocardial infarction: Do illness perceptions predict preferences for health check results. Preventive Medicine Reports 26 (2022). https://doi.org/10.1016/j.pmedr.2021.101683

This post in Swedish

Exploring preferences

Images of good and evil artificial intelligence

As Michele Farisco has pointed out on this blog, artificial intelligence (AI) often serves as a projection screen for our self-images as human beings. Sometimes also as a projection screen for our images of good and evil, as you will soon see.

In AI and robotics, autonomy is often sought in the sense that the artificial intelligence should be able to perform its tasks optimally without human guidance. Like a self-driving car, which safely takes you to your destination without you having to steer, accelerate or brake. Another form of autonomy that is often sought is that artificial intelligence should be self-learning and thus be able to improve itself and become more powerful without human guidance.

Philosophers have discussed whether AI can be autonomous even in another sense, which is associated with human reason. According to this picture, we can as autonomous human beings examine our final goals in life and revise them if we deem that new knowledge about the world motivates it. Some philosophers believe that AI cannot do this, because the final goal, or utility function, would make it irrational to change the goal. The goal is fixed. The idea of such stubbornly goal-oriented AI can evoke worrying images of evil AI running amok among us. But the idea can also evoke reassuring images of good AI that reliably supports us.

Worried philosophers have imagined an AI that has the ultimate goal of making ordinary paper clips. This AI is assumed to be self-improving. It is therefore becoming increasingly intelligent and powerful when it comes to its goal of manufacturing paper clips. When the raw materials run out, it learns new ways to turn the earth’s resources into paper clips, and when humans try to prevent it from destroying the planet, it learns to destroy humanity. When the planet is wiped out, it travels into space and turns the universe into paper clips.

Philosophers who issue warnings about “evil” super-intelligent AI also express hopes for “good” super-intelligent AI. Suppose we could give self-improving AI the goal of serving humanity. Without getting tired, it would develop increasingly intelligent and powerful ways of serving us, until the end of time. Unlike the god of religion, this artificial superintelligence would hear our prayers and take ever-smarter action to help us. It would probably sooner or later learn to prevent earthquakes and our climate problems would soon be gone. No theodicy in the world could undermine our faith in this artificial god, whose power to protect us from evil is ever-increasing. Of course, it is unclear how the goal of serving humanity can be defined. But given the opportunity to finally secure the future of humanity, some hopeful philosophers believe that the development of human-friendly self-improving AI should be one of the most essential tasks of our time.

I read all this in a well-written article by Wolfhart Totschnig, who questions the rigid goal orientation associated with autonomous AI in the scenarios above. His most important point is that rigidly goal-oriented AI, which runs amok in the universe or saves humanity from every predicament, is not even conceivable. Outside its domain, the goal loses its meaning. The goal of a self-driving car to safely take the user to the destination has no meaning outside the domain of road traffic. Domain-specific AI can therefore not be generalized to the world as a whole, because the utility function loses its meaning outside the domain, long before the universe is turned into paper clips or the future of humanity is secured by an artificially good god.

This is, of course, an important philosophical point about goals and meaning, about specific domains and the world as a whole. The critique helps us to more realistically assess the risks and opportunities of future AI, without being bewitched by our images. At the same time, I get the impression that Totschnig continues to use AI as a projection screen for human self-images. He argues that future AI may well revise its ultimate goals as it develops a general understanding of the world. The weakness of the above scenarios was that they projected today’s domain-specific AI, not the general intelligence of humans. We then do not see the possibility of a genuinely human-like AI that self-critically reconsiders its final goals when new knowledge about the world makes it necessary. Truly human-equivalent AI would have full autonomy.

Projecting human self-images on future AI is not just a tendency, as far as I can judge, but a norm that governs the discussion. According to this norm, the wrong image is projected in the scenarios above. An image of today’s machines, not of our general human intelligence. Projecting the right self-image on future AI thus appears as an overall goal. Is the goal meaningful or should it be reconsidered self-critically?

These are difficult issues and my impression of the philosophical discussion may be wrong. If you want to judge for yourself, read the article: Fully autonomous AI.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Totschnig, W. Fully Autonomous AI. Sci Eng Ethics 26, 2473–2485 (2020). https://doi.org/10.1007/s11948-020-00243-z

This post in Swedish

We like critical thinking

Individualized treatment from the patient’s perspective

Individualized treatment, where the right patient receives the right dose of the right drug at the right time, could be interpreted as a purely medical task. After genetic and other tests on the patient, the doctor assesses, from a medical point of view, which drug for the disease and which dose should work most effectively and most safely for the patient in question.

Individualization can also be interpreted to include the patient’s perceptions of the treatment, the patient’s own preferences. Rheumatoid arthritis is a disease with many different symptoms. Several drugs are available that have different effects on different symptoms, as well as different side effects. In addition, the drugs are administered in different ways and at different intervals. Of course, all of these drug attributes affect the patients’ daily lives differently. A drug may reduce pain effectively, but cause depression, and so on. In individualized treatment of rheumatoid arthritis, there are therefore good reasons to ask patients what they consider to be important drug attributes and what they want their treatment to aim for.

In a study in Clinical Rheumatology, Karin Schölin Bywall and five co-authors prepare the ground for individualized treatment of rheumatoid arthritis from the patient’s perspective. Their hope is to facilitate not only shared decision-making with patients who have the disease, but also future quantitative studies of preferences in the patient group.

This is, in simplified terms, how the authors proceeded. First, a literature review was performed to identify potentially relevant drug attributes. Subsequently, patients in Sweden with rheumatoid arthritis ranked nine of these attributes. In a third step, some of the patients were interviewed in more detail about how they perceived the most important attributes.

In a final step, the interview results were structured in a framework with four particularly relevant drug attributes. The first two are about improved ability to function physically and psychosocially in everyday life. The latter two are about serious and mild side effects, respectively. In summary, the most important drug attributes, from the patients’ perspective, are about improved ability to function in everyday life and about acceptable side effects.

If you want to know more about the study, read the article: Functional capacity vs side effects: treatment attributes to consider when individualizing treatment for patients with rheumatoid arthritis.

The authors emphasize the importance of considering patients’ own treatment goals. Individualized treatment not only requires medical tests, but may also require studies of patient preferences.

Pär Segerdahl


Bywall, K.S., Esbensen, B.A., Lason, M. et al. Functional capacity vs side effects: treatment attributes to consider when individualising treatment for patients with rheumatoid arthritis. Clin Rheumatol (2021). https://doi.org/10.1007/s10067-021-05961-8

This post in Swedish

In dialogue with patients

Digital twins, virtual brains and the dangers of language

A new computer simulation technology has begun to be introduced, for example, in the manufacturing industry. The simulation is called a digital twin, a name that challenges me to bring to life for the reader what something that sounds so imaginative can be in reality.

The most realistic explanation I can find actually comes from Harry Potter’s world. Do you remember the map of Hogwarts, which not only shows all the rooms and corridors, but also, in real time, the footsteps of those who sneak around the school? A similar map can easily be created in a computer environment by connecting the map in the computer to sensors in the floor of the building that the map depicts. Immediately you have an interactive digital map of the building that is automatically updated and shows people’s movements in it. Imagine further that the computer simulation can make calculations that predict crowds exceeding the authorities’ recommendations, and that it automatically sends out warning messages via a speaker system. As far as I understand, such an interactive digital map can be called a digital twin of an intelligent house.
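The mechanism can be sketched in a few lines of code: a simulation state that sensor events keep updated, and a twin that issues warnings when a threshold is passed. This is only an illustration of the idea; the class name, the sensor interface and the crowd limit are all invented for the example, not taken from any real digital-twin framework.

```python
from collections import Counter

MAX_OCCUPANCY = 8  # hypothetical crowd limit per room


class BuildingTwin:
    """A digital 'map' of a building, kept in sync with floor sensors."""

    def __init__(self):
        self.occupancy = Counter()  # room name -> number of people

    def sensor_update(self, room, delta):
        """Apply one sensor event (a person entered or left a room).

        Returns the warning messages, if any, that the twin would broadcast.
        """
        self.occupancy[room] = max(0, self.occupancy[room] + delta)
        warnings = []
        if self.occupancy[room] > MAX_OCCUPANCY:
            warnings.append(f"Crowding in {room}: {self.occupancy[room]} people")
        return warnings


# Nine people walk into the lobby; the ninth entry exceeds the limit.
twin = BuildingTwin()
for _ in range(9):
    alerts = twin.sensor_update("lobby", +1)
print(alerts)  # → ['Crowding in lobby: 9 people']
```

The essential point is the same as in the blog text: the twin is not an "exact copy" of the building, only a model of the aspects (here, occupancy per room) that the application makes relevant.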

Of course, this is a revolutionary technology. The architect’s drawing in a computer program gets an extended life in both the production and maintenance of the building. The digital simulation is connected to sensors that update it with current data on relevant factors, first in the construction process and thereafter in the finished building. The building gets a digital twin that, throughout the building’s entire life cycle, automatically contacts maintenance technicians when the sensors show that the washing machines are starting to wear out or that the air is not circulating properly.

The scope of use for digital twins is huge. The point of them, as I understand it, is not that they are “exact virtual copies of reality,” whatever that might mean. The point is that the computer simulation is linked to the simulated object in a practically relevant way. Sensors automatically update the simulation with relevant data, while the simulation automatically updates the simulated object in relevant ways. At the same time, users, manufacturers, maintenance technicians and other actors are kept updated and can easily monitor the object’s current status, opportunities and risks, wherever in the world they are.

The European flagship project Human Brain Project plans to develop digital twins of human brains by building virtual brains in a computer environment. In a new article, the philosophers Kathinka Evers and Arleen Salles, who are both working in the project, examine the enormous challenges involved in developing digital twins of living human brains. Is it even conceivable?

The authors compare types of objects that can have digital twins. It can be artefacts such as buildings and cars, or natural inanimate phenomena such as the bedrock at a mine. But it could also be living things such as the heart or the brain. The comparisons in the article show that the brain stands out in several ways, all of which make it unclear whether it is reasonable to talk about digital twins of human brains. Would it be more appropriate to talk about digital cousins?

The brain is astronomically complex and despite new knowledge about it, it is highly opaque to our search for knowledge. How can we talk about a digital twin of something that is as complex as a galaxy and as unknown as a black hole? In addition, the brain is fundamentally dynamically interactive. It is connected not only with the body but also with culture, society and the world around it, with which it develops in uninterrupted interaction. The brain almost merges with its environment. Does that imply that a digital twin would have to be a twin of the brain-body-culture-society-world, that is, a digital twin of everything?

No, of course not. The aim of the project is to find specific medical applications of the new computer simulation technology. By developing digital twins of certain aspects of certain parts of patients’ brains, the hope is to improve and individualize, for example, surgical procedures for diseases such as epilepsy. Just as the map from Harry Potter’s world shows people’s steps in real time, the digital twin of the brain could follow the spread of certain nerve impulses in certain parts of the patient’s brain. This can open up new opportunities to monitor, diagnose, predict and treat diseases such as epilepsy.

Should we avoid the term digital twin when talking about the brain? Yes, it would probably be wiser to talk about digital siblings or digital cousins, argue Kathinka Evers and Arleen Salles. Although experts in the field understand its technical use, the term “digital twin” is linguistically risky when we talk about human brains. It easily leads the mind astray. We imagine that the digital twin must be an exact copy of a human’s whole brain. This risks creating unrealistic expectations and unfounded fears about the development. History shows that language also contains other dangers. Words come with normative expectations that can have ethical and social consequences that may not have been intended. Talking about a digital twin of a mining drill is probably no major linguistic danger. But when it comes to the brains of individual people, the talk of digital twins can become a new linguistic arena where we reinforce prejudices and spread fears.

After reading some popular scientific explanations of digital twins, I would like to add that caution may be needed also in connection with industrial applications. After all, the digital twin of a mining drill is not an “exact virtual copy of the real drill” in some absolute sense, right down to the movements of individual atoms. The digital twin is a copy in the practical sense that the application makes relevant. Sometimes it is enough to copy where people put their feet down, as in Harry Potter’s world, whose magic unexpectedly helps us understand the concept of a digital twin more realistically than many verbal explanations do. Explaining words with the help of other words is not always clarifying, if all the words steer thought in the same direction. The words “copy” and “replica” lead our thinking just as right and just as wrong as the word “twin” does.

If you want to better understand the challenges of creating digital twins of human brains and the importance of conceptual clarity concerning the development, read the philosophically elucidatory article: Epistemic Challenges of Digital Twins & Virtual Brains: Perspectives from Fundamental Neuroethics.

Pär Segerdahl


Evers, K. & Salles, A. (2021). Epistemic Challenges of Digital Twins & Virtual Brains: Perspectives from Fundamental Neuroethics. SCIO: Revista de Filosofía, 27–53. https://doi.org/10.46583/scio_2021.21.846

This post in Swedish

Minding our language
