A blog from the Centre for Research Ethics & Bioethics (CRB)

Month: December 2020

Human enhancement: Time for ethical guidance!

Perhaps you also dream about being more than you are: faster, better, bolder, stronger, smarter, and maybe more attractive? Until recently, technology to improve and enhance our abilities was mostly science fiction, but today we can augment our bodies and minds in ways that challenge our notions of normal and abnormal, blurring the line between treatment and enhancement. Very few scientists and companies that develop medicines, prosthetics, and implants would say that they are in the human enhancement business, yet the technologies they develop still manage to move from one domain to another. Our bodies allow for physical and cosmetic alterations, and there are attempts to make us live longer. Our minds can also be enhanced in several ways: our feelings and thoughts, perhaps also our morals, could be improved, or corrupted.

We recognise this tension from familiar debates about more common uses of enhancements: doping in sports, or students using ADHD medicines to study for exams. But there are other examples of technologies that can be used to enhance abilities. In the military context, altering our morals or using cybernetic implants could give us ‘super soldiers’. Using neuroprostheses to replace or improve memory damaged by neurological disease would be considered a treatment. But what happens when the same technology is repurposed for the healthy, to improve memory or another cognitive function?

There have been calls for regulation and ethical guidance, but because few of the researchers and engineers who develop these technologies would call themselves enhancers, those efforts have not been very successful. Perhaps now is a good time to develop guidelines? But what is the best approach: a set of self-contained general ethical guidelines, or is the field so disparate that it requires field- or domain-specific guidance?

The SIENNA project (Stakeholder-Informed Ethics for New technologies with high socio-ecoNomic and human rights impAct) has been tasked with developing this kind of ethical guidance for Human Enhancement, Human Genetics, and Artificial Intelligence and Robotics: three very different technological domains. Not surprisingly, given how difficult the field is to delineate, human enhancement has proved by far the most challenging. For almost three years, the SIENNA project has mapped the field, analysed the ethical implications and legal requirements, surveyed how research ethics committees address the ethical issues, and proposed ways to improve existing regulation. We have received input from stakeholders, experts, and publics. Industry representatives, academics, policymakers and ethicists have participated in workshops and reviewed documents. Focus groups in five countries and surveys with 11,000 people in 11 countries in Europe, Africa, Asia, and the Americas have also provided insight into the public’s attitudes to using different technologies to enhance abilities or performance. This work resulted in an ethical framework that outlines several options for translating it into practical ethical guidance.

The framework for human enhancement is built on three case studies that can bring some clarity to what is at stake in a very diverse field: antidepressants, dementia treatment, and genetics. These case studies have shed some light on the kinds of issues that are likely to appear, and the difficulties involved in the complex task of developing ethical guidelines for human enhancement technologies.

A lot of these technologies, their applications, and their enhancement potentials are in their infancy. So perhaps this is the right time to promote ways for research ethics committees to inform researchers about the ethical challenges associated with human enhancement, and to encourage them to reflect on the potential enhancement impacts of their work in ethics self-assessments.

And perhaps it is time for ethical guidance for human enhancement after all? At least now there is an opportunity for you and others to give input in a public consultation in mid-January 2021! If you want to give input to SIENNA’s proposals for human enhancement, human genomics, artificial intelligence, and robotics, visit the website, www.sienna-project.eu, to sign up for news.

The public consultation will launch on January 11, 2021; the deadline to submit a response is January 25, 2021.

Josepine Fernow

Written by…

Josepine Fernow, Coordinator at the Centre for Research Ethics & Bioethics (CRB), and communications leader for the SIENNA project.


This post in Swedish

Research for responsible governance of our health data

Do you use your smartphone to collect and analyse your performance at the gym? This is one example of how new health-related technologies are being integrated into our lives. This development leads to a growing need to collect, use and share health data electronically. Healthcare, medical research, as well as technological and pharmaceutical companies are increasingly dependent on collecting and sharing electronic health data, to develop healthcare and new medical and technical products.

This trend towards more and more sharing of personal health information raises several privacy issues. Previous studies suggest that people are willing to share their health information if the overall purpose is improved health. However, they are less willing to share their information with commercial enterprises and insurance companies, whose purposes may be unclear or do not meet people’s expectations. It is therefore important to investigate how individuals’ perceptions and attitudes change depending on the context in which their health data is used, what type of information is collected and which control mechanisms are in place to govern data sharing. In addition, there is a difference between what people say is important and what is revealed in their actual behaviour. In surveys, individuals often indicate that they value their personal information. At the same time, individuals share their personal information online despite little or no benefit to them or society.

Do you recognise yourself: do you just click the “I agree” button when installing a health app that you want to use? This behaviour may at first glance suggest that people do not value their personal information very much. Is that a correct conclusion? Previous studies may not have taken into account the complexity of decisions about privacy, where context-specific factors play a major role. For example, people may value sharing health data via a physical activity app on their phone differently from sharing the same data in other contexts. We have therefore chosen to conduct a study that uses a multi-method approach taking context-specific factors into account. It is an advantage in cybersecurity and privacy research, we believe, to combine qualitative methods with a quantitative stated preference method, such as a discrete choice experiment (DCE). Such a mixed-method approach can contribute to ethically improved practices and governance mechanisms in the digital world, where people’s health data are shared for multiple purposes.

You can read more about our research if you visit the website of our research team. Currently, we are analysing survey data from 2,000 participants from Sweden, Norway, Iceland, and the UK. The research group has expertise in law, philosophy, ethics and social sciences. On this broad basis, we explore people’s expectations and preferences, while identifying possible gaps within the ethical and legal frameworks. In this way, we want to contribute to making the growing use and sharing of electronic health data ethically informed, socially acceptable and in line with people’s expectations.

Written by…

Jennifer Viberg Johansson, Postdoc researcher at the Centre for Research Ethics & Bioethics, working in the projects Governance of health data in cyberspace and PREFER.

This post in Swedish

Part of international collaborations

People care about antibiotic resistance

The rise of antibiotic-resistant bacteria is a global threat to public health. In Europe alone, antibiotic resistance (AR) causes around 33,000 deaths each year and adds around €1.5 billion to healthcare costs. What, then, causes AR? Mainly our misuse and overuse of antibiotics. Therefore, in order to reduce AR, we must reduce the use of antibiotics.

Several factors drive the prescribing of antibiotics. Patients can contribute to increased prescriptions by expecting antibiotics when they visit the physician. Physicians, in turn, can contribute by assuming that their patients expect antibiotics.

In an article in the International Journal of Antimicrobial Agents, Mirko Ancillotti from CRB presents what might be the first study of its kind on the public’s attitude to AR when choosing between antibiotic treatments. In a so-called Discrete Choice Experiment, participants from the Swedish public were asked to choose between two treatments. The choice situation was repeated several times while five attributes of the treatments varied: (1) the treatment’s contribution to AR, (2) cost, (3) risk of side effects, (4) risk of failed treatment effect, and (5) treatment duration. In this way, the study gives an idea of which attributes drive the use of antibiotics, and of how much people care about AR when choosing antibiotics relative to the other attributes of the treatments.

It turned out that all five attributes influenced the participants’ choice of treatment. It also turned out that for the majority, AR was the most important attribute. People thus care about AR and are willing to pay more to get a treatment that causes less antibiotic resistance. (Note that participants were informed that antibiotic resistance is a collective threat rather than a problem for the individual.)
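The logic of such a choice task can be sketched in a few lines of code. The sketch below assumes a simple linear utility model, which is the standard way DCE responses are analysed; the attribute names echo the study’s five attributes, but the numerical weights and levels are entirely hypothetical, chosen only to illustrate how a respondent who weighs resistance heavily ends up preferring a more expensive treatment. They are not the study’s estimates.

```python
# Hypothetical sketch of one discrete choice experiment (DCE) task.
# A respondent's preferences are modelled as a linear utility:
# the sum of (weight * attribute level) over all attributes.

# The five attributes varied in the experiment.
ATTRIBUTES = [
    "contribution_to_resistance",  # 0-1 scale (higher = worse for AR)
    "cost",                        # price units
    "side_effect_risk",            # probability
    "failure_risk",                # probability
    "duration_days",               # length of treatment
]

# Hypothetical part-worth utilities (all negative: each attribute is a "cost").
# A large negative weight on resistance mimics the study's finding that
# AR was the most important attribute for most participants.
WEIGHTS = {
    "contribution_to_resistance": -3.0,
    "cost": -0.5,
    "side_effect_risk": -1.5,
    "failure_risk": -2.0,
    "duration_days": -0.1,
}

def utility(alternative: dict) -> float:
    """Linear utility of one treatment alternative."""
    return sum(WEIGHTS[a] * alternative[a] for a in ATTRIBUTES)

def choose(task: list) -> int:
    """Return the index of the alternative with the highest utility."""
    return max(range(len(task)), key=lambda i: utility(task[i]))

# One choice task: two treatments that differ in resistance impact and cost.
task = [
    {"contribution_to_resistance": 0.8, "cost": 1.0,
     "side_effect_risk": 0.1, "failure_risk": 0.05, "duration_days": 7},
    {"contribution_to_resistance": 0.2, "cost": 2.0,
     "side_effect_risk": 0.1, "failure_risk": 0.05, "duration_days": 7},
]

# With these weights, the low-resistance treatment wins despite costing more.
print(choose(task))  # → 1
```

In an actual DCE analysis the weights run the other way: they are not assumed, but estimated from many respondents’ repeated choices (typically with a conditional logit model), which is what allows researchers to rank the attributes by importance and to compute willingness to pay.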

Because people care about antibiotic resistance when given the opportunity to consider it, Mirko Ancillotti suggests that a path to reducing antibiotic use may be better information in healthcare and other contexts, emphasizing our individual responsibility for the collective threat. People who understand their responsibility for AR may be less pushy when they see a physician. This can also influence physicians to change their assumptions about patients’ expectations regarding antibiotics.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

M. Ancillotti, S. Eriksson, D.I. Andersson, T. Godskesen, J. Nihlén Fahlquist, J. Veldwijk, Preferences regarding antibiotic treatment and the role of antibiotic resistance: A discrete choice experiment, International Journal of Antimicrobial Agents, Volume 56, Issue 6, 2020. doi.org/10.1016/j.ijantimicag.2020.106198

This post in Swedish

Exploring preferences

Are you conscious? Looking for reliable indicators

How can we be sure that a person in front of us is conscious? This might seem like a naïve question, but it actually gives rise to one of the trickiest and most intriguing philosophical problems, classically known as “the other minds problem.”

Yet this is more than just a philosophical game: reliable detection of conscious activity is among the main neuroscientific and technological enterprises today. Moreover, it is a problem that touches our daily lives. Think, for instance, of animals: we are (at least today) inclined to attribute a certain level of consciousness to animals, depending on the behavioural complexity they exhibit. Or think of Artificial Intelligence, which exhibits astonishing practical abilities, even superior to humans in some specific contexts.

Both examples above raise a fundamental question: can we rely on behaviour alone in order to attribute consciousness? Is that sufficient?

It is now clear that it is not. The case of patients with devastating neurological impairments, like disorders of consciousness (unresponsive wakefulness syndrome, minimally conscious state, and cognitive-motor dissociation) is highly illustrative. A number of these patients might retain residual conscious abilities although they are unable to show them behaviourally. In addition, subjects with locked-in syndrome have a fully conscious mind even if they do not exhibit any behaviours other than blinking.

We can conclude that absence of behavioural evidence for consciousness is not evidence for the absence of consciousness. If so, what other indicators can we rely on in order to attribute consciousness?

The identification of indicators of consciousness is necessarily both a conceptual and an empirical task: we need a clear idea of what to look for in order to define appropriate empirical strategies. Accordingly, we (a group of two philosophers and one neuroscientist) conducted joint research, eventually publishing a list of six indicators of consciousness. These indicators do not rely only on behaviour, but can also be assessed through technological and clinical approaches:

  1. Goal-directed behaviour (GDB) and model-based learning. In GDB I am driven by the expected consequences of my action, and I know that my action is causal for obtaining a desirable outcome. Model-based learning depends on my ability to have an explicit model of myself and the world surrounding me.
  2. Brain anatomy and physiology. Since the consciousness of mammals depends on the integrity of particular cerebral systems (i.e., thalamocortical systems), it is reasonable to think that similar structures indicate the presence of consciousness.
  3. Psychometrics and meta-cognitive judgement. If I can detect and discriminate stimuli, and can make meta-cognitive judgements about perceived stimuli, I am probably conscious.
  4. Episodic memory. If I can remember events (“what”) I experienced at a particular place (“where”) and time (“when”), I am probably conscious.
  5. Acting out one’s subjective, situational survey: illusion and multistable perception. If I am susceptible to illusions and perceptual ambiguity, I am probably conscious.
  6. Acting out one’s subjective, situational survey: visuospatial behaviour. If I perceive objects as stably positioned even when I move in my environment and scan it with my eyes, I am probably conscious.

This list is conceived to be provisional and heuristic but also operational: it is not a definitive answer to the problem, but it is sufficiently concrete to help identify consciousness in others.

The second step in our task is to explore the clinical relevance of the indicators and their ethical implications. For this reason, we selected disorders of consciousness as a case study. We are now working together with cognitive and clinical neuroscientists, as well as computer scientists and modellers, in order to explore the potential of the indicators to quantify to what extent consciousness is present in affected patients, and eventually improve diagnostic and prognostic accuracy. The results of this research will be published in what the Human Brain Project Simulation Platform defines as a “live paper,” which is an interactive paper that allows readers to download, visualize or simulate the presented results.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Pennartz CMA, Farisco M and Evers K (2019) Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach. Front. Syst. Neurosci. 13:25. doi: 10.3389/fnsys.2019.00025

We transcend disciplinary borders