A blog from the Centre for Research Ethics & Bioethics (CRB)

Month: October 2017

Ethics, human rights and responsible innovation

It is difficult to predict the consequences of developing and using new technologies. We interact with smart devices and intelligent software on an almost daily basis. Some of us use prosthetics and implants to go about our business, and most of us will likely live to see self-driving cars. In the meantime, Swedish research shows that petting robot cats holds promise in the care of patients with dementia. Genetic tests are cheaper than ever, and available to both patients and consumers. If you spit in a tube and mail it to a US company, they will tell you where your ancestors are from. Who knows? You could be part sub-Saharan African and part Scandinavian at the same time, and (likely) still be you.

Technologies, new and old, have both ethical and human rights impacts. Today, we are closer to scenarios we only pictured in science fiction a few decades ago. Technology develops fast, and it is difficult to predict what is on the horizon. The legislation, regulation and ethical guidance we have today were developed for a different future. Policy makers struggle to assess the ethical, legal and human rights impact of new and emerging technologies. These frameworks are challenged when a country like Saudi Arabia, criticized for not giving equal rights to women, offers a robot honorary citizenship. This autumn marks the start of a research initiative that will look at some of these questions. A group of researchers from Europe, Asia, Africa and the Americas join forces to help improve the ethical and legal frameworks we have today.

The SIENNA project (short for Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact) will deliver proposals for professional ethics codes, guidelines for research ethics committees and better regulation in three areas: human genetics and genomics, human enhancement, and artificial intelligence & robotics. The proposals will build on input from stakeholders, experts and citizens. SIENNA will also look at some of the more philosophical questions these technologies raise: Where do we draw the line between health and illness, normality and abnormality? Can we expect intelligent software to be moral? Do we accept giving up some of our privacy to screen our genome for genetic disorders? And if giving up some of our personal liberty is the price we have to pay to interact with machines, are we willing to pay it?

The project is coordinated by the University of Twente. Uppsala University’s Centre for Research Ethics & Bioethics contributes expertise on the ethical, legal and social issues of genetics and genomics, and experience in communicating European research. Visit the SIENNA website at www.sienna-project.eu to find out more about the project and our partners!

Josepine Fernow

The SIENNA project – Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact – has received just under €4 million for a 3.5-year project under the European Union’s H2020 research and innovation programme, grant agreement No 741716.

Disclaimer: This text and its contents reflect only SIENNA’s view. The Commission is not responsible for any use that may be made of the information it contains.


This post in Swedish

Approaching future issues - the Ethics Blog

Beyond awareness: the need for a more comprehensive ethics of disorders of consciousness

Disorders of consciousness, such as coma, unresponsive wakefulness syndrome, and what is known as the minimally conscious state, are among the most challenging issues in current ethical debates. Ethical analyses of these states usually focus on the ‘residual’ awareness that these patients might still have. Such awareness is taken to have bearing on other factors that are usually considered ethically central, like the patients’ well-being.

Yet, when we look at recent scientific investigations of mental activity, it appears that things are much more complicated than usually thought. Cognitive science provides empirical evidence that the unconscious brain can perform almost all the activities that we (wrongly) think are exclusive to consciousness, including enjoying positive emotions and disregarding negative ones. To illustrate: people who are subliminally exposed to drawings of happy or sad faces are emotionally conditioned in their evaluation of unfamiliar objects, such as Chinese characters shown to people who do not know Chinese. If preceded by subliminal happy faces, these characters are more likely to elicit positive feelings when consciously perceived. This means that unconscious emotions exist, and that these emotions are (plausibly) positive or negative. This in turn suggests that consciousness is not required for having emotions.

Accordingly, people with disorders of consciousness could also have unconscious emotions. Even though they do not display external behavior from which we could infer the presence of a positive or negative emotional life, we cannot rule out the possibility that these patients’ residual brain activity is related to a residual unaware emotional life, which can be either positive or negative.

We should try to avoid being biased by the sort of “consciousness-centrism” that prevents us from seeing the whole landscape: there is a lot going on behind (and beyond) the eyes of our awareness.

What does this imply for the ethics of caring for and interacting with people affected by severe disorders of consciousness? Well, as said above, the ethical discourse on caring for and relating to these people has usually focused on their residual awareness, scrutinizing whether and to what extent they could consciously experience good and bad feelings. Yet if it is possible to have these experiences at an unaware level, shouldn’t this be a relevant consideration in an ethical analysis of patients with disorders of consciousness? In other words, shouldn’t we take care of their residual unconsciousness in addition to their residual consciousness?

I believe we need to enlarge the scope of our ethical analyses of patients with disorders of consciousness, or at least acknowledge that focusing on residual consciousness is not all we should do, even if it is all we presently can do.

Michele Farisco

Winkielman P, Berridge KC. Unconscious emotion. Current Directions in Psychological Science. 2004;13(3):120-123.

We challenge habits of thought : the Ethics Blog

Acknowledging the biobank and the people who built it

Biomedical research increasingly uses biological material and information collected in biobanks. For a biobank to work efficiently, it is important not only that the biological material is stored well. The material must also be made available to science, so that researchers can easily and responsibly share samples and information.

Creating such a biobank is a huge effort. Researchers and clinicians who collect bioresources might even be reluctant to make the biobank openly available. Why make it easy for others to access your biobank if they do not give you any recognition?

In an article in the Journal of Community Genetics, Heidi C. Howard and Deborah Mascalzoni, among others, discuss a system that would make it more attractive to develop well-functioning biobanks. It is a system for rewarding researchers and clinicians who create high-quality bioresources by ensuring that their work is properly acknowledged.

The system presented in the article is called the Bioresource Research Impact Factor (BRIF). If I understand it correctly, the system may work in the following way. A biobank is described in a permanent “marker” article, published in a dedicated bioresource journal. Researchers who use the biobank then cite this article in their publications and grant applications. In this way, citations of bioresources can be counted, just as citations of research articles are counted.

The article also describes the results of a study of stakeholders’ awareness of BRIF, as well as an ethical analysis of how BRIF can contribute to more responsible biobanking.

If you are building a biobank, read the article and learn more about BRIF!

Pär Segerdahl

Howard, H.C., Mascalzoni, D., Mabile, L. et al. “How to responsibly acknowledge research work in the era of big data and biobanks: ethical aspects of the Bioresource Research Impact Factor (BRIF).” J Community Genet (2017). https://doi.org/10.1007/s12687-017-0332-6

This post in Swedish

We want to be just - the Ethics Blog

Communicating risk in human terms

The concept of risk used in genetics is a technical term. For the specialist, risk is the probability of an undesired event, for example, that an individual develops some form of cancer. Risk is usually stated as a percentage.

It is well known that patients have difficulty grasping the probability notion of risk. What do their difficulties mean?

Technical notions, which experts use in their specialist fields, usually have high status. The attitude is: this is what risk really is. Based on such an attitude, people’s difficulties mean that they have difficulty understanding risk. Therefore, we have to help them understand, using educational tools that explain what we mean (we who know what risk is).

We could speak of communicating risk in the experts’ terms (and on their terms). Of course, one tries to communicate risk as simply and accessibly as possible. However, the notion of what to communicate is fixed. Anything else would disturb the attitude that the expert knows what risk really is.

In an article in Patient Education and Counseling, Jennifer Viberg Johansson (along with Pär Segerdahl, Ulrika Hösterey Ugander, Mats G. Hansson and Sophie Langenskiöld) makes an inquiry that departs from this pattern. She explores how people themselves make sense of genetic risk.

How does Viberg’s study depart from the pattern? She does not use the technical notion of risk as the norm for understanding risk.

Viberg interviewed healthy participants in a large research project. She found that they avoided the technical, probability notion of genetic risk. Instead, they used a binary concept of risk. Genetic risk (e.g., for breast cancer) is something that you have or do not have.

Furthermore, they interpreted risk in three ways in terms of time. Past: The risk has been in my genome for a long time. When symptoms arise, the genetic risk is the cause of the disease. Present: The risk is in my genome now, making me a person who is at risk. Future: The risk will be in my genome my entire life, but maybe I can control it through preventive measures.

These temporal dimensions are not surprising. People try to understand risk in the midst of their lives, which evolve in time.

It is not the case, then, that people “fail” to understand. They do understand, but in their own terms. They think of genetic risk as something that one has or does not have. They understand genetic risk in terms of how life evolves in time. A practical conclusion that Viberg draws is that we should try to adapt genetic risk communication to these “lay” conceptions of risk, which probably help people make difficult decisions.

We could speak of communicating risk in human terms (and on human terms). What does genetic risk mean in terms of someone’s past, present and future life?

When you talk with people with lives to live, that is probably what the risk really is.

Pär Segerdahl

J. Viberg Johansson, et al., Making sense of genetic risk: A qualitative focus-group study of healthy participants in genomic research, Patient Educ Couns (2017), http://dx.doi.org/10.1016/j.pec.2017.09.009

This post in Swedish

We like real-life ethics : www.ethicsblog.crb.uu.se