Communicating risk in human terms

October 4, 2017

The concept of risk used in genetics is a technical term. For the specialist, risk is the probability of an undesired event, for example, that an individual develops some form of cancer. Risk is usually stated as a percentage.

It is well known that patients have difficulty accessing the probability notion of risk. What do their difficulties mean?

Technical notions, which experts use in their specialist fields, usually have high status. The attitude is: this is what risk really is. Based on such an attitude, people’s difficulties mean: they have difficulty understanding risk. Therefore, we have to help them understand by using educational tools that explain to them what we mean (we who know what risk is).

We could speak of communicating risk in the experts’ terms (and on their terms). Of course, one tries to communicate risk as simply and accessibly as possible. However, the notion of what to communicate is fixed. Anything else would disturb the attitude that the expert knows what risk really is.

In an article in Patient Education and Counseling, Jennifer Viberg Johansson (along with Pär Segerdahl, Ulrika Hösterey Ugander, Mats G. Hansson and Sophie Langenskiöld) makes an inquiry that departs from this pattern. She explores how people themselves make sense of genetic risk.

How does Viberg’s study depart from the pattern? She does not use the technical notion of risk as the norm for understanding risk.

Viberg interviewed healthy participants in a large research project. She found that they avoided the technical, probability notion of genetic risk. Instead, they used a binary concept of risk. Genetic risk (e.g., for breast cancer) is something that you have or do not have.

Furthermore, they interpreted risk in three ways in terms of time. Past: The risk has been in my genome for a long time. When symptoms arise, the genetic risk is the cause of the disease. Present: The risk is in my genome now, making me a person who is at risk. Future: The risk will be in my genome my entire life, but maybe I can control it through preventive measures.

These temporal dimensions are not surprising. People try to understand risk in the midst of their lives, which evolve in time.

It is not the case, then, that people “fail” to understand. They do understand, but in their own terms. They think of genetic risk as something that one has or does not have. They understand genetic risk in terms of how life evolves in time. A practical conclusion that Viberg draws is that we should try to adapt genetic risk communication to these “lay” conceptions of risk, which probably help people make difficult decisions.

We could speak of communicating risk in human terms (and on human terms). What does genetic risk mean in terms of someone’s past, present and future life?

When you talk with people with lives to live, that is probably what the risk really is.

Pär Segerdahl

J. Viberg Johansson et al., Making sense of genetic risk: A qualitative focus-group study of healthy participants in genomic research, Patient Education and Counseling (2017), http://dx.doi.org/10.1016/j.pec.2017.09.009

This post in Swedish

We like real-life ethics : www.ethicsblog.crb.uu.se


Openness as an ethical ritual

August 3, 2015

Barbara A. Koenig wrote last year about how informed consent has acquired a “liturgical feel” in biomedical research ethics. Each time the protection of research participants is challenged by new forms of research, the answer is: more consent!

The procedure of informing and asking for consent may feel like assuming a priestly guise and performing an ethical ritual with the research participant.

Moreover, the ritual is sometimes practically impossible to implement, for example, when participants in genetic research are to be informed about incidental findings that might be made about them, so that they can decide whether they want to be re-contacted if researchers happen to discover “something” about them.

If it takes one hour to inform a patient about his or her actual genetic disease, how long would it take to inform a research participant of all possible kinds of genetic disease risks that might be discovered? Sorry, not just one participant, but hundreds of thousands.

How then can research participants be respected as humans, if informed consent has become like an empty ritual with the poor participant? (A ritual that in genetic research sometimes is impracticable.)

In the August issue of Nature, Misha Angrist suggests a solution: treat participants as partners in the research process by being open with them. How are we open with them? By offering them the researchers’ raw genetic data, which can be handed over as an electronic file.

Here we are not talking about interpreted genetic disease risks, but about heaps of raw genetic data that are utterly meaningless to research participants.

Openness often has important functions. Making scientific articles openly accessible so that everyone can read them has a function. Making researchers’ data available to other researchers so that they can critically review research, or use already collected data in new research, has a function.

But what is the function of offering research participants files of raw genetic data? Is it really the beginning of a beautiful partnership?

Openness and partnership seem here to become yet another ethical ritual; yet another universal solution to ethical difficulties.

Pär Segerdahl

We think about bioethics : www.ethicsblog.crb.uu.se


Second issue of our newsletter about biobanks

June 2, 2015

Now you can read the second newsletter this year from CRB and BBMRI.se.

The newsletter contains four news items:

1. Anna-Sara Lind presents a new book, Information and Law in Transition, and the contributions to the book by CRB researchers.

2. Anna-Sara Lind describes the situation for the temporary Swedish law on research registries.

3. Mats G. Hansson reports on a modified version of broad consent for future research.

4. Josepine Fernow presents a new article by Jennifer Viberg on the proposal to give research participants freedom of choice about incidental findings.

(Link to PDF version of the newsletter)

Pär Segerdahl

We recommend readings - the Ethics Blog


Letting people choose isn’t always the same as respecting them

May 5, 2015

Jennifer Viberg, PhD Student, Centre for Research Ethics & Bioethics (CRB)

Sequencing the entire genome is cheaper and faster than ever. But when researchers look at people’s genetic code, they also find unexpected information in the process. Shouldn’t research participants have access to this incidental information? Especially if it is important information that could save a life, provided there is treatment to offer?

The personal benefits of knowing genetic information can vary from individual to individual. For one person, knowledge might just cause anxiety. For another, genetic risk information could create a sense of control in life. Since different people have different experiences, it could seem tempting to leave it to them to decide for themselves whether they want the information or not.

Offering participants in genetic research a choice to know or not to know is becoming more common. Another reason for giving a “freedom of choice” has to do with respecting people by allowing them to make choices in matters that concern them. By letting the participant choose, you acknowledge that he or she is a person with an ability to make his or her own choices.

But when researchers hand over the decision to participants, they also transfer responsibility: a responsibility that could have consequences we cannot determine today. I recently wrote an article together with colleagues at CRB about this in Bioethics. We argue that this freedom of choice could be problematic.

Looking at previous psychological research on how people respond to probabilities, it becomes clear that what they choose depends on how the choice situation is presented. People choose the “safe” outcome over taking a risk when the outcome is phrased in a positive way. But they are more prone to taking a risk when the result is phrased in a negative way, despite the fact that the outcome is identical. If a participant is asked whether he or she wants information that could save their life, there is a risk that they could be steered to answering “yes” without considering other important aspects, such as having to live with anxiety or subjecting themselves to medical procedures that might be unnecessary.

The benefit of incidental findings for individual participants is hard to estimate, even for experienced and knowledgeable genetic researchers. If we know how difficult the choice situations are, even for them, and if we know how psychological processes probably will steer the participants’ choices, then it hardly seems respectful to give the participants this choice.

There are good intentions behind giving participants freedom to choose, but it isn’t respectful if we can predict that the choices won’t be free and well grounded.

If you want to learn more, you will find further reading on CRB’s website, and here is a link to our article: Freedom of choice about incidental findings can frustrate participants’ true preferences

Jennifer Viberg

We like real-life ethics : www.ethicsblog.crb.uu.se


Intellectualizing morality

June 4, 2014

There is a prevalent idea that moral considerations presuppose ethical principles. But how does it arise? It makes our ways of talking about difficult issues resemble consultations between states at the negotiating table, invoking various solemn declarations:

  1. “Under the principle of happy consequences, you should lie here; otherwise, many will be hurt.”
  2. “According to the principle of always telling the truth, it is right to tell; even if many will be hurt.”

This is not how we talk, but maybe:

  1. “I don’t like to lie, but I have to, otherwise many will be hurt.”
  2. “It’s terrible that many will suffer, but the truth must be told.”

As we actually talk, without invoking principles, we ourselves take responsibility for how we decide to act. Lying, or telling the truth, is a burden even when we see it as the right thing to do. But if moral considerations presuppose ethical principles of moral rightness, there is no responsibility to carry. We refer to the principles!

The principles give us the right to lie, or to speak the truth, and we can live on with a self-righteous smile. But how does the idea of moral principles arise?

My answer: Through the need to intellectually control how we debate and reach conclusions about important societal issues in the public sphere.

Just as Indian grammarians made rules for the correct pronunciation of holy words, ethicists make principles of correct moral reasoning. According to the first principle, the first person reasons correctly; the other one incorrectly. According to the second principle, it’s the other way round.

But no one would even dream of formulating these principles, if we didn’t already talk as we do about important matters. The principles are second-rate goods, reconstructions, scaffolding on life, which subsequently can have a certain social and intellectual control function.

Moral principles may thus play a significant role in the public sphere, like grammatical rules codifying how to write and speak correctly. We agree on the principles that should govern public negotiations; the kind of concerns that should be considered in good arguments.

The problem is that the principles are ingeniously expounded as the essence and foundation of morality more generally, in treatises that are revered as intellectual bibles.

The truth must be told: it’s the other way round. The principles are auxiliary constructions that codify how we already bear the words and the responsibility. Don’t let the principles’ function in the public sphere distort this fact.

Pär Segerdahl

We challenge habits of thought : the Ethics Blog


The risk with knowing the risk

March 5, 2014

Pär Segerdahl, Associate Professor of Philosophy and editor of The Ethics Blog

Informing individuals about their genetic risks of disease can be viewed as empowering them to make autonomous decisions about their future health.

But we respond to risk information not only as rational decision makers, but also with our bodies, feelings and attitudes.

An American study investigated elderly people whose genetic test results showed a predisposition for Alzheimer’s disease. One group was informed about the risk; the other group was not.

In subsequent memory tests, those who were informed about the risk performed markedly worse than those who weren’t informed.

Knowing the genetic risk thus increased the risk of a false positive diagnosis of dementia. The informed participants performed as if they already were on the verge of developing Alzheimer’s.

The risk with knowing the risk is thus a further complication to take into consideration when discussing biobank researchers’ obligation to return incidental genetic findings to individual participants.

Returning information about genetic risks cannot be viewed only as empowering participants, or as giving them valuable information in exchange for contributing to research.

It can also make people worse off, it can distort research results, and it can lead to false diagnoses in clinical care.

Pär Segerdahl

We like challenging findings - the Ethics Blog


Idling biobank policy?

October 9, 2013

If you allow researchers to do brain imaging on you for some research purpose, and they incidentally discover a tumor, or a blood vessel with thin walls, you probably want them to inform you about this finding. There are no doubts about the finding; the risks are well-known; it is actionable.

Suppose instead that you donate a blood sample to a biobank. Suppose that researchers studying the sample discover a genetic variant that, depending on a number of interacting factors, might result in disease in three years’ time, or in thirty years, or not at all. It is difficult to predict! Do you still want to know?

How should we handle incidental findings like these, which will increasingly often be made in genetic biobank research? We are all different, so finding variants with some statistical relation to disease is more or less expected.

A common approach in attempts to develop a policy for incidental biobank findings is to formulate general conditions for when researchers should inform participants: if the finding is analytically valid, if it has clinical significance, and if it is actionable, then participants should be informed.

The problem is: we already knew that. We know what these conditions mean in imaging studies when a tumor or a damaged blood vessel is discovered. In these cases, the conditions can be assessed and they make it reasonable to inform. But what about genetic risk information, which often is more multidimensional and has unclear predictive value?

This question is discussed in a recent article in the European Journal of Human Genetics, written by Jennifer Viberg together with Mats G. Hansson, Sophie Langenskiöld, and me.

Viberg argues that when we enter this new and more complex domain, we cannot rely on analogies to what is already known in a simpler domain. Nor can we rely on surveys of participants’ preferences, if these surveys employ the same analogies and describe the findings in terms of the same general conditions.

The time is not yet ripe for a policy on incidental genetic findings, Viberg and colleagues conclude. Formulating a policy through analogies to what is already known is to cover up what we do not know. The issue requires a different form of elucidation.

That form of elucidation remains to be developed.

Pär Segerdahl

We participate in debates - the Ethics Blog

