A blog from the Centre for Research Ethics & Bioethics (CRB)

Month: April 2013

Revised European data protection will make data about rare diseases even rarer

The EU is currently discussing changes to European privacy law. The intention is to strengthen the protection of privacy and to give people more control over their own data.

The problem, which I highlighted on The Ethics Blog, is that the new proposal also applies to research. At present, there is an exception for scientific research about health and disease. The proposed revision of the privacy regulation, however, allows no such exceptions.

According to the new proposal, every person who has given data to a register must be asked for consent each time researchers want to study some new disease pattern. Patient data can never be used in research without specific consent, and not even historical registers and data from deceased persons are exempted in the new proposal.

A recent article in Nature Reviews Genetics by Deborah Mascalzoni et al. highlights a patient group that is especially vulnerable to the proposed revision: patients suffering from rare diseases. In Sweden, a disease is defined as rare if it affects fewer than a hundred persons in a million.

Data on rare diseases are, as a matter of course, rare. We therefore know little about these diseases, and it is difficult to develop effective medical treatments. To achieve statistically significant analyses, researchers typically must share data across national borders. Every lost piece of data about rare diseases can mean dramatically impaired prospects for new drugs and treatments for these patient groups.

Rare diseases are thus a further strong reason for maintaining the current exception for scientific research in the data protection legislation. Read more on the CRB website.

Pär Segerdahl


Unhappy approach behind policy for incidental findings

Should individual research participants be informed if biobank researchers incidentally discover increased genetic disease risks through analysis of their samples?

At a seminar, Jennifer Viberg recently discussed a well-known recommendation for when participants should be informed about incidental findings.

During the seminar it became increasingly clear how the authors of the recommendation proceeded. They started out from how incidental findings are already handled in a more familiar field, namely, imaging studies of the internal organs of the human body. They then generalized that policy to the less familiar case of genomic biobank research.

When researchers produce images of the internal organs of the human body they may accidentally discover, for example, tumors in individual research participants. It is obvious that participants should be contacted about such findings so that action can be taken.

The problem when one generalizes from a field with a developed policy to a less familiar field, however, is the risk that false analogies govern the generalized policy. By treating imaging studies as the paradigm case of incidental findings, it might look as if biobank researchers, too, produce images: images of the genome that incidentally reveal individual deviations against which action can be taken, as when a tumor is surgically removed.

The article does not emphasize the fact that incidental findings in biobank research would more typically concern highly complex and difficult-to-interpret information about increased individual genetic disease risks.

If I have a tumor, it exists within my body and it can be surgically removed. But if I have an increased genetic disease risk, what do I have and in what sense can it be removed? Does “actionability” have the same meaning for diseases and for increased disease risks?

These and related questions about such differences are not emphasized in the article. On the contrary, the authors seem to be in a hurry to generalize a familiar routine to a new field.

Transferring lessons from familiar to less familiar fields seems reasonable. If one neglects the one-way nature of this approach, however, it can easily blind us to essential differences. In her dissertation work, Jennifer Viberg wants to avoid this pitfall.

Pär Segerdahl


Morality as a problem

Friedrich Nietzsche made this enigmatic remark about moral philosophy:

  • “In all ‘science of morals’ so far one thing was lacking, strange as it may sound: the problem of morality itself; what was lacking was any suspicion that there was something problematic here.”

What did Nietzsche mean? He seems to have been thinking of a very human tendency, namely, that of assuming that we already know what morality demands, at least roughly. The tendency, then, is to treat morality as given. Every sane person knows it intuitively!

The task of moral philosophy, identified on the basis of this tendency, becomes the following: dig deep enough to find the ultimate foundation of morality; or, fly high enough to catch sight of the ultimate moral principles.

How could Nietzsche view this daring work of digging and flying as naïve about morality as a problem? People generally don’t ask these ultimate questions about morality. They don’t venture out on uncertain digging and flying expeditions. Asking the ultimate questions about morality seems anything but naïve.

However daring they may be on the assumption that morality is given, these ethical expeditions come too late, Nietzsche suggests. If we had been digging and flying a little earlier in the research process, we would have discovered that morality isn’t given:

  • “Just because our moral philosophers… were poorly informed and not even very curious about different peoples, times, and past ages – they never laid eyes on the real problems of morality; for these emerge only when we compare many moralities.”

We don’t live in a lukewarm condition of moral unity and certainty. There are different forms of moral sensitivity and we occasionally experience crises of uncertainty. We change our firmest certainties and even view each other’s (and our own earlier) certainties as absurd.

You may think what you like about Nietzsche’s own moral tendency, but he helps us identify morality as a philosophical problem in a more comprehensive way than if we defined the problem on the basis of the human tendency of moral introversion described above.

Morality has two faces. It consists not only of familiar certainties apparently in need of foundations. It consists also of uncertainty, change, and diversity. Certainty turns into uncertainty, and uncertainty into certainty. There is a dynamic here that we fail to see when we give in to the temptation to assume that morality is already given as a set of intuitive certainties.

I want to change Nietzsche’s notion of the task on one point. “Comparing many moralities” may not be the most useful ethical expedition if it is not combined with other investigations, since it may overemphasize facts that make all expressions of moral certainty seem idle, like a deceitful façade that we ought to get rid of once and for all.

The work we need to do is rather to describe the two faces of morality simultaneously: to achieve an overview of the movements back and forth between certainty and uncertainty.

Morality is stability and certainty, and it is change and uncertainty.

Pär Segerdahl


Don’t shoot at the patient (or at the messenger)

The newly proposed European Data Protection Directive overprotects research participants and exposes patients to greater risks of contracting illness and dying.

That, dramatically summarized, is the message of a recent article in The Lancet Oncology, written by Mats G. Hansson at CRB together with Gert Jan van Ommen, Ruth Chadwick, and Joakim Dillner.

People who provide data to research registers are not exposed to physical risks the way participants in interventional research are. The risks associated with register-based research are informational: the unauthorized release of information about participants. One might even ask whether it makes sense to say that people “participate in research” when researchers process large data sets.

Patients (and people in general) enjoy significant protection from disease thanks to register-based research. For example, it is estimated that the HPV vaccine will save about 200 women from dying of cervical cancer each year, in Sweden alone. This cancer-preventive treatment became possible because researchers had access to samples dating back to the 1960s, which provided evidence of a causal connection between a certain viral infection and cervical cancer later in life.

  • Despite the vital value of biobanks and registers,
  • despite the fact that risks are only informational,
  • despite rigorous safety routines to prevent unauthorized spread of information,
  • despite the fact that researchers don’t study individuals but statistical patterns, and
  • despite the question of whether people really are “participants” in register-based research at all,

the EU committee proposing the new directive treats the integrity of “research participants” as so pivotal that researchers who process data must not only be subjected to the same ethical review process as invasive research, but must also obtain informed consent from each and every person who once gave their data to the register, whenever the researchers want to study a new disease pattern.

Data protection efforts easily lose their sense of proportion, it seems, at least concerning register-based research. Not only is one prepared to expose patients to greater physical risks in order to protect research participants from (already rigorously controlled) informational risks.

One is also prepared to disturb data providers who can hardly be described as “participating” in research, by forcing researchers to recontact them about informed consent. Not just on one occasion, but time and again, year after year, each time a new disease pattern is explored in the registers. That’s what I call privacy intrusion!

Pär Segerdahl
