Fear of the unknown produces ghosts

April 26, 2017

Pär Segerdahl

What can really set off feverish thinking is facing an unclear threat. We do not quite see what it is, so we fill in the contours ourselves. At the seminar this week, we discussed what I think was such a case. A woman decided to get tested for a possible calcium deficiency. To her surprise, the doctor informed her that she suffered from a disease, osteoporosis, characterized by an increased risk of bone fractures.

She had already experienced the problem. A hug could hurt her ribs, and she had broken a shoulder while pushing the car. However, she felt no fear until she was informed that she suffered from a disease that meant an increased risk of bone fracture.

I do not mean she had no reason to be worried. However, her worries seem to have become nightmarish.

Presumably, she already understood that she had to be careful in some situations. However, she interpreted the “risk factor” she was informed about as an invisible threat. It is like a ghost, she says. She began to compare her body to a house whose foundation is dissolving, a house that might therefore collapse. She began to experience great danger in every activity.

Many who are diagnosed with osteoporosis never get fractures. If you do get fractures, they need not be serious. However, the risk of fractures is greater in this group, and a hip fracture is a big problem. The woman in the example, though, imagined her “risk factor” as a ghost that constantly haunted her.

I now wonder: Are ethical debates sometimes about similar ghost images? Most of us do not really know what embryo research is, for example; it seems vaguely uncanny. When we hear about it, we fill in the contours: the embryo is a small human. Immediately, the research appears nightmarish and absolute limits must be drawn. Otherwise, we end up on a slippery slope where human life might degenerate, just as the woman imagined her body might collapse.

I also wonder: If debates are sometimes about feverishly produced ghost images, how should we handle these ghosts? With information? But it was information that produced the ghosts. With persistent logical counterarguments? But the ghosts live in the feverish reasoning. Should we really keep filling in the contours of these images, as if we were correcting bad sketches? Is that not taking the ghosts too seriously? Is it not like trying to wake yourself up inside a dream?

Everything started with the unclear threat. The rest were dreamlike consequences. We probably need to reflect more cautiously on the original situation where we experienced the first vague threat. Why did we react as we did? We need to treat the problem in its more moderate beginning, before it developed its nightmarish dimensions.

This is not to say that we have no reason to be concerned.

Pär Segerdahl

Reventlow, S., Hvas, A. C., Tulinius, C. 2001. “In really great danger.” The concept of risk in general practice. Scandinavian Journal of Primary Health Care 19: 71-75.

This post in Swedish

We like real-life ethics : www.ethicsblog.crb.uu.se


Sliding down along the slippery slope

April 11, 2017

Pär Segerdahl

Debates on euthanasia, abortion or embryonic stem cell research frequently invoke slippery slope arguments. Here is an example of such reasoning:

Legalizing physician-assisted suicide (PAS) at the end of life pushes healthcare morality in a dangerous direction. Soon, PAS may be practiced even on people who are not at the end of life and who do not request it. Even if this does not happen, the general population’s trust in healthcare will erode. Therefore, PAS must be forbidden.

Reasoning about the future is important. We need to assess consequences of allowing new practices. However, how do we assess the future in a credible way?

In an article in Medicine, Health Care and Philosophy, Gert Helgesson, Niels Lynøe and Niklas Juth argue that many slippery slope arguments are not empirically substantiated, but are based on value-impregnated factual assumptions. Anyone who considers PAS absolutely wrong regards it as a fatal step in a dangerous direction. Therefore, it is assumed that taking such a step will be followed by further steps in the same dangerous direction. If you choose the wrong path, you end up further and further along in the wrong direction. It seems inevitable that a first step is followed by a second step…

The problem is that this prophesying is based on the original moral interpretation. Anyone who is not convinced of the fatality of a “first” step is not inclined to see it as a “first step” that inherently leads to a “second step” and finally to disaster.

Thinking in terms of the slippery slope can sometimes be experienced as if you yourself were on the slippery slope. Your thoughts slide toward the daunting precipice. Perhaps the article by Helgesson, Lynøe and Juth can be read as an analysis of this phenomenon. The slippery slope has become a vicious circle, where the prophesying of disastrous consequences is steered by the very moral interpretation that one defends with reference to the slippery slope.

Slippery slope arguments are not wrong in themselves. Sometimes developments really are on a slippery slope. However, this form of reasoning requires caution, for sometimes it is our own thoughts that slide down the slippery slope.

And that can have consequences.

Pär Segerdahl

Helgesson, G., Lynøe, N., Juth, N. 2017. Value-impregnated factual claims and slippery slope arguments. Medicine, Health Care and Philosophy 20: 147-150.

This post in Swedish

Approaching future issues - the Ethics Blog


Consent based on trust rather than information?

March 21, 2017

Pär Segerdahl

Consent to research participation has two dimensions. On the one hand, the researcher wants to do something with the participant: we don’t know what until the researcher tells us. To obtain consent, the researcher must provide information about what will be done, what the purpose is, and what the risks and benefits are, so that potential participants can decide whether or not to consent.

On the other hand, potential participants would hardly believe the information and consider consenting, if they didn’t trust the researcher or the research institution. If trust is strong, they might consent even without considering the information. Presumably, this occurs often.

The fact that consent can be given based on trust has led to a discussion of trust-based consent as more or less a separate form of consent, next to informed consent. An article in the journal Bioethics, for example, argues that consent based on trust is not morally inferior to consent based on information. Consent based on trust supports autonomy, voluntariness, non-manipulation and non-exploitation as much as consent based on information does, the authors argue.

I think it is important to highlight trust as a dimension of consent to research participation. Consent based on trust need not be morally inferior to consent based on careful study of information.

However, I am puzzled by the tendency to speak of trust-based consent as almost a separate form of consent, next to informed consent. That researchers consider the ethical aspects of planned research and inform participants about them seems to be a concrete way of manifesting responsibility, respect and trustworthiness.

Carefully planning and going through the consent procedure is an ethical practice that can make us better humans: we change through what we do. It also opens a space for participants to say, “Thank you, I trust you, I don’t need to know more, I will participate.” Information and trust go hand in hand; there is a dynamic interplay between them.

I suspect that one background to the talk of trust-based consent as almost a separate form of consent is another tendency: the tendency to purify “information” as something purely cognitive and to idealize humans as rational decision makers. In addition, there is a tendency to regiment the information that “must” be provided.

This tendency to abstract and regulate “information” has turned informed consent into what is sometimes perceived as an empty, bureaucratic procedure. Nothing that makes us better humans, in other words!

It would be unfortunate if we established two one-dimensional forms of consent instead of seeing information and trust as two dimensions of consent to research.

Another article in Bioethics presents a concrete model of trust-based consent to biobank research. Happily, the model includes openly informing participants about biobank research. Among other things, one explains why one cannot specify which research projects will use the donated biological samples, since this lies in the future. Instead, one gives broad information about what kind of research the biobank supports, and one informs participants that they can limit the use of the material they donate if they wish. And one tells about much more.

Information and trust seem here to go hand in hand.

Pär Segerdahl

Halmsted Kongsholm, N. C., Kappel, K. 2017. Is consent based on trust morally inferior to consent based on information? Bioethics. doi: 10.1111/bioe.12342

Sanchini, V. et al. 2016. A trust-based pact in research biobanks. From theory to practice. Bioethics 4: 260-271. doi: 10.1111/bioe.12184

This post in Swedish

We like real-life ethics : www.ethicsblog.crb.uu.se


The apparent academy

November 29, 2016

Pär Segerdahl

What can we believe in? The question acquires new urgency when the IT revolution makes it easier to spread information through channels that obey laws other than those that have hitherto characterized journalism and academic publishing.

The free flow of information online requires a critical stance. That critical stance, however, requires a certain division of labor. It requires access to reliable sources: knowledge institutions like the academy and probing institutions like journalism.

But what happens to the trustworthiness of these institutions if they drown in the sea of impressively designed websites? What if IT entrepreneurs start what appear to be academic journals, but publish manuscripts without serious peer review as long as the researchers are paying for the service?

This false (or apparent) academy is already here. In fact, just as I write this, I receive an email offer from one of these new actors. The email begins, “Hello Professor,” and then promises improbably quick review of manuscripts and friendly, responsive staff.

What can we do? Countermeasures are needed if what we call critical reflection and knowledge are to retain their meaning, rather than serve as masks for something utterly different.

One action was taken on The Ethics Blog. Stefan Eriksson and Gert Helgesson published a post where they tried to make researchers more aware of the false academy. Apart from discussing the phenomenon, they listed deceptive academic journals to which unsuspecting bioethicists may submit papers (deceived by appearances). They also listed journals that take academic publishing seriously. The lists will be updated annually.

In an article in Medicine, Health Care and Philosophy (published by Springer), Eriksson and Helgesson deepen their examination of the false academy. Several committed researchers have studied the phenomenon, and the article describes and discusses what we know about these questionable activities. It also proposes a list of characteristics of problematic journals, such as an unspecified editorial board, non-academic advertising on the website, and spamming researchers with offers to submit manuscripts (like the email I received).

Another worrying trend, discussed in the article, is that even some traditional publishers are beginning to embrace some of the apparent academy’s practices (for they are profitable): publishing limited editions of very expensive anthologies (which libraries must buy), or issuing journals that appear to be peer-reviewed medical journals but are (secretly) sponsored by drug companies.

The article concludes with tentative suggestions on countermeasures, ranging from the formation of committees that keep track of these actors to stricter legislation and development of software that quickly identifies questionable publications in researchers’ publication lists.

The Internet is not just a fast information channel, but also a place where digital appearance gets followers and becomes social reality.

Pär Segerdahl

Eriksson, S. & Helgesson, G. 2016. “The false academy: predatory publishing in science and bioethics.” Medicine, Health Care and Philosophy, DOI 10.1007/s11019-016-9740-3

This post in Swedish

Approaching future issues - the Ethics Blog


Trust, responsibility and the Volkswagen scandal

December 15, 2015

Jessica Nihlén Fahlquist

Volkswagen’s emissions cheating attracted a lot of attention this autumn. It has been suggested that the cheating will lead to a decrease in trust, not only in the company but also in the industry at large. That is probably true. But we need to reflect on the value of trust, what it is and why it is needed. Is trust a means or a result?

It would seem that trust has a strong instrumental value since it is usually discussed in business-related contexts. Volkswagen allegedly needs people’s trust to avoid losing money. If customers abandon the brand due to distrust, fewer cars will be sold.

This discussion potentially hides the real issue. Trust is not merely a means to create or maintain a brand name, or to make sure that money keeps coming in. Trust is the result of ethically responsible behaviour. The only companies that deserve our trust are the ones that behave responsibly. Trust, in this sense, is closely related to responsibility.

What, then, is responsibility? One important distinction is between backward-looking and forward-looking responsibility. Right now, we are looking for the one who caused the problem, who is to blame and therefore responsible for what happened. But responsibility is not only about blame. It is also a matter of looking ahead, preventing wrongful actions in the future and doing one’s utmost to make sure that the organisation of which one is a member behaves responsibly.

One problem in our time is that so many activities take place in such large contexts. Organisations are global and complex, and it is hard to pinpoint who is responsible for what. The individuals involved each do only a small part, like cogs in a wheel. When a gigantic actor like Volkswagen causes damage to health or the environment, it is almost impossible to know who caused what and who should have acted otherwise. To avoid this, we need individuals who take responsibility and feel responsible. We should not conceive of people as powerless cogs in a wheel. The only companies that deserve our trust are the ones in which individuals at all levels take responsibility.

What is most important now is not that the company regains trust. Instead, we should demand that the individuals at Volkswagen raise their ethical awareness and start acting responsibly towards people, society and the environment. If they do that, trust will eventually be a result of their responsible behaviour.

Jessica Nihlén Fahlquist

(This text was originally published in Swedish in the magazine Unionen, industri och teknik, December 2015.)

Further reading:

Nihlén Fahlquist, J. 2015. “Responsibility as a virtue and the problem of many hands.” In: Ibo van de Poel, Lambèr Royakkers, Sjoerd Zwart, Moral Responsibility in Innovation Networks. Routledge.

Nihlén Fahlquist, J. 2006. “Responsibility ascriptions and Vision Zero.” Accident Analysis and Prevention 38: 1113-1118.

Van de Poel, I. and Nihlén Fahlquist, J. 2012. “Risk and responsibility.” In: Sabine Roeser, Rafaela Hillerbrand, Martin Peterson, Per Sandin (eds.), Handbook of Risk Theory. Springer, Dordrecht.

Nihlén Fahlquist, J. 2009. “Moral responsibility for environmental problems – individual or institutional?” Journal of Agricultural and Environmental Ethics 22(2): 109-124.

This post in Swedish

We challenge habits of thought : the Ethics Blog


Biobank news: ethics and law

April 23, 2014

The second issue of the newsletter from CRB and BBMRI.se is now available:

This April issue contains four interesting news items about:

  1. New international research cooperation on genetic risk information.
  2. The new Swedish law on registers for research on heredity, environment and health.
  3. The legislative process of developing a European data protection regulation.
  4. A new article on trust and ethical regulation.

You’ll also find a link to a two-page PDF version of the newsletter.

Pär Segerdahl

We recommend readings - the Ethics Blog


Research ethics as moral assurance system

March 19, 2014

Pär Segerdahl, Associate Professor of Philosophy and editor of The Ethics Blog

Modern society seems to be driven by skepticism. Just as philosophers systematically doubted the senses by enumerating optical and other illusions, our human ability to think for ourselves and take responsibility for our professional activities is doubted by enumerating past scandals and cases of misconduct.

The logic is simple: Since human practices have a notorious tendency to slide into the ditch – just think of scandals x, y and z! – we must introduce assurance systems that guarantee that the practices remain safely on the road.

In such a spirit of systematic doubt, research ethics developed into what resembles a moral assurance system for research. With reference to past scandals and atrocities, an extra-legal regulatory system emerged with detailed steering documents (ethical guidelines), oversight bodies (research ethics committees), and formal procedures (informed consent).

The system is meant to secure ethical trustworthiness.

The trustworthiness of the assurance system is questioned in a new article in Research Ethics, written by Linus Johnsson together with Stefan Eriksson, Gert Helgesson and Mats G. Hansson.

Guidelines, review and consent aren’t questioned as such, however. (There are those who want to abolish the system altogether.) The problem is rather the institutionalized distrust that makes the system more and more formalized, like following a checklist in a mindless bureaucracy.

The logic of distrust demands a system that does not rely on the human abilities that are doubted. That would be self-contradictory. But the system thereby fails to support our ability to think for ourselves and take responsibility.

The logic demands a system in which humans become what they are feared to be.

The cold logic of distrust is what needs to be overcome. Can we refrain from demanding more detailed guidelines and more thorough control the next time we hear about a scandal?

The logic of skepticism is not easily overcome.

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

