A blog from the Centre for Research Ethics & Bioethics (CRB)

Author: Pär Segerdahl

Beyond awareness: the need for a more comprehensive ethics of disorders of consciousness

Disorders of consciousness, like coma, unresponsive wakefulness syndrome, and what is known as the minimally conscious state, are among the most challenging issues in current ethical debate. Ethical analyses of these states usually focus on the ‘residual’ awareness that these patients might still have. Such awareness is taken to bear on other factors usually considered ethically central, like the patients’ well-being.

Yet recent scientific investigations of mental activity suggest that things are much more complicated than usually thought. Cognitive science provides empirical evidence that the unconscious brain can perform almost all the activities that we (wrongly) think are exclusive to consciousness, including enjoying positive emotions and disregarding negative ones. To illustrate: people who are subliminally exposed to drawings of happy or sad faces are emotionally conditioned in their evaluation of unfamiliar objects, such as Chinese characters shown to people who do not read Chinese. If preceded by subliminal happy faces, these characters are more likely to elicit positive feelings when consciously perceived. This means that unconscious emotions exist, and that these emotions are (plausibly) positive or negative. This in turn suggests that consciousness is not required for having emotions.

Accordingly, people with disorders of consciousness could also have unconscious emotions. Even though they are not capable of the external behavior from which we could infer the presence of a positive or negative emotional life, we cannot rule out the possibility that these patients’ residual brain activity involves a residual, unaware emotional life, which can be either positive or negative.

We should try to avoid the sort of “consciousness-centrism” that prevents us from seeing the whole landscape: there is a lot going on behind (and beyond) the eyes of our awareness.

What does this imply for the ethics of caring for and interacting with people affected by severe disorders of consciousness? As said above, the ethical discourse surrounding the care of and relationship with these people has usually focused on their residual awareness, scrutinizing whether and to what extent they could consciously experience good and bad feelings. Yet if it is possible to have such experiences at an unaware level, shouldn’t this be a relevant consideration in an ethical analysis of patients with disorders of consciousness? In other words, shouldn’t we take care of their residual unconsciousness in addition to their residual consciousness?

I believe we need to enlarge the scope of our ethical analyses of patients with disorders of consciousness, or at least acknowledge that focusing on residual consciousness is not all we should do, even if it is all we presently can do.

Michele Farisco

Winkielman P., Berridge K.C. Unconscious emotion. Current Directions in Psychological Science. 2004;13(3):120-3

We challenge habits of thought : the Ethics Blog

Acknowledging the biobank and the people who built it

Biomedical research increasingly often uses biological material and information collected in biobanks. For a biobank to work efficiently, it is important not only that the biological material is stored well. The material must also be made available to science, so that researchers can easily and responsibly share samples and information.

Creating such a biobank is a huge effort. Researchers and clinicians who collect bioresources might even be reluctant to make the biobank openly available. Why make it easy for others to access your biobank if they do not give you any recognition?

In an article in the Journal of Community Genetics, Heidi C. Howard and Deborah Mascalzoni, among others, discuss a system that would make it more attractive to develop well-functioning biobanks: a system for rewarding researchers and clinicians who create high-quality bioresources by ensuring that their work is properly acknowledged.

The system presented in the article is called the Bioresource Research Impact Factor (BRIF). If I understand it correctly, the system works as follows. A biobank is described in a permanent “marker” article published in a dedicated bioresource journal. Researchers who use the biobank then cite this article in their publications and grant applications. In this way, citations of bioresources can be counted just as citations of research articles are.

The article also describes the results of a study of stakeholders’ awareness of BRIF, as well as an ethical analysis of how BRIF can contribute to more responsible biobanking.

If you are building a biobank, read the article and learn more about BRIF!

Pär Segerdahl

Howard, H.C., Mascalzoni, D., Mabile, L. et al. “How to responsibly acknowledge research work in the era of big data and biobanks: ethical aspects of the Bioresource Research Impact Factor (BRIF).” J Community Genet (2017). https://doi.org/10.1007/s12687-017-0332-6

This post in Swedish

We want to be just - the Ethics Blog

Communicating risk in human terms

The concept of risk used in genetics is a technical term. For the specialist, risk is the probability of an undesired event, for example that an individual develops some form of cancer. Risk is usually stated as a percentage.

It is well known that patients have difficulty grasping this probability notion of risk. What do their difficulties mean?

Technical notions, which experts use in their specialist fields, usually have high status. The attitude is: this is what risk really is. Given such an attitude, people’s difficulties mean that they have difficulty understanding risk. Therefore, we have to help them understand, using educational tools that explain what we mean (we who know what risk is).

We could speak of communicating risk in the experts’ terms (and on their terms). Of course, one tries to communicate risk as simply and accessibly as possible. However, the notion of what to communicate is fixed. Anything else would disturb the attitude that the expert knows what risk really is.

In an article in Patient Education and Counseling, Jennifer Viberg Johansson (along with Pär Segerdahl, Ulrika Hösterey Ugander, Mats G. Hansson and Sophie Langenskiöld) makes an inquiry that departs from this pattern. She explores how people themselves make sense of genetic risk.

How does Viberg’s study depart from the pattern? She does not use the technical notion of risk as the norm for understanding risk.

Viberg interviewed healthy participants in a large research project. She found that they avoided the technical, probability notion of genetic risk. Instead, they used a binary concept of risk. Genetic risk (e.g., for breast cancer) is something that you have or do not have.

Furthermore, they interpreted risk in three ways in terms of time. Past: The risk has been in my genome for a long time. When symptoms arise, the genetic risk is the cause of the disease. Present: The risk is in my genome now, making me a person who is at risk. Future: The risk will be in my genome my entire life, but maybe I can control it through preventive measures.

These temporal dimensions are not surprising. People try to understand risk in the midst of their lives, which evolve in time.

It is not the case, then, that people “fail” to understand. They do understand, but in their own terms. They think of genetic risk as something that one has or does not have. They understand genetic risk in terms of how life evolves in time. A practical conclusion that Viberg draws is that we should try to adapt genetic risk communication to these “lay” conceptions of risk, which probably help people make difficult decisions.

We could speak of communicating risk in human terms (and on human terms). What does genetic risk mean in terms of someone’s past, present and future life?

When you talk with people with lives to live, that is probably what the risk really is.

Pär Segerdahl

J. Viberg Johansson, et al., Making sense of genetic risk: A qualitative focus-group study of healthy participants in genomic research, Patient Educ Couns (2017), http://dx.doi.org/10.1016/j.pec.2017.09.009

This post in Swedish

We like real-life ethics : www.ethicsblog.crb.uu.se

Taking people’s moral concerns seriously

I recently published a post on how anxiety can take possession of the intellect: how anxiety, when it is interpreted by thoughts that rationalize it, can cause moral panic.

A common way of dealing with people’s moral concerns in bioethics is to take the concerns intellectually seriously. One tries to find logical reasons for or against the “correctness” of the anxiety. Is the embryo already a person? If it is, then it is correct to be morally concerned about embryonic stem cell research: persons are then killed by researchers, who are almost murderers. However, if the embryo is not a person, but just an accumulation of cells, then there is at least one less reason to worry.

Bioethicists therefore set out to settle the metaphysical issue of “the status of the embryo,” so that we will know whether it is intellectually correct to worry or not! One reason for this intellectualized approach is probably society’s need for foundations for decision-making. Should embryo research be allowed and, if so, in what forms? Decision-makers need to be able to justify their decisions by citing intellectually appropriate reasons.

Bioethicists thus interpret people’s moral concerns as if they were motivated by intuitive folk-metaphysical thinking. This thinking may not always be perfectly logical or scientifically informed, but it should be possible to straighten out. That would satisfy society’s need for intellectually well-founded decisions that “take people’s concerns seriously.”

The problem with this way of taking people’s concerns seriously is that their worries are intellectualized. Do we worry on the basis of logic? Are children afraid of ghosts because they cherish a metaphysical principle that assigns a dangerous status to ghosts? Can their fear be dealt with by demonstrating that their metaphysical principle is untenable? Or by pointing out to them that there is no evidence of the existence of beings with the horrible characteristics their principle assigns to “ghosts”?

Why are many people hesitant about research with human embryos? I have no definitive answer, but doubt that it is due to some folk-metaphysical doctrines about the status of the embryo. Perhaps it is more related to the fact that the embryo is associated with so much that is significant to us. It is associated with pregnancy, birth, children, family life, life and death. The connection to these intimate aspects of life means that we, without necessarily having the view that embryo research is wrong, can feel hesitant.

The question is: How do we take such moral hesitation seriously? How do we reject delusions and calm ourselves down when the intellect starts to present us with horrible scenarios that certainly would motivate anxiety? How do we do it without smoothing things over or acting like faultfinders?

I believe that bioethics should above all avoid intellectualizing people’s moral concerns; stop representing moral hesitation as the outcome of metaphysical reasoning. If people do not worry because of folk-metaphysical doctrines about the embryo, then we have no reason to debate the status of the embryo. Instead, we should begin by asking ourselves: Where does our hesitation come from?

That would mean taking ourselves seriously.

Pär Segerdahl

This post in Swedish

We like real-life ethics : www.ethicsblog.crb.uu.se

 

Moral panic in the intellect

Moral panic develops intellectually. It is our thoughts that are racing. Certain mental images make such a deep impression on us that we take them for Reality, for Truth, for Facts. Do not believe that the intellect is cold and objective. It can boil over with agitated thoughts.

This is evident in bioethics, where many issues are filled with anguish. Research information about cloned animals, about new techniques for editing the genome, or about embryonic stem cell research evokes scary images of subversive forms of research that threaten human morality. The panic requires a sensitive intellect. There, the images of the research acquire such dimensions that they no longer fit into ordinary life. The images take over the intellect as the metaphysical horizon of Truth. Commonplace remarks that could calm down the agitated intellect appear to it as naive.

A science news item in National Geographic occasioned these musings. It concerns the first attempt in the United States to genetically edit human embryos. Using the so-called CRISPR-Cas9 technique, the researchers removed a mutation associated with a common inherited heart disease. After the successful editing, the embryos were destroyed. (You can find the scientific article reporting the research in Nature.)

Reading such research information, you might feel anxiety; anxiety that soon takes possession of your intellect: What will they do next? Develop “better” humans who look down on us as a lower species? Can we permit science to change human nature? NO, we must immediately introduce new legislation that bans all genetic editing of human embryos!

If the intellect can boil over with such agitated thoughts, and if moral panic legislation is imprudent, then I believe that bioethics needs to develop its therapeutic skills. Some bioethical issues need to be treated as affections of the intellect. Bioethical anxiety often arises, I believe, when research communication presents science as the metaphysical horizon of truth, instead of giving science an ordinary human horizon.

It may seem as if I took a stand for science by representing critics as blinded by moral panic. That is not the case, for the other side of moral panic is megalomania. Hyped notions of great breakthroughs and miraculous cures can drive entire research fields. Mental images that worry most people stimulate other personalities. Perhaps Paolo Macchiarini was such a personality, and perhaps he was promoted by a scientific culture of wildly inflated expectations of research and its heroes.

We need a therapeutic bioethics that can calm down the easily agitated intellect.

Pär Segerdahl

This post in Swedish

We think about bioethics : www.ethicsblog.crb.uu.se

Are you a person or an animal?

The question in the title may sound like an insult: not like a question, but like something one might say in anger to reprimand someone who misbehaves.

In philosophy, the question is asked seriously, without intention of insulting. A philosopher who misbehaves at a party and is reprimanded by another guest – “Are you a person or an animal?” – could answer, shamelessly: Eh, I really don’t know, philosophers have contemplated that question for hundreds of years.

What then is the philosophical question? It is usually described as the problem of personal identity. What are we, essentially? What constitutes “me”? What holds the self together? When does it arise and when does it disappear?

According to proponents of a psychological view, we (human beings) are persons with certain psychological capacities, such as self-awareness. That psychology holds the self together. If an unusual disease made my body deteriorate, but doctors managed to transplant my mental contents (self-awareness, memories, etc.) into another body, then I would survive in the other body. According to proponents of the rival, animalist view, however, we are animals with a certain biology. An animalist would probably deny that I could survive in a foreign body.

The difference between the two views can be illustrated by their consequences for a bioethical question: Is it permissible to harvest organs from brain-dead bodies to use as transplants? If we are essentially persons with self-awareness, then we cease to exist when the brain dies. Then it should be permissible to harvest organs; it would not violate personal autonomy. If we are animals with a certain biology, however, harvesting organs may appear as using citizens as mere means in healthcare.

In an article in Ethics, Medicine and Public Health, Elisabeth Furberg at CRB questions these views on identity. She argues that both views are anthropocentric. This is easy to see when it comes to the view that we are essentially persons. The psychological view exaggerates the importance of certain supposedly unique human psychological capacities (such as the capacity for a first-person perspective), and underestimates the psychological capacities of non-human animals. According to Furberg, however, even the animalist view is anthropocentric. How!?

How can an outlook where we are essentially animals be anthropocentric? Well, because the very concept “animal” is anthropocentric, Furberg argues. It originated as a contrast to the concept “human.” It distinguishes us (morally advanced beings) from them (less worthy creatures of nature). The animalist view is unaware of its own anthropocentric bias, which comes with the concept “animal.”

At the end of the article, Furberg proposes a less anthropocentric view on identity: a hybrid view that combines the psychological and animalist answers to the question in the title. The hybrid view is open to the possibility that animals other than humans, such as chimpanzees, can also have psychological identity. If I understand Furberg correctly, she would say that many animals are just animals. A snail simply is the snail: it has no psychological identity that could survive in another snail body. Nevertheless, a number of animals (not just humans) have an identity that goes beyond their animality. Chimpanzees and humans, and probably some other species, are such animals.

I cannot resist mentioning that I have written an article about similar issues: Being human when we are animals. There, I do not purify a metaphysical question from an insult, but investigate the insult, the reprimand.

Pär Segerdahl

Furberg, E. 2017. “Are we persons or animals? Exposing an anthropocentric bias and suggesting a hybrid view.” Ethics, Medicine and Public Health (3): 279-287

This post in Swedish

We philosophize when we do not know how to think

Philosophers are also called thinkers. We easily believe that philosophers are specialists in thinking, as linguists are specialists in speech and writing. If someone knows how to think, it must be a philosopher, we think.

I believe we are wrong to think that philosophers know how to think. Rather, they are people who know when we do not know how to think. They acknowledge (for all of us) when we do not know how to think (although we thought we knew). Such confessions probably need to be made more often!

If you think you know how to think about immigration, or about stem cell research, then you have an opinion. The opinion may be substantiated, but it hardly makes you a thinker, but rather a molder of public opinion. Since you already know how to think, you do not have to think. You only need to keep on talking, according to what you believe you know.

“I need more time to think about it; I don’t know how I should think.” We fail to notice that there is a way of thinking that begins the very moment we do not know how to think. At that moment, the philosophical dimension of thinking opens up.

When you know how to think, you no longer think. Not in the philosophical sense. If you meet an argumentative chatterbox, or a schoolmasterly specialist in thinking, you can be sure it is not a philosopher.

Pär Segerdahl

This post in Swedish

The Ethics Blog - Thinking about thinking

Nudging people in the right direction

Behavioral scientists study how environments can be designed so that people are pushed towards better decisions. By placing the vegetables first at the buffet, people may choose more vegetables than they otherwise would. They choose for themselves, but the environment is designed to support the “right” choice.

Nudging people to behave more rationally may, of course, seem self-contradictory, perhaps even unethical. Shouldn’t a rational person be allowed to make completely autonomous decisions, instead of being pushed in the “right” direction by the placement of salad bowls? Influencing people by designing their environments might support better habits, but it insults Rationality!

As a philosopher, I do, of course, appreciate independent thinking. However, I do not demand that every daily decision should be the outcome of reasoning. On the contrary, the majority of decisions should not require too much arguing with oneself. It saves time and energy for matters that deserve contemplation. A nudge from a salad bowl at the right place supports my independent thinking.

Linnea Wickström Östervall, a former researcher at CRB, has tried to nudge people towards a more restrained use of antibiotics. It is important to reduce antibiotic use, because overuse drives antibiotic resistance: a major challenge to manage.

In her study, she embedded a brief reminder of antibiotic resistance in the questionnaire that patients answer before visiting the doctor. The reminder thus reached not only the patients, but also the doctors who went through the questionnaire with them. The effect was clear at the clinic level: in clinics where the reminder was included in the questionnaire, antibiotic use decreased by 12.6 percent compared to the control clinics.

If you want to know more about the study, read Linnea’s article in the Journal of Economic Behavior & Organization, where the interesting results are presented in detail. For example, the nudge appears to affect the interaction between doctors and patients, rather than the individual patients.

Can you arrange your everyday environment so that you live wisely without making rational choices?

Pär Segerdahl

Wickström Östervall, L. 2017. “Nudging to prudence? The effect of reminders on antibiotics prescriptions.” Journal of Economic Behavior & Organization 135: 39-52.

This post in Swedish

Approaching future issues - the Ethics Blog

When “neuro” met “ethics”

Two short words increasingly often appear in combination with names of professional fields and scientific disciplines: neuro and ethics. Here are some examples: neuromusicology, neurolaw, neuropedagogy; bioethics, nursing ethics, business ethics.

Neuro… typically signifies that neuroscience sheds light on the subject matter of the discipline with which it combines. It can illuminate what happens in the brain when we listen to music (neuromusicology). What happens in the brain when witnesses recall events or when judges evaluate the evidence (neurolaw). What happens in children’s brains when they study mathematics (neuropedagogy).

…ethics (sometimes, ethics of…) typically signifies that the discipline it combines with gives rise to its own ethical problems, requiring ethical reflection and unique ethical guidelines. Even war is said to require its own ethics of war!

In the 1970s, these two words, neuro and ethics, finally met and formed neuroethics. The result is an ambiguous meeting between two short but very expansive words. Which of the two words made the advance? Where is the emphasis? What sheds light on what?

At first, ethics got the emphasis. Neuroethics was, simply, the ethics of neuroscience, just as nursing ethics is the ethics of nursing. Soon, however, neuro demonstrated its expansive power. Today, neuroethics is not only the “ethics of neuroscience,” but also the “neuroscience of ethics”: neuroscience can illuminate what happens in the brain when we face ethical dilemmas. The emphasis thus shifts back and forth, between neuroETHICS and NEUROethics.

The advances of these two words, and their final meeting in neuroethics, reflect, of course, the expansive power of neuroscience and of ethics. Why are these research areas so expansive? Partly because the brain is involved in everything we do, and because everything we do can give rise to ethical issues. The meeting between neuro… and …ethics was almost inevitable.

What did the meeting result in? A single discipline, neuroethics? Or two distinct disciplines, NEUROethics and neuroETHICS, which just happen to be spelt the same way but should be kept separate?

As far as I understand, the aim is to keep neuroethics together as one interdisciplinary field, with a two-way dialogue between an “ethics of neuroscience” and a “neuroscience of ethics.” This seems wise. It would be difficult to keep apart what was almost predetermined to meet and combine. NEUROethics would immediately try to shed its neuroscientific light on neuroETHICS. And neuroETHICS would be just as quick to develop ethical views on NEUROethics. The wisest option appears to be dialogue, accepting a meeting that seems inevitable.

An interesting article in Bioethics, authored by Eric Racine together with, among others, Michele Farisco at CRB, occasions my thoughts in this post. The subject matter of the article is neuroethics: the neuroscience of ethics. Neuroethics is associated with rather grandiose claims. It has been claimed that neuroscience can support a better theory of ethics. That it can provide the basis for a universal ethical theory that transcends political and cultural divides. That it can develop a brain-based ethics. That it can reveal the mechanisms underlying moral judgments. Perhaps neuroscience will soon solve moral dilemmas and transform ethics!

These pretensions have stimulated careless over-interpretation of neuroscientific experiments. They have also provoked rash dismissals of neuroethics and its relevance to ethics. The purpose of the article is to support a more moderate and deliberate approach, through a number of methodological guideposts for the neuroscience of ethics. These include conceptual and normative transparency, scientific validity, interdisciplinary methods, and balanced interpretation of results.

In view of this critical perspective on hyped neuroscientific claims, one could define the article as a neuroETHICS article on NEUROethics. Following the linguistic pattern that I described above, the article is an example of neuroethics-ethics. No, this will not do! We cannot use these two expansive words to specify in neurotic detail who currently happens to advance into whose field.

I choose to describe the article, simply, as a neuroethical paper on neuroethics. I want to see it as an example of the dialogue that can unite neuroethics as an interdisciplinary field.

Pär Segerdahl

Racine, E., Dubljević, V., Jox, R. J., Baertschi, B., Christensen, J. F., Farisco, M., Jotterand, F., Kahane, G., Müller, S. (2017). “Can neuroscience contribute to practical ethics? A critical review and discussion of the methodological and translational challenges of the neuroscience of ethics.” Bioethics 31: 328-337.

This post in Swedish

We transgress disciplinary borders - the Ethics Blog
