Dangers of moral words

December 11, 2018

The philosopher Bernard Williams distinguished between thick ethical concepts such as “brave” and “brutal,” which have both descriptive and evaluative content, and thin ethical concepts such as “right” and “wrong,” which are purely evaluative. “Murder” and “exploitation” are thick ethical concepts that sometimes play a central role in ethical debate. They have descriptive content combined with a negative evaluation: murder and exploitation are wrong.

This duality of thick moral words, their descriptive/normative Janus face, makes them a compelling part of the vocabulary of most, if not all, ideological movements. If you oppose X, and can demonstrate that X, in fact, involves murder or exploitation (descriptive aspect), then you have immediately demonstrated that X must be opposed (normative aspect). Thick ethical concepts are often used in conflictual situations to legitimize violent actions against people who are described as scheming, murderous, exploitative, and much else. Since the words are taken to describe reality as it is, such bad individuals must be watched over and, if necessary, acted against.

Thick moral words thus easily lend themselves to functioning as ideological firearms. Their descriptive aspect allows taking aim. Their evaluative aspect says, “Fire!” I want to mention three further dangers of thick ethical concepts.

Dogmatism. The first is that it is difficult to raise questions about their applicability, since it can appear as if you questioned the evaluative component. Let us say that you raise the question of whether embryo destruction really constitutes murder. In the eyes of those who take this description for reality, you appear to be a treacherous person who shrewdly argues that murder might be right! Simply raising the question, no matter how open-mindedly you do it, places you in the firing line. Your very open-mindedness speaks against you: “Murder is not something to be open-minded about!”

Righteousness. A second troublesome feature is that thick ethical concepts produce instant goodness in any ideological movement. Any ideology is on the right side, regardless of which side it is on, since it fights for what its moral vocabulary associates with the good, and fights against what its vocabulary associates with the bad. Any ideology has the right and the duty to act resolutely against what its dualistic vocabulary picks out as impermissible features of reality. – Which side for peace are you on?

Suffering. A third problem is that thick moral words produce suffering in the form of gnawing suspicions and fears. Since we are not omniscient, there is much we do not know, for example, about embryonic stem cell research. Thick ethical concepts here tend to appear in our heads as stand-ins for reality. They appear in the form of an inner voice that tells us what stem cell research is. This is not a purely descriptive “is,” but a double-edged one, for what the voice in the head says the research is can be a nightmarish, “It is murder.” Since we are ignorant of much, but not of our anxiety, we cannot shake off the worrying double-edged concepts that spin in the head. They seem validated by the gnawing anxiety they produce, and we suffer without end, caught in a whirlpool of thick descriptive/normative moral language.

In pointing out dangers of thick moral words, I am not questioning their descriptive or evaluative content. Murder is a reality and it is a serious crime; the same is true of exploitation. I am just pointing out that the dual nature of thick moral words can turn our heads. Moral language can make us violent, dogmatic, righteous, and anxious about issues that perhaps exist mainly in our descriptions of reality.

I think most of us have fallen into such dark pits.

Pär Segerdahl

This post in Swedish

The Ethics Blog - Thinking about thinking


Contemplative conversations

November 19, 2018

When we face new sensitive and worrying issues, there is an instinctive reaction: this must be debated! But is debate always the right way, if we want to take human concerns seriously?

That some are worried about new research and technology is a fact. That others are not worried is also a fact. Suppose these people handle their differences by debating with each other. What happens?

What happens is that they leave the actual world, which varies as much as people are different, and end up in a universal world of rational reasons. Those who worry must argue for their concerns: All sensible people should feel worried! Those who are not worried must provide weighty counter-arguments: No sensible person should feel worried!

Debate thus creates an either/or conflict from what was only a difference. Polarization increases the fear, which amplifies the desire to be absolutely right. Everyone wants to own the uniquely compelling reason that everyone should obey. But since we are different, the debate becomes a vertiginous hall of mirrors. It multiplies exaggerated world images in which we lose ourselves and each other.

The worry itself, as trembling human fact, is forgotten. The only thing that engages us is the weighty reason for, or against, being worried. The only thing that interests us is what everyone should feel. Is that taking human concerns seriously? Is it taking ourselves seriously?

If a child is worried, we do not ask the child to argue for its worries, and we do not comfort the child by refuting its worries. We take care of the child; we take care of its worries, as compassionate parents.

I play with the idea that we and our societies would be in better shape if we more often avoided the absolute world of reasons. Through its universality, it appears, of course, like a utopia of peace and unity among rational beings. In fact, it often creates polarization and perplexes us with its exaggerated images of the world. Arguing for the right cause in debate is perhaps not always as noble as we take it to be.

We are, more often than we think, like children. That is, we are human. Therefore, we need, more often than we think, to take care of ourselves. As compassionate parents. That is another instinct, which could characterize conversations about sensitive issues.

We need to take care of ourselves. But how? What is the alternative to debate? For want of better words: contemplative conversations. Or, if you want: considerate conversations. Rather than polarizing, such an open spirit welcomes us all, with our actual differences.

Perhaps that is how we become adults with regard to the task of living well with each other. By tenderly taking care of ourselves as children.

Pär Segerdahl

This post in Swedish



Speaking to 5-year-olds about research

October 23, 2018

How should we talk to children about research? And how should we go about recruiting them to studies? For children to become research participants, their parents must consent. Regulation states that children should also give assent themselves, to as great an extent as possible. Our ethics committees require us to provide them with age-appropriate information. Health care providers and researchers think the system works well and is ethically “correct.”

From recruiting numerous children for various research projects, I have some thoughts on the subject. I have put together countless information letters for children of various ages, all reviewed and approved by the ethics committee. But what, exactly, is “age-appropriate information”? With support from developmental psychology and some paediatric research, the ambitious paediatric researcher can get it right. On a group level, that is. We can estimate what the average kid of a certain age group understands. But how appropriate is the “age-appropriate” information for individual children? In his poem Till eftertanke, Søren Kierkegaard wrote: “To help someone, I must indeed understand more than they do, but first and foremost understand what they understand.”

Today, I value a slow and calm recruiting process. I talk to the children about what research is, most 5-year-olds actually have an idea. We speak about what the project is about, and what we want them to contribute. Perhaps we draw or look at pictures. I tell them that it is absolutely fine to change your mind and leave at any time, and that no one will be angry or upset with them if they do. And then we talk some more… Lastly, and most importantly, I ask the child to tell me what we talked about, and what we agreed upon. It takes some time to understand their understanding. Give yourself that time.

Not until I understand that the child has understood do I ask them to sign the consent form.

Sara Frygner-Holm

This post in Swedish



Supporting clinicians to trust themselves

October 3, 2018

Suppose that you want to learn to speak a language, but the course is overloaded with grammatical terminology. During the lessons, you hardly hear any of the words that belong to the language you want to learn. They drown in technical, grammatical terms. It is as if you had come to a course on general linguistic theory, not German.

When clinicians encounter healthcare ethics as a subject of education, they may have similar experiences. As adult humans, they can already sense when everything is alright in a situation, or when there is a problem: when attention is needed and action must be taken. (We do it every day.) However, to handle the specific challenges that may arise in healthcare, clinicians may need support to further develop this already existing human ability.

Unfortunately, healthcare ethics is typically not presented as development of abilities we already have as human beings. Instead, it is presented as a new subject. Being ethical is presented as having the specific knowledge of this subject. Ethics then seems to be about reasoning in terms of abstract ethical concepts and principles. It is as if you had come to a course on general moral theory, not healthcare ethics. And since most of us do not know a thing about moral theory, we feel ethically stupid and powerless, and lose our self-confidence.

However, just as you don’t need linguistic theory to speak a language, you don’t need moral theory to function ethically. Rather, it is the other way around. It is because we already speak and function ethically that there can be such intellectual activities as grammar and moral theory. Can healthcare ethics be taught without putting the cart before the horse?

A new (free to download) book discusses the issue: Rethinking Health Care Ethics. The book is a lucid critique of healthcare ethics as a specific subject; a critique that naturally leads into constructive suggestions for an alternative pedagogy. The book should be of great interest to teachers in healthcare ethics, to ethicists, and to anyone who finds that ethics often is presented in ways that make us estranged from ourselves.

What most impresses me in this book is its trust in the human. The foundation of ethics is in the human self, not in moral theory. Any adult human already carries ethics in the self, without verbalizing it as specific ethical concepts and principles.

Certainly, clinicians need education in healthcare ethics. But what is specific in the teaching is the unique ethical challenges that may arise in healthcare. Ethics itself is already in place, in the living humans who are entering healthcare as a profession.

Ethics should not be imposed, then, as if it were a new subject. It rather needs support to grow in humans, and to mature for the specific challenges that arise in healthcare.

This trust in the human is unusual. Distrust, feeding the demand for control, is so much more common.

Pär Segerdahl

Scher, S. & Kozlowska, K. 2018. Rethinking Health Care Ethics. Palgrave

This post in Swedish



Philosophy in responsible research and innovation

August 22, 2018

The honorable discipline philosophy is hardly anything we associate with groundbreaking research and innovation. Perhaps it is time we began to see a connection.

To begin with, we can let go of the image of philosophy as an honorable discipline. Instead, let us talk about the spirit of philosophy. People who think for themselves, as philosophers do, rarely find themselves at home within the narrow bounds of disciplines and fields. Not even if they are called philosophical. On the contrary, if such a person encounters boundaries that restrict her thought, she investigates the boundaries. And removes them, if necessary.

Forget the reverent representation of philosophy as an honorable discipline.

The spirit of philosophy is to avoid discipline, submission, tradition and all forms of dependence. Someone who functions as a loyal representative of a philosophical school is hardly a genuine thinker. A philosopher is someone who, in a spirit of absolute independence, questions everything that makes a pretense of being true, right and correct. Therefore, it has been said that one cannot learn philosophy, only to philosophize. As soon as a philosophy crystallizes, the philosophical spirit awakens and investigates the boundaries of what usually turns out to be a fad that attracts insecure intellects who shun independent thinking. No system of thought restricts a freely investigating thinker. Especially not the philosophy that is in fashion.

How does this spirit of philosophy connect to research and innovation? The connection I see is different than you probably guess. It is not about boosting the development by removing all boundaries, but about taking responsibility for the development. Philosophical thinking does not resemble an overheated research field’s fast flow of ideas, or an entrepreneur’s grandiose visions for the future. On the contrary, a philosopher takes a step back to calmly investigate the flow of ideas and visions.

Philosophy’s freedom is basically a responsibility.

Responsible Research and Innovation has become an important political theme for the European Commission. This responsibility is understood as an interactive process that engages social actors, researchers and innovators. Together, they are supposed to work towards ethically permissible research activities and products. This presupposes addressing also underlying societal visions, norms and priorities.

For this to work, however, separate actors cannot propagate separate interests. You need to take a step back and make yourself independent of your own special interests. You need to make yourself independent of yourself! Reflect more open-mindedly than you were disciplined to function, and see beyond the bounds of your fragmentary little field (and self). This spacious spirit of philosophy needs to be awakened: the freedom of thought that is basically the responsibility of thought.

Concrete examples of what this means are given in the journal, Neuroethics. In an article, Arleen Salles, Kathinka Evers and Michele Farisco describe the role that philosophical reflection currently plays in the European Flagship, the Human Brain Project. Here, philosophy and neuroethics are no mere appendages of neuroscientific research. On the contrary, by reflecting on central concepts in the project, philosophers contribute to the overall self-understanding in the project. Not by imposing philosophy as a special interest, or as a competing discipline with its own concepts, but by open-mindedly reflecting on neuroscientific concepts, clarifying the questions they give rise to.

The authors describe three areas where philosophy contributes within the Human Brain Project, precisely through awakening the spirit of philosophy. First, conceptual questions about connections between the brain and human identity. Secondly, conceptual questions about connections between the brain and consciousness; and between consciousness and unconsciousness. Thirdly, conceptual questions about links between neuroscientific research and political initiatives, such as poverty reduction.

Let us drop the image of philosophy as a discipline. For we need the spirit of philosophy.

Pär Segerdahl

Salles, A., Evers, K. & Farisco, M. Neuroethics (2018). https://doi.org/10.1007/s12152-018-9372-9

(By the way, anyone can philosophize. If you have the spirit, you are a philosopher. A demanding education in philosophy as a separate discipline can actually be an obstacle that you have to overcome.)

This post in Swedish



Intellectual asceticism

August 7, 2018

We dismiss the magician’s claim to be in touch with the spirit world. We dismiss the priest’s claim to be in touch with the divine. We do not believe in supernatural contact with a purer world beyond this one!

Nevertheless, similar claims permeate our enlightened rationalist tradition. Even philosophers promised contact with a purer sphere. The difference is that they described the pure sphere in intellectual terms. They promised control of “concepts,” “categories,” “principles” and so on. They lived, like monks and magicians, as ascetics. They sought power over life itself, but they did it through intellectual self-discipline.

If you want to think about asceticism as a trait of our philosophical tradition, you may want to take a look at an article I wrote: Intellectual asceticism and hatred of the human, the animal, and the material.

In the article, I try to show that philosophy’s infamous anthropocentrism is illusory. Philosophers never idealized the human. They idealized something much more exclusive. They idealized the ascetically purified intellect.

Pär Segerdahl

Segerdahl, P. 2018. Intellectual asceticism and hatred of the human, the animal, and the material. Nordic Wittgenstein Review 7 (1): 43-58. DOI 10.15845/nwr.v7i1.3494

This post in Swedish



Driverless car ethics

June 20, 2018

Self-driving robot cars are controlled by computer programs with vast sets of traffic rules. But in traffic, not everything happens smoothly according to the rules. Suddenly a child runs out on the road. Two people try to help a cyclist who collapsed on the road. A motorist tries to make a U-turn on a road that is too narrow and gets stuck, blocking the traffic.

Assuming that the robots’ programs are able to categorize traffic situations through image information from the cars’ cameras, the programs must select the appropriate driving behavior for the robot cars. Should the cars override important traffic rules by, for example, steering onto the sidewalk?

It is more complicated than that. Suppose that an adult is standing on the sidewalk. Should the adult’s life be compromised to save the child? Or to save the cyclist and the two helpful persons?

The designers of self-driving cars have a difficult task. They must program the cars’ choice of driving behavior in ethically complex situations that we call unexpected, but that the engineers have to anticipate far in advance. Already at the factory, they must determine how the car model will behave in future “unexpected” traffic situations. Maybe ten years later. (I assume the software is not updated, but even updated software anticipates what we normally see as unexpected events.)

At the societal level, attempts are now being made to agree on ethical guidelines for how future robot cars should behave in tragic traffic situations where it may not be possible to completely avoid injuries or fatal casualties. A commission initiated by the German Ministry for Transportation, for example, suggests that passengers of robot cars should never be sacrificed to save a larger number of lives in the traffic situation.
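To make vivid what it means for such a guideline to be “preprogrammed at the factory,” here is a minimal sketch of what a rule of this kind might look like as code. Everything in it is a hypothetical illustration: the names, the options, and the harm numbers are my assumptions, not any real car’s software or the commission’s actual text.

```python
# Hypothetical sketch of a factory-preprogrammed ethics rule.
# All names and numbers are illustrative assumptions, not real software.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    passengers_harmed: int  # expected harm to the car's own occupants
    others_harmed: int      # expected harm to people outside the car


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick a maneuver under a rule like the one described above:
    never sacrifice the car's passengers; among the remaining
    options, minimize harm to others."""
    safe_for_passengers = [m for m in options if m.passengers_harmed == 0]
    candidates = safe_for_passengers or options  # fall back if no safe option
    return min(candidates, key=lambda m: m.others_harmed)


options = [
    Maneuver("brake in lane", passengers_harmed=0, others_harmed=2),
    Maneuver("swerve to sidewalk", passengers_harmed=0, others_harmed=1),
    Maneuver("swerve into barrier", passengers_harmed=1, others_harmed=0),
]
print(choose_maneuver(options).name)  # prints "swerve to sidewalk"
```

Notice how the decision “ticks automatically”: the choice to steer onto the sidewalk was settled the moment the rule was written, years before any actual pedestrian stood there.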

Who, by the way, would buy a robot car that is programmed to sacrifice one’s life? Who would choose such a driverless taxi? Yet, as drivers we may be prepared to sacrifice ourselves in unexpected traffic situations. Some researchers decided to investigate the matter. You can read about their study in ScienceDaily, or read the research article in Frontiers in Behavioral Neuroscience.

The researchers used Virtual Reality (VR) technology to expose subjects to ethically difficult traffic situations. Thereafter, they studied the subjects’ choice of traffic behavior. The researchers found that the subjects were surprisingly willing to sacrifice themselves to save others. But they also took into consideration the age of potential victims and were prepared to steer onto the sidewalk to minimize the number of traffic victims. This is contrary to norms that we hold important in society, such as the idea that age discrimination should not occur and that the lives of innocent people should be protected.

In short, humans are inclined to drive their cars politically incorrectly!

Why was the study done? As far as I understand, because the current discussion about ethical guidelines does not take into account empirical data on how living drivers are inclined to drive their cars in ethically difficult traffic situations. The robot cars will make ethical decisions that can make the owners of the cars dissatisfied with their cars; morally dissatisfied!

The researchers do not advocate that driverless cars should respond to ethically complex traffic situations as living people do. However, the discussion about driverless car ethics should take into account data on how living people are inclined to drive their cars in traffic situations where it may not be possible to avoid accidents.

Let me complement the empirical study with some philosophical reflections. What strikes me when I read about driverless car ethics is that “the unexpected” disappears as a living reality. A living driver who tries to handle a sudden traffic situation manages what very obviously is happening right now. The driverless car, on the other hand, takes decisions that tick automatically, as predetermined as any other decision, like stopping at a red light. Driverless car ethics is just additional software that the robot car is equipped with at the factory (or when updating the software).

What are the consequences?

A living driver who suddenly ends up in a difficult traffic situation is confronted – as I said – with what is happening right now. The driver may have to bear responsibility for his actions in this intense moment during the rest of his life. Even if the driver rationally sacrifices one life to save ten, the driver will bear the burden of this one death; dream about it, think about it. And if the driver makes a stupid decision that takes more lives than it saves, it may still be possible to reconcile with it, because the situation was so unexpected.

This does not apply, however, to the robot car that was programmed at the factory according to guidelines from the National Road Administration. We might want to say that the robot car was preprogrammed to sacrifice our sister’s life, when she stood innocently on the sidewalk. Had the car been driven by a living person, we would have been angry with the driver. But after some time, we might be able to start reconciling with the driver’s behavior. Because it was such an unexpected situation. And the driver is suffering from his actions.

However, if it had been a driverless car that worked perfectly according to the manufacturer’s programs and the authorities’ recommendations, then we might see it as a scandal that the car was preprogrammed to steer onto the sidewalk, where our sister stood.

One argument for driverless cars is that, by minimizing the human factor, they can reduce the number of traffic accidents. Perhaps they can. But maybe we are less accepting as to how they are programmed to save lives in ethically difficult situations. Not only are they preprogrammed so that “the unexpected” disappears as a reality. They do not bear the responsibility that living people are forced to bear, even for their rational decisions.

Well, we will probably find ways to implement and accept the use of driverless cars. But another question still concerns me. If the present moment disappears as a living reality in the ethics software of driverless cars, has it not already disappeared in the ethics that prescribes right and wrong for us living people?

Pär Segerdahl

This post in Swedish


