Driverless car ethics

June 20, 2018

Pär Segerdahl

Self-driving robot cars are controlled by computer programs encoding huge numbers of traffic rules. But in traffic, not everything happens smoothly according to the rules. Suddenly a child runs out into the road. Two people try to help a cyclist who has collapsed on the road. A motorist attempting a U-turn on a road that is too narrow gets stuck, blocking the traffic.

Assuming that the robots’ programs are able to categorize traffic situations through image information from the cars’ cameras, the programs must select the appropriate driving behavior for the robot cars. Should the cars override important traffic rules by, for example, steering onto the sidewalk?

It is more complicated than that. Suppose that an adult is standing on the sidewalk. Should the adult’s life be compromised to save the child? Or to save the cyclist and the two helpful persons?

The designers of self-driving cars have a difficult task. They must program the cars’ choice of driving behavior in ethically complex situations that we call unexpected, but that the engineers have to anticipate far in advance. Already at the factory, they must determine how the car model will behave in future “unexpected” traffic situations, perhaps ten years later. (I assume the software is not updated; but even updated software anticipates what we normally regard as unexpected events.)

On a societal level, attempts are now being made to agree on ethical guidelines for how future robot cars should behave in tragic traffic situations where it may not be possible to avoid injuries or fatalities entirely. A commission initiated by the German Ministry of Transport, for example, suggests that passengers of robot cars should never be sacrificed to save a larger number of lives in a traffic situation.

Who, by the way, would buy a robot car that is programmed to sacrifice its owner’s life? Who would choose such a driverless taxi? Yet, as drivers, we may be prepared to sacrifice ourselves in unexpected traffic situations. Some researchers decided to investigate the matter. You can read about their study in ScienceDaily, or read the research article in Frontiers in Behavioral Neuroscience.

The researchers used Virtual Reality (VR) technology to expose subjects to ethically difficult traffic situations. Thereafter, they studied the subjects’ choice of traffic behavior. The researchers found that the subjects were surprisingly willing to sacrifice themselves to save others. But they also took into consideration the age of potential victims and were prepared to steer onto the sidewalk to minimize the number of traffic victims. This is contrary to norms that we hold important in society, such as the idea that age discrimination should not occur and that the lives of innocent people should be protected.

In short, humans are inclined to drive their cars politically incorrectly!

Why was the study done? As far as I understand, because the current discussion about ethical guidelines does not take into account empirical data on how living drivers are inclined to drive in ethically difficult traffic situations. The robot cars will make ethical decisions that can leave their owners dissatisfied with their cars; morally dissatisfied!

The researchers do not advocate that driverless cars should respond to ethically complex traffic situations as living people do. However, the discussion about driverless car ethics should take into account data on how living people are inclined to drive their cars in traffic situations where it may not be possible to avoid accidents.

Let me complement the empirical study with some philosophical reflections. What strikes me when I read about driverless car ethics is that “the unexpected” disappears as a living reality. A living driver who tries to handle a sudden traffic situation manages what very obviously is happening right now. The driverless car, on the other hand, makes decisions that tick off automatically, as predetermined as any other decision, like stopping at a red light. Driverless car ethics is just additional software that the robot car is equipped with at the factory (or when the software is updated).

What are the consequences?

A living driver who suddenly ends up in a difficult traffic situation is confronted – as I said – with what is happening right now. The driver may have to bear responsibility for his actions in this intense moment for the rest of his life. Even if the driver rationally sacrifices one life to save ten, the driver will bear the burden of this one death; dream about it, think about it. And if the driver makes a stupid decision that takes more lives than it saves, it may still be possible to become reconciled to it, because the situation was so unexpected.

This does not apply, however, to the robot car that was programmed at the factory according to guidelines from the National Road Administration. We might want to say that the robot car was preprogrammed to sacrifice our sister’s life when she stood innocently on the sidewalk. Had the car been driven by a living person, we would have been angry with the driver. But after some time, we might be able to become reconciled to the driver’s behavior. Because it was such an unexpected situation. And the driver suffers for his actions.

However, if it had been a driverless car that worked perfectly according to the manufacturer’s programs and the authorities’ recommendations, then we might see it as a scandal that the car was preprogrammed to steer onto the sidewalk, where our sister stood.

One argument for driverless cars is that, by minimizing the human factor, they can reduce the number of traffic accidents. Perhaps they can. But maybe we are less accepting of how they are programmed to save lives in ethically difficult situations. Not only are they preprogrammed so that “the unexpected” disappears as a reality; they also do not bear the responsibility that living people are forced to bear, even for their rational decisions.

Well, we will probably find ways to implement and accept the use of driverless cars. But another question still concerns me. If the present moment disappears as a living reality in the ethics software of driverless cars, has it not already disappeared in the ethics that prescribes right and wrong for us living people?

Pär Segerdahl

This post in Swedish

We like real-life ethics

Can a robot learn to speak?

May 29, 2018

Pär Segerdahl

There are self-modifying computer programs that “learn” from success and failure. Chess-playing computers, for example, become better through repeated games against humans.

Could a similar robot also learn to speak? If the robot gets the same input as a child gets when it learns to speak, should it not be possible in principle?

Notice how the question zigzags between child and machine. We say that the robot learns. We say that the child gets input. We speak of the robot as if it were a child. We speak of the child as if it were a robot. Finally, we take this linguistic zigzagging seriously as a fascinating question, perhaps even a great research task.

An AI expert and prospective father who dreamed of this great research task took the following ambitious measures. He equipped his whole house with cameras and microphones, to document all parent-child interactions during the child’s first years. Why? He wanted to know exactly what kind of linguistic input a child gets when it learns to speak. At a later stage, he might be able to give a self-modifying robot the same input and test if it also learns to speak.

How did the project turn out? The personal experience of raising the child led the AI expert to question the whole project of teaching a robot to speak. How could a personal experience lead to the questioning of a seemingly serious scientific project?

Here, I could start babbling about how amiably social children are compared to cold machines. How they learn in close relationships with their parents. How they curiously and joyfully take the initiative, rather than calculatingly await input.

The problem is that such babbling on my part would make it seem as if the AI expert was simply wrong about robots and children. That he did not know the facts, but now is better informed. It is not that simple. For the idea behind the project presupposed unnoticed linguistic zigzagging. Already in asking the question, the boundaries between robots and children are blurred. Already in the question, we have half answered it!

We cannot be content with responding to the question in the headline with a simple, “No, it cannot.” We must reject the question as nonsense. Deceitful zigzagging creates the illusion that we are dealing with a serious question, worthy of scientific study.

This does not exclude, however, that computational linguistics increasingly uses self-modifying programs, and with great success. But that is another question.

Pär Segerdahl

Beard, Alex. How babies learn – and why robots can’t compete. The Guardian, 3 April 2018

This post in Swedish

We like critical thinking

Bioethics dissolving misdirected worldliness

May 16, 2018

Pär Segerdahl

When we feel low, we often make the mistake of scanning the external environment to find the cause of our state of mind out there. One could speak of the depressed person’s misdirected worldliness. We are convinced that something in the world makes us depressed. We rule out the possibility that we ourselves play a role in the drama: “I am depressed because he/she/they/society is so damned…”

The depressed person naturally believes that the way to happiness lies in eliminating the external cause of the depression: “If I just could be spared from dealing with him/her/them/society, I would feel a lot better.” That is what the depressed person’s worldliness looks like. We are unable to turn around and see (and treat) the emergence of the problem within ourselves.

Xenophobia might be a manifestation of the depressed person’s misunderstanding of life. We could speak of the insecure person’s misdirected worldliness. One scans the external environment to find the cause of one’s insecurity in the world. When one “finds” it, one apparently “proves” it beyond doubt. The moment one thinks about immigration, one is attacked by strong feelings of insecurity: no doubt, that’s the cause! The alternative possibility that one carries the insecurity within oneself is excluded: “I’m suffering because society is becoming increasingly insecure.”

Finally, one makes politics of the difficulty of scrutinizing oneself. One wants to eliminate the external cause of the insecurity one feels: “If we stop immigration, society will become safer and I will feel more secure!” That is what the insecure person’s misdirected worldliness looks like.

You might be surprised that even anti-xenophobia can exhibit the depressed person’s misunderstanding of life. If we lack a deep understanding of how xenophobia can arise within a human being, we will believe that there are evil people who in their stupidity spread fake statistics about increasing social insecurity. These groups must be eliminated, we think: “If there were no xenophobic groups in society, then I would feel much better.” That is what the good activist’s worldliness can look like.

Thus we go on and on, in our misdirected worldliness, because we fail to see our own role in the drama. We make politics of our inner states, which flood the world as if they were facts that should appear in the statistics. (Therefore, we see them in the statistics.)

Now you may be surprised again, because even bioethics can exhibit the depressed person’s misunderstanding of life. I am thinking of the tendency to make ethics an institution that maintains moral order in society. Certainly, biomedical research needs regulation, but sometimes regulation serves a misdirected worldliness.

A person who feels moral unease towards certain forms of research may think, “If researchers did not kill human embryos, I would feel a lot better.” Should we make policy of this internal state by banning embryonic stem cell research? Or would that be a misdirected projection of an inner state onto the world?

I cannot answer the question in this post; it requires more attention. All I dare to say is that we, more often than we think, are like depressed people who seek the cause of our inner states in the world. Just being able to ask if we manifest the depressed person’s misunderstanding of life is radical enough.

I imagine a bioethics that can ask the self-searching question and seek practical ways to handle it within ourselves. So that our inner states do not flood the world.

Pär Segerdahl

This post in Swedish

We think about bioethics

Read this interview with Kathinka Evers!

April 26, 2018

Through philosophical analysis and development of concepts, Uppsala University contributes significantly to the European Flagship, the Human Brain Project. New ways of thinking about the brain and about consciousness are suggested, which take us beyond oppositions between consciousness and unconsciousness, and between consciousness and matter.

Do you want to know more? Read the fascinating interview with Kathinka Evers: A continuum of consciousness: The Intrinsic Consciousness Theory

Kathinka Evers at CRB in Uppsala leads the work on neuroethics and neurophilosophy in the Human Brain Project.

Pär Segerdahl

We recommend readings - the Ethics Blog

When fear of obscurity produces obscurity

April 25, 2018

Pär Segerdahl

Obscurely written texts make us angry. First, we get annoyed because we do not understand. Then comes the fear, the fear of being duped by a cheat. Our fear is so strong that we do not dare to acknowledge it. Instead, we seriously suspect that there are madcaps who for some inscrutable reason write tons of nonsense. We had better take shelter in the house of reason!

Certainly, there are chatterboxes who talk nonsense. My own fear in this post is that fear of the obscure will make us shallow. Insightfulness easily appears as obscurity. It takes time to understand insightful texts. We often reread them; we age with them. If we do not give ourselves that time, but demand immediate gratification, we might reject insightful texts as obscure and perhaps even dangerous.

There is an ideal of eradicating all obscurity: Write verbally explicitly, without any holes in the chain of reasons! The works of great thinkers are often scrutinized according to this ideal: Are there overlooked holes in their arguments through which truth might slip out? Can the holes be repaired, or will the ship sink with its cargo of truth claims?

A problem with this ideal of reason is that it can undermine our own literacy. The ideal can make even plain texts seem obscure, which reinforces the fear of being duped by cheats; hordes of them. Suddenly, one wants to correct all of humanity, which apparently has not yet learned to be reasonable.

The ideal of reason becomes a demand for a small circle of intellectual ascetics who write intricately argued texts to each other: texts that, however, become incomprehensible to the rest of humanity. Like impregnable walls, protecting the house of reason.

Fear of obscurity risks making us both shallow and obscure. Therefore, take care of your fear! That is also a way of being reasonable. Perhaps a more insightful way.

Pär Segerdahl

This post in Swedish

Minding our language - the Ethics Blog

To become aware of something

March 29, 2018

Pär Segerdahl

The phenomenon I want to highlight in this post has many descriptions. Here are a few of them: to become conscious of something; to notice; to observe; to realize; to see; to become aware of something.

We all experience it. Every now and then, what these descriptions indicate occurs in us. We realize something; we become aware of something. It can be elementary, such as being struck (another similar description) by how blue the sky is. It may be painful, such as realizing how self-absorbed we are or how ungenerously we treat someone.

What is the point of living if we do not occasionally become aware of living?

Insights can also be philosophical, such as becoming aware of what it means to forgive someone. We cannot order someone to forgive, just as we cannot order someone to be happy. The words “I forgive you” may resemble an act of volition that a person can be ordered to perform; but only deceitfully empty words will obey the order. Genuine forgiveness comes spontaneously, or not at all. I say, “I forgive you,” when I notice, with relief, that I already have forgiven you; that I no longer harbor unforgiving thoughts about you, etc.

What would human life be without these insights into how we live? What would ethics be?

Just as forgiveness cannot be enforced, awareness cannot be demanded. “Realize this!” is not an order, but sheer desperation. Awareness is as shy as forgiveness. It comes spontaneously, or not at all. As soon as a certain form of awareness is required, enforced, or presumed, it contracts to a mere norm of thought. That is how communities of ideas arise, or churches, or philosophical schools: through narrowing consciousness. Loyal members will confirm each other while they deride “the others” who supposedly lack insight and must be rejected.

Considering how awareness does not obey orders, it can be seen as radical, as revolutionary. It takes us beyond all norms of thought and all communities of ideas! Suddenly we realize something that surpasses everything we thought we knew. However, if we try to force our insights onto others by proving them as facts, we reduce our spacious awareness to narrow binding norms. Our radical freedom is unnoticeable on the surface; we cannot display it without losing it.

If awareness is free and impossible to catch as a fact, do we have to remain quiet about these shy insights? No, philosophical work aims precisely at attracting shy insights into the light. By using fresh examples, considerations, similes and striking words, we try to entice what does not obey orders. This is the secret of a genuine philosophical investigation. It does not prove truths, but attracts truths. Whether the investigation succeeds, each one must assess for him- or herself. In philosophy, we cannot say, “Elementary, my dear Watson”. Nevertheless, many professional thinkers dream of saying it. They dream of the pure authority of binding norms of thought. Faith in reason is sheer desperation.

This post may seem to contain quasi-oracular pronouncements about forgiveness (and other matters). However, the intention is not that you should believe me or use the post as a norm of thought. Ultimately, my statements are queries from human to human: Do you also see the features I see in forgiveness and awareness? Otherwise, we continue the investigation together. For in philosophy we can never enforce the truth, we can only attract it. It comes spontaneously, or not at all.

Pär Segerdahl

This post in Swedish

We challenge habits of thought: the Ethics Blog

Intellectual habits prevent self-examination

March 21, 2018

Pär Segerdahl

The intellect is worldly-minded and extrovert. It is busy with the facts of the world. Even when it turns inwards, towards our own consciousness, of which it is a part, the intellect interprets consciousness as another object in the world.

The intellect can never become aware of itself. It can only expand towards something other than itself.

The Chinese philosopher Confucius gave a wonderful image of a self-examining person: “When the archer misses the center of the target, he turns around and seeks the cause of his failure within himself.”

The intellect is like an archer who cannot turn around. If the intellect were to examine itself, it would interpret itself as another target in the world to hit with its pointed arrows! The intellect is incapable of wisdom and knows nothing about self-knowledge. The intellect can only shoot projectiles at the world; it can only expand and conquer.

I am writing philosophy. That means I always turn around to seek the cause of our failures within ourselves. I rarely shoot arrows, and certainly not at external targets.

At the same time, this inner work meets obstacles in academic habits and ideals, which are largely intellectual and aimed at the facts of the world. For example, I cannot examine our ways of thinking without citing literature confirming that these ways of thinking actually occur in the world (in authors x, y, and z, for example).

Such referencing transforms ways of thinking into worldly targets at which I am supposed to shoot. But I wanted to turn around and seek the cause of our failures within ourselves!

What do we truly need today? Something else than just more facts! We need to learn the art of turning around. We need to learn to seek the cause of our failures within ourselves. The persistent shooting of projectiles at the world has become humanity’s most common disease – virtually the human condition.

Do you think that the intellect can shoot itself out of the crises that its own trigger-happiness causes? Do you think it can expand out of the problems that its own expansions produce?

If Elon Musk takes us to Mars, surely he will solve all our problems!

Pär Segerdahl

This post in Swedish

The Ethics Blog - Thinking about thinking
