Driverless car ethics

June 20, 2018

Pär Segerdahl

Self-driving robot cars are controlled by computer programs encoding vast numbers of traffic rules. But in traffic, not everything goes smoothly according to the rules. Suddenly a child runs out into the road. Two people try to help a cyclist who has collapsed on the road. A motorist attempting a U-turn on a too-narrow road gets stuck, blocking traffic.

Assuming that the robots’ programs are able to categorize traffic situations through image information from the cars’ cameras, the programs must select the appropriate driving behavior for the robot cars. Should the cars override important traffic rules by, for example, steering onto the sidewalk?

It is more complicated than that. Suppose that an adult is standing on the sidewalk. Should the adult’s life be compromised to save the child? Or to save the cyclist and the two helpful persons?

The designers of self-driving cars have a difficult task. They must program the cars' choice of driving behavior in ethically complex situations that we call unexpected, but that the engineers have to anticipate far in advance. Already at the factory, they must determine how the car model will behave in future "unexpected" traffic situations, perhaps ten years later. (I assume the software is not updated; but even updated software anticipates what we normally regard as unexpected events.)

At a societal level, efforts are now being made to agree on ethical guidelines for how future robot cars should behave in tragic traffic situations where injuries or fatalities may be impossible to avoid entirely. A commission initiated by the German Ministry of Transport, for example, suggests that the passengers of robot cars should never be sacrificed to save a larger number of lives in a traffic situation.

Who, by the way, would buy a robot car programmed to sacrifice its owner's life? Who would choose such a driverless taxi? Yet, as drivers, we may be prepared to sacrifice ourselves in unexpected traffic situations. Some researchers decided to investigate the matter. You can read about their study in ScienceDaily, or read the research article in Frontiers in Behavioral Neuroscience.

The researchers used Virtual Reality (VR) technology to expose subjects to ethically difficult traffic situations. Thereafter, they studied the subjects’ choice of traffic behavior. The researchers found that the subjects were surprisingly willing to sacrifice themselves to save others. But they also took into consideration the age of potential victims and were prepared to steer onto the sidewalk to minimize the number of traffic victims. This is contrary to norms that we hold important in society, such as the idea that age discrimination should not occur and that the lives of innocent people should be protected.

In short, humans are inclined to drive their cars politically incorrectly!

Why was the study done? As far as I understand, because the current discussion about ethical guidelines does not take into account empirical data on how living drivers are inclined to drive in ethically difficult traffic situations. The robot cars will make ethical decisions that can leave their owners dissatisfied with their cars; morally dissatisfied!

The researchers do not advocate that driverless cars should respond to ethically complex traffic situations as living people do. However, the discussion about driverless car ethics should take into account data on how living people are inclined to drive their cars in traffic situations where it may not be possible to avoid accidents.

Let me complement the empirical study with some philosophical reflections. What strikes me when I read about driverless car ethics is that “the unexpected” disappears as a living reality. A living driver who tries to handle a sudden traffic situation manages what very obviously is happening right now. The driverless car, on the other hand, takes decisions that tick automatically, as predetermined as any other decision, like stopping at a red light. Driverless car ethics is just additional software that the robot car is equipped with at the factory (or when updating the software).

What are the consequences?

A living driver who suddenly ends up in a difficult traffic situation is confronted, as I said, with what is happening right now. The driver may have to bear responsibility for his actions in this intense moment for the rest of his life. Even if the driver rationally sacrifices one life to save ten, the driver will bear the burden of this one death; dream about it, think about it. And if the driver makes a foolish decision that takes more lives than it saves, it may still be possible to come to terms with it, because the situation was so unexpected.

This does not apply, however, to the robot car that was programmed at the factory according to guidelines from the National Road Administration. We might want to say that the robot car was preprogrammed to sacrifice our sister's life, when she stood innocently on the sidewalk. Had the car been driven by a living person, we would have been angry with the driver. But after some time, we might begin to come to terms with the driver's behavior. Because it was such an unexpected situation. And the driver suffers from his actions.

However, if it had been a driverless car that worked perfectly according to the manufacturer’s programs and the authorities’ recommendations, then we might see it as a scandal that the car was preprogrammed to steer onto the sidewalk, where our sister stood.

One argument for driverless cars is that, by minimizing the human factor, they can reduce the number of traffic accidents. Perhaps they can. But we may be less accepting of how they are programmed to save lives in ethically difficult situations. Not only are they preprogrammed so that "the unexpected" disappears as a reality. They do not bear the responsibility that living people are forced to bear, even for their rational decisions.

Well, we will probably find ways to implement and accept the use of driverless cars. But another question still concerns me. If the present moment disappears as a living reality in the ethics software of driverless cars, has it not already disappeared in the ethics that prescribes right and wrong for us living people?

Pär Segerdahl

This post in Swedish

We like real-life ethics : www.ethicsblog.crb.uu.se


Can neuroscience and moral education be united?

June 4, 2018

Daniel Pallarés Domínguez

People have started to talk about neuroeducation, but what is it? Is it just another example of the fashion of adding the prefix neuro- to the social sciences, as in neuroethics, neuropolitics, neuromarketing and neurolaw?

Those who remain sceptical consider it a mistake to link neuroscience with education. However, for some authors, neuroscience can provide useful knowledge about the brain, and they see neuroeducation as a young field of study with many possibilities.

Since its birth during the Decade of the Brain (the 1990s), neuroeducation has been understood as an interdisciplinary field that studies developmental learning processes in the human brain. It is one of the most recent of the social neurosciences. Its progressive aim is to improve teaching and learning methodologies by applying the results of neuroscientific research.

Neuroscientific research already plays an important role in education. Taking into account the neural bases of human learning, neuroeducation looks not only for theoretical knowledge but also for practical implications, such as new teaching methodologies; it also reviews classical assumptions about learning and studies learning disorders. Neuroeducation offers possibilities such as early detection of special learning needs, or monitoring and comparing different teaching methodologies implemented in schools.

Although neuroeducation primarily focuses on disorders of learning, especially in mathematics and language (dyscalculia and dyslexia), can it be extended to other areas? If neuroscience can shed light on the development of ethics in the brain, can such explorations form the basis of a new form of neuroeducation, moral neuroeducation, which studies the learning or development of ethics?

Before introducing a new term (moral neuroeducation), prudence and critical discussion are needed. First, what would the goal of moral neuroeducation be? Should it address moral disorders in the brain, or merely immoral behaviors? Second, neuroscientific knowledge is used in neuroeducation to help design practices that allow more efficient teaching and better develop students' intellectual potential throughout their training. Should this also be the goal of moral neuroeducation? Should we strive for greater efficiency in teaching ethics? If so, what is the ethical competence we should try to develop in students?

It seems that we still need a critical and philosophical approach to the promising union of neuroscience and moral education. In my postdoctoral project, Neuroethical Bases for Moral Neuroeducation, I will contribute to developing such an approach.

Daniel Pallarés Domínguez

My postdoctoral research at the Centre for Research Ethics and Bioethics (CRB) is linked to a research project funded by the Ministry of Economy and Competitiveness in Spain. That project is entitled, Moral Neuroeducation for Applied Ethics [FFI2016-76753-C2-2-P], and is led by Domingo García-Marzá.

We care about education

