Driverless car ethics

June 20, 2018

Pär Segerdahl

Self-driving robot cars are controlled by computer programs containing huge numbers of traffic rules. But in traffic, not everything happens smoothly according to the rules. Suddenly a child runs out on the road. Two people try to help a cyclist who has collapsed on the road. A motorist tries to make a U-turn on a too narrow road and gets stuck, blocking traffic.

Assuming that the robots’ programs are able to categorize traffic situations through image information from the cars’ cameras, the programs must select the appropriate driving behavior for the robot cars. Should the cars override important traffic rules by, for example, steering onto the sidewalk?

It is more complicated than that. Suppose that an adult is standing on the sidewalk. Should the adult’s life be compromised to save the child? Or to save the cyclist and the two helpful persons?

The designers of self-driving cars have a difficult task. They must program the cars’ choice of driving behavior in ethically complex situations that we call unexpected but that the engineers have to anticipate far in advance. Already at the factory, they must determine how the car model will behave in future “unexpected” traffic situations, perhaps ten years later. (I assume the software is not updated, but even updated software anticipates what we normally regard as unexpected events.)

On a societal level, attempts are now being made to agree on ethical guidelines for how future robot cars should behave in tragic traffic situations where it may not be possible to completely avoid injuries or fatalities. A commission initiated by the German Ministry of Transport, for example, suggests that passengers of robot cars should never be sacrificed to save a larger number of lives in the traffic situation.

Who, by the way, would buy a robot car that is programmed to sacrifice its owner’s life? Who would choose such a driverless taxi? Yet, as drivers, we may be prepared to sacrifice ourselves in unexpected traffic situations. Some researchers decided to investigate the matter. You can read about their study in ScienceDaily, or read the research article in Frontiers in Behavioral Neuroscience.

The researchers used Virtual Reality (VR) technology to expose subjects to ethically difficult traffic situations. Thereafter, they studied the subjects’ choice of traffic behavior. The researchers found that the subjects were surprisingly willing to sacrifice themselves to save others. But they also took into consideration the age of potential victims and were prepared to steer onto the sidewalk to minimize the number of traffic victims. This is contrary to norms that we hold important in society, such as the idea that age discrimination should not occur and that the lives of innocent people should be protected.

In short, humans are inclined to drive their cars politically incorrectly!

Why was the study done? As far as I understand, because the current discussion about ethical guidelines does not take into account empirical data on how living drivers are inclined to drive their cars in ethically difficult traffic situations. The robot cars will make ethical decisions that can make the owners of the cars dissatisfied with their cars; morally dissatisfied!

The researchers do not advocate that driverless cars should respond to ethically complex traffic situations as living people do. However, the discussion about driverless car ethics should take into account data on how living people are inclined to drive their cars in traffic situations where it may not be possible to avoid accidents.

Let me complement the empirical study with some philosophical reflections. What strikes me when I read about driverless car ethics is that “the unexpected” disappears as a living reality. A living driver who tries to handle a sudden traffic situation manages what very obviously is happening right now. The driverless car, on the other hand, takes decisions that tick automatically, as predetermined as any other decision, like stopping at a red light. Driverless car ethics is just additional software that the robot car is equipped with at the factory (or when updating the software).

What are the consequences?

A living driver who suddenly ends up in a difficult traffic situation is confronted – as I said – with what is happening right now. The driver may have to bear responsibility for his actions in this intense moment during the rest of his life. Even if the driver rationally sacrifices one life to save ten, the driver will bear the burden of this one death; dream about it, think about it. And if the driver makes a stupid decision that takes more lives than it saves, it may still be possible to reconcile with it, because the situation was so unexpected.

This does not apply, however, to the robot car that was programmed at the factory according to guidelines from the National Road Administration. We might want to say that the robot car was preprogrammed to sacrifice our sister’s life, when she stood innocently on the sidewalk. Had the car been driven by a living person, we would have been angry with the driver. But after some time, we might be able to start reconciling with the driver’s behavior. Because it was such an unexpected situation. And the driver is suffering from his actions.

However, if it had been a driverless car that worked perfectly according to the manufacturer’s programs and the authorities’ recommendations, then we might see it as a scandal that the car was preprogrammed to steer onto the sidewalk, where our sister stood.

One argument for driverless cars is that, by minimizing the human factor, they can reduce the number of traffic accidents. Perhaps they can. But maybe we are less accepting as to how they are programmed to save lives in ethically difficult situations. Not only are they preprogrammed so that “the unexpected” disappears as a reality. They do not bear the responsibility that living people are forced to bear, even for their rational decisions.

Well, we will probably find ways to implement and accept the use of driverless cars. But another question still concerns me. If the present moment disappears as a living reality in the ethics software of driverless cars, has it not already disappeared in the ethics that prescribes right and wrong for us living people?

Pär Segerdahl

This post in Swedish

We like real-life ethics: www.ethicsblog.crb.uu.se


Can neuroscience and moral education be united?

June 4, 2018

Daniel Pallarés Domínguez

People have started to talk about neuroeducation, but what is it? Is it just another example of the fashion of adding the prefix neuro- to the social sciences, like neuroethics, neuropolitics, neuromarketing and neurolaw?

Those who remain sceptical consider it a mistake to link neuroscience with education. However, for some authors, neuroscience can provide useful knowledge about the brain, and they see neuroeducation as a young field of study with many possibilities.

Since its birth during the Decade of the Brain (the 1990s), neuroeducation has been understood as an interdisciplinary field that studies developmental learning processes in the human brain. It is one of the youngest of the social neurosciences. Its progressive aim is to improve teaching and learning methodologies by applying the results of neuroscientific research.

Neuroscientific research already plays an important role in education. Taking into account the neural bases of human learning, neuroeducation looks not only for theoretical knowledge but also for practical implications, such as new teaching methodologies, and it reviews classical assumptions about learning and studies disorders of learning. Neuroeducation studies offer possibilities such as early detection of special learning needs or even monitoring and comparing different teaching methodologies implemented in school.

Although neuroeducation primarily focuses on disorders of learning, especially in mathematics and language (dyscalculia and dyslexia), can it be extended to other areas? If neuroscience can shed light on the development of ethics in the brain, can such explorations form the basis of a new form of neuroeducation, moral neuroeducation, which studies the learning or development of ethics?

Before introducing a new term (moral neuroeducation), prudence and critical discussion are needed. First, what would the goal of moral neuroeducation be? Should it consider moral disorders in the brain or just immoral behaviours? Second, neuroscientific knowledge is used in neuroeducation to help design practices that allow more efficient teaching to better develop students’ intellectual potentials throughout their training process. Should this be the goal also of moral neuroeducation? Should we strive for greater efficiency in teaching ethics? If so, what is the ethical competence we should try to develop in students?

It seems that we still need a critical and philosophical approach to the promising union of neuroscience and moral education. In my postdoctoral project, Neuroethical Bases for Moral Neuroeducation, I will contribute to developing such an approach.

Daniel Pallarés Domínguez

My postdoctoral research at the Centre for Research Ethics and Bioethics (CRB) is linked to a research project funded by the Ministry of Economy and Competitiveness in Spain. That project is entitled, Moral Neuroeducation for Applied Ethics [FFI2016-76753-C2-2-P], and is led by Domingo García-Marzá.

We care about education


Can a robot learn to speak?

May 29, 2018

Pär Segerdahl

There are self-modifying computer programs that “learn” from success and failure. Chess-playing computers, for example, become better through repeated games against humans.

Could a similar robot also learn to speak? If the robot gets the same input as a child gets when it learns to speak, should it not be possible in principle?

Notice how the question zigzags between child and machine. We say that the robot learns. We say that the child gets input. We speak of the robot as if it were a child. We speak of the child as if it were a robot. Finally, we take this linguistic zigzagging seriously as a fascinating question, perhaps even a great research task.

An AI expert and prospective father who dreamed of this great research task took the following ambitious measures. He equipped his whole house with cameras and microphones, to document all parent-child interactions during the child’s first years. Why? He wanted to know exactly what kind of linguistic input a child gets when it learns to speak. At a later stage, he might be able to give a self-modifying robot the same input and test if it also learns to speak.

How did the project turn out? The personal experience of raising the child led the AI expert to question the whole project of teaching a robot to speak. How could a personal experience lead to the questioning of a seemingly serious scientific project?

Here, I could start babbling about how amiably social children are compared to cold machines. How they learn in close relationships with their parents. How they curiously and joyfully take the initiative, rather than calculatingly await input.

The problem is that such babbling on my part would make it seem as if the AI expert simply was wrong about robots and children. That he did not know the facts, but now is better informed. It is not that simple. For the idea behind the project presupposed unnoticed linguistic zigzagging. Already in asking the question, the boundaries between robots and children are blurred. Already in the question, we have half answered it!

We cannot be content with responding to the question in the headline with a simple, “No, it cannot.” We must reject the question as nonsense. Deceitful zigzagging creates the illusion that we are dealing with a serious question, worthy of scientific study.

This does not exclude, however, that computational linguistics increasingly uses self-modifying programs, and with great success. But that is another question.

Pär Segerdahl

Beard, Alex. How babies learn – and why robots can’t compete. The Guardian, 3 April 2018

This post in Swedish

We like critical thinking: www.ethicsblog.crb.uu.se


Read this interview with Kathinka Evers!

April 26, 2018

Through philosophical analysis and development of concepts, Uppsala University contributes significantly to the European Flagship, the Human Brain Project. New ways of thinking about the brain and about consciousness are suggested, which take us beyond oppositions between consciousness and unconsciousness, and between consciousness and matter.

Do you want to know more? Read the fascinating interview with Kathinka Evers: A continuum of consciousness: The Intrinsic Consciousness Theory

Kathinka Evers at CRB in Uppsala leads the work on neuroethics and neurophilosophy in the Human Brain Project.

Pär Segerdahl

We recommend readings - the Ethics Blog


Prepare for robot nonsense

February 26, 2018

Pär Segerdahl

As computers and robots take over tasks that so far only humans could carry out, such as driving a car, we are likely to experience increasingly insidious uses of language by the technology’s intellectual clergy.

The idea of intelligent computers and conscious robots is for some reason terribly fascinating. We see ourselves as intelligent and conscious beings. Imagine if also robots could be intelligent and aware! In fact, we have already seen them (almost): on the movie screen. Soon we may see them in reality too!

Imagine that artifacts that we always considered dead and mechanical one day acquired the enigmatic character of life! Imagine that we created intelligent life! Do we have enough exclamation marks for such a miracle?

The idea of intelligent life in supercomputers often comes with the idea of a test that can determine if a supercomputer is intelligent. It is as if I wanted to make the idea of perpetual motion machines credible by talking about a perpetuum mobile test, invented by a super-smart mathematician in the 17th century. The question whether something is a perpetuum mobile is determinable and therefore worth considering! Soon they may function as engines in our intelligent, robot-driven cars!

There is a famous idea of an intelligence test for computers, invented by the British mathematician Alan Turing. The test allegedly can determine whether a machine “has what we have”: intelligence. How does the test work? Roughly, it is about whether you can distinguish a computer from a human – or cannot.

But distinguishing a computer from a human being surely is no great matter! Oh, I forgot to mention that there is a smoke screen in the test. You neither see, hear, feel, taste nor smell anything! In principle, you send written questions into the thick smoke. Out of the smoke comes written responses. But who wrote/generated the answers? Human or computer? If you cannot distinguish the computer-generated answers from human answers – well, then you had better take protection, because an intelligent supercomputer hides behind the smoke screen!

The test is thus adapted to the computer, which cannot have intelligent facial expressions or look perplexed, and cannot groan, “Oh no, what a stupid question!” The test is adapted to an engineer’s concept of intelligent handling of written symbol sequences. The fact that the test subject is a poor human being who cannot always say who/what “generated” the written answers hides this conceptual fact.

These insidious linguistic shifts are unusually obvious in an article I encountered through a rather smart search engine. The article asks if machines can be aware. And it responds: Yes, and a new Turing test can prove it.

The article begins with celebrating our amazing consciousness as “the ineffable and enigmatic inner life of the mind.” Consciousness is then exemplified by the whirl of thought and sensation that blossoms within us when we finally meet a loved one again, hear an exquisite violin solo, or relish an incredible meal.

After this ecstatic celebration of consciousness, the concept begins to be adapted to computer engineering so that finally it is merely a concept of information processing. The authors “show” that consciousness does not require interaction with the environment. Neither does it require memories. Consciousness does not require any emotions like anger, fear or joy. It does not require attention, self-reflection, language or ability to act in the world.

What then remains of consciousness, which the authors initially made it seem so amazing to possess? The answer in the article is that consciousness has to do with “the amount of integrated information that an organism, or a machine, can generate.”

The concept of consciousness is gradually adapted to what was to be proven. Finally, it becomes a feature that unsurprisingly can characterize a computer. After we swallowed the adaptation, the idea is that we, at the Grand Finale of the article, should once again marvel, and be amazed that a machine can have this “mysterious inner life” that we have, consciousness: “Oh, what an exquisite violin solo, not to mention the snails, how lovely to meet again like this!”

The new Turing test that the authors imagine is, as far as I understand, a kind of picture recognition test: Can a computer identify the content of a picture as “a robbery”? A conscious computer should be able to identify pictorial content as well as a human being can do it. I guess the idea is that the task requires very, very much integrated information. No simple rule of thumb, man + gun + building + terrified customer = robbery, will do the trick. It has to be such an enormous amount of integrated information that the computer simply “gets it” and understands that it is a robbery (and not a five-year-old who plays with a toy gun).

Believing in the test thus assumes that we swallowed the adapted concept of consciousness and are ecstatically amazed by super-large amounts of integrated information as: “the ineffable and enigmatic inner life of the mind.”

These kinds of insidious linguistic shifts will attract us even more deeply as robotics develops. Imagine an android with facial expressions and a voice that can express intelligence or groan at stupid questions. Then surely, we are dealing with an intelligent and conscious machine!

Or just another deceitful smoke screen; a walking, interactive movie screen?

Pär Segerdahl

This post in Swedish

The temptation of rhetoric - the ethics blog


New concept of consciousness challenges language

January 31, 2018

Pär Segerdahl

A few weeks ago, I recommended an exciting article by Michele Farisco. Now I wish to recommend another article, where Farisco (together with Steven Laureys and Kathinka Evers) argues even more thoroughly for a new concept of consciousness.

The article in Mind & Matter is complex and I doubt that I can do it justice. I have to start out from my own experience. For when Farisco challenges the opposition between consciousness and the unconscious, it resembles something I have written about: the opposition between human and animal.

Oppositions that work perfectly in everyday language often become inapplicable for scientific purposes. In everyday life, the opposition between human and animal is unproblematic. If a child tells us that it saw an animal, we know it was not a human the child saw. For the biologist, however, the idea of ​​the human as non-animal would be absurd. Although it is perfectly in order in everyday language, biology must reject the opposition between human and animal. It hides continuities between us and the other animals.

Farisco says (if I understand him) something similar about neuroscience. Although the opposition between consciousness and the unconscious works in everyday language, it becomes problematic in neuroscience. It hides continuities in the brain’s way of functioning. Neuroscience should therefore view consciousness and the unconscious as continuous forms of the same basic phenomenon in living brains.

If biology talks about the human as one of the animal species, how does Farisco suggest that neuroscience should talk about consciousness? Here we face greater linguistic challenges than when biology considers humans to be animals.

Farisco’s proposal is to widen the notion of consciousness to include also what we usually call the unconscious (much as the biologist widens the concept of animal). Farisco thus suggests, roughly, that the brain is conscious as long as it is alive, even in deep sleep or in coma. Note, however, that he uses the word in a new meaning! He does not claim what he appears to be claiming!

The brain works continually, whether we are conscious or not (in the ordinary sense). Most neural processes are unconscious and a prerequisite for consciousness (in the ordinary sense). Farisco suggests that we use the word consciousness for all these processes in living brains. The two states we usually oppose – consciousness and the unconscious – are thus forms of the same basic phenomenon, namely, consciousness in Farisco’s widened sense.

Farisco supports the widened concept of consciousness by citing neuroscientific evidence that I have to leave aside in this post. All I wish to do here is to point out that Farisco’s concept of consciousness probably is as logical in neuroscience as the concept of the human as an animal is in biology.

Do not let the linguistic challenges prevent you from seeing the logic of Farisco’s proposal!

Pär Segerdahl

Farisco, M., Laureys, S. and Evers, K. 2017. The intrinsic activity of the brain and its relation to levels and disorders of consciousness. Mind and Matter 15: 197-219

This post in Swedish

We recommend readings - the Ethics Blog


The unconscious deserves moral attention

January 10, 2018

Pär Segerdahl

Last autumn, Michele Farisco wrote one of the most read posts on The Ethics Blog. The post was later republished by BioEdge.

Today, I want to recommend a recent article where Farisco develops his thinking – read it in the journal, Philosophy, Ethics, and Humanities in Medicine.

The article will certainly receive at least as much attention as the blog post did. Together with Kathinka Evers, Farisco develops a way of thinking about the unconscious that at first seems controversial, but which after careful consideration becomes increasingly credible. That combination is hard to beat.

What is it about? It is about patients with serious brain injuries, perhaps after a traffic accident. Ethical discussions about these patients usually focus on residual consciousness. We think that there is an absolute difference between consciousness and unconsciousness. Only a conscious person can experience well-being. Only a conscious person can have interests. Therefore, a patient with residual consciousness deserves completely different care than an unconscious patient. A different attention to pain relief, peace and quiet, and stimulation. – Why create a warm and stimulating environment if the patient is completely unaware of it?

In the article, Farisco challenges the absolute difference between consciousness and unconsciousness. He describes neuroscientific evidence that indicates two often-overlooked connections between conscious and unconscious brain processes. The first is that the unconscious (at least partly) has the abilities that are considered ethically relevant when residual consciousness is discussed. The other connection is that conscious and unconscious brain processes are mutually dependent. They shape each other. Even unconsciously, the brain reacts uniquely to the voices of family members.

Farisco does not mean that this proves that we have an obligation to treat unconscious patients as conscious. However, the unconscious deserves moral attention. Perhaps we should strive to assess retained unconscious abilities as well. In some cases, we should perhaps play the music the patient loved before the accident.

Pär Segerdahl

Farisco, M. and Evers, K. The ethical relevance of the unconscious. Philosophy, Ethics, and Humanities in Medicine (2017) DOI 10.1186/s13010-017-0053-9

This post in Swedish

We recommend readings - the Ethics Blog

