Driverless car ethics

June 20, 2018

Self-driving robot cars are controlled by computer programs encoding huge numbers of traffic rules. But in traffic, not everything happens smoothly according to the rules. Suddenly a child runs out into the road. Two people try to help a cyclist who has collapsed on the road. A motorist attempting a U-turn on a too-narrow road gets stuck, blocking the traffic.

Assuming that the robots’ programs are able to categorize traffic situations through image information from the cars’ cameras, the programs must select the appropriate driving behavior for the robot cars. Should the cars override important traffic rules by, for example, steering onto the sidewalk?
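To make the designers' task concrete, here is a minimal, purely hypothetical sketch of what such behavior selection might look like. It is not any real vehicle's control software; the situation labels and behaviors are invented for illustration:

```python
# Hypothetical rule-based selector: maps a categorized traffic situation
# to a driving behavior. All labels and behaviors are invented here.

def select_behavior(situation: str, sidewalk_clear: bool) -> str:
    """Return a driving behavior for a categorized traffic situation."""
    if situation == "child_on_road":
        # Overriding the rule "stay on the roadway" is only an option
        # if no one is standing on the sidewalk.
        return "swerve_to_sidewalk" if sidewalk_clear else "emergency_brake"
    if situation == "lane_blocked":
        return "stop_and_wait"
    # Default: normal rule-following behavior.
    return "follow_traffic_rules"
```

Even this toy sketch makes the ethical dilemma visible: the entire moral weight of the scenario hides in a single boolean, `sidewalk_clear`, decided at the factory years in advance.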

It is more complicated than that. Suppose that an adult is standing on the sidewalk. Should the adult’s life be compromised to save the child? Or to save the cyclist and the two helpful persons?

The designers of self-driving cars have a difficult task. They must program the cars’ choice of driving behavior in ethically complex situations that we call unexpected, but that the engineers have to anticipate far in advance. Already at the factory, they must determine how the car model will behave in future “unexpected” traffic situations, perhaps ten years later. (I assume the software is not updated; but even updated software anticipates what we normally regard as unexpected events.)

At a societal level, attempts are now being made to agree on ethical guidelines for how future robot cars should behave in tragic traffic situations where it may not be possible to completely avoid injuries or fatalities. A commission initiated by the German Ministry of Transport, for example, suggests that passengers of robot cars should never be sacrificed to save a larger number of lives in the traffic situation.

Who, by the way, would buy a robot car that is programmed to sacrifice its owner’s life? Who would choose such a driverless taxi? Yet, as drivers, we may be prepared to sacrifice ourselves in unexpected traffic situations. Some researchers decided to investigate the matter. You can read about their study in ScienceDaily, or read the research article in Frontiers in Behavioral Neuroscience.

The researchers used Virtual Reality (VR) technology to expose subjects to ethically difficult traffic situations. Thereafter, they studied the subjects’ choice of traffic behavior. The researchers found that the subjects were surprisingly willing to sacrifice themselves to save others. But they also took into consideration the age of potential victims and were prepared to steer onto the sidewalk to minimize the number of traffic victims. This is contrary to norms that we hold important in society, such as the idea that age discrimination should not occur and that the lives of innocent people should be protected.

In short, humans are inclined to drive their cars politically incorrectly!

Why was the study done? As far as I understand, because the current discussion about ethical guidelines does not take into account empirical data on how living drivers are inclined to drive their cars in ethically difficult traffic situations. The robot cars will make ethical decisions that can make the owners of the cars dissatisfied with their cars; morally dissatisfied!

The researchers do not advocate that driverless cars should respond to ethically complex traffic situations as living people do. However, the discussion about driverless car ethics should take into account data on how living people are inclined to drive their cars in traffic situations where it may not be possible to avoid accidents.

Let me complement the empirical study with some philosophical reflections. What strikes me when I read about driverless car ethics is that “the unexpected” disappears as a living reality. A living driver who tries to handle a sudden traffic situation manages what very obviously is happening right now. The driverless car, on the other hand, takes decisions that tick automatically, as predetermined as any other decision, like stopping at a red light. Driverless car ethics is just additional software that the robot car is equipped with at the factory (or when updating the software).

What are the consequences?

A living driver who suddenly ends up in a difficult traffic situation is confronted – as I said – with what is happening right now. The driver may have to bear responsibility for his actions in this intense moment during the rest of his life. Even if the driver rationally sacrifices one life to save ten, the driver will bear the burden of this one death; dream about it, think about it. And if the driver makes a stupid decision that takes more lives than it saves, it may still be possible to reconcile with it, because the situation was so unexpected.

This does not apply, however, to the robot car that was programmed at the factory according to guidelines from the National Road Administration. We might want to say that the robot car was preprogrammed to sacrifice our sister’s life, when she stood innocently on the sidewalk. Had the car been driven by a living person, we would have been angry with the driver. But after some time, we might be able to start reconciling with the driver’s behavior. Because it was such an unexpected situation. And the driver is suffering from his actions.

However, if it had been a driverless car that worked perfectly according to the manufacturer’s programs and the authorities’ recommendations, then we might see it as a scandal that the car was preprogrammed to steer onto the sidewalk, where our sister stood.

One argument for driverless cars is that, by minimizing the human factor, they can reduce the number of traffic accidents. Perhaps they can. But maybe we are less accepting of how they are programmed to save lives in ethically difficult situations. Not only are they preprogrammed so that “the unexpected” disappears as a reality. They do not bear the responsibility that living people are forced to bear, even for their rational decisions.

Well, we will probably find ways to implement and accept the use of driverless cars. But another question still concerns me. If the present moment disappears as a living reality in the ethics software of driverless cars, has it not already disappeared in the ethics that prescribes right and wrong for us living people?

Pär Segerdahl

This post in Swedish

We like real-life ethics : www.ethicsblog.crb.uu.se


Prepare for robot nonsense

February 26, 2018

As computers and robots take over tasks that so far only humans could carry out, such as driving a car, we are likely to experience increasingly insidious uses of language by the technology’s intellectual clergy.

The idea of intelligent computers and conscious robots is for some reason terribly fascinating. We see ourselves as intelligent and conscious beings. Imagine if also robots could be intelligent and aware! In fact, we have already seen them (almost): on the movie screen. Soon we may see them in reality too!

Imagine that artifacts that we always considered dead and mechanical one day acquired the enigmatic character of life! Imagine that we created intelligent life! Do we have enough exclamation marks for such a miracle?

The idea of intelligent life in supercomputers often comes with the idea of a test that can determine whether a supercomputer is intelligent. It is as if I wanted to make the idea of perpetual motion machines credible by talking about a perpetuum mobile test, invented by a super-smart mathematician in the 17th century. The question whether something is a perpetuum mobile is determinable and therefore worth considering! Soon they may function as engines in our intelligent, robot-driven cars!

There is a famous idea of an intelligence test for computers, invented by the British mathematician Alan Turing. The test can allegedly determine whether a machine “has what we have”: intelligence. How does the test work? Roughly, it is about whether you can distinguish a computer from a human – or cannot do it.

But distinguishing a computer from a human being surely is no great matter! Oh, I forgot to mention that there is a smoke screen in the test. You neither see, hear, feel, taste nor smell anything! In principle, you send written questions into the thick smoke. Out of the smoke come written responses. But who wrote/generated the answers? Human or computer? If you cannot distinguish the computer-generated answers from human answers – well, then you had better take cover, because an intelligent supercomputer hides behind the smoke screen!

The test is thus adapted to the computer, which cannot have intelligent facial expressions or look perplexed, and cannot groan, “Oh no, what a stupid question!” The test is adapted to an engineer’s concept of intelligent handling of written symbol sequences. That the test subject is a poor human being who cannot always say who or what “generated” the written answers hides this conceptual adaptation.

These insidious linguistic shifts are unusually obvious in an article I encountered through a rather smart search engine. The article asks if machines can be aware. And it responds: Yes, and a new Turing test can prove it.

The article begins with celebrating our amazing consciousness as “the ineffable and enigmatic inner life of the mind.” Consciousness is then exemplified by the whirl of thought and sensation that blossoms within us when we finally meet a loved one again, hear an exquisite violin solo, or relish an incredible meal.

After this ecstatic celebration of consciousness, the concept begins to be adapted to computer engineering so that finally it is merely a concept of information processing. The authors “show” that consciousness does not require interaction with the environment. Neither does it require memories. Consciousness does not require any emotions like anger, fear or joy. It does not require attention, self-reflection, language or ability to act in the world.

What then remains of consciousness, which the authors initially made it seem so amazing to possess? The answer in the article is that consciousness has to do with “the amount of integrated information that an organism, or a machine, can generate.”

The concept of consciousness is gradually adapted to what was to be proven. Finally, it becomes a feature that unsurprisingly can characterize a computer. Once we have swallowed the adaptation, the idea is that we, at the grand finale of the article, should once again marvel, and be amazed that a machine can have this “mysterious inner life” that we have, consciousness: “Oh, what an exquisite violin solo, not to mention the snails, how lovely to meet again like this!”

The new Turing test that the authors imagine is, as far as I understand, a kind of picture recognition test: Can a computer identify the content of a picture as “a robbery”? A conscious computer should be able to identify pictorial content as well as a human being can do it. I guess the idea is that the task requires very, very much integrated information. No simple rule of thumb, man + gun + building + terrified customer = robbery, will do the trick. It has to be such an enormous amount of integrated information that the computer simply “gets it” and understands that it is a robbery (and not a five-year-old who plays with a toy gun).
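The “simple rule of thumb” that the paragraph dismisses can be written out literally, which makes its inadequacy concrete. The scene encoding and feature labels below are invented for illustration:

```python
# The naive checklist: a scene "is a robbery" if it contains all of these.
ROBBERY_FEATURES = {"man", "gun", "building", "terrified_customer"}

def looks_like_robbery(scene_objects: set) -> bool:
    """The rule-of-thumb classifier the text argues cannot suffice."""
    # Fires exactly when every checklist feature appears in the scene.
    return ROBBERY_FEATURES <= scene_objects

# The text's counterexample: a five-year-old playing with a toy gun.
# Whether the checklist fires depends entirely on how the scene was
# labeled ("toy_gun" vs "gun", "child" vs "man") - that brittleness
# is precisely what "simply getting it" is supposed to go beyond.
```

The author’s point survives the sketch: no fixed checklist, however long, is obviously the same thing as understanding that a robbery is taking place.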

Believing in the test thus assumes that we swallowed the adapted concept of consciousness and are ecstatically amazed by super-large amounts of integrated information as: “the ineffable and enigmatic inner life of the mind.”

These kinds of insidious linguistic shifts will attract us even more deeply as robotics develops. Imagine an android with facial expressions and a voice that can express intelligence or groan at stupid questions. Then surely, we are dealing with an intelligent and conscious machine!

Or just another deceitful smoke screen; a walking, interactive movie screen?

Pär Segerdahl

This post in Swedish

The temptation of rhetoric - the ethics blog


Stop talking about predatory journals?

November 22, 2017

Almost no researcher escapes the incessant emails from journals offering to publish one’s research. Behind it all, however, lies a desire for gain. Although it is not mentioned in the emails, authors are typically charged, and peer review is more or less a façade. Just submit your text and pay – they publish!

The unpleasant phenomenon is standardly referred to as predatory publishing. Worried researchers, publishers, and librarians who want to warn their users all talk about predatory journals. These journals pretend to be scientific, but they hardly are.

Lately, however, some researchers have begun to question the vocabulary of predation. Partly because there are scholars who themselves use these journals to promote their careers, and who therefore do not fall prey to them. Partly because even established journals sometimes use the methods of predatory journals, such as incessant spamming and high publishing fees. This is problematic, but does it make these journals predatory?

Another problem pointed out is the risk that we overreact and suspect also promising trends in academic publishing, such as publishing open access. Here too, authors often pay a fee, but the purpose is commendable: making scientific publications openly available on the internet, without payment barriers.

So, how should we talk, if we want to avoid talking about predatory journals?

Stefan Eriksson and Gert Helgesson annually update a blacklist of predatory journals in medical ethics, bioethics and research ethics. They have also published articles on the phenomenon. In a recent opinion piece in Learned Publishing, however, they propose talking instead about two types of problematic journals: deceptive and low-quality journals.

Deceptive journals actively mislead authors, readers and institutions by providing false information about peer review, editorial board, impact factor, publishing costs, and more. Deceptive journals should be counteracted through legal action.

Low-quality journals are not guilty of possibly illegal actions. They are just bad, considered as scientific journals. In addition to poor scientific quality, they can be recognized in several ways. For example, they may publish articles in a ridiculously broad field (e.g., medicine and non-medicine). They may send inquiries to researchers in the “wrong” field. They may lack strategies to deal with research misconduct. And so on.

Stefan Eriksson and Gert Helgesson emphasize that the distinction between deceptive and low-quality journals can help us more clearly see what we are dealing with. And act accordingly. Some journals are associated with actions that can be illegal. Other journals are rather characterized by poor quality.

Time to drop the colorful vocabulary of predation?

Pär Segerdahl

Eriksson, S. and Helgesson, G. (2017), Time to stop talking about ‘predatory journals’. Learned Publishing. doi:10.1002/leap.1135

This post in Swedish

Minding our language - the Ethics Blog


Global data sharing, national oversight bodies

November 8, 2017

Science has an international character and global research collaboration is common. For medical research, this means that health data and biological samples linked to people in one nation are often transferred to researchers in other nations.

At the same time, the development of new information and communication technology increases the importance of people’s data protection rights. To provide satisfying data protection in the new internet world, data protection regulations are tightening, especially within the EU.

In an article in Health and Technology, lawyer Jane Reichel discusses challenges that this development poses for biomedical research.

I am not a lawyer, but if I understand Reichel correctly, legislation can accompany personal data across national borders. For example, the EU requires that a foreign recipient of European data subjects’ personal data handle the data in accordance with EU legislation – even if the recipient is a research group in the United States or Japan.

The fact that one nation may need to follow a foreign nation’s legislation not only challenges concepts of sovereignty and territoriality. It also challenges the responsibility of research ethics committees. These committees operate administratively at national level. Now it seems they might also need to monitor foreign rights and global standards. Do these national bodies have the expertise and authority for such an international task?

Read the article about these exciting and unexpected legal issues!

Pär Segerdahl

Reichel, J. Health Technol. (2017). https://doi.org/10.1007/s12553-017-0182-6

This post in Swedish

Thinking about law - the Ethics Blog


Ethics, human rights and responsible innovation

October 31, 2017

It is difficult to predict the consequences of developing and using new technologies. We interact with smart devices and intelligent software on an almost daily basis. Some of us use prosthetics and implants to go about our business, and most of us will likely live to see self-driving cars. In the meantime, Swedish research shows that petting robot cats looks promising in the care of patients with dementia. Genetic tests are cheaper than ever, and available to both patients and consumers. If you spit in a tube and mail it to a US company, they will tell you where your ancestors are from. Who knows? You could be part sub-Saharan African and part Scandinavian at the same time, and (likely) still be you.

Technologies, new and old, have both ethical and human rights impact. Today, we are closer to scenarios we only pictured in science fiction a few decades ago. Technology develops fast and it is difficult to predict what is on the horizon. The legislation, regulation and ethical guidance we have today were developed for a different future. Policy makers struggle to assess the ethical, legal and human rights impact of new and emerging technologies. These frameworks are challenged when a country like Saudi Arabia, criticized for not giving equal rights to women, offers a robot honorary citizenship. This autumn marks the start of a research initiative that will look at some of these questions. A group of researchers from Europe, Asia, Africa and the Americas join forces to help improve the ethical and legal frameworks we have today.

The SIENNA project (short for Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact) will deliver proposals for professional ethics codes, guidelines for research ethics committees and better regulation in three areas: human genetics and genomics, human enhancement, and artificial intelligence & robotics. The proposals will build on input from stakeholders, experts and citizens. SIENNA will also look at some of the more philosophical questions these technologies raise: Where do we draw the line between health and illness, normality and abnormality? Can we expect intelligent software to be moral? Do we accept giving up some of our privacy to screen our genome for genetic disorders? And if giving up some of our personal liberty is the price we have to pay to interact with machines, are we willing to pay it?

 The project is co-ordinated by the University of Twente. Uppsala University’s Centre for Research Ethics & Bioethics contributes expertise on the ethical, legal and social issues of genetics and genomics, and experience of communicating European research. Visit the SIENNA website at www.sienna-project.eu to find out more about the project and our partners!

Josepine Fernow

The SIENNA project – Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact – has received just under €4 million for a 3.5-year project under the European Union’s H2020 research and innovation programme, grant agreement No 741716.

Disclaimer: This text and its contents reflect only SIENNA’s view. The Commission is not responsible for any use that may be made of the information it contains.

SIENNA project

This post in Swedish

Approaching future issues - the Ethics Blog


Acknowledging the biobank and the people who built it

October 16, 2017

Biomedical research increasingly often uses biological material and information collected in biobanks. For a biobank to work efficiently, it is important not only that the biological material is stored well. The material must also be made available to science, so that researchers can easily and responsibly share samples and information.

Creating such a biobank is a huge effort. Researchers and clinicians who collect bioresources might even be reluctant to make the biobank openly available. Why make it easy for others to access your biobank if they do not give you any recognition?

In an article in the Journal of Community Genetics, Heidi C. Howard and Deborah Mascalzoni, among others, discuss a system that would make it more attractive to develop well-functioning biobanks. It is a system for rewarding researchers and clinicians who create high quality bioresources by making their work properly acknowledged.

The system, presented in the article, is called the Bioresource Research Impact Factor (BRIF). If I understand it correctly, the system may work in the following way. A biobank is described in a permanent “marker” article published in a specific bioresource journal. Researchers who use the biobank then cite the article in their publications and grant applications. In this way, citations of bioresources can be counted just as citations of research articles are.
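The counting idea can be pictured with a small sketch. This is a loose illustration of the principle, not the actual BRIF system; the publication data and marker names are invented:

```python
from collections import Counter

# Each publication lists the marker articles of the bioresources it used.
publications = [
    {"title": "Study A", "cites_markers": ["biobank_x_marker_2017"]},
    {"title": "Study B", "cites_markers": ["biobank_x_marker_2017",
                                           "cohort_y_marker_2015"]},
]

# A bioresource's impact is then the citation tally of its marker article,
# counted exactly as citations of ordinary research articles are counted.
brif_tally = Counter(
    marker for pub in publications for marker in pub["cites_markers"]
)
```

The marker article thus acts as a citable proxy for the biobank itself, letting existing citation infrastructure do the bookkeeping.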

The article also describes the results of a study of stakeholders’ awareness of BRIF, as well as an ethical analysis of how BRIF can contribute to more responsible biobanking.

If you are building a biobank, read the article and learn more about BRIF!

Pär Segerdahl

Howard, H.C., Mascalzoni, D., Mabile, L. et al. “How to responsibly acknowledge research work in the era of big data and biobanks: ethical aspects of the Bioresource Research Impact Factor (BRIF).” J Community Genet (2017). https://doi.org/10.1007/s12687-017-0332-6

This post in Swedish

We want to be just - the Ethics Blog


Moral panic in the intellect

September 6, 2017

Moral panic develops intellectually. It is our thoughts that are racing. Certain mental images make such a deep impression on us that we take them for Reality, for Truth, for Facts. Do not believe that the intellect is cold and objective. It can boil over with agitated thoughts.

This is evident in bioethics, where many issues are filled with anguish. Research information about cloned animals, about new techniques for editing the genome, or about embryonic stem cell research evokes scary images of subversive forms of research that threaten human morality. The panic requires a sensitive intellect. There, the images of the research acquire such dimensions that they no longer fit into ordinary life. The images take over the intellect as the metaphysical horizon of Truth. Commonplace remarks that could calm the agitated intellect appear to it as naive.

A science news item in National Geographic occasioned these musings. It is about the first attempt in the United States to edit human embryos genetically. Using the so-called CRISPR-Cas9 technique, the researchers removed a mutation associated with a common inherited heart disease. After the successful editing, the embryos were destroyed. (You can find the scientific article reporting the research in Nature.)

Reading such research information, you might feel anxiety; anxiety that soon takes possession of your intellect: What will they do next? Develop “better” humans who look down on us as a lower species? Can we permit science to change human nature? NO, we must immediately introduce new legislation that bans all genetic editing of human embryos!

If the intellect can boil over with such agitated thoughts, and if moral panic legislation is imprudent, then I believe that bioethics needs to develop its therapeutic skills. Some bioethical issues need to be treated as affections of the intellect. Bioethical anxiety often arises, I believe, when research communication presents science as the metaphysical horizon of truth, instead of giving science an ordinary human horizon.

It may seem as if I am taking a stand for science by representing critics as blinded by moral panic. That is not the case, for the other side of moral panic is megalomania. Hyped notions of great breakthroughs and miraculous cures can drive entire research fields. Mental images that worry most people stimulate other personalities. Perhaps Paolo Macchiarini was such a personality, and perhaps he was promoted by a scientific culture of insane expectations of research and its heroes.

We need a therapeutic bioethics that can calm down the easily agitated intellect.

Pär Segerdahl

This post in Swedish

We think about bioethics : www.ethicsblog.crb.uu.se

