A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: Artificial Intelligence (Page 3 of 4)

What is required of an ethics of artificial intelligence?

I recently highlighted criticism of the ethics that often figures in the field of artificial intelligence (AI). An ethics that can handle the challenges that AI presents us with requires more than just beautifully formulated ethical principles, values and guidelines. What exactly is required of an ethics of artificial intelligence?

Michele Farisco, Kathinka Evers and Arleen Salles address the issue in the journal Science and Engineering Ethics. For them, ethics is not primarily principles and guidelines. Ethics is rather an ongoing process of thinking: it is continual ethical reflection on AI. Their question is thus not what is required of an ethical framework built around AI. Their question is what is required of in-depth ethical reflection on AI.

The authors emphasize conceptual analysis as essential in all ethical reflection on AI. One of the big difficulties is that we do not know exactly what we are discussing! What is intelligence? What is the difference between artificial and natural intelligence? How should we understand the relationship between intelligence and consciousness? Between intelligence and emotions? Between intelligence and insightfulness?

Ethical problems about AI can be both practical and theoretical, the authors point out. They describe two practical and two theoretical problems to consider. One practical problem is the use of AI in activities that require emotional abilities that AI lacks. Empathy gives humans insight into other humans’ needs. Therefore, AI’s lack of emotional involvement should be given special attention when we consider using AI in, for example, child or elderly care. The second practical problem is the use of AI in activities that require foresight. Intelligence is not just about reacting to input from the environment. A more active, foresighted approach is often needed, going beyond actual experience and seeing less obvious, counterintuitive possibilities. Crying can express pain, joy and much more, but AI cannot easily foresee less obvious possibilities.

Two theoretical problems are also mentioned in the article. The first is whether AI in the future may have morally relevant characteristics such as autonomy, interests and preferences. The second problem is whether AI can affect human self-understanding and create uncertainty and anxiety about human identity. These theoretical problems undoubtedly require careful analysis – do we even know what we are asking? In philosophy we often need to clarify our questions as we go along.

The article emphasizes one demand in particular on ethical analysis of AI. It should carefully consider morally relevant abilities that AI lacks, abilities needed to satisfy important human needs. Can we let a cute kindergarten robot “comfort” children when they scream with joy or when they injure themselves so badly that they need nursing?

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Farisco, M., Evers, K. & Salles, A. Towards establishing criteria for the ethical analysis of Artificial Intelligence. Science and Engineering Ethics (2020). https://doi.org/10.1007/s11948-020-00238-w

This post in Swedish

We want solid foundations

Ethics as renewed clarity about new situations

An article in the journal Big Data & Society criticizes the form of ethics that has come to dominate research and innovation in artificial intelligence (AI). The authors question the same “framework interpretation” of ethics that you could read about on the Ethics Blog last week. However, with one disquieting difference. Rather than functioning as a fence that can set the necessary boundaries for development, the framework risks being used for ethics washing by AI companies that want to avoid legal regulation. By referring to ethical self-regulation – beautiful declarations of principles, values and guidelines – one hopes to be able to avoid legal regulation, which could set important limits for AI.

The problem with AI ethics as “soft ethics legislation” is not just that it can be used to avoid necessary legal regulation of the area. The problem is above all, according to the SIENNA researchers who wrote the article, that a “law conception of ethics” does not help us to think clearly about new situations. What we need, they argue, is an ethics that constantly renews our ability to see the new. This is because AI is constantly confronting us with new situations: new uses of robots, new opportunities for governments and companies to monitor people, new forms of dependence on technology, new risks of discrimination, and many other challenges that we may not easily anticipate.

The authors emphasize that such eye-opening AI ethics requires close collaboration with the social sciences. That, of course, is true. Personally, I want to emphasize that an ethics that renews our ability to see the new must also be philosophical in the deepest sense of the word. To see the new and unexpected, you cannot rest comfortably in your professional competence, with its established methods, theories and concepts. You have to question your own disciplinary framework. You have to think for yourself.

Read the article, which has already attracted well-deserved attention.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Anaïs Rességuier, Rowena Rodrigues. 2020. AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data & Society

This post in Swedish

We like critical thinking

Diversity in research: why do we need it? (by Karin Grasenick & Julia Trattnig)

Scientific discovery is based on the novelty of the questions you ask. This means that if you want to discover something new, you probably have to ask a different question. And since other people have preconceptions and experiences that differ from yours, they are likely to formulate their questions differently. This makes a case for diversity in research. If we want to make new discoveries that concern diverse groups, diversity in research becomes even more important.

The Human Brain Project (HBP) participated in the FENS 2020 Virtual Forum this summer, an international virtual neuroscience conference that explores all domains of modern brain research. For the HBP, which is committed to responsible research and innovation, this commitment includes diversity. That is why Karin Grasenick, Coordinator for Gender and Diversity in the HBP, explored the relationship between diversity and new discovery in the session “Of mice, men and machines” at FENS 2020.

So why is diversity in research crucial for making new discoveries? Research depends on the questions asked, the models used, and the details considered. For this reason, it is important to reflect on why certain variables are analysed, or which aspects might play a role. An example is Parkinson’s disease, where patients are affected differently depending on both age and gender. Being a (biological) man or woman, old or young, is important for both diagnosis and treatment. If we know that diversity matters in research on Parkinson’s disease, it probably matters in most neuroscience. Apart from gender and age, we also need to consider other aspects of diversity, like race, ethnicity, education or social background. Because depending on who you are, biologically, culturally and socially, you are likely to need different things.

A quite recent example of this is Covid-19, which not only displays gender differences (it affects more men than women), but also racial differences: Black and Latino people in the US have been disproportionately affected, regardless of where they live (rural or urban areas) or their age (old or young). Again, the reasons for this are not simply a matter of biology (e.g. hormones or chromosomes), but also linked to social aspects such as gendered lifestyles (men are more often smokers than women), inequities in the health system, or certain jobs that cannot be done remotely (see for example this BBC Future text on why Covid-19 is different for men and women, or this one on the racial inequity of coronavirus in The New York Times).

Another example is machine learning. If we train AI on data that is not representative of the population, we introduce bias into the algorithm. For example, applications for diagnosing skin cancer more often fail to recognize tumours in darker skin correctly because they are trained on pictures of fair skin. There are several possible reasons for not training AI properly: it could be a cost issue, or a lack of material to train the AI on. But it is not unlikely that people with dark skin are discriminated against simply because scientists and engineers did not think about diversity when picking the material for the AI to train on. In the case of skin cancer, it is clear that diversity could indeed save lives.
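As a very simple illustration of the point about representation (my own sketch, not taken from the studies mentioned above; the category names and numbers below are entirely hypothetical), one could at least count how different skin types are represented in a training set before a diagnostic model is trained on it:

```python
from collections import Counter

# Hypothetical labels: the skin type shown in each training image
# (Fitzpatrick-like categories, here just illustrative strings and counts).
training_skin_types = (
    ["type_I_II"] * 8500 +    # light skin: heavily over-represented
    ["type_III_IV"] * 1200 +
    ["type_V_VI"] * 300       # dark skin: under-represented
)

counts = Counter(training_skin_types)
total = sum(counts.values())

print("Representation in the training data:")
for skin_type, n in counts.items():
    print(f"  {skin_type}: {n} images ({n / total:.1%})")

# A crude warning if any group falls below an (arbitrary) threshold.
THRESHOLD = 0.15
for skin_type, n in counts.items():
    if n / total < THRESHOLD:
        print(f"Warning: {skin_type} makes up only {n / total:.1%} of the data; "
              "a model trained on this set may perform worse for this group.")
```

The sketch obviously does not solve the problem, but it shows how little it takes to at least notice that a group is missing from the data.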

But where to start? When you do research, there are two questions that must be asked: First, what is the focus of your research? And second, who are the beneficiaries of your research?

Whenever your research focus includes tissues, cells, animals or humans, you should consider diversity factors like gender, age, race, ethnicity, and environmental influences. Moreover, any responsible scientist should consider who has access to their research and profits from it, as well as the consequences their research might have for end users or the broader public.

However, as a researcher you need to consider not only the research subjects and the people your results benefit. The diversity of the research team also matters, because different people perceive problems in different ways and use different methods and processes to solve them. That is why a diverse team is more innovative.

If you want to find out more about the role of diversity in research, check out the presentation “Of mice, men and machines” or read the blogpost on Common Challenges in Neuroscience, AI, Medical Informatics, Robotics and New Insights with Diversity & Ethics.

Written by…

Karin Grasenick, founder and managing partner of convelop, coordinates all issues related to Diversity and Equal Opportunities in the Human Brain Project and works as a process facilitator, coach and lecturer.

&

Julia Trattnig, consultant and scientific staff member at convelop, supports the Human Brain Project concerning all measures and activities for gender mainstreaming and diversity management.

We recommend readings

This is a guest blog post from the Human Brain Project (HBP). The HBP has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3).


Ethical fitness apps for high performance morality

In an article that is unusually rhetorical for a scientific journal, the authors paint the image of a humanity that frees itself from moral weakness by downloading ethical fitness apps.

The authors claim that the maxim “Know thyself!” from the temple of Apollo at Delphi is answered today more thoroughly than ever. Never has humanity known more about itself. Ethically, we are almost fully educated. We also know more than ever about the moral weaknesses that prevent us from acting in accordance with the ethical principles that we finally know so well. Research is discovering more and more mechanisms in the brain and in our psychology that affect humanity’s moral shortcomings.

Given this enormous and growing self-knowledge, why do we not develop artificial intelligence that supports a morally limping humanity? Why spend so many resources on developing ever more intelligent artificial intelligence, which takes our jobs and might one day threaten humanity in the form of uncontrollable superintelligence? Why do we behave so unwisely, when we could develop artificial intelligence to help us humans become superethical?

How can AI make morally weak humans superethical? The authors suggest a comparison with the fitness apps that help people exercise more efficiently and regularly than they otherwise would. The authors’ suggestion is that our ethical knowledge of moral theories, combined with our growing scientific knowledge of moral weaknesses, can support the technological development of moral crutches: wise objects that support people precisely where we know that we are morally limping.

My personal assessment of this utopian proposal is that it might easily be realized in less utopian form. AI is already widely used as support in decision-making. One could imagine mobile apps that help consumers make ethical food choices in the grocery shop. Or computer games where consumers are trained to weigh different ethical considerations against each other, such as animal welfare, climate effects, ecological effects and much more. Nice-looking presentations of the issues and encouraging music that make it fun to be moral.
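To make the thought experiment a little more concrete, here is a minimal sketch of what such decision support might look like under the hood (my own illustration, not from the article; the food options, scores and weights are entirely invented):

```python
# Hypothetical ethical "scores" per food option (0 = worst, 10 = best).
# Both the options and the numbers are invented for illustration.
options = {
    "imported beef":   {"animal_welfare": 2, "climate": 1, "ecology": 3},
    "local chicken":   {"animal_welfare": 4, "climate": 6, "ecology": 5},
    "seasonal greens": {"animal_welfare": 9, "climate": 9, "ecology": 8},
}

# The consumer's (or the app designer's!) weighting of each consideration.
weights = {"animal_welfare": 0.4, "climate": 0.4, "ecology": 0.2}

def ethical_score(scores: dict, weights: dict) -> float:
    """Weighted sum of the considerations: one possible way to 'weigh' them."""
    return sum(weights[k] * scores[k] for k in weights)

# Rank the options from most to least "ethical" according to this scheme.
ranking = sorted(options.items(),
                 key=lambda item: ethical_score(item[1], weights),
                 reverse=True)

for name, scores in ranking:
    print(f"{name}: {ethical_score(scores, weights):.1f}")
```

The point of the sketch is that every score and weight in such an app is a value judgment that someone has made in advance on the consumer’s behalf.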

The philosophical question I ask is whether such artificial decision support in shops and other situations really can be said to make humanity wiser and more ethical. Imagine a consumer who chooses among the vegetables, eagerly looking for decision support in the smartphone. What do you see? A human who, thanks to the mobile app, has become wiser than Socrates, who lived long before we knew as much about ourselves as we do today?

Ethical fitness apps are conceivable. However, the risk is that they spread a form of self-knowledge that flies above ourselves: self-knowledge suspiciously similar to the moral vice of self-satisfied presumptuousness.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Pim Haselager & Giulio Mecacci (2020) Superethics Instead of Superintelligence: Know Thyself, and Apply Science Accordingly, AJOB Neuroscience, 11:2, 113-119, DOI: 10.1080/21507740.2020.1740353

The temptation of rhetoric

This post in Swedish

Responsibly planned research communication

Academic research is driven by dissemination of results to peers at conferences and through publication in scientific journals. However, research results belong not only to the research community. They also belong to society. Therefore, results should reach not only your colleagues in the field or the specialists in adjacent fields. They should also reach outside the academy.

Who is out there? A homogeneous public? No, it is not that simple. Communicating research is not two activities: first communicating the science to peers and then telling the popular scientific story to the public. Outside the academy, we find engineers, entrepreneurs, politicians, government officials, teachers, students, research funders, taxpayers, healthcare professionals… We are all out there with our different experiences, functions and skills.

Research communication is therefore a strategically more complicated task than just “reaching the public.” Why do you want to communicate your results; why are they important? Who will find your results important? How do you want to communicate them? When is the best time to communicate? There is not just one task here. You have to think through what the task is in each particular case. For the task varies with the answers to these questions. Only when you can think strategically about the task can you communicate research responsibly.

Josepine Fernow is a skilled and experienced research communications officer at CRB. She works with communication in several research projects, including the Human Brain Project and STARBIOS2. In the latter project, which is about Responsible Research and Innovation (RRI), she contributes to a new book with arguments for responsibly planned research communication: Achieving impact: some arguments for designing a communications strategy.

Josepine Fernow’s contribution is, in my view, more than a convincing argument. It is an eye-opening text that helps researchers see more clearly their diverse relationships to society, and thereby their responsibilities. The academy is not a rock of knowledge in a sea of ignorant lay people. Society consists of experienced people who, because of what they know, can benefit from your research. It is easier to think strategically about research communication when you survey your relations to a diversified society that is already knowledgeable. Josepine Fernow’s argumentation helps and motivates you to do that.

Josepine Fernow also warns against exaggerating the significance of your results. Bioscience has the potential to give us effective treatments for serious diseases, new crops that meet specific demands, and much more. Since we are all potential beneficiaries of such research, as future patients and consumers, we may want to believe the excessively wishful stories that some excessively ambitious researchers want to tell. We participate in a dangerous game of increasingly unrealistic hopes.

The name of this dangerous game is hype. Research hype can make it difficult for you to continue your research in the future, because of eroded trust. It can also make you prone to take unethical shortcuts. The “huge potential benefit” obscures your judgment as a responsible researcher.

In some research fields, it is extra difficult to avoid research hype, as exaggerated hopes seem inscribed in the very language of the field. An example is artificial intelligence (AI), where the use of psychological and neuroscientific vocabulary about machines can create the impression that one has already fulfilled the hopes. Anthropomorphic language can make it sound as if some machines already thought like humans and functioned like brains.

Responsible research communication is as important as it is difficult. Therefore, these tasks deserve our greatest attention. Read Josepine Fernow’s argumentation for carefully planned communication strategies. It will help you see your responsibility more clearly.

Finally, a reminder for those interested: the STARBIOS2 project organizes its final event via Zoom on Friday, May 29, 2020.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Fernow, J. (2019). Note #11: Achieving impact: Some arguments for designing a communications strategy, In A. Declich (Ed.), RRI implementation in bioscience organisations: Guidelines from the STARBIOS2 project, (pp. 177-180). Uppsala University. ISBN: 978-91-506-2811-1

We care about communication

This post in Swedish

Anthropomorphism in AI can limit scientific and technological development

Anthropomorphism almost seems inscribed in research on artificial intelligence (AI). Ever since the beginning of the field, machines have been portrayed in terms that normally describe human abilities, such as understanding and learning. The emphasis is on similarities between humans and machines, while differences are downplayed. As when it is claimed that machines can perform the same psychological tasks that humans perform, such as making decisions and solving problems, with the supposedly insignificant difference that machines do it in an “automated” way.

You can read more about this in an enlightening discussion of anthropomorphism in and around AI, written by Arleen Salles, Kathinka Evers and Michele Farisco, all at CRB and the Human Brain Project. The article is published in AJOB Neuroscience.

The article draws particular attention to so-called brain-inspired AI research, where technology development draws inspiration from what we know about the functioning of the brain. Here, close relationships between AI and neuroscience are emphasized: bonds that are considered decisive for developments in both fields of research. Neuroscience needs inspiration from AI research, it is claimed, just as AI research needs inspiration from brain research.

The article warns that this idea of a close relationship between the two fields presupposes an anthropomorphic interpretation of AI. In fact, brain-inspired AI multiplies the conceptual double exposures by projecting not only psychological but also neuroscientific concepts onto machines. AI researchers talk about artificial neurons, synapses and neural networks in computers, as if they incorporated artificial brain tissue into the machines.

An overlooked risk of anthropomorphism in AI, according to the authors, is that it can conceal essential characteristics of the technology that make it fundamentally different from human intelligence. In fact, anthropomorphism risks limiting scientific and technological development in AI, since it binds AI to the human brain as the privileged source of inspiration. Anthropomorphism can also entice brain research to uncritically use AI as a model for how the brain works.

Of course, the authors do not deny that AI and neuroscience mutually support each other and should cooperate. However, in order for cooperation to work well, and not limit scientific and technological development, philosophical thinking is also needed. We need to clarify conceptual differences between humans and machines, brains and computers. We need to free ourselves from the tendency to exaggerate similarities, which can be more verbal than real. We also need to pay attention to deep-rooted differences between humans and machines, and learn from the differences.

Anthropomorphism in AI risks encouraging irresponsible research communication, the authors further write. This is because exaggerated hopes (hype) seem intrinsic to the anthropomorphic language. When computers are described in psychological and neurological terms, it sounds as if these machines already essentially functioned as human brains. The authors speak of an anthropomorphic hype around neural network algorithms.

Philosophy can thus also contribute to responsible research communication about artificial intelligence. Such communication draws attention to exaggerated claims and hopes inscribed in the anthropomorphic language of the field. It counteracts the tendency to exaggerate similarities between humans and machines, which rarely go as deep as the projected words make it sound.

In short, differences can be as important and instructive as similarities. Not only in philosophy, but also in science, technology and responsible research communication.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Arleen Salles, Kathinka Evers & Michele Farisco (2020) Anthropomorphism in AI, AJOB Neuroscience, 11:2, 88-95, DOI: 10.1080/21507740.2020.1740350

We recommend readings

This post in Swedish

What is a moral machine?

I recently read an article about so-called moral robots, which I found clarifying in many ways. The philosopher John-Stewart Gordon points out pitfalls that non-ethicists – robotics researchers and AI programmers – may fall into when they try to construct moral machines. Simply because they lack ethical expertise.

The first pitfall is making rookie mistakes. One might naively identify ethics with certain famous bioethical principles, as if ethics could not be anything but so-called “principlism.” Or one might believe that computer systems, through automated analysis of individual cases, can “learn” ethical principles and “become moral,” as if morality could be discovered experientially or empirically.

The second challenge has to do with the fact that the ethics experts themselves disagree about the “right” moral theory. There are several competing ethical theories (utilitarianism, deontology, virtue ethics and more). What moral template should programmers use when getting computers to solve moral problems and dilemmas that arise in different activities? (Consider self-driving cars in difficult traffic situations.)

The first pitfall can be addressed with more knowledge of ethics. How do we handle the second challenge? Should we allow programmers to choose moral theory as it suits them? Should we allow both utilitarian and deontological robot cars on our streets?

John-Stewart Gordon’s suggestion is that so-called machine ethics should focus on the similarities between different moral theories regarding what one should not do. Robots should be provided with a binding list of things that must be avoided as immoral. With this restriction, the robots then have leeway to use and balance the plurality of moral theories to solve moral problems in a variety of ways.
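Gordon’s proposal can be pictured schematically. The following is a toy sketch of my own (the actions, their properties and the two “theories” are all hypothetical), showing the structure: first filter out whatever falls under the binding list of immoral actions, then let any of several moral theories rank what remains.

```python
# Toy representation of candidate actions in some situation.
# All names and numbers are hypothetical.
candidates = [
    {"name": "brake hard",           "harms_innocent": False, "deceives": False, "utility": 6},
    {"name": "swerve onto sidewalk", "harms_innocent": True,  "deceives": False, "utility": 8},
    {"name": "slow down and warn",   "harms_innocent": False, "deceives": False, "utility": 5},
]

# Step 1: the binding list of things that must never be done.
def permitted(action: dict) -> bool:
    return not action["harms_innocent"] and not action["deceives"]

# Step 2: within the permitted leeway, different "moral theories" may rank
# the remaining actions differently (two toy rankings for illustration).
def utilitarian_choice(actions):
    return max(actions, key=lambda a: a["utility"])

def cautious_choice(actions):
    # Stand-in for a second, more risk-averse theory that ranks differently.
    return min(actions, key=lambda a: a["utility"])

permitted_actions = [a for a in candidates if permitted(a)]

print("Permitted:", [a["name"] for a in permitted_actions])
print("Utilitarian pick:", utilitarian_choice(permitted_actions)["name"])
print("Cautious pick:", cautious_choice(permitted_actions)["name"])
```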

In conclusion, researchers and engineers in robotics and AI should consult the ethics experts so that they can avoid the rookie mistakes and understand the methodological problems that arise when not even the experts in the field can agree about the right moral theory.

All this seems both wise and clarifying in many ways. At the same time, I feel genuinely confused about the very idea of “moral machines” (although the article is not intended to discuss the idea, but focuses on ethical challenges for engineers). What does the idea mean? Not that I doubt that we can design artificial intelligence according to ethical requirements. We may not want robot cars to avoid collisions in city traffic by turning onto sidewalks where many people walk. In that sense, there may be ethical software, much like there are ethical funds. We could talk about moral and immoral robot cars as straightforwardly as we talk about ethical and unethical funds.

Still, as I mentioned, I feel uncertain. Why? I started by writing about “so-called” moral robots. I did so because I am not comfortable talking about moral machines, although I am open to suggestions about what it could mean. I think that what confuses me is that moral machines are largely mentioned without qualifying expressions, as if everyone ought to know what it should mean. Ethical experts disagree on the “right” moral theory. However, they seem to agree that moral theory determines what a moral decision is; much like grammar determines what a grammatical sentence is. With that faith in moral theory, one need not contemplate what a moral machine might be. It is simply a machine that makes decisions according to accepted moral theory. However, do machines make decisions in the same sense as humans do?

Maybe it is about emphasis. We talk about ethical funds without feeling dizzy because a stock fund is said to be ethical (“Can they be humorous too?”). There is no mythological emphasis in the talk of ethical funds. In the same way, we can talk about ethical robot cars without feeling dizzy as if we faced something supernatural. However, in the philosophical discussion of machine ethics, moral machines are sometimes mentioned in a mythological way, it seems to me. As if a centaur, a machine-human, will soon see the light of day. At the same time, we are not supposed to feel dizzy concerning these brave new centaurs, since the experts can spell out exactly what they are talking about. Having all the accepted templates in their hands, they do not need any qualifying expressions!

I suspect that ethical expertise, too, can be a philosophical pitfall when we intellectually approach so-called moral machines. The expert attitude can silence the confusing questions that we all need time to contemplate when honest doubts rebel against the claim to know.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Gordon, J. Building Moral Robots: Ethical Pitfalls and Challenges. Sci Eng Ethics 26, 141–157 (2020).

We recommend readings

This post in Swedish

Artificial intelligence and living consciousness

The Ethics Blog will publish several posts on artificial intelligence in the future. Today, I just want to make a little observation of something remarkable.

The last century was marked by fear of human consciousness. Our mind seemed as mystical as the soul, as superfluous in a scientific age as God. In psychology, behaviorism flourished, which defined psychological words in terms of bodily behavior that could be studied scientifically in the laboratory. Our living consciousness was treated as a relic from bygone superstitious ages.

What is so remarkable about artificial intelligence? Suddenly, one seems to idolize consciousness. One wallows in previously sinful psychological words, at least when one talks about what computers and robots can do. These machines can see and hear; they can think and speak. They can even learn by themselves.

Does this mean that the fear of consciousness has ceased? Hardly, because when artificial intelligence employs psychological words such as seeing and hearing, thinking and understanding, the words cease to be psychological. Computer “learning,” for example, is a technical term that computer experts define in their laboratories.

When artificial intelligence embellishes machines with psychological words, then, one repeats how behaviorism defined mind in terms of something else. Psychological words take on new machine meanings that overshadow the meanings the words have among living human beings.

Remember this next time you wonder if robots might become conscious. The development exhibits a fear of consciousness. Therefore, what you are wondering is not if robots can become conscious. You are wondering if your own consciousness could be superstition. Remarkable, right?

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

We like challenging questions

This post in Swedish

How can we set future ethical standards for ICT, Big Data, AI and robotics?

Do you use Google Maps to navigate in a new city? Ask Siri, Alexa or OK Google to play your favourite song? To help you find something on Amazon? To read a text message from a friend while you are driving your car? Perhaps your car is fitted with a semi-autonomous adaptive cruise control system… If any software or machine is going to perform in any autonomous way, it needs to collect data. About you, where you are going, what songs you like, your shopping habits, who your friends are and what you talk about. This raises the question: are we willing to give up part of our privacy and personal liberty to enjoy the benefits technology offers?

It is difficult to predict the consequences of developing and using new technology. Policymakers struggle to assess the ethical, legal and human rights impacts of using different kinds of IT systems, in research, in industry and in our homes. Good policy should be helpful for everyone who holds a stake. We might want it to protect ethical values and human rights, make research and development possible, allow technology transfer from academia to industry, make sure both large and smaller companies can develop their business, and make sure that there is social acceptance for technological development.

The European Union is serious about developing policy on the basis of sound research, rigorous empirical data and wide stakeholder consultation. In recent years, the Horizon 2020 programme has invested €10 million in three projects looking at the ethics and human rights implications of emerging digital technologies: PANELFIT, SHERPA and SIENNA.

The first project, PANELFIT (short for Participatory Approaches to a New Ethical and Legal Framework for ICT), will develop guidelines on the ethical and legal issues of ICT research and innovation. The second, SHERPA (which stands for Shaping the ethical dimensions of Smart Information Systems (SIS) – A European Perspective), will develop tools to identify and address the ethical dimensions of smart information systems (SIS), that is, the combination of artificial intelligence (AI) and big data analytics. The third, SIENNA (short for Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact), will develop research ethics protocols, professional ethical codes, and better ethical and legal frameworks for AI and robotics, human enhancement technologies, and human genomics.


All three projects involve experts, publics and stakeholders to co-create outputs, in different ways. They also support the European Union’s vision of Responsible Research and Innovation (RRI). SIENNA, SHERPA and PANELFIT recently published an editorial in the Orbit Journal, inviting stakeholders and publics to engage with the projects and contribute to the work.

Want to read more? Rowena Rodrigues and Anaïs Resseguier have written about some of the issues raised by the use of artificial intelligence on Ethics Dialogues (The underdog in the AI and ethical debate: human autonomy), and you can find out more about the SIENNA project in a previous post on the Ethics Blog (Ethics, human rights and responsible innovation).

Want to know more about the collaboration between SIENNA, SHERPA and PANELFIT? Read the editorial in Orbit (Setting future ethical standards for ICT, Big Data, AI and robotics: The contribution of three European Projects), or watch a video from our joint webinar on May 20, 2019 on YouTube (SIENNA, SHERPA, PANELFIT: Setting future ethical standards for ICT, Big Data, SIS, AI & Robotics).

Want to know how SIENNA views the ethical impacts of AI and robotics? Download infographic (pdf) and read our state-of-the-art review for AI & robotics (deliverable report).


Josepine Fernow

This post in Swedish

We want solid foundations - the Ethics Blog

 

Driverless car ethics

Self-driving robot cars are controlled by computer programs with huge amounts of traffic rules. But in traffic, not everything happens smoothly according to the rules. Suddenly a child runs out on the road. Two people try to help a cyclist who collapsed on the road. A motorist tries to make a U-turn on a road that is too narrow and gets stuck, blocking the traffic.

Assuming that the robots’ programs are able to categorize traffic situations through image information from the cars’ cameras, the programs must select the appropriate driving behavior for the robot cars. Should the cars override important traffic rules by, for example, steering onto the sidewalk?

It is more complicated than that. Suppose that an adult is standing on the sidewalk. Should the adult’s life be compromised to save the child? Or to save the cyclist and the two helpful persons?

The designers of self-driving cars have a difficult task. They must program the cars’ choice of driving behavior in ethically complex situations that we call unexpected, but that the engineers have to anticipate far in advance. They must determine, already at the factory, how the car model will behave in future “unexpected” traffic situations, maybe ten years later. (I assume the software is not updated, but even updated software anticipates what we normally see as unexpected events.)
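To see how predetermined such behavior is, consider a deliberately crude sketch (my own, with hypothetical situation categories and rules, not anyone’s actual control software): the “ethics” amounts to a lookup table written long before the situation occurs.

```python
# Hypothetical, deliberately crude mapping from categorized traffic situations
# to driving behavior, fixed "at the factory". Not real control software.
ETHICS_POLICY = {
    "child_on_road_sidewalk_empty":    "swerve onto sidewalk",
    "child_on_road_adult_on_sidewalk": "brake hard, stay in lane",
    "cyclist_down_helpers_on_road":    "brake hard, stay in lane",
    "unknown":                         "brake hard, stay in lane",
}

def choose_behavior(situation_category: str) -> str:
    """Return the preprogrammed behavior for a categorized situation."""
    return ETHICS_POLICY.get(situation_category, ETHICS_POLICY["unknown"])

# Years later, the "unexpected" situation is just another key lookup.
print(choose_behavior("child_on_road_adult_on_sidewalk"))
```

Everything “unexpected” has to appear in that table, or be caught by a default rule, long before it happens.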

On a societal level, one now tries to agree on ethical guidelines for how future robot cars should behave in tragic traffic situations where it may not be possible to completely avoid injuries or fatal casualties. A commission initiated by the German Ministry for Transportation, for example, suggests that passengers of robot cars should never be sacrificed to save a larger number of lives in the traffic situation.

Who, by the way, would buy a robot car that is programmed to sacrifice one’s life? Who would choose such a driverless taxi? Yet, as drivers we may be prepared to sacrifice ourselves in unexpected traffic situations. Some researchers decided to investigate the matter. You can read about their study in ScienceDaily, or read the research article in Frontiers in Behavioral Neuroscience.

The researchers used Virtual Reality (VR) technology to expose subjects to ethically difficult traffic situations. Thereafter, they studied the subjects’ choice of traffic behavior. The researchers found that the subjects were surprisingly willing to sacrifice themselves to save others. But they also took into consideration the age of potential victims and were prepared to steer onto the sidewalk to minimize the number of traffic victims. This is contrary to norms that we hold important in society, such as the idea that age discrimination should not occur and that the lives of innocent people should be protected.

In short, humans are inclined to drive their cars politically incorrectly!

Why was the study done? As far as I understand, because the current discussion about ethical guidelines does not take into account empirical data on how living drivers are inclined to drive their cars in ethically difficult traffic situations. The robot cars will make ethical decisions that can make the owners of the cars dissatisfied with their cars; morally dissatisfied!

The researchers do not advocate that driverless cars should respond to ethically complex traffic situations as living people do. However, the discussion about driverless car ethics should take into account data on how living people are inclined to drive their cars in traffic situations where it may not be possible to avoid accidents.

Let me complement the empirical study with some philosophical reflections. What strikes me when I read about driverless car ethics is that “the unexpected” disappears as a living reality. A living driver who tries to handle a sudden traffic situation manages what very obviously is happening right now. The driverless car, on the other hand, takes decisions that tick automatically, as predetermined as any other decision, like stopping at a red light. Driverless car ethics is just additional software that the robot car is equipped with at the factory (or when updating the software).

What are the consequences?

A living driver who suddenly ends up in a difficult traffic situation is confronted – as I said – with what is happening right now. The driver may have to bear responsibility for his actions in this intense moment during the rest of his life. Even if the driver rationally sacrifices one life to save ten, the driver will bear the burden of this one death; dream about it, think about it. And if the driver makes a stupid decision that takes more lives than it saves, it may still be possible to reconcile with it, because the situation was so unexpected.

This does not apply, however, to the robot car that was programmed at the factory according to guidelines from the National Road Administration. We might want to say that the robot car was preprogrammed to sacrifice our sister’s life, when she stood innocently on the sidewalk. Had the car been driven by a living person, we would have been angry with the driver. But after some time, we might be able to start reconciling with the driver’s behavior. Because it was such an unexpected situation. And the driver is suffering from his actions.

However, if it had been a driverless car that worked perfectly according to the manufacturer’s programs and the authorities’ recommendations, then we might see it as a scandal that the car was preprogrammed to steer onto the sidewalk, where our sister stood.

One argument for driverless cars is that, by minimizing the human factor, they can reduce the number of traffic accidents. Perhaps they can. But maybe we are less accepting of how they are programmed to save lives in ethically difficult situations. Not only are they preprogrammed so that “the unexpected” disappears as a reality. They do not bear the responsibility that living people are forced to bear, even for their rational decisions.

Well, we will probably find ways to implement and accept the use of driverless cars. But another question still concerns me. If the present moment disappears as a living reality in the ethics software of driverless cars, has it not already disappeared in the ethics that prescribes right and wrong for us living people?

Pär Segerdahl

This post in Swedish

We like real-life ethics: www.ethicsblog.crb.uu.se
