A blog from the Centre for Research Ethics & Bioethics (CRB)


“Cooperative,” “pleasant” and “reliable” robot colleague is wanted

Robots are taking on more and more functions in our workplaces. Logistics robots pick goods in the warehouse. Military robots disarm bombs. Care robots lift patients and surgical robots perform operations. All this in interaction with human staff, who seem to have acquired brave new robot colleagues.

Given that some people treat robots as good colleagues and that good colleagues contribute to a good working environment, it becomes reasonable to ask: Can a robot be a good colleague? The question is investigated by Sven Nyholm and Jilles Smids in the journal Science and Engineering Ethics.

The authors approach the question conceptually. First, they propose criteria for what a good colleague is. Then they ask if robots can live up to the requirements. The question of whether a robot can be a good colleague is interesting, because it turns out to be more realistic than we first think. We do not demand as much from a colleague as from a friend or a life partner, the authors argue. Many of our demands on good colleagues have to do with their external behavior in specific situations in the workplace, rather than with how they think, feel and are as human beings in different situations of life. Sometimes, a good colleague is simply someone who gets the job done!

What criteria are mentioned in the article? Here I reproduce, in my own words, the authors’ list, which they do not intend to be exhaustive. A good colleague works well with others to achieve shared goals. A good colleague can chat and help keep work pleasant. A good colleague does not bully but treats others respectfully. A good colleague provides support as needed. A good colleague learns and develops with others. A good colleague is consistently at work and is reliable. A good colleague adapts to how others are doing and shares work-related values. A good colleague may also do some socializing.

The authors argue that many robots already live up to several of these ideas about what a good colleague is, and that the robots in our workplaces will be even better colleagues in the future. The requirements are, as I said, lower than we first think, because they are not so much about the colleague’s inner human life, but more about reliably displayed behaviors in specific work situations. It is not difficult to imagine the criteria transformed into specifications for the robot developers. Much like in a job advertisement, which lists behaviors that the applicant should be able to exhibit.

The manager of a grocery store in this city advertised for staff. The ad contained strange quotation marks, which revealed how the manager demanded the facade of a human being rather than the interior. This is normal: to be a professional is to be able to play a role. The business concept of the grocery store was, “we care.” This idea was to become a positive “experience” for customers in their encounters with the staff. A greeting, a nod, a smile, a generally pleasant welcome, would give this “experience” that we “care about people.” Therefore, the manager advertised for someone who, in quotation marks, “likes people.”

If staff can be recruited in this way, why should we not want “cooperative,” “pleasant” and “reliable” robot colleagues in the same spirit? I am convinced that similar requirements already occur as specifications when robots are designed for different functions in our workplaces.

Life is not always deep and heartfelt, as the robotization of working life reflects. The question is what happens when human surfaces become so common that we forget the quotation marks around the mechanically functioning facades. Not everyone is as clear on that point as the “humanitarian” store manager was.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Nyholm, S., Smids, J. Can a Robot Be a Good Colleague? Sci Eng Ethics 26, 2169–2188 (2020). https://doi.org/10.1007/s11948-019-00172-6


We shape the societies that shape us: our responsibility for human nature

Visionary academic texts are rare – texts that shed light on how research can contribute to the perennial human issues. In an article in the philosophical journal Theoria, however, Kathinka Evers opens up a novel visionary perspective on neuroscience and tragic aspects of the human condition.

For millennia, sensitive thinkers have been concerned about human nature. Undoubtedly, we humans create prosperity and security for ourselves. However, like no other animal, we also have an unfortunate tendency to create misery for ourselves (and other life forms). The 20th century was extreme in both directions. What is the mechanism behind our peculiar, large-scale, self-injurious behavior as a species? Can it be illuminated and changed?

As I read her, Kathinka Evers asks essentially this big human question. She does so based on the current neuroscientific view of the brain, which she argues motivates a new way of understanding and approaching the mechanism of our species’ self-injurious behavior. An essential feature of the neuroscientific view is that the human brain is designed to never be fully completed. Just as we have a unique self-injurious tendency as a species, we are born with uniquely incomplete brains. These brains are under construction for decades and need good care throughout this time. They are not formed passively, but actively, by finding more or less felicitous ways of functioning in the societies to which we expose ourselves.

Since our brains shape our societies, one could say that we build the societies that build us, in a continual cycle. The brain is right in the middle of this sensitive interaction between humans and their societies. With its creative variability, the human brain makes many deterministic claims about genetics and our “innate” nature problematic. Why are we humans the way we are? Partly because we create the societies that create us as we are. For millennia, we have generated ourselves through the societies that we have built, ignorant of the hyper-interactive organ in the middle of the process. It is always behind our eyes.

Kathinka Evers’ point is that our current understanding of the brain as inherently active, dynamic and variable, gives us a new responsibility for human nature. She expresses the situation technically as follows: neuroscientific knowledge gives us a naturalistic responsibility to be epigenetically proactive. If we know that our active and variable brains support a cultural evolution beyond our genetic heritage, then we have a responsibility to influence evolution by adapting our societies to what we know about the strengths and weaknesses of our brains.

The notion of a neuroscientific responsibility to design societies that shape human nature in desired ways may sound like a call for a new form of social engineering. However, Kathinka Evers develops the notion of this responsibility in the context of a conscientious review of similar tendencies in our history, tendencies that have often revolved around genetics. The aim of epigenetic proaction is not to support ideologies that have already decided what a human being should be like. Rather, it is about allowing knowledge about the brain to inspire social change, where we would otherwise ignorantly risk recreating human misery. Of course, such knowledge presupposes collaboration between the natural, social and human sciences, in conjunction with free philosophical inquiry.

The article mentions juvenile violence as an example. In some countries, there is a political will to convict juvenile delinquents as if they were adults and even place them in adult prisons. Today, we know that during puberty, the brain is in a developmental crisis where important neural circuits change dramatically. Young brains in crisis need special care. However, in these cases they risk ending up in just the kind of social environments that we can predict will create more misery.

Knowledge about the brain can thus motivate social changes that reduce the peculiar self-injuring behavior of humanity, a behavior that has worried sensitive thinkers for millennia. Neuroscientific self-awareness gives us a key to the mechanism behind the behavior and a responsibility to use it.

Pär Segerdahl


Kathinka Evers. 2020. The Culture‐Bound Brain: Epigenetic Proaction Revisited. Theoria. doi:10.1111/theo.12264


An ideology that is completely foreign to my ideology

I read a newspaper editorial that criticized ideological elements in school teaching. The author had visited the website of one of the organizations hired by the schools and found clear expressions of a view of society based on ideological dogmas of a certain kind.

The criticism may well have been justified. What made me think was how the author explained the problem. It sounded as if the problem was that the ideology in question was foreign to the author’s own ideology: “foreign to me and most other …-ists”.

I was sad when I read this. It made it appear as if it was our human destiny to live trapped in ideological labyrinths, alien to each other. If we are foreign to an ideology, does it really mean nothing more than that the ideology is foreign to our own ideology?

Can we free ourselves from the labyrinths of ideology? Or would it be just a different ideology: “We anti-ideologues call for a fight against all ideologies”!? Obviously, it is difficult to fight all ideologies without becoming ideological yourself. Even peace movements bear the seeds of new conflicts. Which side for peace are you on?

Can we free ourselves by strictly sticking to the facts and nothing but the facts? Sticking to the facts is important. One problem is that ideologies already love to refer to facts, to strengthen the ideology and present it as the truth. Pointing out facts provides ammunition for even more ideological debate, of which we will soon become an engaged party: “We rationalists strongly oppose all ideologically biased descriptions of reality”!?

Can the solution be to always acknowledge ideological affiliation, so that we spread awareness of our ideological one-sidedness: “Hello, I represent the national organization against intestinal lavage – a practice that we anti-flushers see as a violation of human dignity”!? It can be good to inform others about our motives, so that they are not misled into believing what we say. However, it hardly shows a more beautiful aspect of humanity, but reinforces the image that conflicting forms of ideological one-sidedness are our destiny.

However, if we now see the problem clearly, if we see how every attempt to solve the problem recreates the problem, have we not opened ourselves to our situation? Have we not seen ourselves with a gaze that is no longer one-sided? Are we not free?

Pär Segerdahl



What is required of an ethics of artificial intelligence?

I recently highlighted criticism of the ethics that often figures in the field of artificial intelligence (AI). An ethics that can handle the challenges that AI presents us with requires more than just beautifully formulated ethical principles, values and guidelines. What exactly is required of an ethics of artificial intelligence?

Michele Farisco, Kathinka Evers and Arleen Salles address the issue in the journal Science and Engineering Ethics. For them, ethics is not primarily principles and guidelines. Ethics is rather an ongoing process of thinking: it is continual ethical reflection on AI. Their question is thus not what is required of an ethical framework built around AI. Their question is what is required of in-depth ethical reflection on AI.

The authors emphasize conceptual analysis as essential in all ethical reflection on AI. One of the big difficulties is that we do not know exactly what we are discussing! What is intelligence? What is the difference between artificial and natural intelligence? How should we understand the relationship between intelligence and consciousness? Between intelligence and emotions? Between intelligence and insightfulness?

Ethical problems about AI can be both practical and theoretical, the authors point out. They describe two practical and two theoretical problems to consider. The first practical problem is the use of AI in activities that require emotional abilities that AI lacks. Empathy gives humans insight into other humans’ needs. Therefore, AI’s lack of emotional involvement should be given special attention when we consider using AI in, for example, child or elderly care. The second practical problem is the use of AI in activities that require foresight. Intelligence is not just about reacting to input from the environment. A more active, foresighted approach is often needed, going beyond actual experience and seeing less obvious, counterintuitive possibilities. Crying can express pain, joy and much more, but AI cannot easily foresee the less obvious possibilities.

Two theoretical problems are also mentioned in the article. The first is whether AI in the future may have morally relevant characteristics such as autonomy, interests and preferences. The second problem is whether AI can affect human self-understanding and create uncertainty and anxiety about human identity. These theoretical problems undoubtedly require careful analysis – do we even know what we are asking? In philosophy we often need to clarify our questions as we go along.

The article emphasizes one demand in particular on ethical analysis of AI. It should carefully consider morally relevant abilities that AI lacks, abilities needed to satisfy important human needs. Can we let a cute kindergarten robot “comfort” children when they scream with joy or when they injure themselves so badly that they need nursing?

Pär Segerdahl


Farisco, M., Evers, K. & Salles, A. Towards establishing criteria for the ethical analysis of Artificial Intelligence. Science and Engineering Ethics (2020). https://doi.org/10.1007/s11948-020-00238-w


Ethics as renewed clarity about new situations

An article in the journal Big Data & Society criticizes the form of ethics that has come to dominate research and innovation in artificial intelligence (AI). The authors question the same “framework interpretation” of ethics that you could read about on the Ethics Blog last week. However, with one disquieting difference. Rather than functioning as a fence that can set the necessary boundaries for development, the framework risks being used as ethics washing by AI companies that want to avoid legal regulation. By referring to ethical self-regulation – beautiful declarations of principles, values and guidelines – one hopes to be able to avoid legal regulation, which could set important limits for AI.

The problem with AI ethics as “soft ethics legislation” is not just that it can be used to avoid necessary legal regulation of the area. The problem is above all, according to the SIENNA researchers who wrote the article, that a “law conception of ethics” does not help us to think clearly about new situations. What we need, they argue, is an ethics that constantly renews our ability to see the new. This is because AI is constantly confronting us with new situations: new uses of robots, new opportunities for governments and companies to monitor people, new forms of dependence on technology, new risks of discrimination, and many other challenges that we may not easily anticipate.

The authors emphasize that such eye-opening AI ethics requires close collaboration with the social sciences. That, of course, is true. Personally, I want to emphasize that an ethics that renews our ability to see the new must also be philosophical in the deepest sense of the word. To see the new and unexpected, you cannot rest comfortably in your professional competence, with its established methods, theories and concepts. You have to question your own disciplinary framework. You have to think for yourself.

Read the article, which has already attracted well-deserved attention.

Pär Segerdahl


Anaïs Rességuier, Rowena Rodrigues. 2020. AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data & Society.


Ethical fitness apps for high performance morality

In an article unusually rhetorical for a scientific journal, the authors draw the image of a humanity that frees itself from moral weakness by downloading ethical fitness apps.

The authors claim that the maxim “Know thyself!” from the temple of Apollo at Delphi is answered today more thoroughly than ever. Never has humanity known more about itself. Ethically, we are almost fully educated. We also know more than ever about the moral weaknesses that prevent us from acting in accordance with the ethical principles that we finally know so well. Research is discovering more and more mechanisms in the brain and in our psychology that affect humanity’s moral shortcomings.

Given this enormous and growing self-knowledge, why do we not develop artificial intelligence that supports a morally limping humanity? Why spend so many resources on developing ever more intelligent artificial intelligence, which takes our jobs and might one day threaten humanity in the form of an uncontrollable superintelligence? Why do we behave so unwisely when we could develop artificial intelligence to help us humans become superethical?

How can AI make morally weak humans super-ethical? The authors suggest a comparison with the fitness apps that help people to exercise more efficiently and regularly than they otherwise would. The authors’ suggestion is that our ethical knowledge of moral theories, combined with our growing scientific knowledge of moral weaknesses, can support the technological development of moral crutches: wise objects that support people precisely where we know that we are morally limping.

My personal assessment of this utopian proposal is that it might easily be realized in less utopian form. AI is already widely used as a support in decision-making. One could imagine mobile apps that support consumers in making ethical food choices in the grocery shop. Or computer games where consumers are trained to weigh different ethical considerations against each other, such as animal welfare, climate effects, ecological effects and much more. Nice-looking presentations of the issues and encouraging music that makes it fun to be moral.

The philosophical question I ask is whether such artificial decision support in shops and other situations really can be said to make humanity wiser and more ethical. Imagine a consumer who chooses among the vegetables, eagerly looking for decision support in the smartphone. What do you see? A human who, thanks to the mobile app, has become wiser than Socrates, who lived long before we knew as much about ourselves as we do today?

Ethical fitness apps are conceivable. However, the risk is that they spread a form of self-knowledge that flies above ourselves: self-knowledge suspiciously similar to the moral vice of self-satisfied presumptuousness.

Pär Segerdahl


Pim Haselager & Giulio Mecacci (2020) Superethics Instead of Superintelligence: Know Thyself, and Apply Science Accordingly, AJOB Neuroscience, 11:2, 113-119, DOI: 10.1080/21507740.2020.1740353


Autonomous together

Autonomy is such a cherished concept in ethics that I hardly dare to write about it. The fact that the concept cherishes the individual does not make my task any easier. The slightest error in my use of the term, and I risk being identified as an enemy perhaps not of the people but of the individual!

In ethics, autonomy means personal autonomy: individuals’ ability to govern their own lives. This ability is constantly at risk of being undermined. It is undermined if others unduly influence your decisions, if they control you. It is also undermined if you are not sufficiently well informed and rational. For example, if your decisions are based on false or contradictory information, or if your decisions result from compulsions or weakness of the will. It is your faculty of reason that should govern your life!

In an article in BMC Medical Ethics, Amal Matar, who completed her PhD at CRB, discusses decision-making situations in healthcare where this individual-centered concept of autonomy seems less useful. It is about decisions made not by individuals alone, but by people together: by couples planning to become parents.

A couple planning a pregnancy together is expected to make joint decisions. Maybe about genetic tests and measures to be taken if the child risks developing a genetic disease. Here, as always, the healthcare staff is responsible for protecting the patients’ autonomy. However, how is this feasible if the decision is not made by individuals but jointly by a couple?

Personal autonomy is an idealized concept. No man is an island, it is said. This is especially evident when a couple is planning a life together. If a partner begins to emphasize his or her personal autonomy, the relationship is probably about to disintegrate. An attempt to correct the lack of realism in the idealized concept has been to develop ideas about relational autonomy. These ideas emphasize how individuals who govern their lives are essentially related to others. However, as you can probably hear, relational autonomy remains tied to the individual. Amal Matar therefore finds it urgent to take a further step towards realism concerning joint decisions made by couples.

Can we talk about autonomy not only at the level of the individual, but also at the level of the couple? Can a couple planning a pregnancy together govern their life by making decisions that are autonomous not only for each one of them individually, but also for them together as a couple? This is Amal Matar’s question.

Inspired by how linguistic meaning is conceptualized in linguistic theory as existing not only at the level of the word, but also at the level of the sentence (where words are joined together), Amal Matar proposes a new concept of couple autonomy. She suggests that couples can make joint decisions that are autonomous at both the individual and the couple’s level.

She proposes a three-step definition of couple autonomy. First, both partners must be individually autonomous. Then, the decision must be reached via a communicative process that meets a number of criteria (no partner dominates, sufficient time is given, the decision is unanimous). Finally, the definition allows one partner to autonomously transfer aspects of the decision to the other partner.

The purpose of the definition is not a philosophical revolution in ethics. The purpose is practical. Amal Matar wants to help couples and healthcare professionals to speak realistically about autonomy when the decision is a couple’s joint decision. Pretending that separate individuals make decisions in parallel makes it difficult to realistically assess and support the decision-making process, which is about interaction.

Amal Matar concludes the article, written together with Anna T. Höglund, Pär Segerdahl and Ulrik Kihlbom, by describing two cases. The cases show concretely how her definition can help healthcare professionals to assess and support autonomous decision-making at the level of the couple. In one case, the couple’s autonomy is undermined; in the other, probably not.

Read the article as an example of how we sometimes need to modify cherished concepts to enable a realistic use of them. 

Pär Segerdahl


Matar, A., Höglund, A.T., Segerdahl, P. and Kihlbom, U. Autonomous decisions by couples in reproductive care. BMC Med Ethics 21, 30 (2020). https://doi.org/10.1186/s12910-020-00470-w


Anthropomorphism in AI can limit scientific and technological development

Anthropomorphism almost seems inscribed in research on artificial intelligence (AI). Ever since the beginning of the field, machines have been portrayed in terms that normally describe human abilities, such as understanding and learning. The emphasis is on similarities between humans and machines, while differences are downplayed. Like when it is claimed that machines can perform the same psychological tasks that humans perform, such as making decisions and solving problems, with the supposedly insignificant difference that machines do it in an “automated” way.

You can read more about this in an enlightening discussion of anthropomorphism in and around AI, written by Arleen Salles, Kathinka Evers and Michele Farisco, all at CRB and the Human Brain Project. The article is published in AJOB Neuroscience.

The article draws particular attention to so-called brain-inspired AI research, where technology development draws inspiration from what we know about the functioning of the brain. Here, close relationships are emphasized between AI and neuroscience: bonds that are considered to be decisive for developments in both fields of research. Neuroscience needs inspiration from AI research, it is claimed, just as AI research needs inspiration from brain research.

The article warns that this idea of ​​a close relationship between the two fields presupposes an anthropomorphic interpretation of AI. In fact, brain-inspired AI multiplies the conceptual double exposures by projecting not only psychological but also neuroscientific concepts onto machines. AI researchers talk about artificial neurons, synapses and neural networks in computers, as if they incorporated artificial brain tissue into the machines.

An overlooked risk of anthropomorphism in AI, according to the authors, is that it can conceal essential characteristics of the technology that make it fundamentally different from human intelligence. In fact, anthropomorphism risks limiting scientific and technological development in AI, since it binds AI to the human brain as a privileged source of inspiration. Anthropomorphism can also entice brain research to uncritically use AI as a model for how the brain works.

Of course, the authors do not deny that AI and neuroscience mutually support each other and should cooperate. However, in order for cooperation to work well, and not limit scientific and technological development, philosophical thinking is also needed. We need to clarify conceptual differences between humans and machines, brains and computers. We need to free ourselves from the tendency to exaggerate similarities, which can be more verbal than real. We also need to pay attention to deep-rooted differences between humans and machines, and learn from the differences.

Anthropomorphism in AI risks encouraging irresponsible research communication, the authors further write. This is because exaggerated hopes (hype) seem intrinsic to the anthropomorphic language. By talking about computers in psychological and neurological terms, it sounds as if these machines already essentially functioned as human brains. The authors speak of an anthropomorphic hype around neural network algorithms.

Philosophy can thus also contribute to responsible research communication about artificial intelligence. Such communication draws attention to exaggerated claims and hopes inscribed in the anthropomorphic language of the field. It counteracts the tendency to exaggerate similarities between humans and machines, which rarely go as deep as the projected words make it sound.

In short, differences can be as important and instructive as similarities. Not only in philosophy, but also in science, technology and responsible research communication.

Pär Segerdahl


Arleen Salles, Kathinka Evers & Michele Farisco (2020) Anthropomorphism in AI, AJOB Neuroscience, 11:2, 88-95, DOI: 10.1080/21507740.2020.1740350


We cannot control everything: the philosophical dimensions of life

Life always surpasses us. We thought we were in control, but then something unexpected happens that seems to upset the order. A storm, a forest fire, a pandemic. Life appears as a drawing in sand, the contours of which suddenly dissolve.

Of course, it is not that definitive. Even a storm, a forest fire, a pandemic, will pass. The contours of life return, in somewhat new forms. However, the unexpected reminded us that life is greater than our ability to control it. My question in this post is how we balance the will to control life against the knowledge that life always surpasses us.

That life is greater than our ability to control it is evident not only in the form of storms, forest fires and pandemics. It is evident also in the form of nice varying weather, growing forests and good health. Certainly, medicine contributes to better health. Nevertheless, it is not thanks to any pills that blood circulates in our bodies and food becomes nourishment for our cells. We are rightly grateful to medicine, which helps the sick. However, maybe we could sometimes devote a grateful thought to life itself. Is not the body fantastic, which develops immunity in contact with viruses? Are not the forests and the climate wonderful, providing oxygen, sun and rain? And consider nature, on which we are like outgrowths, almost as fruits on a tree.

Many people probably want to object that it is pointless to philosophize about things that we cannot change. Why waste time reflecting on the uncontrollable dimensions of life, when we can develop new medicines? Should we not focus all our efforts on improving the world?

I just point out that we then reason like the artist who thought himself capable of painting only the foreground, without background. As though the background were a distraction from the foreground. However, if you want to emphasize the foreground, you must also pay attention to the background. Then the foreground appears. The foreground needs to be embraced by the background. Small and large presuppose each other.

Our desire to control life works more wisely, I believe, if we acknowledge our inevitable dependence on a larger, embracing background. As I said, we cannot control everything, just as an artist cannot paint only the foreground. I want to suggest that we can view philosophy as an activity that reminds us of that. It helps us see the controllable in the light of the uncontrollable. It reminds us of the larger context: the background that the human intellect does not master, but must presuppose and interact with wisely.

It does not have to be dramatic. Even everyday life has philosophical dimensions that exceed our conscious control. Children learn to talk beyond their parents’ control, without either curricula or examinations. No language teacher in the world can teach a toddler to talk through lessons in a classroom. It can only happen spontaneously and boundlessly, in the midst of life. Only those who already speak can learn language through lessons in a classroom.

The ability to talk is thus the background to language teaching in the classroom. A language teacher can plan the lessons in detail. The youngest children’s language acquisition, on the other hand, is so inextricably linked to what it is to live as a human being that it exceeds the intellect’s ability to organize and govern. We can only remind ourselves of the difference between foreground and background in language. Here follows such a philosophical reminder. A parent of a schoolchild can say, “Now you’ve been studying French for two hours and need a break: go out and play.” However, a parent of a small child who is beginning to talk cannot say, “Now you’ve been talking for two hours and need a break: go out and play!” The child talks constantly. It learns in the midst of playing, in the midst of life, beyond control. Therefore, the child has no breaks.

Had Herb Terrace seen the difference between foreground and background in language, he would never have used the insane method of training sign language with the chimpanzee Nim in a special classroom, as if Nim were a schoolchild who could already speak. Sometimes we need a bit of philosophy (a bit of reason) for our projects to work. Foreground and background interact everywhere. Our welfare systems do not work unless we fundamentally live by our own power, or by life’s own power. Pandemics hardly subside without the virus moving through sufficiently many of our bodies, which thereafter become immune – under controlled forms that protect groups at risk and provide care for the severely ill. Everywhere, foreground and background, controllable and uncontrollable, interact.

The dream of complete intellectual control is therefore a pitfall when we philosophize. At least if we need philosophy to elucidate the living background of what lies within human control. Then we cannot strive to define life as a single intellectually controllable foreground. A bit of philosophy can help us see the interplay between foreground and background. It can help us live actively and act wisely in the zone between controllable and uncontrollable.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

We like ethics

This post in Swedish

What is a moral machine?

I recently read an article about so-called moral robots, which I found clarifying in many ways. The philosopher John-Stewart Gordon points out pitfalls that non-ethicists – robotics researchers and AI programmers – may fall into when they try to construct moral machines, simply because they lack ethical expertise.

The first pitfall consists of rookie mistakes. One might naively identify ethics with certain famous bioethical principles, as if ethics could not be anything but so-called “principlism.” Or, one might believe that computer systems, through automated analysis of individual cases, can “learn” ethical principles and “become moral,” as if morality could be discovered experientially or empirically.

The second challenge has to do with the fact that the ethics experts themselves disagree about the “right” moral theory. There are several competing ethical theories (utilitarianism, deontology, virtue ethics and more). What moral template should programmers use when getting computers to solve moral problems and dilemmas that arise in different activities? (Consider self-driving cars in difficult traffic situations.)

The first pitfall can be addressed with more knowledge of ethics. How do we handle the second challenge? Should we allow programmers to choose moral theory as it suits them? Should we allow both utilitarian and deontological robot cars on our streets?

John-Stewart Gordon’s suggestion is that so-called machine ethics should focus on the similarities between different moral theories regarding what one should not do. Robots should be provided with a binding list of things that must be avoided as immoral. With this restriction, the robots then have leeway to use and balance the plurality of moral theories to solve moral problems in a variety of ways.

In conclusion, researchers and engineers in robotics and AI should consult the ethics experts so that they can avoid the rookie mistakes and understand the methodological problems that arise when not even the experts in the field can agree about the right moral theory.

All this seems both wise and clarifying in many ways. At the same time, I feel genuinely confused about the very idea of “moral machines” (although the article is not intended to discuss the idea, but focuses on ethical challenges for engineers). What does the idea mean? Not that I doubt that we can design artificial intelligence according to ethical requirements. We may not want robot cars to avoid collisions in city traffic by turning onto sidewalks where many people walk. In that sense, there may be ethical software, much like there are ethical funds. We could talk about moral and immoral robot cars as straightforwardly as we talk about ethical and unethical funds.

Still, as I mentioned, I feel uncertain. Why? I started by writing about “so-called” moral robots. I did so because I am not comfortable talking about moral machines, although I am open to suggestions about what it could mean. I think that what confuses me is that moral machines are largely mentioned without qualifying expressions, as if everyone ought to know what it should mean. Ethical experts disagree on the “right” moral theory. However, they seem to agree that moral theory determines what a moral decision is; much like grammar determines what a grammatical sentence is. With that faith in moral theory, one need not contemplate what a moral machine might be. It is simply a machine that makes decisions according to accepted moral theory. However, do machines make decisions in the same sense as humans do?

Maybe it is about emphasis. We talk about ethical funds without feeling dizzy because a stock fund is said to be ethical (“Can they be humorous too?”). There is no mythological emphasis in the talk of ethical funds. In the same way, we can talk about ethical robot cars without feeling dizzy as if we faced something supernatural. However, in the philosophical discussion of machine ethics, moral machines are sometimes mentioned in a mythological way, it seems to me. As if a centaur, a machine-human, were soon to see the light of day. At the same time, we are not supposed to feel dizzy concerning these brave new centaurs, since the experts can spell out exactly what they are talking about. Having all the accepted templates in their hands, they do not need any qualifying expressions!

I suspect that ethical expertise, too, can be a philosophical pitfall when we intellectually approach so-called moral machines. The expert attitude can silence the confusing questions that we all need time to contemplate when honest doubts rebel against the claim to know.

Pär Segerdahl


Gordon, J-S. Building Moral Robots: Ethical Pitfalls and Challenges. Science and Engineering Ethics 26, 141–157 (2020).


This post in Swedish
