As I already wrote on this blog, there has been an explosion of AI in recent years. AI affects so many aspects of our lives that it is virtually impossible to avoid interacting with it. Since AI has such an impact, it must be examined from an ethical point of view, for the very basic reason that it can be developed and/or used for both good and evil.
In fact, AI ethics is becoming increasingly popular. Since it is a fairly young discipline, even though it has roots in, for example, digital and computer ethics, questions about its status and methodology remain open. To simplify the debate, the main trend is to conceive of AI ethics as a form of practical ethics, focused, for example, on the impact of AI on traditional practices in education, work, healthcare and entertainment, among others. In addition to this practically oriented analysis, there is also attention to the impact of AI on the way we understand our society and ourselves as part of it.
In this debate about the identity of AI ethics, the need for closer collaboration with neuroethics has been briefly pointed out, but so far no systematic reflection has been devoted to it. In a new article, written together with Kathinka Evers and Arleen Salles, I propose an argument for why neuroethics and AI ethics should collaborate more closely. In a nutshell, even though both fields have their own identities and their topics do not completely overlap, we argue that neuroethics can complement AI ethics for both content-related and methodological reasons.
Some of the issues raised by AI are related to fundamental questions that neuroethics has explored since its inception. Think, for example, of topics such as intelligence: what does it mean to be intelligent? In what sense can a machine be qualified as an intelligent agent? Could this be a misleading use of words? And what ethical implications can this linguistic habit have, for example, on how we attribute responsibility to machines and to humans? Another issue that is increasingly gaining ground in AI ethics literature, as I wrote on this blog, is the conceivability and the possibility of artificial consciousness. Neuroethics has worked extensively on both intelligence and consciousness, combining applied and fundamental analyses, which can serve as a source of relevant information for AI ethics.
In addition to the above content-related reasons, neuroethics can also provide AI ethics with a methodological model. To illustrate, the kind of conceptual clarification performed in fundamental neuroethics can enrich the identification and assessment of the practical ethical issues raised by AI. More specifically, neuroethics can provide AI ethics with a three-step model of analysis:
1. Conceptual relevance: can specific notions, such as autonomy, be attributed to AI?
2. Ethical relevance: are these specific notions ethically salient (i.e., do they require ethical evaluation)?
3. Ethical value: what is the ethical significance of these specific notions, and what are their normative implications?
This three-step approach is a promising methodology for ethical reflection on AI, because it avoids the trap of anthropocentric self-projection, a risk that affects both the philosophical reflection on AI and its technical development.
In this way, neuroethics can contribute to avoiding both hypes and disproportionate worries about AI, which are among the biggest challenges facing AI ethics today.
Farisco, M., Evers, K. & Salles, A. On the Contribution of Neuroethics to the Ethics and Regulation of Artificial Intelligence. Neuroethics 15, 4 (2022). https://doi.org/10.1007/s12152-022-09484-0
Perhaps you also dream about being more than you are: faster, better, bolder, stronger, smarter, and maybe more attractive? Until recently, technology to improve and enhance our abilities was mostly science fiction, but today we can augment our bodies and minds in ways that challenge our notions of normal and abnormal, blurring the lines between treatments and enhancements. Very few scientists and companies that develop medicines, prosthetics and implants would say that they are in the human enhancement business, but the technologies they develop still manage to move from one domain to another. Our bodies allow for physical and cosmetic alterations, and there are attempts to make us live longer. Our minds can also be enhanced in several ways: our feelings and thoughts, perhaps also our morals, could be improved, or corrupted.
We recognise this tension from familiar debates about more common uses of enhancements: doping in sports, or students using ADHD medicines to study for exams. But there are other examples of technologies that can be used to enhance abilities. In the military context, altering our morals or using cybernetic implants could give us ‘super soldiers’. Using neuroprostheses to replace or improve memory that was damaged by neurological disease would be considered a treatment. But what happens when the same technology is repurposed for the healthy, to improve memory or another cognitive function?
There have been calls for regulation and ethical guidance, but because very few of the researchers and engineers that develop the technologies that can be used to enhance abilities would call themselves enhancers, the efforts have not been very successful. Perhaps now is a good time to develop guidelines? But what is the best approach? A set of self-contained general ethical guidelines, or is the field so disparate that it requires field- or domain-specific guidance?
The SIENNA project (Stakeholder-Informed Ethics for New technologies with high socio-ecoNomic and human rights impAct) has been tasked with developing this kind of ethical guidance for Human Enhancement, Human Genetics, and Artificial Intelligence and Robotics, three very different technological domains. Not surprisingly, given how difficult the field is to delineate, human enhancement has proved by far the most challenging. For almost three years, the SIENNA project mapped the field, analysed the ethical implications and legal requirements, surveyed how research ethics committees address the ethical issues, and proposed ways to improve existing regulation. We have received input from stakeholders, experts, and publics. Industry representatives, academics, policymakers and ethicists have participated in workshops and reviewed documents. Focus groups in five countries and surveys with 11,000 people in 11 countries in Europe, Africa, Asia, and the Americas have also provided insight into the public’s attitudes to using different technologies to enhance abilities or performance. This resulted in an ethical framework, outlining several options for how to approach the process of translating it into practical ethical guidance.
The framework for human enhancement is built on three case studies that can bring some clarity to what is at stake in a very diverse field: antidepressants, dementia treatment, and genetics. These case studies have shed some light on the kinds of issues that are likely to appear, and on the difficulties involved in the complex task of developing ethical guidelines for human enhancement technologies.
Many of these technologies, their applications, and their enhancement potentials are in their infancy. So perhaps this is the right time to promote ways for research ethics committees to inform researchers about the ethical challenges associated with human enhancement, and to encourage them to reflect on the potential enhancement impacts of their own work in ethics self-assessments.
And perhaps it is time for ethical guidance for human enhancement after all? At least there is now an opportunity for you and others to give input in a public consultation in mid-January 2021! If you want to give input on SIENNA’s proposals for human enhancement, human genomics, artificial intelligence, and robotics, visit www.sienna-project.eu to sign up for news.
Development of new technologies sometimes draws inspiration from nature. How do plants and animals solve the problem? An example is robotics, where one wants to develop better robots based on what neuroscience knows about the brain. How does the brain solve the problem?
Neuroscience, in turn, sees new opportunities to test hypotheses about the brain by simulating them in robots. Perhaps one can simulate how areas of the brain interact in patients with Parkinson’s disease, to understand how their tremor and other difficulties are caused.
Neuroscience-inspired robotics, so-called neurorobotics, is still at an early stage. This makes neurorobotics an excellent area for being ethically and socially more proactive than we have been in previous technological developments. That is, we can already begin to identify possible ethical and social problems surrounding technological development and counteract them before they arise. For example, we cannot close our eyes to gender and equality issues, but must continuously reflect on how our own social and cultural patterns are reflected in the technology we develop. We need to open our eyes to our own blind spots!
You can read more about this ethical shift in technology development in an article in Science and Engineering Ethics (with Manuel Guerrero from CRB as one of the authors). The shift is called Responsible Research and Innovation, and is exemplified in the article by ongoing work in the European research project, Human Brain Project.
It is not only neuroscientists and technology experts who collaborate in this project to develop neurorobotics. Scholars from the humanities and social sciences are also involved in the work. The article itself is an example of this broad collaboration. However, the implementation of responsible research and innovation is also at an early stage. It still needs to find more concrete forms of work that make it possible not only to anticipate ethical and social problems and reflect on them, but also to act and intervene to influence scientific and technological development.
Aicardi, C., Akintoye, S., Fothergill, B.T. et al. Ethical and Social Aspects of Neurorobotics. Sci Eng Ethics 26, 2533–2546 (2020). https://doi.org/10.1007/s11948-020-00248-8
Robots are getting more and more functions in our workplaces. Logistics robots pick up the goods in the warehouse. Military robots disarm the bombs. Caring robots lift patients and surgical robots perform the operations. All this in interaction with human staff, who seem to have got brave new robot colleagues in their workplaces.
The authors approach the question conceptually. First, they propose criteria for what a good colleague is. Then they ask if robots can live up to the requirements. The question of whether a robot can be a good colleague is interesting, because it turns out to be more realistic than we first think. We do not demand as much from a colleague as from a friend or a life partner, the authors argue. Many of our demands on good colleagues have to do with their external behavior in specific situations in the workplace, rather than with how they think, feel and are as human beings in different situations of life. Sometimes, a good colleague is simply someone who gets the job done!
What criteria are mentioned in the article? Here I reproduce, in my own words, the authors’ list, which they do not intend to be exhaustive. A good colleague collaborates well with others to achieve goals. A good colleague can chat and help keep work pleasant. A good colleague does not bully but treats others respectfully. A good colleague provides support as needed. A good colleague learns and develops with others. A good colleague is consistently at work and is reliable. A good colleague adapts to how others are doing and shares work-related values. A good colleague may also do some socializing.
The authors argue that many robots already live up to several of these ideas about what a good colleague is, and that the robots in our workplaces will be even better colleagues in the future. The requirements are, as I said, lower than we first think, because they are not so much about the colleague’s inner human life, but more about reliably displayed behaviors in specific work situations. It is not difficult to imagine the criteria transformed into specifications for the robot developers. Much like in a job advertisement, which lists behaviors that the applicant should be able to exhibit.
The manager of a grocery store in this city advertised for staff. The ad contained strange quotation marks, which revealed how the manager demanded the facade of a human being rather than the interior. This is normal: to be a professional is to be able to play a role. The business concept of the grocery store was, “we care.” This idea was to become a positive “experience” for customers in their encounter with the staff. A greeting, a nod, a smile, a generally pleasant welcome, would give this “experience” that we “care about people.” Therefore, the manager advertised for someone who, in quotation marks, “likes people.”
If staff can be recruited in this way, why should we not want “cooperative,” “pleasant” and “reliable” robot colleagues in the same spirit? I am convinced that similar requirements already occur as specifications when robots are designed for different functions in our workplaces.
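To make the point concrete, here is a minimal, purely hypothetical sketch of what such a specification might look like in code. The requirement names simply paraphrase the criteria listed above; neither the article nor any actual robot developer is the source of this snippet.

```python
# Hypothetical sketch: the "good colleague" criteria rephrased as a behavioural
# specification that a robot developer could test against. Names and structure
# are invented for illustration only.

GOOD_COLLEAGUE_SPEC = [
    "collaborates_to_achieve_shared_goals",
    "keeps_work_pleasant_with_small_talk",
    "treats_coworkers_respectfully",
    "offers_support_when_needed",
    "learns_and_develops_with_others",
    "is_reliably_present_at_work",
    "adapts_to_how_coworkers_are_doing",
    "shares_work_related_values",
]

def meets_spec(displayed_behaviours: set) -> bool:
    """Check only externally displayed behaviours, not any inner life."""
    return all(requirement in displayed_behaviours for requirement in GOOD_COLLEAGUE_SPEC)

# A robot that reliably exhibits all listed behaviours "passes" the spec.
print(meets_spec(set(GOOD_COLLEAGUE_SPEC)))  # True
```

The point of the sketch is that nothing in it refers to what the robot thinks or feels: the specification, like the job advertisement, is entirely about displayed behavior.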
Life is not always deep and heartfelt, as the robotization of working life reflects. The question is what happens when human surfaces become so common that we forget the quotation marks around the mechanically functioning facades. Not everyone is as clear on that point as the “humanitarian” store manager was.
Visionary academic texts are rare – texts that shed light on how research can contribute to the perennial human issues. In an article in the philosophical journal Theoria, however, Kathinka Evers opens up a novel visionary perspective on neuroscience and tragic aspects of the human condition.
For millennia, sensitive thinkers have been concerned about human nature. Undoubtedly, we humans create prosperity and security for ourselves. However, like no other animal, we also have an unfortunate tendency to create misery for ourselves (and other life forms). The 20th century was extreme in both directions. What is the mechanism behind our peculiar, large-scale, self-injurious behavior as a species? Can it be illuminated and changed?
As I read her, Kathinka Evers asks essentially this big human question. She does so based on the current neuroscientific view of the brain, which she argues motivates a new way of understanding and approaching the mechanism of our species’ self-injurious behavior. An essential feature of the neuroscientific view is that the human brain is designed to never be fully completed. Just as we have a unique self-injurious tendency as a species, we are born with uniquely incomplete brains. These brains are under construction for decades and need good care throughout this time. They are not formed passively, but actively, by finding more or less felicitous ways of functioning in the societies to which we expose ourselves.
Since our brains shape our societies, one could say that we build the societies that build us, in a continual cycle. The brain is right in the middle of this sensitive interaction between humans and their societies. With its creative variability, the human brain makes many deterministic claims about genetics and our “innate” nature problematic. Why are we humans the way we are? Partly because we create the societies that create us as we are. For millennia, we have generated ourselves through the societies that we have built, ignorant of the hyper-interactive organ in the middle of the process. It is always behind our eyes.
Kathinka Evers’ point is that our current understanding of the brain as inherently active, dynamic and variable, gives us a new responsibility for human nature. She expresses the situation technically as follows: neuroscientific knowledge gives us a naturalistic responsibility to be epigenetically proactive. If we know that our active and variable brains support a cultural evolution beyond our genetic heritage, then we have a responsibility to influence evolution by adapting our societies to what we know about the strengths and weaknesses of our brains.
The notion of a neuroscientific responsibility to design societies that shape human nature in desired ways may sound like a call for a new form of social engineering. However, Kathinka Evers develops the notion of this responsibility in the context of a conscientious review of similar tendencies in our history, tendencies that have often revolved around genetics. The aim of epigenetic proaction is not to support ideologies that have already decided what a human being should be like. Rather, it is about allowing knowledge about the brain to inspire social change, where we would otherwise ignorantly risk recreating human misery. Of course, such knowledge presupposes collaboration between the natural, social and human sciences, in conjunction with free philosophical inquiry.
The article mentions juvenile violence as an example. In some countries, there is a political will to convict juvenile delinquents as if they were adults and even place them in adult prisons. Today, we know that during puberty, the brain is in a developmental crisis where important neural circuits change dramatically. Young brains in crisis need special care. However, in these cases they risk ending up in just the kind of social environments that we can predict will create more misery.
Knowledge about the brain can thus motivate social changes that reduce the peculiar self-injuring behavior of humanity, a behavior that has worried sensitive thinkers for millennia. Neuroscientific self-awareness gives us a key to the mechanism behind the behavior and a responsibility to use it.
Michele Farisco, Kathinka Evers and Arleen Salles address the issue in the journal Science and Engineering Ethics. For them, ethics is not primarily principles and guidelines. Ethics is rather an ongoing process of thinking: it is continual ethical reflection on AI. Their question is thus not what is required of an ethical framework built around AI. Their question is what is required of in-depth ethical reflection on AI.
The authors emphasize conceptual analysis as essential in all ethical reflection on AI. One of the big difficulties is that we do not know exactly what we are discussing! What is intelligence? What is the difference between artificial and natural intelligence? How should we understand the relationship between intelligence and consciousness? Between intelligence and emotions? Between intelligence and insightfulness?
Ethical problems about AI can be both practical and theoretical, the authors point out. They describe two practical and two theoretical problems to consider. One practical problem is the use of AI in activities that require emotional abilities that AI lacks. Empathy gives humans insight into other humans’ needs. Therefore, AI’s lack of emotional involvement should be given special attention when we consider using AI in, for example, child or elderly care. The second practical problem is the use of AI in activities that require foresight. Intelligence is not just about reacting to input from the environment. A more active, foresighted approach is often needed, going beyond actual experience and seeing less obvious, counterintuitive possibilities. Crying can express pain, joy and much more, but AI cannot easily foresee less obvious possibilities.
Two theoretical problems are also mentioned in the article. The first is whether AI in the future may have morally relevant characteristics such as autonomy, interests and preferences. The second problem is whether AI can affect human self-understanding and create uncertainty and anxiety about human identity. These theoretical problems undoubtedly require careful analysis – do we even know what we are asking? In philosophy we often need to clarify our questions as we go along.
The article emphasizes one demand in particular on ethical analysis of AI. It should carefully consider morally relevant abilities that AI lacks, abilities needed to satisfy important human needs. Can we let a cute kindergarten robot “comfort” children when they scream with joy or when they injure themselves so badly that they need nursing?
Farisco, M., Evers, K. & Salles, A. Towards establishing criteria for the ethical analysis of Artificial Intelligence. Science and Engineering Ethics (2020). https://doi.org/10.1007/s11948-020-00238-w
The STARBIOS2 project has carried out its activities in a context of profound transformations that affect contemporary societies, and now we are all facing the Covid-19 pandemic. Science and society have always coevolved; they are interconnected entities. But their relationship is changing, and has been for some time. This shift from modern to so-called postmodern society affects all social institutions in similar ways, whether their work is in politics, religion, family, state administration, or bioscience.
We can find a wide range of phenomena connected to this trend in the literature, for instance: globalization; weakening of previous social “structures” (rules, models of action, values and beliefs); more capacity and power of individuals to think and act more freely (thanks also to new communication technologies); exposure to risks of different kinds (climate change, weakening of welfare, etc.); great social and cultural diversification; and weakening of traditional boundaries and spheres of life, etc.
In this context, we are witnessing the diminishing authority and prestige of all political, religious, even scientific institutions, together with a decline in people’s trust towards these institutions. One example would be the anti-vaccination movement.
Meanwhile, scientific research is also undergoing profound transformations, experiencing a transition that has been examined in various ways and called various names. At the heart of this transformation is the relationship between research and the society it belongs to. We can observe a set of global trends in science.
Such trends include the increasing relationship between universities, governments and industries; the emergence of approaches aimed at “opening” science to society, such as citizen science; the diffusion of cooperative practices in scientific production; the increasing relevance of transdisciplinarity; the increasing expectation that scientific results have economic, social, and environmental impacts; the increasingly competitive access to public funds for research; the growing importance attached to quantitative evaluation systems based on publications, often with distorting effects and questionable results; and the emergence on the international economic and technological scene of actors such as India, China, Brazil, South Africa and others. These trends produce risks and opportunities for both science and society.
Critical concerns for science include career difficulties for young researchers and women in the scientific sector; the cost of publishing and the difficulties of publishing open access; and the protection of intellectual property rights.
Of course, these trends and issues manifest in different ways and intensities according to the different political, social and cultural contexts they exist in.
After the so-called “biological revolution,” within the context of the “fourth industrial revolution,” and with “converging technologies” such as genetics, robotics, information and digital technologies, neurosciences, nanotechnologies, biotechnologies, and artificial intelligence, the biosciences are at a crossroads in their relationship to society.
In this new context, more and more of the knowledge produced and the technological solutions developed require a deeper understanding of their status, limits, and ethical and social acceptability (take organoids, to name one example). Moreover, food security, the clean energy transition, climate change, and pandemics are all challenges where bioscience can play a crucial role, while new legal, ethical, and social questions arise that need to be dealt with.
These processes have been running for years, albeit in different ways, and national and international decision-makers have been paying attention. Various forms of governance have been developed and implemented over time, to re-establish and harmonize the relationship between scientific and technological research and the rest of society, including more general European strategies and approaches such as Smart Specialization, Open Innovation, Open Science and Responsible Research and Innovation as well as strategies related to specific social aspects of science (such as ethics or gender).
Taking on an approach such as RRI is not simply morally recommendable, but indispensable for attempting a re-alignment between scientific research and the needs of society. Starting from the areas of the life of the scientific communities that are most crucial to science-society relations (The 5+1 RRI keys: Science education, Gender equality, Public engagement, Ethics, Open access, and the cross-cutting sixth key: Governance) and taking the four RRI dimensions into account (anticipation, inclusiveness, responsiveness, and reflexivity) can provide useful guidance for how to activate and drive change in research organisations and research systems.
We elaborate and experiment, in search of the most effective and most relevant solutions. At the same time, there is a need to encourage mainstreaming of the most substantial solutions, to root them more deeply and sustainably in the complex fabric of scientific organisations and networks. This leads us to ask ourselves: in this context, how can we mainstream RRI and its application in the field of bioscience?
Based on what we know, and on experiences from the STARBIOS2 project, RRI and similar approaches need to be promoted and supported by specific policies and contextualised on at least four levels.
Organizational contextualization: where mainstreaming takes place through the promotion of a greater embedding of RRI, or similar approaches, within individual research organizations such as universities, national institutes, private centres, etc.
Disciplinary or sectoral contextualization: where mainstreaming consists of adapting the responsible research and innovation approach to a specific discipline − for example, biotechnology − or to an entire “sector” in a broad sense, such as bioscience.
Geopolitical and cultural contextualization: where mainstreaming aims to identify forms of adaptation, or rather reshaping, of RRI or similar approaches in various geopolitical and cultural contexts, taking into account elements such as the features of national research systems, the economy, territorial dynamics, local philosophy and traditions, etc.
Historical contextualization: where RRI mainstreaming relates to the ability of science to respond to the challenges that history poses from time to time − of which the Covid-19 pandemic is only the latest, serious example − and to prevent them as much as possible.
During the course of the STARBIOS2 project, we have developed a set of guidelines and a sustainable model for RRI implementation in bioscience research institutions. Over the course of four years, six bioscience research institutions in Europe, and three outside Europe, worked together to achieve structural change towards RRI in their own research institutions, with the goal of achieving responsible biosciences. We were looking forward to revealing and discussing our results in April, but with the Covid-19 outbreak, neither that event nor our Cape Town workshop was a possibility. Luckily, we have adapted and will now share our findings online, at our final event on 29 May. We hope to see you there.
As a final remark: while the Covid-19 pandemic challenges our societies and our political and economic systems, we recognise that scientists are also being challenged, by the coronavirus as well as by contextual factors. The virus is testing their ability to play a key role for the public, to share information and to produce relevant knowledge. But when we go back to “normal”, the challenge of changing science-society relations will persist. And we remain convinced that RRI and similar approaches will be a valuable contribution to addressing these challenges, now and in the future.
Written by…
Daniele Mezzana, a social researcher working in the STARBIOS2 project (Structural Transformation to Attain Responsible BIOSciences) as part of the coordination team at University of Rome – Tor Vergata.
This text is based on the Discussion Note for the STARBIOS2 final event on 29 May 2020.
The STARBIOS2 project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 709517. The contents of this text and the view expressed are the sole responsibility of the author and under no circumstances can be regarded as reflecting the position of the European Union.
Our attitude to science is changing. Can we still talk solemnly about it as a unified endeavor, or even about the sciences? It seems more apt to talk about research activities that produce useful and applicable knowledge.
Science has been dethroned, it seems. In the past, we revered it as a free and independent search for truth. We esteemed it as our tribunal of truth, as the last arbiter of truth. Today, we demand that it brings benefits and adapts to society. The change is full of tension, because we still want to use scientific expertise as a higher intellectual authority. Should we bow to the experts, or correct them if they do not deliver the “right knowledge” or the “desirable facts”?
Responsible Research and Innovation (RRI) is an attempt to manage this risky change by adapting science to new social requirements. As the name suggests, RRI is partly an expression of the same basic change in attitude. One could perhaps view RRI as the responsible dethroning of science.
Some mourn the dethroning, others rejoice. Here I just want to link RRI to the changed attitude to science. RRI handles a change that is basically affirmed. The ambiguous attitude to scientific expertise mentioned above shows how important it is that we take responsibility for people’s trust in what is now called research and innovation. For why should we listen to representatives of a sector with such an unholy designation?
RRI has been introduced in European research within the Horizon 2020 programme. Several projects are specifically about implementing and studying RRI. Important aspects of RRI are gender equality, open access publishing, science education, research communication, public engagement and ethics. It is about adapting research and innovation to a society with new hopes for, and demands on, what we proudly called science.
A new book describes experiences of implementing RRI in a number of bioscience organizations around the world. The book is written within the EU-project, STARBIOS2. In collaboration with partners in Europe, Africa and the Americas, this project planned and implemented several RRI initiatives and reflected on the work process. The purpose of STARBIOS2 has been to change organizations durably and structurally. The book aims to help readers formulate their own action plans and initiate structural changes in their organizations.
The cover describes the book as guidelines. However, you will not find formulated guidelines. What you will find, and what might be more helpful, is self-reflection on concrete examples of how to work with RRI action plans. You will find suggestions on how to emphasize responsibility in research and development. Thus, you can read about efforts to support gender equality, improve exchange with the public and with society, support open access publication, and improve ethics. Read and be inspired!
Finally, I would like to mention that the Ethics Blog, as well as our ethics activities here at CRB, could be regarded as examples of RRI. I plan to return later with a post on research communication.
The STARBIOS2 project is organising a virtual final event on 29 May! Have a look at the preliminary programme!
Pharmaceutical companies want to quickly manufacture a vaccine against Covid-19, with human testing and launch on the market as soon as possible. In a debate article, Jessica Nihlén Fahlquist at CRB warns of the risk of losing the larger risk perspective: “Tests on people and a potential premature mass vaccination entail risks. It is easy to forget about similar situations in the past,” she writes.
It may take time for side effects to appear. Unfortunately, it therefore also takes time to develop new safe vaccines. We need to develop a vaccine, but even with new vaccines, caution is needed.
I recently read an article about so-called moral robots, which I found clarifying in many ways. The philosopher John-Stewart Gordon points out pitfalls that non-ethicists – robotics researchers and AI programmers – may fall into when they try to construct moral machines. Simply because they lack ethical expertise.
The first pitfall is rookie mistakes. One might naively identify ethics with certain famous bioethical principles, as if ethics could not be anything but so-called “principlism.” Or one might believe that computer systems, through automated analysis of individual cases, can “learn” ethical principles and “become moral,” as if morality could be discovered experientially or empirically.
The second challenge has to do with the fact that the ethics experts themselves disagree about the “right” moral theory. There are several competing ethical theories (utilitarianism, deontology, virtue ethics and more). What moral template should programmers use when getting computers to solve moral problems and dilemmas that arise in different activities? (Consider self-driving cars in difficult traffic situations.)
The first pitfall can be addressed with more knowledge of ethics. How do we handle the second challenge? Should we allow programmers to choose moral theory as it suits them? Should we allow both utilitarian and deontological robot cars on our streets?
John-Stewart Gordon’s suggestion is that so-called machine ethics should focus on the similarities between different moral theories regarding what one should not do. Robots should be provided with a binding list of things that must be avoided as immoral. With this restriction, the robots then have leeway to use and balance the plurality of moral theories to solve moral problems in a variety of ways.
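To illustrate the shape of this proposal, here is a minimal, hypothetical sketch in Python: a binding list of prohibited act types filters the candidate actions first, and only then are the remaining options weighed by balancing several moral theories. The action names, scoring functions and weighting are invented for illustration and are not taken from Gordon’s article.

```python
# Hypothetical sketch of "constraints first, balancing second" machine ethics.
# Acts on the prohibited list can never be chosen, no matter what any theory says.
PROHIBITED = {"deceive_user", "harm_bystander", "discriminate"}

def utilitarian_score(action):
    # Placeholder: a real system would estimate expected well-being.
    return action.get("expected_benefit", 0)

def deontological_score(action):
    # Placeholder: a real system would check duties such as honesty and consent.
    return action.get("duties_respected", 0)

def choose_action(candidate_actions):
    """Filter out prohibited acts, then balance the remaining theories."""
    permitted = [a for a in candidate_actions if a["type"] not in PROHIBITED]
    if not permitted:
        return None  # No morally permissible option: defer to a human.
    # How the theories are weighted is exactly where the leeway lies.
    return max(permitted, key=lambda a: utilitarian_score(a) + deontological_score(a))

options = [
    {"type": "deceive_user", "expected_benefit": 5, "duties_respected": 0},
    {"type": "warn_user", "expected_benefit": 3, "duties_respected": 2},
]
print(choose_action(options))  # -> the "warn_user" option, since deception is ruled out
```

The sketch also makes visible where the philosophical work remains: someone still has to decide what goes on the prohibited list and how the competing theories are weighted.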
In conclusion, researchers and engineers in robotics and AI should consult the ethics experts so that they can avoid the rookie mistakes and understand the methodological problems that arise when not even the experts in the field can agree about the right moral theory.
All this seems both wise and clarifying in many ways. At the same time, I feel genuinely confused about the very idea of “moral machines” (although the article is not intended to discuss the idea, but focuses on ethical challenges for engineers). What does the idea mean? Not that I doubt that we can design artificial intelligence according to ethical requirements. We may not want robot cars to avoid collisions in city traffic by turning onto sidewalks where many people walk. In that sense, there may be ethical software, much like there are ethical funds. We could talk about moral and immoral robot cars as straightforwardly as we talk about ethical and unethical funds.
Still, as I mentioned, I feel uncertain. Why? I started by writing about “so-called” moral robots. I did so because I am not comfortable talking about moral machines, although I am open to suggestions about what it could mean. I think that what confuses me is that moral machines are largely mentioned without qualifying expressions, as if everyone ought to know what it should mean. Ethical experts disagree on the “right” moral theory. However, they seem to agree that moral theory determines what a moral decision is, much like grammar determines what a grammatical sentence is. With that faith in moral theory, one need not contemplate what a moral machine might be. It is simply a machine that makes decisions according to accepted moral theory. However, do machines make decisions in the same sense as humans do?
Maybe it is about emphasis. We talk about ethical funds without feeling dizzy because a stock fund is said to be ethical (“Can they be humorous too?”). There is no mythological emphasis in the talk of ethical funds. In the same way, we can talk about ethical robot cars without feeling dizzy as if we faced something supernatural. However, in the philosophical discussion of machine ethics, moral machines are sometimes mentioned in a mythological way, it seems to me. As if a centaur, a machine-human, will soon see the light of day. At the same time, we are not supposed to feel dizzy concerning these brave new centaurs, since the experts can spell out exactly what they are talking about. Having all the accepted templates in their hands, they do not need any qualifying expressions!
I suspect that also ethical expertise can be a philosophical pitfall when we intellectually approach so-called moral machines. The expert attitude can silence the confusing questions that we all need time to contemplate when honest doubts rebel against the claim to know.
During the last phase of the Human Brain Project, the activities on this blog received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. HBP SGA3 - Human Brain Project Specific Grant Agreement 3 (945539). The views and opinions expressed on this blog are the sole responsibility of the author(s) and do not necessarily reflect the views of the European Commission.