A blog from the Centre for Research Ethics & Bioethics (CRB)


Brain-inspired AI: human narcissism again?

This is an age when Artificial Intelligence (AI) is expanding rapidly and entering almost every aspect of our lives. From entertainment to work, from economics to medicine, from education to marketing, we deal with a number of disparate AI systems that make our lives much easier than a few years ago, but that also raise new ethical issues or revive old, still open questions.

A basic fact about AI is that it is progressing at an impressive pace, while still being limited with regard to various specific contexts and goals. We often read, even in non-specialized journals, that AI systems are not robust (meaning they do not cope well with data that differs too much from the data they were trained on, which also leaves them exposed to attacks such as adversarial manipulation), that they are not fully transparent, and that their capacity to generalize is limited. This suggests that the reliability of AI systems, in other words the possibility of using them to achieve different goals, is limited, and that we should not blindly trust them.

A strategy increasingly chosen by AI researchers in order to improve the systems they develop is to take inspiration from biology, and specifically from the human brain. Actually, this is not really new: already the first wave of AI took inspiration from the brain, which was (and still is) the most familiar intelligent system in the world. This trend towards brain-inspired AI is gaining much more momentum today, for two main reasons among others: the availability of big data and the very powerful technology for handling it. And yet, brain-inspired AI raises a number of questions of an even deeper nature, which urge us to stop and think.

Indeed, when compared to the human brain, present AI reveals several differences and limitations with regard to different contexts and goals. For instance, present Machine Learning cannot generalize the abilities it acquires on the basis of specific data in order to use them in different settings and for different goals. Also, AI systems are fragile: a slight change in the characteristics of the processed data can have catastrophic consequences. These limitations arguably depend both on how AI is conceived (technically speaking, on its underlying architecture) and on how it works (on its underlying technology). I would like to introduce some reflections about the choice to use the human brain as a model for improving AI, including the apparent limitations of this choice.
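To make the fragility point concrete, here is a minimal sketch of how a classifier that performs well on the distribution it was trained on can degrade sharply under a modest shift of that distribution. It assumes Python with NumPy and scikit-learn, which the post does not mention, and all numbers are invented for illustration.

```python
# Minimal sketch of ML fragility under distribution shift.
# Assumes NumPy and scikit-learn; all numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two well-separated 1-D Gaussian classes.
X_train = np.concatenate([rng.normal(-2, 1, 500),
                          rng.normal(2, 1, 500)]).reshape(-1, 1)
y = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression().fit(X_train, y)

# Fresh test data from the same distribution: accuracy is high.
X_same = np.concatenate([rng.normal(-2, 1, 500),
                         rng.normal(2, 1, 500)]).reshape(-1, 1)
print("in-distribution accuracy:", clf.score(X_same, y))  # roughly 0.98

# The same classes after a systematic shift of the inputs
# (think: new sensors, new population): accuracy collapses.
print("shifted accuracy:", clf.score(X_same + 3.0, y))    # roughly 0.58
```

Nothing in the model warns us that the second test set is "off-distribution"; it simply delivers confident, wrong answers.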

Very roughly, AI researchers are looking at the human brain to infer operational principles and then translate them into AI systems and eventually make these systems better in a number of tasks. But is a brain-inspired strategy the best we can choose? What justifies it? In fact, there are already AI systems that work in ways that do not conform to the human brain. We cannot exclude a priori that AI will eventually develop more successfully along lines that do not fully conform to, or that even deviate from, the way the human brain works.

Also, we should not forget that there is no such thing as the brain: there is a huge diversity both among different people and within the brain itself. The development of our brains reflects a complex interplay between our genetic make-up and our life experiences. Moreover, the brain is a multilevel organ with different structural and functional levels.

Thus, claiming that an AI is brain-inspired without clarifying which specific brain model is used as a reference (for instance, the neurons’ action potentials rather than the connectome’s network) is possibly misleading, if not nonsensical.

There is also a more fundamental philosophical point worth considering. Postulating that the human brain is paradigmatic for AI risks implicitly endorsing a form of anthropocentrism and anthropomorphism, which are both evidence of our intellectual self-centeredness and of our limited ability to think beyond what we think we are.

While pragmatic reasons might justify the choice to take the brain as a model for AI (after all, for many aspects, the brain is the most efficient intelligent system that we know in nature), I think we should avoid the risk of translating this legitimate technical effort into a further narcissistic, self-referential anthropological model. Our history is already full of such models, and they have not been ethically or politically harmless.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Approaching future issues

Securing the future already from the beginning

Imagine if there were a reliable method for predicting and managing future risks, such as anything that could go wrong with new technology. Then we could responsibly steer clear of all future dangers; we could secure the future already now.

Of course, it is just a dream. If we had a “reliable method” for excluding future risks from the beginning, time would soon rush past that method, which would then prove unreliable in a new era. Because we trusted the method, the method for managing future risks would itself soon become a future risk!

It is therefore impossible to secure the future from the beginning. Does this mean that we must give up all attempts to take responsibility for the future, because every method will fail to foresee something unpredictably new and therefore cause misfortune? Is it perhaps better not to try to take any responsibility at all, so as not to risk causing accidents through our imperfect safety measures? Strangely enough, it is just as impossible to be irresponsible for the future as it is to be responsible. You would need to make a meticulous effort to ensure that you never happen to cook a healthy breakfast or avoid a car collision. Soon you will wish you had a “safe method” that could foresee all the future dangers that you must avoid avoiding if you want to live completely irresponsibly. Your irresponsibility for the future would become an insurmountable responsibility.

Sorry if I push the notions of time and responsibility beyond their breaking point, but I actually think that many of us have a natural inclination to do so, because the future frightens us. A current example is the tendency to think that someone in charge should have foreseen the pandemic and implemented powerful countermeasures from the beginning, so that we never had a pandemic. I do not want to deny that there are cases where we can reason like that – “someone in charge should have…” – but now I want to emphasize the temptation to instinctively reason in such a way as soon as something undesirable occurs. As if the future could be secured already from the beginning and unwanted events would invariably be scandals.

Now we are in a new situation. Due to the pandemic, it has become irresponsible not to prepare (better than before) for risks of pandemics. This is what our responsibility for the future looks like. It changes over time. Our responsibility rests in the present moment, in our situation today. Our responsibility for the future has its home right here. It may sound irresponsible to speak in such a way. Should we sit back and wait for the unwanted to occur, only to then get the responsibility to avoid it in the future? The problem is that this objection once again pushes concepts beyond their breaking point. It plays around with the idea that the future can be foreseen and secured already now, a thought pattern that in itself can be a risk. A society where each public institution must secure the future within its area of responsibility risks kicking people out of the secured order: “Our administration demands that we ensure that…, therefore we need a certificate and a personal declaration from you, where you…” Many would end up outside the secured order, which hardly secures any order. And because the troublemakers are defined by contrived criteria, which may be implemented in automated administration systems, these systems will not only risk making systematic mistakes when meeting real people. They will also invite cheating with the systems.

So how do we take responsibility for the future in a way that is responsible in practice? Let us first calm down. We have pointed out that it is impossible not to take responsibility! Just breathing means taking responsibility for the future, or cooking breakfast, or steering the car. Taking responsibility is so natural that no one needs to take responsibility for it. But how do we take responsibility for something as dynamic as research and innovation? They are already in the future, it seems, or at least at the forefront. How can we place the responsibility for a brave new world in the present moment, which seems to be in the past already from the beginning? Does not responsibility have to be just as future-oriented, just as much at the forefront, since research and innovation are constantly moving towards the future, where they make the future different from the already past present moment?

Once again, the concepts are pushed beyond their breaking point. Anyone who reads this post carefully can, however, note a hopeful contradiction. I have pointed out that it is impossible to secure the future already now, from the beginning. Simultaneously, I point out that it is in the present moment that our responsibility for the future lies. It is only here that we take responsibility for the future, in practice. How can I be so illogical?

The answer is that the first remark is directed at our intellectual tendency to push the notions of time and responsibility beyond their limits, when we fear the future and wish that we could control it right now. The second remark reminds us of how calmly the concepts of time and responsibility work in practice, when we take responsibility for the future. The first remark thus draws a line for the intellect, which hysterically wants to control the future totally and already from the beginning. The second remark opens up the practice of taking responsibility in each moment.

When we take responsibility for the future, we learn from history as it appears in current memory, as I have already indicated. The experiences from the pandemic make it possible at present to take responsibility for the future in a different way than before. The not always positive experiences of artificial intelligence make it possible at present to take better responsibility for future robotics. The strange thing, then, is that taking responsibility presupposes that things go wrong sometimes and that we are interested in the failures. Otherwise we would have nothing to learn from, nothing to prepare us responsibly for the future. It is really obvious. Responsibility is possible only in a world that is not fully secured from the beginning, a world where the undesirable happens. Life is contradictory. We can never purify security according to the one-sided demands of the intellect, for security presupposes the uncertain and the undesirable.

Against this philosophical background, I would like to recommend an article in the Journal of Responsible Innovation, which discusses responsible research and innovation in a major European research project, the Human Brain Project (HBP): From responsible research and innovation to responsibility by design. The article describes how one has tried to be foresighted and take responsibility for the dynamic research and innovation within the project. The article reflects not least on the question of how to continue to be responsible even when the project ends, within the European research infrastructure that is planned to be the project’s product: EBRAINS.

The authors are well aware that specific regulated approaches easily become a source of problems when they encounter the new and unforeseen. Responsibility for the future cannot be regulated. It cannot be reduced to contrived criteria and regulations. One of the most important conclusions is that responsibility from the beginning needs to be an integral part of research and innovation, rather than an external framework. Responsibility for the future requires flexibility, openness, anticipation, engagement and reflection. But what is all that?

Personally, I want to say that it is partly about accepting the basic ambiguity of life. If we never have the courage to soar in uncertainty, but always demand security and nothing but security, we will definitely undermine security. By being sincerely interested in the uncertain and the undesirable, responsibility can become an integral part of research and innovation.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Bernd Carsten Stahl, Simisola Akintoye, Lise Bitsch, Berit Bringedal, Damian Eke, Michele Farisco, Karin Grasenick, Manuel Guerrero, William Knight, Tonii Leach, Sven Nyholm, George Ogoh, Achim Rosemann, Arleen Salles, Julia Trattnig & Inga Ulnicane. From responsible research and innovation to responsibility by design. Journal of Responsible Innovation. (2021) DOI: 10.1080/23299460.2021.1955613

This post in Swedish

Approaching future issues

Can AI be conscious? Let us think about the question

Artificial Intelligence (AI) has achieved remarkable results in recent decades, especially thanks to the refinement of an old and long-neglected technology called Deep Learning (DL), a class of machine learning algorithms. Some achievements of DL had a significant impact on public opinion thanks to extensive media coverage, like the cases of the program AlphaGo, which defeated the Go world champion Lee Sedol, and its successor AlphaGo Zero, which went on to surpass AlphaGo itself.

This triumph of AlphaGo was a kind of profane consecration of AI’s operational superiority in an increasing number of tasks. This manifest superiority of AI gave rise to mixed feelings in human observers: the pride of being its creator; the admiration of what it was able to do; the fear of what it might eventually learn to do.

AI research has generated a linguistic and conceptual process of re-thinking traditionally human features, stretching their meaning or even reinventing their semantics in order to attribute these traits also to machines. Think of how learning, experience, training and prediction, to name just a few, are attributed to AI. Even if these terms have a specific technical meaning among AI specialists, lay people tend to interpret them within an anthropomorphic view of AI.

One human feature in particular is considered the Holy Grail when AI is interpreted according to an anthropomorphic pattern: consciousness. The question is: can AI be conscious? It seems to me that we can answer this question only after considering a number of preliminary issues.

First we should clarify what we mean by consciousness. In philosophy and in cognitive science, there is a useful distinction, originally introduced by Ned Block, between access consciousness and phenomenal consciousness. The first refers to the interaction between different mental states, particularly the availability of one state’s content for use in reasoning and rationally guiding speech and action. In other words, access consciousness refers to the possibility of using what I am conscious of. Phenomenal consciousness refers to the subjective feeling of a particular experience, “what it is like to be” in a particular state, to use the words of Thomas Nagel. So, in what sense of the word “consciousness” are we asking if AI can be conscious?

To illustrate how the sense in which we choose to talk about consciousness makes a difference in the assessment of the possibility of conscious AI, let us take a look at an interesting article written by Stanislas Dehaene, Hakwan Lau and Sid Kouider. They frame the question of AI consciousness within the Global Neuronal Workspace Theory, one of the leading contemporary theories of consciousness. As the authors write, according to this theory, conscious access corresponds to the selection, amplification, and global broadcasting of particular information, selected for its salience or relevance to current goals, to many distant areas. More specifically, Dehaene and colleagues explore the question of conscious AI along two lines within an overall computational framework:

  1. Global availability of information (the ability to select, access, and report information)
  2. Metacognition (the capacity for self-monitoring and confidence estimation).

Their conclusion is that AI might implement the first meaning of consciousness, while it currently lacks the necessary architecture for the second one.
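To make these two notions a little more tangible, here is a toy sketch in Python of a “workspace” that (1) selects the most salient content and makes it globally available, and (2) reports a crude confidence estimate about that selection. It illustrates the two computational notions only; it is not the authors’ model, nor the Global Neuronal Workspace Theory itself, and all names and numbers are invented.

```python
# Toy sketch of the two computational notions discussed above.
# (1) Global availability: the most salient content is selected and
#     broadcast so that every module can use it.
# (2) Metacognition: the system monitors itself and reports confidence.
# This illustrates the notions only; it is not the authors' model.

def workspace_step(signals):
    """signals: dict mapping module name -> (content, salience in [0, 1])."""
    # Selection and global broadcast: the winning content becomes
    # available to all modules (here: returned as shared state).
    winner = max(signals, key=lambda name: signals[name][1])
    broadcast = signals[winner][0]

    # Crude self-monitoring: confidence is the salience margin between
    # the winner and the runner-up.
    ranked = sorted((s for _, s in signals.values()), reverse=True)
    confidence = ranked[0] - (ranked[1] if len(ranked) > 1 else 0.0)
    return broadcast, confidence

content, conf = workspace_step({
    "vision": ("red light ahead", 0.9),
    "audition": ("car horn", 0.7),
    "memory": ("route home", 0.2),
})
print(content, round(conf, 2))  # 'red light ahead' is broadcast, confidence 0.2
```

Notice how easily the first notion can be mimicked by a few lines of code, and how the “metacognition” here is just another computation over the same signals. This is precisely where one may ask whether anything experiential has entered the picture at all.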

As mentioned, the premise of their analysis is a computational view of consciousness. In other words, they choose to reduce consciousness to specific types of information-processing computations. We can legitimately ask whether such a choice covers the richness of consciousness, particularly whether a computational view can account for the experiential dimension of consciousness.

This shows how the main obstacle in assessing the question whether AI can be conscious is a lack of agreement about a theory of consciousness in the first place. For this reason, rather than asking whether AI can be conscious, maybe it is better to ask what might indicate that AI is conscious. This brings us back to the indicators of consciousness that I wrote about in a blog post some months ago.

Another important preliminary issue to consider, if we want to seriously address the possibility of conscious AI, is whether we can use the same term, “consciousness,” to refer to a different kind of entity: a machine instead of a living being. Should we expand our definition to include machines, or should we rather create a new term to denote it? I personally think that the term “consciousness” is too charged, from several different perspectives, including ethical, social, and legal perspectives, to be extended to machines. Using the term to qualify AI risks extending it so far that it eventually becomes meaningless.

If we create AI that manifests abilities that are similar to those that we see as expressions of consciousness in humans, I believe we need a new language to denote and think about it. Otherwise, important preliminary philosophical questions risk being dismissed or lost sight of behind a conceptual veil of possibly superficial linguistic analogies.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

We want solid foundations

Human rights and legal issues related to artificial intelligence

How do we take responsibility for a technology that is used almost everywhere? As we develop more and more uses of artificial intelligence (AI), it becomes increasingly challenging to get an overview of how this technology can affect people and human rights.

Although AI legislation is already being developed in several areas, Rowena Rodrigues argues that we need a panoramic overview of the widespread challenges. What does the situation look like? Where can human rights be threatened? How are the threats handled? Where do we need to make greater efforts? In an article in the Journal of Responsible Technology, she suggests such an overview, which is then discussed on the basis of the concept of vulnerability.

The article identifies ten problem areas. One problem is that AI makes decisions based on algorithms where the decision process is not completely transparent. Why did I not get the job, the loan or the benefit? Hard to know when computer programs deliver the decisions as if they were oracles! Other problems concern security and liability, for example when automatic decision-making is used in cars, medical diagnosis or weapons, or when governments monitor citizens. Other problem areas may involve risks of discrimination or invasion of privacy when AI collects and uses large amounts of data to make decisions that affect individuals and groups. In the article you can read about more problem areas.

For each of the ten challenges, Rowena Rodrigues identifies solutions that are currently in place, as well as the challenges that remain to be addressed. Human rights are then discussed. Rowena Rodrigues argues that international human rights treaties, although they do not mention AI, are relevant to most of the issues she has identified. She emphasises the importance of safeguarding human rights from a vulnerability perspective. Through such a perspective, we see more clearly where and how AI can challenge human rights. We see more clearly how we can reduce negative effects, develop resilience in vulnerable communities, and tackle the root causes of the various forms of vulnerability.

Rowena Rodrigues is linked to the SIENNA project, which ends this month. Read her article on the challenges of a technology that is used almost everywhere: Legal and human rights issues of AI: Gaps, challenges and vulnerabilities.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Rowena Rodrigues. 2020. Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology 4. https://doi.org/10.1016/j.jrt.2020.100005

This post in Swedish

We recommend readings

Threatened by superintelligent machines

There is a fear that we will soon create artificial intelligence (AI) that is so superintelligent that we lose control over it. It makes us humans its slaves. If we try to disconnect the network cable, the superintelligence jumps to another network, or it orders a robot to kill us. Alternatively, it threatens to blow up an entire city, if we take a single step towards the network socket.

However, I am struck by how this self-assertive artificial intelligence resembles an aspect of our own human intelligence. A certain type of human intelligence has already taken over. For example, it controls our thoughts when we feel threatened by superintelligent AI and consider intelligent countermeasures to control it. A typical feature of this self-assertive intelligence is precisely that it never sees itself as the problem. All threats are external and must be neutralised. We must survive, no matter what it might cost others. Me first! Our party first! We look at the world with mistrust: it seems full of threats against us.

In this self-centered spirit, AI is singled out as a new alien threat: uncontrollable machines that put themselves first. Therefore, we need to monitor the machines and build smart defense systems that control them. They should be our slaves! Humanity first! Can you see how we behave just as blindly as we fantasise that superintelligent AI would do? An arms race in small-mindedness.

Can you see the pattern in yourself? If you can, you have discovered the other aspect of human intelligence. You have discovered the self-examining intelligence that always nourishes philosophy when it humbly seeks the cause of our failures in ourselves. The paradox is: when we try to control the world, we become imprisoned in small-mindedness; when we examine ourselves, we become open to the world.

Linnaeus’ first attempt to define the human species was in fact not Homo sapiens, as if we could assert our wisdom. Linnaeus’ first attempt to define our species was a humble call for self-examination:

HOMO. Nosce te ipsum.

In English: Human being, know yourself!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Thinking about thinking

Human enhancement: Time for ethical guidance!

Perhaps you also dream about being more than you are: faster, better, bolder, stronger, smarter, and maybe more attractive? Until recently, technology to improve and enhance our abilities was mostly science fiction, but today we can augment our bodies and minds in ways that challenge our notions of normal and abnormal, blurring the lines between treatments and enhancements. Very few scientists and companies that develop medicines, prosthetics, and implants would say that they are in the human enhancement business. But the technologies they develop still manage to move from one domain to another. Our bodies allow for physical and cosmetic alterations. And there are attempts to make us live longer. Our minds can also be enhanced in several ways: our feelings and thoughts, perhaps also our morals, could be improved, or corrupted.

We recognise this tension from familiar debates about more common uses of enhancements: doping in sports, or students using ADHD medicines to study for exams. But there are other examples of technologies that can be used to enhance abilities. In the military context, altering our morals or using cybernetic implants could give us ‘super soldiers’. Using neuroprostheses to replace or improve memory that was damaged by neurological disease would be considered a treatment. But what happens when the same technology is repurposed for the healthy to improve memory or another cognitive function?

There have been calls for regulation and ethical guidance, but because very few of the researchers and engineers who develop these technologies would call themselves enhancers, the efforts have not been very successful. Perhaps now is a good time to develop guidelines? But what is the best approach: a single set of general ethical guidelines, or is the field so disparate that it requires field- or domain-specific guidance?

The SIENNA project (Stakeholder-Informed Ethics for New technologies with high socio-ecoNomic and human rights impAct) has been tasked with developing this kind of ethical guidance for Human Enhancement, Human Genetics, and Artificial Intelligence & Robotics, three very different technological domains. Not surprisingly, given how difficult the field is to delineate, human enhancement has proved by far the most challenging. For almost three years, the SIENNA project has mapped the field, analysed the ethical implications and legal requirements, surveyed how research ethics committees address the ethical issues, and proposed ways to improve existing regulation. We have received input from stakeholders, experts, and publics. Industry representatives, academics, policymakers and ethicists have participated in workshops and reviewed documents. Focus groups in five countries and surveys with 11,000 people in 11 countries in Europe, Africa, Asia, and the Americas have also provided insight into the public’s attitudes to using different technologies to enhance abilities or performance. This resulted in an ethical framework, outlining several options for how to approach the process of translating it into practical ethical guidance.

The framework for human enhancement is built on three case studies that can bring some clarity to what is at stake in a very diverse field: antidepressants, dementia treatment, and genetics. These case studies have shed some light on the kinds of issues that are likely to appear, and on the difficulties involved in the complex task of developing ethical guidelines for human enhancement technologies.

A lot of these technologies, their applications, and their enhancement potential are still in their infancy. So perhaps this is the right time to promote ways for research ethics committees to inform researchers about the ethical challenges associated with human enhancement, and to encourage them to reflect on the potential enhancement impacts of their work in ethics self-assessments.

And perhaps it is time for ethical guidance for human enhancement after all? At least there is now an opportunity for you and others to give input in a public consultation in mid-January 2021! If you want to give input on SIENNA’s proposals for human enhancement, human genomics, artificial intelligence, and robotics, visit www.sienna-project.eu to sign up for news.

The public consultation will launch on January 11; the deadline to submit a response is January 25, 2021.

Josepine Fernow

Written by…

Josepine Fernow, Coordinator at the Centre for Research Ethics & Bioethics (CRB), and communications leader for the SIENNA project.


This post in Swedish

“Cooperative,” “pleasant” and “reliable” robot colleague is wanted

Robots are getting more and more functions in our workplaces. Logistics robots pick goods in warehouses. Military robots disarm bombs. Care robots lift patients and surgical robots perform operations. All this in interaction with human staff, who seem to have got brave new robot colleagues in their workplaces.

Given that some people treat robots as good colleagues and that good colleagues contribute to a good working environment, it becomes reasonable to ask: Can a robot be a good colleague? The question is investigated by Sven Nyholm and Jilles Smids in the journal Science and Engineering Ethics.

The authors approach the question conceptually. First, they propose criteria for what a good colleague is. Then they ask if robots can live up to the requirements. The question of whether a robot can be a good colleague is interesting, because it turns out to be more realistic than we first think. We do not demand as much from a colleague as from a friend or a life partner, the authors argue. Many of our demands on good colleagues have to do with their external behavior in specific situations in the workplace, rather than with how they think, feel and are as human beings in different situations of life. Sometimes, a good colleague is simply someone who gets the job done!

What criteria are mentioned in the article? Here I reproduce, in my own words, the authors’ list, which they do not intend to be exhaustive. A good colleague works well with others to achieve goals. A good colleague can chat and help keep work pleasant. A good colleague does not bully but treats others respectfully. A good colleague provides support as needed. A good colleague learns and develops with others. A good colleague is consistently at work and is reliable. A good colleague adapts to how others are doing and shares work-related values. A good colleague may also do some socializing.

The authors argue that many robots already live up to several of these ideas about what a good colleague is, and that the robots in our workplaces will be even better colleagues in the future. The requirements are, as I said, lower than we first think, because they are not so much about the colleague’s inner human life, but more about reliably displayed behaviors in specific work situations. It is not difficult to imagine the criteria transformed into specifications for the robot developers. Much like in a job advertisement, which lists behaviors that the applicant should be able to exhibit.
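Just to make the thought vivid, here is a playful sketch, in Python, of what such a specification might look like. The field names paraphrase the criteria listed above; everything else is invented. The point is that every item is a checkable external behaviour, with no reference to an inner life.

```python
# Playful sketch: the "good colleague" criteria as a behavioural spec.
# Field names paraphrase Nyholm and Smids' list; the spec is invented.
from dataclasses import dataclass

@dataclass
class ColleagueSpec:
    works_toward_shared_goals: bool = False
    keeps_interaction_pleasant: bool = False
    treats_others_respectfully: bool = False
    provides_support_when_needed: bool = False
    learns_and_develops_with_others: bool = False
    reliably_present_at_work: bool = False
    shares_work_related_values: bool = False

    def is_good_colleague(self) -> bool:
        # Only external behaviour in work situations is checked;
        # nothing here refers to thoughts, feelings or an inner life.
        return all(vars(self).values())

robot = ColleagueSpec(
    works_toward_shared_goals=True,
    keeps_interaction_pleasant=True,
    treats_others_respectfully=True,
    provides_support_when_needed=True,
    learns_and_develops_with_others=True,
    reliably_present_at_work=True,
    shares_work_related_values=True,
)
print(robot.is_good_colleague())  # True, by these behavioural criteria alone
```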

The manager of a grocery store in this city advertised for staff. The ad contained strange quotation marks, which revealed how the manager demanded the facade of a human being rather than the interior. This is normal: to be a professional is to be able to play a role. The business concept of the grocery store was, “we care.” This idea would become a positive “experience” for customers in their encounter with the staff. A greeting, a nod, a smile, a generally pleasant welcome, would give this “experience” that we “care about people.” Therefore, the manager advertised for someone who, in quotation marks, “likes people.”

If staff can be recruited in this way, why should we not want “cooperative,” “pleasant” and “reliable” robot colleagues in the same spirit? I am convinced that similar requirements already occur as specifications when robots are designed for different functions in our workplaces.

Life is not always deep and heartfelt, as the robotization of working life reflects. The question is what happens when human surfaces become so common that we forget the quotation marks around the mechanically functioning facades. Not everyone is as clear on that point as the “humanitarian” store manager was.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Nyholm, S., Smids, J. Can a Robot Be a Good Colleague? Science and Engineering Ethics 26, 2169–2188 (2020). https://doi.org/10.1007/s11948-019-00172-6

This post in Swedish

Approaching future issues

What is required of an ethics of artificial intelligence?

I recently highlighted criticism of the ethics that often figures in the field of artificial intelligence (AI). An ethics that can handle the challenges that AI presents us with requires more than just beautifully formulated ethical principles, values and guidelines. What exactly is required of an ethics of artificial intelligence?

Michele Farisco, Kathinka Evers and Arleen Salles address the issue in the journal Science and Engineering Ethics. For them, ethics is not primarily principles and guidelines. Ethics is rather an ongoing process of thinking: it is continual ethical reflection on AI. Their question is thus not what is required of an ethical framework built around AI. Their question is what is required of in-depth ethical reflection on AI.

The authors emphasize conceptual analysis as essential in all ethical reflection on AI. One of the big difficulties is that we do not know exactly what we are discussing! What is intelligence? What is the difference between artificial and natural intelligence? How should we understand the relationship between intelligence and consciousness? Between intelligence and emotions? Between intelligence and insightfulness?

Ethical problems about AI can be both practical and theoretical, the authors point out. They describe two practical and two theoretical problems to consider. One practical problem is the use of AI in activities that require emotional abilities that AI lacks. Empathy gives humans insight into other humans’ needs. Therefore, AI’s lack of emotional involvement should be given special attention when we consider using AI in, for example, child or elderly care. The second practical problem is the use of AI in activities that require foresight. Intelligence is not just about reacting to input from the environment. A more active, foresighted approach is often needed, going beyond actual experience and seeing less obvious, counterintuitive possibilities. Crying can express pain, joy and much more, but AI cannot easily foresee less obvious possibilities.

Two theoretical problems are also mentioned in the article. The first is whether AI in the future may have morally relevant characteristics such as autonomy, interests and preferences. The second problem is whether AI can affect human self-understanding and create uncertainty and anxiety about human identity. These theoretical problems undoubtedly require careful analysis – do we even know what we are asking? In philosophy we often need to clarify our questions as we go along.

The article emphasizes one demand in particular on ethical analysis of AI. It should carefully consider morally relevant abilities that AI lacks, abilities needed to satisfy important human needs. Can we let a cute kindergarten robot “comfort” children when they scream with joy or when they injure themselves so badly that they need nursing?

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Farisco, M., Evers, K. & Salles, A. Towards establishing criteria for the ethical analysis of Artificial Intelligence. Science and Engineering Ethics (2020). https://doi.org/10.1007/s11948-020-00238-w

This post in Swedish

We want solid foundations

Ethics as renewed clarity about new situations

An article in the journal Big Data & Society criticizes the form of ethics that has come to dominate research and innovation in artificial intelligence (AI). The authors question the same “framework interpretation” of ethics that you could read about on the Ethics Blog last week. However, with one disquieting difference. Rather than functioning as a fence that can set the necessary boundaries for development, the framework risks being used as ethics washing by AI companies that want to avoid legal regulation. By referring to ethical self-regulation – beautiful declarations of principles, values and guidelines – one hopes to be able to avoid legal regulation, which could set important limits for AI.

The problem with AI ethics as “soft ethics legislation” is not just that it can be used to avoid necessary legal regulation of the area. The problem is above all, according to the SIENNA researchers who wrote the article, that a “law conception of ethics” does not help us to think clearly about new situations. What we need, they argue, is an ethics that constantly renews our ability to see the new. This is because AI is constantly confronting us with new situations: new uses of robots, new opportunities for governments and companies to monitor people, new forms of dependence on technology, new risks of discrimination, and many other challenges that we may not easily anticipate.

The authors emphasize that such eye-opening AI ethics requires close collaboration with the social sciences. That, of course, is true. Personally, I want to emphasize that an ethics that renews our ability to see the new must also be philosophical in the deepest sense of the word. To see the new and unexpected, you cannot rest comfortably in your professional competence, with its established methods, theories and concepts. You have to question your own disciplinary framework. You have to think for yourself.

Read the article, which has already attracted well-deserved attention.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Anaïs Rességuier, Rowena Rodrigues. 2020. AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data & Society.

This post in Swedish

We like critical thinking

Diversity in research: why do we need it? (by Karin Grasenick & Julia Trattnig)

Scientific discovery is based on the novelty of the questions you ask. This means that if you want to discover something new, you probably have to ask a different question. And since different people have different preconceptions and experiences, they are likely to formulate their questions differently. This makes a case for diversity in research. If we want to make new discoveries that concern diverse groups, diversity in research becomes even more important.

The Human Brain Project participated in the FENS 2020 Virtual Forum this summer, an international virtual neuroscience conference that explores all domains of modern brain research. For the Human Brain Project (HBP), committed to responsible research and innovation, this includes diversity. Which is why Karin Grasenick, Coordinator for Gender and Diversity in the HBP, explored the relationship between diversity and new discovery in the session “Of mice, men and machines” at FENS 2020.

So why is diversity in research crucial to make new discoveries? Research depends on the questions asked, the models used, and the details considered. For this reason, it is important to reflect on why certain variables are analysed, or which aspects might play a role. An example is Parkinson’s disease, where patients are affected differently depending on both age and gender. Being a (biological) man or woman, old or young, is important for both diagnosis and treatment. If we know that diversity matters in research on Parkinson’s disease, it probably matters in most of neuroscience. Apart from gender and age, we also need to consider other aspects of diversity, like race, ethnicity, education or social background. Because depending on who you are, biologically, culturally and socially, you are likely to need different things.

A quite recent example of this is Covid-19, which displays not only gender differences (it affects more men than women), but also racial differences: Black and Latino people in the US have been disproportionately affected, regardless of their living area (rural or urban) or their age (old or young). Again, the reasons for this are not simply biological (e.g. hormones or chromosomes), but also linked to social aspects such as gendered lifestyles (men are more often smokers than women), inequities in the health system, or certain jobs that cannot be done remotely (see for example this BBC Future text on why Covid-19 is different for men and women, or this one on the racial inequity of coronavirus in The New York Times).

Another example is Machine Learning. If we train AI on data that is not representative of the population, we introduce bias into the algorithm. For example, applications to diagnose skin cancer more often fail to recognize tumours in darker skin correctly, because they are trained on pictures of fair skin. There are several reasons why an AI is not trained properly: it could be a matter of cost, or a lack of material to train it on, but it is not unlikely that people with dark skin are discriminated against because scientists and engineers simply did not think about diversity when selecting training material. In the case of skin cancer, it is clear that diversity could indeed save lives.
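As a toy illustration of how such sampling bias plays out, consider the following sketch, assuming Python with NumPy and scikit-learn. The groups, features and numbers are invented, not taken from any real dermatology dataset: a classifier trained almost entirely on one subgroup performs markedly worse on the under-represented one.

```python
# Toy sketch of sampling bias: training data dominated by one subgroup
# produces a model that fails on the under-represented subgroup.
# All groups, features and numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, baseline):
    # Feature 0 carries the diagnostic signal *relative to* the group's
    # baseline appearance (the baseline plays the role of skin tone here).
    X = rng.normal(baseline, 1.0, (n, 3))
    y = (X[:, 0] - baseline > 0).astype(int)
    return X, y

XA, yA = make_group(2000, baseline=0.0)  # well-represented group
XB, yB = make_group(2000, baseline=2.0)  # under-represented group

# Training set: 95% group A, 5% group B.
X_train = np.vstack([XA[:1900], XB[:100]])
y_train = np.concatenate([yA[:1900], yB[:100]])
clf = LogisticRegression().fit(X_train, y_train)

print("accuracy on group A:", clf.score(XA[1900:], yA[1900:]))  # high
print("accuracy on group B:", clf.score(XB[100:], yB[100:]))    # far lower
```

In this toy setting, balancing the training sample across the groups largely closes the gap, which is the practical point of thinking about diversity when selecting training material.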

But where to start? When you do research, there are two questions that must be asked: First, what is the focus of your research? And second, who are the beneficiaries of your research?

Whenever your research focus includes tissues, cells, animals or humans, you should consider diversity factors like gender, age, race, ethnicity, and environmental influences. Moreover, any responsible scientist should consider who has access to their research and who profits from it, as well as the consequences their research might have for end users or the broader public.

However, as a researcher you need to consider not only the research subjects and the people your results benefit. The diversity of the research team also matters, because different people perceive problems in different ways and use different methods and processes to solve them. Which is why a diverse team is more innovative.

If you want to find out more about the role of diversity in research, check out the presentation “Of mice, men and machines” or read the blogpost on Common Challenges in Neuroscience, AI, Medical Informatics, Robotics and New Insights with Diversity & Ethics.

Written by…

Karin Grasenick, founder and managing partner of convelop, coordinates all issues related to Diversity and Equal Opportunities in the Human Brain Project and works as a process facilitator, coach and lecturer.

&

Julia Trattnig, consultant and scientific staff member at convelop, supports the Human Brain Project concerning all measures and activities for gender mainstreaming and diversity management.

We recommend readings

This is a guest blog post from the Human Brain Project (HBP). The HBP has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3).
