A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: Artificial Intelligence

Women on AI-assisted mammography

The use of AI tools in healthcare has become a recurring theme on this blog. So far, the posts have mainly been about mobile and online apps for use by patients and the general public. Today, the theme is more advanced AI tools which are used professionally by healthcare staff.

Within the Swedish program for breast cancer screening, radiologists interpret large amounts of X-ray images to detect breast cancer at an early stage. The workload is heavy, and most of the time the images show no signs of cancer or pre-cancers. Today, AI tools are being tested that could improve mammography in several ways. AI could be used as an assisting resource that helps the radiologists detect additional tumors. It could also be used as an independent reader of images to relieve radiologists, as well as to support assessments of which patients should receive care most urgently.

For AI-assisted mammography to work, it is not only the technology that needs to be developed. Researchers also need to investigate how women think about AI-assisted mammography. How do they perceive AI-assisted breast cancer screening? Four researchers, including Jennifer Viberg Johansson and Åsa Grauman at CRB, interviewed sixteen women who underwent mammography at a Swedish hospital where an AI tool was tested as a third reviewer of the X-ray images, alongside the two radiologists.

Several of the interviewees emphasized that AI is only a tool: AI cannot replace the doctor, because humans have abilities beyond image recognition, such as intuition, empathy and holistic thinking. Another finding was that some of the interviewees were more tolerant of human error than of failures of the AI tool, which they considered unacceptable. Some argued that if the AI tool makes a mistake, the mistake will be repeated systematically, while human errors are occasional. Some believed that the responsibility when the technology fails lies with the humans and not with the technology.

Personally, I cannot help but speculate that the sharp distinction between human error, which is easier to reconcile with, and unacceptably failing technology is connected to the fact that we can say of humans who fail: “After all, the radiologists surely did their best.” On the other hand, we hardly say about failing AI: “After all, the technology surely did its best.” Technology is not subject to the same kinds of conciliatory considerations.

The authors themselves emphasize that the participants in the study saw AI as a valuable tool in mammography, but held that the tool cannot replace humans in the process. The authors also emphasize that the interviewees preferred that the AI tool identify possible tumors with high sensitivity, even if this leads to many false positive results and thus to unnecessary worry and fear. In order for patients to understand AI-assisted healthcare, effective communication efforts are required, the authors conclude.
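To make the trade-off a little more concrete, here is a small toy sketch in Python (my own illustration, with invented scores and labels, not data or methods from the study): lowering the decision threshold of an AI reader catches more tumors, that is, raises sensitivity, but also flags more healthy women, that is, raises the false positive rate.

```python
# Toy illustration of the sensitivity / false-positive trade-off discussed above.
# The scores and labels are invented for illustration; they are not data from the study.

def confusion_counts(scores, labels, threshold):
    """Count true/false positives and negatives for a given decision threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    return tp, fp, fn, tn

# Hypothetical AI suspicion scores (0-1) and ground truth (1 = cancer, 0 = healthy).
scores = [0.95, 0.80, 0.65, 0.40, 0.35, 0.30, 0.20, 0.10, 0.05, 0.02]
labels = [1,    1,    0,    1,    0,    0,    0,    0,    0,    0]

for threshold in (0.7, 0.3):
    tp, fp, fn, tn = confusion_counts(scores, labels, threshold)
    sensitivity = tp / (tp + fn)          # share of cancers that are flagged
    false_positive_rate = fp / (fp + tn)  # share of healthy cases that are flagged
    print(f"threshold {threshold}: sensitivity {sensitivity:.2f}, "
          f"false positive rate {false_positive_rate:.2f}")
```

In this invented example, the lower threshold finds every tumor but sends almost half of the healthy women to further examination, which is the kind of trade-off the interviewees were asked to weigh.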

It is difficult to summarize the rich material from interview studies. For more results, read the study here: Women’s perceptions and attitudes towards the use of AI in mammography in Sweden: a qualitative interview study.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Viberg Johansson J, Dembrower K, Strand F, et al. Women’s perceptions and attitudes towards the use of AI in mammography in Sweden: a qualitative interview study. BMJ Open 2024;14:e084014. doi: 10.1136/bmjopen-2024-084014

This post in Swedish

Approaching future issues

Using artificial intelligence with academic integrity

AI tools can both transform and produce content such as texts, images and music. The tools are also increasingly available as online services. One example is the ChatGPT tool, which you can ask questions and get well-informed, logically reasoned answers from; answers that the tool can correct if you point out errors and ambiguities. You can interact with the tool almost as if you were conversing with a human.

Such a tool can of course be very useful. It can help you solve problems and find relevant information. I venture to guess that the responses from the tool can also stimulate creativity and open the mind to unexpected possibilities, just as conversations with people tend to do. However, like all technology, these tools can also be abused, and students have already used ChatGPT to complete their assignments.

The challenge in education and research is thus to learn to use these AI tools with academic integrity. Using AI tools is not automatically cheating. Seven participants in the European Network for Academic Integrity (ENAI), including Sonja Bjelobaba at CRB, write about the challenge in an editorial in the International Journal for Educational Integrity. Above all, the authors summarize tentative recommendations from ENAI on the ethical use of AI in academia.

An overarching aim is to integrate the recommendations on AI with other related recommendations on academic integrity. Thus, all persons, sources and tools that have influenced ideas or generated content must be clearly acknowledged, including the use of AI tools. Appropriate use of tools that affect the form of the text (such as proofreading tools, spelling checkers and a thesaurus) is generally acceptable. Furthermore, an AI tool cannot be listed as a co-author of a publication, since the tool cannot take responsibility for the content.

The recommendations also emphasize the importance of educational efforts on the ethical use of AI tools. Read the recommendations in their entirety here: ENAI Recommendations on the ethical use of Artificial Intelligence in Education.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Foltynek, T., Bjelobaba, S., Glendinning, I. et al. ENAI Recommendations on the ethical use of Artificial Intelligence in Education. International Journal for Educational Integrity 19, 12 (2023). https://doi.org/10.1007/s40979-023-00133-4

This post in Swedish

We care about education

A new project will explore the prospect of artificial awareness

The neuroethics group at CRB has just started its work as part of a new European research project on artificial awareness. The project is called “Counterfactual Assessment and Valuation for Awareness Architecture” (CAVAA) and is funded for a duration of four years. The consortium is composed of 10 institutions, coordinated by Radboud University in the Netherlands.

The goal of CAVAA is “to realize a theory of awareness instantiated as an integrated computational architecture…, to explain awareness in biological systems and engineer it in technological ones.” Several specific objectives derive from this general goal. First, CAVAA has a robust theoretical component: it relies on a strong theoretical framework. Conceptual reflection on awareness, including its definition and the identification of features that allow its attribution to either biological organisms or artificial systems, is an explicit task of the project. Second, CAVAA is interested in exploring the connection between awareness in biological organisms and its possible replication in artificial systems. The project thus gives much attention to the connection between neuroscience and AI. Third, against this background, CAVAA aims at replicating awareness in artificial settings. Importantly, the project also has a clear ethical responsibility, more specifically to anticipate the potential societal and ethical impact of aware artificial systems.

There are several reasons why a scientific project with a strong engineering and computer science component also has philosophers on board. We are asked to contribute to developing a strong and consistent theoretical account of awareness, including the conceptual conceivability and the technical feasibility of its artificial replication. This is not straightforward, not only because there are many content-related challenges, but also because there are logical traps to avoid. For instance, we should avoid the temptation to validate an empirical statement on the basis of our own theory: this would possibly be tautological or circular.

In addition to this theoretical contribution, we will also collaborate in identifying indicators of awareness and benchmarks for validating the cognitive architecture that will be developed. Finally, we will collaborate in the ethical analysis concerning potential future scenarios related to artificial awareness, such as the possibility of developing artificial moral agents or the need to extend moral rights also to artificial systems.

In the end, there are several potential contributions that philosophy can provide to the scientific attempt to replicate biological awareness in artificial systems. Part of this possible collaboration is the fundamental and provoking question: why should we try to develop artificial awareness at all? What is the expected benefit, should we succeed? This is definitely an open question, with possible arguments for and against attempting such a grand accomplishment.

There is also another question of equal importance, which may justify the effort to identify the necessary and sufficient conditions for artificial systems to become aware, and how to recognize them as such. What if we inadvertently create (or worse: have already created) forms of artificial awareness, but do not recognize this and treat them as if they were unaware? Such scenarios also confront us with serious ethical issues. So, regardless of our background beliefs about artificial awareness, it is worth investing in thinking about it.

Stay tuned to hear more from CAVAA!

Written by…

Michele Farisco, Postdoc Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Part of international collaborations

AI narratives from the Global North

The way we develop, adopt, regulate and accept artificial intelligence is embedded in our societies and cultures. Our narratives about intelligent machines take on a flavour of the art, literature and imaginations of the people who live today, and of those that came before us. But some of us are missing from the stories that are told about thinking machines. A recent paper about forgotten African AI narratives and the future of AI in Africa shines a light on some of the missing narratives.

In the paper, Damian Eke and George Ogoh point to the fact that how artificial intelligence is developed, adopted, regulated and accepted is hugely influenced by socio-cultural, ethical, political, media and historical narratives. But most of the stories we tell about intelligent machines are imagined and conceptualised in the Global North. The paper raises the question of whether this is a problem, and if so, in what way. When machine narratives put the emphasis on technology neutrality, it becomes a problem that goes beyond AI.

What happens when Global North narratives set the agenda for research and innovation also in the Global South, and what happens more specifically to the agenda for artificial intelligence? The impact is difficult to quantify. But when historical, philosophical, socio-cultural and political narratives from Africa are missing, we need to understand why and what it might imply. Damian Eke and George Ogoh provide a list of reasons why this is important. One is concern about the state of STEM education (science, technology, engineering and mathematics) in many African countries. Another reason is the well-documented issue of epistemic injustice: unfair discrimination against people because of prejudices about their knowledge. The dominance of Global North narratives could lead to devaluing the expertise of Africans in the tech community. This brings us to the point of the argument, which is that African socio-cultural, ethical and political contexts and narratives are absent from the global debate about responsible AI.

The paper makes the case for including African AI narratives not only in the research and development of artificial intelligence, but also in the ethics and governance of technology more broadly. Such inclusion would help counter epistemic injustice. If we fail to include narratives from the Global South in the AI discourse, the development can never be truly global. Moreover, excluding African AI narratives will limit our understanding of how different cultures in Africa conceptualise AI, and we will miss an important perspective on how people across the world perceive the risks and benefits of machine learning and AI-powered technology. Nor will we understand the many ways in which stories, art, literature and imaginations globally shape those perceptions.

If we want to develop an “AI for good”, it needs to be good for Africa and other parts of the Global South. According to Damian Eke and George Ogoh, it is possible to create a more meaningful and responsible narrative about AI. That requires that we identify and promote people-centred narratives, and that we anchor AI ethics for Africa in African ethical principles, like ubuntu. But the key for African countries to participate in the AI landscape is a greater focus on STEM education and research. The authors end their paper with a call to improve the diversity of voices in the global discourse about AI. Culturally sensitive and inclusive AI applications would benefit us all, for epistemic injustice is not just a geographical problem. Our view of whose knowledge has value is powered by many forms of prejudice.

Damian Eke and George Ogoh are both actively contributing to the Human Brain Project’s work on responsible research and innovation. The Human Brain Project is a European Flagship project providing in-depth understanding of the complex structure and function of the human brain, using interdisciplinary approaches.

Do you want to learn more? Read the article here: Forgotten African AI Narratives and the future of AI in Africa.

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects.

Eke D, Ogoh G. Forgotten African AI Narratives and the future of AI in Africa. International Review of Information Ethics. 2022;31(08).

We want to be just

Artificial intelligence: augmenting intelligence in humans or creating human intelligence in machines?

Sometimes you read articles at the intersection of philosophy and science that contain really exciting visionary thoughts, which are at the same time difficult to really understand and assess. The technical elaboration of the thoughts grows as you read, and in the end you do not know if you are capable of thinking independently about the ideas or if they are about new scientific findings and trends that you lack the expertise to judge.

Today I dare to recommend the reading of such an article. The post must, of course, be short. But the fundamental ideas in the article are so interesting that I hope some readers of this post will also become readers of the article and make a serious attempt to understand it.

What is the article about? It is about an alternative approach to the highest aims and claims in artificial intelligence. Instead of trying to create machines that can do what humans can do, machines with higher-level capacities such as consciousness and morality, the article focuses on the possibility of creating machines that augment the intelligence of already conscious, morally thinking humans. However, this idea is not entirely new. It has existed for over half a century in, for example, cybernetics. So what is new in the article?

Something I myself was struck by was the compassionate voice in the article, which is otherwise not prominent in the AI literature. The article focuses not on creating super-smart problem solvers, but on strengthening our connections with each other and with the world in which we live. The examples that are given in the article are about better moral considerations for people far away, better predictions of natural disasters in a complex climate, and about restoring social contacts in people suffering from depression or schizophrenia.

But perhaps the most original idea in the article is the suggestion that the development of these human self-augmenting machines would draw inspiration from how the brain already maintains contact with its environment. Here one should keep in mind that we are dealing with mathematical models of the brain and with innovative ways of thinking about how the brain interacts with the environment.

It is tempting to see the brain as an isolated organ. But the brain, via the senses and nerve-paths, is in constant dynamic exchange with the body and the world. You would not experience the world if the world did not constantly make new imprints in your brain and you constantly acted on those imprints. This intense interactivity on multiple levels and time scales aims to maintain a stable and comprehensible contact with a surrounding world. The way of thinking in the article reminds me of the concept of a “digital twin,” which I previously blogged about. But here it is the brain that appears to be a neural twin of the world. The brain resembles a continuously updated neural mirror image of the world, which it simultaneously continuously changes.

Here, however, I find it difficult to properly understand and assess the thoughts in the article, especially regarding the mathematical model that is supposed to describe the “adaptive dynamics” of the brain. But as I understand it, the article suggests the possibility of recreating a similar dynamic in intelligent machines, which could enhance our ability to see complex patterns in our environment and be in contact with each other. A little poetically, one could perhaps say that it is about strengthening our neural twinship with the world. A kind of neural-digital twinship with the environment? A digitally augmented neural twinship with the world?

I dare not say more here about the visionary article. Maybe I have already taken too many poetic liberties? I hope that I have at least managed to make you interested in reading the article and assessing it for yourself: Augmenting Human Selves Through Artificial Agents – Lessons From the Brain.

Well, maybe one concluding remark. I mentioned the difficulty of sometimes understanding and assessing visionary ideas that are formulated at the intersection of philosophy and science. Is not that difficulty itself an example of how our contact with the world can sometimes weaken? However, I do not know if I would have been helped by digital intelligence augmentation that quickly took me through the philosophical difficulties that can arise during reading. Some questions seem essentially to require time: that you stop and think!

Giving yourself time to think is a natural way to deepen your contact with reality, known by philosophers for millennia.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Northoff G, Fraser M, Griffiths J, Pinotsis DA, Panangaden P, Moran R and Friston K (2022) Augmenting Human Selves Through Artificial Agents – Lessons From the Brain. Front. Comput. Neurosci. 16:892354. doi: 10.3389/fncom.2022.892354

This post in Swedish

We recommend readings

How can neuroethics and AI ethics join their forces?

As I already wrote on this blog, there has been an explosion of AI in recent years. AI affects so many aspects of our lives that it is virtually impossible to avoid interacting with it. Since AI has such an impact, it must be examined from an ethical point of view, for the very basic reason that it can be developed and/or used for both good and evil.

In fact, AI ethics is becoming increasingly popular nowadays. As it is a fairly young discipline, even though it has roots in, for example, digital and computer ethics, questions about its status and methodology remain open. To simplify the debate, the main trend is to conceive of AI ethics in terms of practical ethics, for example, with a focus on the impact of AI on traditional practices in education, work, healthcare and entertainment, among others. In addition to this practically oriented analysis, there is also attention to the impact of AI on the way we understand our society and ourselves as part of it.

In this debate about the identity of AI ethics, the need for a closer collaboration with neuroethics has been briefly pointed out, but so far no systematic reflection has been made on this need. In a new article, I propose, together with Kathinka Evers and Arleen Salles, an argument to justify the need for closer collaboration between neuroethics and AI ethics. In a nutshell, even though they both have specific identities and their topics do not completely overlap, we argue that neuroethics can complement AI ethics for both content-related and methodological reasons.

Some of the issues raised by AI are related to fundamental questions that neuroethics has explored since its inception. Think, for example, of topics such as intelligence: what does it mean to be intelligent? In what sense can a machine be qualified as an intelligent agent? Could this be a misleading use of words? And what ethical implications can this linguistic habit have, for example, on how we attribute responsibility to machines and to humans? Another issue that is increasingly gaining ground in AI ethics literature, as I wrote on this blog, is the conceivability and the possibility of artificial consciousness. Neuroethics has worked extensively on both intelligence and consciousness, combining applied and fundamental analyses, which can serve as a source of relevant information for AI ethics.

In addition to the above content-related reasons, neuroethics can also provide AI ethics with a methodological model. To illustrate, the kind of conceptual clarification performed in fundamental neuroethics can enrich the identification and assessment of the practical ethical issues raised by AI. More specifically, neuroethics can provide a three-step model of analysis to AI ethics: 1. Conceptual relevance: can specific notions, such as autonomy, be attributed to AI? 2. Ethical relevance: are these specific notions ethically salient (i.e., do they require ethical evaluation)? 3. Ethical value: what is the ethical significance and the related normative implications of these specific notions?

This three-step approach is a promising methodology for ethical reflection about AI which avoids the trap of anthropocentric self-projection, a risk that actually affects both the philosophical reflection on AI and its technical development.

In this way, neuroethics can contribute to avoiding both hype and disproportionate worry about AI, which are among the biggest challenges facing AI ethics today.

Written by…

Michele Farisco, Postdoc Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, M., Evers, K. & Salles, A. On the Contribution of Neuroethics to the Ethics and Regulation of Artificial Intelligence. Neuroethics 15, 4 (2022). https://doi.org/10.1007/s12152-022-09484-0

We transcend disciplinary borders

Images of good and evil artificial intelligence

As Michele Farisco has pointed out on this blog, artificial intelligence (AI) often serves as a projection screen for our self-images as human beings. Sometimes also as a projection screen for our images of good and evil, as you will soon see.

In AI and robotics, autonomy is often sought in the sense that the artificial intelligence should be able to perform its tasks optimally without human guidance. Like a self-driving car, which safely takes you to your destination without you having to steer, accelerate or brake. Another form of autonomy that is often sought is that artificial intelligence should be self-learning and thus be able to improve itself and become more powerful without human guidance.

Philosophers have discussed whether AI can be autonomous even in another sense, which is associated with human reason. According to this picture, we can as autonomous human beings examine our final goals in life and revise them if we deem that new knowledge about the world motivates it. Some philosophers believe that AI cannot do this, because the final goal, or utility function, would make it irrational to change the goal. The goal is fixed. The idea of such stubbornly goal-oriented AI can evoke worrying images of evil AI running amok among us. But the idea can also evoke reassuring images of good AI that reliably supports us.

Worried philosophers have imagined an AI that has the ultimate goal of making ordinary paper clips. This AI is assumed to be self-improving. It is therefore becoming increasingly intelligent and powerful when it comes to its goal of manufacturing paper clips. When the raw materials run out, it learns new ways to turn the earth’s resources into paper clips, and when humans try to prevent it from destroying the planet, it learns to destroy humanity. When the planet is wiped out, it travels into space and turns the universe into paper clips.

Philosophers who issue warnings about “evil” super-intelligent AI also express hopes for “good” super-intelligent AI. Suppose we could give self-improving AI the goal of serving humanity. Without getting tired, it would develop increasingly intelligent and powerful ways of serving us, until the end of time. Unlike the god of religion, this artificial superintelligence would hear our prayers and take ever-smarter action to help us. It would probably sooner or later learn to prevent earthquakes and our climate problems would soon be gone. No theodicy in the world could undermine our faith in this artificial god, whose power to protect us from evil is ever-increasing. Of course, it is unclear how the goal of serving humanity can be defined. But given the opportunity to finally secure the future of humanity, some hopeful philosophers believe that the development of human-friendly self-improving AI should be one of the most essential tasks of our time.

I read all this in a well-written article by Wolfhart Totschnig, who questions the rigid goal orientation associated with autonomous AI in the scenarios above. His most important point is that rigidly goal-oriented AI, which runs amok in the universe or saves humanity from every predicament, is not even conceivable. Outside its domain, the goal loses its meaning. The goal of a self-driving car to safely take the user to the destination has no meaning outside the domain of road traffic. Domain-specific AI can therefore not be generalized to the world as a whole, because the utility function loses its meaning outside the domain, long before the universe is turned into paper clips or the future of humanity is secured by an artificially good god.
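To picture the point about domains, here is a small, purely hypothetical sketch of my own (not something from Totschnig’s article): a utility function that is defined only over road-traffic states simply has nothing to say about states outside that domain.

```python
# Illustrative sketch (not from Totschnig's article): a utility function that is
# only defined over road-traffic states, mirroring the point that a domain-specific
# goal has no meaning outside its domain. All names are hypothetical.

TRAFFIC_STATES = {"at_destination", "en_route", "collision"}

def driving_utility(state: str) -> float:
    """Utility of a state for a hypothetical self-driving car."""
    values = {"at_destination": 1.0, "en_route": 0.5, "collision": -10.0}
    if state not in TRAFFIC_STATES:
        # Outside the traffic domain the function is simply undefined:
        # there is no answer to the question "how good is this for safe driving?"
        raise ValueError(f"'{state}' is not a traffic state; utility undefined")
    return values[state]

print(driving_utility("en_route"))        # 0.5
try:
    print(driving_utility("universe_of_paper_clips"))
except ValueError as err:
    print(err)
```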

This is, of course, an important philosophical point about goals and meaning, about specific domains and the world as a whole. The critique helps us to more realistically assess the risks and opportunities of future AI, without being bewitched by our images. At the same time, I get the impression that Totschnig continues to use AI as a projection screen for human self-images. He argues that future AI may well revise its ultimate goals as it develops a general understanding of the world. The weakness of the above scenarios was that they projected today’s domain-specific AI, not the general intelligence of humans. We then do not see the possibility of a genuinely human-like AI that self-critically reconsiders its final goals when new knowledge about the world makes it necessary. Truly human-equivalent AI would have full autonomy.

Projecting human self-images onto future AI is not just a tendency, as far as I can judge, but a norm that governs the discussion. According to this norm, the wrong image is projected in the scenarios above: an image of today’s machines, not of our general human intelligence. Projecting the right self-image onto future AI thus appears as an overall goal. Is the goal meaningful, or should it be reconsidered self-critically?

These are difficult issues and my impression of the philosophical discussion may be wrong. If you want to judge for yourself, read the article: Fully autonomous AI.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Totschnig, W. Fully Autonomous AI. Sci Eng Ethics 26, 2473–2485 (2020). https://doi.org/10.1007/s11948-020-00243-z

This post in Swedish

We like critical thinking

Digital twins, virtual brains and the dangers of language

A new computer simulation technology has begun to be introduced, for example, in the manufacturing industry. The computer simulation is called a digital twin, which challenges me to bring to life for the reader what something that sounds so imaginative can actually be.

The most realistic explanation I can find actually comes from Harry Potter’s world. Do you remember the map of Hogwarts, which shows not only all the rooms and corridors, but also, in real time, the steps of those who sneak around the school? A similar map can easily be created in a computer environment by connecting the map in the computer to sensors in the floor of the building that the map depicts. Immediately you have an interactive digital map of the building that is automatically updated and shows people’s movements in it. Imagine further that the computer simulation can make calculations that predict crowds exceeding the authorities’ recommendations, and that it automatically sends out warning messages via a speaker system. As far as I understand, such an interactive digital map can be called a digital twin of an intelligent building.
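For readers who want to see the idea in a more concrete form, here is a minimal sketch of such an interactive map (all names and thresholds are my own invented examples, not a description of any real system): floor sensors update a digital model of the building, the model predicts crowding, and a warning is sent.

```python
# A minimal sketch of the "interactive map" idea above: floor sensors update a
# digital model of the building, which predicts crowding and triggers a warning.
# All names and thresholds here are hypothetical, chosen only for illustration.

from collections import Counter

MAX_PEOPLE_PER_ROOM = 8  # assumed recommendation, not a real regulation

class BuildingTwin:
    def __init__(self):
        self.occupancy = Counter()  # room -> number of people currently detected

    def update_from_sensors(self, sensor_readings):
        """Replace the twin's state with the latest sensor snapshot."""
        self.occupancy = Counter(sensor_readings)

    def crowded_rooms(self):
        """Predict which rooms exceed the recommended occupancy."""
        return [room for room, count in self.occupancy.items()
                if count > MAX_PEOPLE_PER_ROOM]

def send_warning(room):
    # Stand-in for the speaker system mentioned in the text.
    print(f"Warning: too many people in {room}, please spread out.")

twin = BuildingTwin()
twin.update_from_sensors({"corridor_3": 12, "great_hall": 5})
for room in twin.crowded_rooms():
    send_warning(room)
```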

Of course, this is a revolutionary technology. The architect’s drawing in a computer program gets an extended life in both the production and the maintenance of the building. The digital simulation is connected to sensors that update the simulation with current data on relevant factors in the construction process and thereafter in the finished building. The building gets a digital twin that, during the building’s entire life cycle, automatically contacts maintenance technicians when the sensors show that the washing machines are starting to wear out or that the air is not circulating properly.

The scope of use for digital twins is huge. The point of them, as I understand it, is not that they are “exact virtual copies of reality,” whatever that might mean. The point is that the computer simulation is linked to the simulated object in a practically relevant way. Sensors automatically update the simulation with relevant data, while the simulation automatically updates the simulated object in relevant ways. At the same time, users, manufacturers, maintenance technicians and other actors are kept updated and can easily monitor the object’s current status, opportunities and risks, wherever they are in the world.

The European flagship project Human Brain Project plans to develop digital twins of human brains by building virtual brains in a computer environment. In a new article, the philosophers Kathinka Evers and Arleen Salles, who are both working in the project, examine the enormous challenges involved in developing digital twins of living human brains. Is it even conceivable?

The authors compare types of objects that can have digital twins. It can be artefacts such as buildings and cars, or natural inanimate phenomena such as the bedrock at a mine. But it could also be living things such as the heart or the brain. The comparisons in the article show that the brain stands out in several ways, all of which make it unclear whether it is reasonable to talk about digital twins of human brains. Would it be more appropriate to talk about digital cousins?

The brain is astronomically complex and despite new knowledge about it, it is highly opaque to our search for knowledge. How can we talk about a digital twin of something that is as complex as a galaxy and as unknown as a black hole? In addition, the brain is fundamentally dynamically interactive. It is connected not only with the body but also with culture, society and the world around it, with which it develops in uninterrupted interaction. The brain almost merges with its environment. Does that imply that a digital twin would have to be a twin of the brain-body-culture-society-world, that is, a digital twin of everything?

No, of course not. The aim of the project is to find specific medical applications of the new computer simulation technology. By developing digital twins of certain aspects of certain parts of patients’ brains, it is hoped that one can improve and individualize, for example, surgical procedures for diseases such as epilepsy. Just as the map from Harry Potter’s world shows people’s steps in real time, the digital twin of the brain could follow the spread of certain nerve impulses in certain parts of the patient’s brain. This can open up new opportunities to monitor, diagnose, predict and treat diseases such as epilepsy.

Should we avoid the term digital twin when talking about the brain? Yes, it would probably be wiser to talk about digital siblings or digital cousins, argue Kathinka Evers and Arleen Salles. Although experts in the field understand its technical use, the term “digital twin” is linguistically risky when we talk about human brains. It easily leads the mind astray. We imagine that the digital twin must be an exact copy of a human’s whole brain. This risks creating unrealistic expectations and unfounded fears about the development. History shows that language also contains other dangers. Words come with normative expectations that can have ethical and social consequences that may not have been intended. Talking about a digital twin of a mining drill is probably no major linguistic danger. But when it comes to the brains of individual people, the talk of digital twins can become a new linguistic arena where we reinforce prejudices and spread fears.

After reading some popular scientific explanations of digital twins, I would like to add that caution may be needed also in connection with industrial applications. After all, the digital twin of a mining drill is not an “exact virtual copy of the real drill” in some absolute sense, right down to the movements of individual atoms. The digital twin is a copy in the practical sense that the application makes relevant. Sometimes it is enough to copy where people put their feet down, as in Harry Potter’s world, whose magic unexpectedly helps us understand the concept of a digital twin more realistically than many verbal explanations do. Explaining words with the help of other words is not always clarifying, if all the words steer thought in the same direction. The words “copy” and “replica” lead our thinking just as right and just as wrong as the word “twin” does.

If you want to better understand the challenges of creating digital twins of human brains and the importance of conceptual clarity concerning the development, read the philosophically elucidatory article: Epistemic Challenges of Digital Twins & Virtual Brains: Perspectives from Fundamental Neuroethics.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Evers, Kathinka & Salles, Arleen. (2021). Epistemic Challenges of Digital Twins & Virtual Brains: Perspectives from Fundamental Neuroethics. SCIO: Revista de Filosofía. 27-53. doi: 10.46583/scio_2021.21.846

This post in Swedish

Minding our language

Brain-inspired AI: human narcissism again?

This is an age when Artificial Intelligence (AI) is exploding, invading almost every aspect of our lives. From entertainment to work, from economics to medicine, from education to marketing, we deal with a number of disparate AI systems that make our lives much easier than a few years ago, but that also raise new ethical issues or emphasize old, still open questions.

A basic fact about AI is that it is progressing at an impressive pace, while still being limited with regard to various specific contexts and goals. We often read, also in non-specialized journals, that AI systems are not robust (meaning that they are not good at dealing with datasets that differ too much from the ones they were trained on, which also keeps the risk of cyber-attacks pretty high), that they are not fully transparent, and that they are limited in their capacity to generalize, for instance. This suggests that the reliability of AI systems, in other words the possibility of using them to achieve different goals, is limited, and that we should not blindly trust them.

A strategy increasingly chosen by AI researchers in order to improve the systems they develop is taking inspiration from biology, and specifically from the human brain. Actually, this is not really new: already the first wave of AI took inspiration from the brain, which was (and still is) the most familiar intelligent system in the world. This trend towards brain-inspired AI is gaining much more momentum today, for two main reasons among others: big data and the very powerful technology to handle big data. And yet, brain-inspired AI raises a number of questions of an even deeper nature, which urge us to stop and think.

Indeed, when compared to the human brain, present AI reveals several differences and limitations with regard to different contexts and goals. For instance, present Machine Learning cannot generalize the abilities it achieves on the basis of specific data in order to use them in different settings and for different goals. Also, AI systems are fragile: a slight change in the characteristics of the processed data can have catastrophic consequences. These limitations arguably depend both on how AI is conceived (technically speaking, on its underlying architecture) and on how it works (on its underlying technology). I would like to introduce some reflections about the choice to use the human brain as a model for improving AI, including the apparent limitations of this choice.
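To picture this kind of fragility, here is a deliberately simple toy example of my own (not taken from the literature discussed here): a classifier learns a decision threshold from its training data and then meets data from a slightly recalibrated sensor.

```python
# A small illustration (my own toy example, not from the text) of the fragility
# mentioned above: a model that works well on data like its training data can
# fail badly after a seemingly slight change in how the data are produced.

import random
random.seed(0)

def make_data(n, scale=1.0):
    """Two classes of sensor readings; `scale` mimics a change in the sensor."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        reading = random.gauss(2.0 if label == 0 else 6.0, 0.5) * scale
        data.append((reading, label))
    return data

# "Training": learn a decision threshold halfway between the class means.
train = make_data(1000)
mean0 = sum(r for r, y in train if y == 0) / sum(1 for _, y in train if y == 0)
mean1 = sum(r for r, y in train if y == 1) / sum(1 for _, y in train if y == 1)
threshold = (mean0 + mean1) / 2

def accuracy(data):
    return sum((r > threshold) == (y == 1) for r, y in data) / len(data)

print(f"accuracy on familiar data: {accuracy(make_data(1000)):.2f}")
# The sensor is recalibrated: every reading is halved. The task is unchanged,
# but the learned threshold no longer fits the data, and accuracy collapses.
print(f"accuracy on shifted data:  {accuracy(make_data(1000, scale=0.5)):.2f}")
```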

Very roughly, AI researchers are looking at the human brain to infer operational principles and then translate them into AI systems and eventually make these systems better in a number of tasks. But is a brain-inspired strategy the best we can choose? What justifies it? In fact, there are already AI systems that work in ways that do not conform to the human brain. We cannot exclude a priori that AI will eventually develop more successfully along lines that do not fully conform to, or that even deviate from, the way the human brain works.

Also, we should not forget that there is no such thing as the brain: there is a huge diversity both among different people and within the brain itself. The development of our brains reflects a complex interplay between our genetic make-up and our life experiences. Moreover, the brain is a multilevel organ with different structural and functional levels.

Thus, claiming that an AI is brain-inspired without clarifying which specific brain model is used as a reference (for instance, the neurons’ action potentials rather than the connectome’s network) is possibly misleading, if not nonsensical.

There is also a more fundamental philosophical point worth considering. Postulating that the human brain is paradigmatic for AI risks implicitly endorsing a form of anthropocentrism and anthropomorphism, which are both evidence of our intellectual self-centeredness and of our limited ability to think beyond what we think we are.

While pragmatic reasons might justify the choice to take the brain as a model for AI (after all, for many aspects, the brain is the most efficient intelligent system that we know in nature), I think we should avoid the risk of translating this legitimate technical effort into a further narcissistic, self-referential anthropological model. Our history is already full of such models, and they have not been ethically or politically harmless.

Written by…

Michele Farisco, Postdoc Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Approaching future issues

Securing the future already from the beginning

Imagine if there was a reliable method for predicting and managing future risks, such as anything that could go wrong with new technology. Then we could responsibly steer clear of all future dangers, we could secure the future already now.

Of course, it is just a dream. If we had a “reliable method” for excluding future risks from the beginning, time would soon rush past that method, which would then prove unreliable in a new era. Because we trusted the method, the method of managing future risks would itself soon become a future risk!

It is therefore impossible to secure the future from the beginning. Does this mean that we must give up all attempts to take responsibility for the future, because every method will fail to foresee something unpredictably new and therefore cause misfortune? Is it perhaps better not to try to take any responsibility at all, so as not to risk causing accidents through our imperfect safety measures? Strangely enough, it is just as impossible to be irresponsible for the future as it is to be responsible. You would need to make a meticulous effort so that you do not happen to cook a healthy breakfast or avoid a car collision. Soon you will wish you had a “safe method” that could foresee all the future dangers that you must avoid avoiding if you want to live completely irresponsibly. Your irresponsibility for the future would become an insurmountable responsibility.

Sorry if I push the notions of time and responsibility beyond their breaking point, but I actually think that many of us have a natural inclination to do so, because the future frightens us. A current example is the tendency to think that someone in charge should have foreseen the pandemic and implemented powerful countermeasures from the beginning, so that we never had a pandemic. I do not want to deny that there are cases where we can reason like that – “someone in charge should have…” – but now I want to emphasize the temptation to instinctively reason in such a way as soon as something undesirable occurs. As if the future could be secured already from the beginning and unwanted events would invariably be scandals.

Now we are in a new situation. Due to the pandemic, it has become irresponsible not to prepare (better than before) for risks of pandemics. This is what our responsibility for the future looks like. It changes over time. Our responsibility rests in the present moment, in our situation today. Our responsibility for the future has its home right here. It may sound irresponsible to speak in such a way. Should we sit back and wait for the unwanted to occur, only to then get the responsibility to avoid it in the future? The problem is that this objection once again pushes concepts beyond their breaking point. It plays around with the idea that the future can be foreseen and secured already now, a thought pattern that in itself can be a risk. A society where each public institution must secure the future within its area of responsibility risks kicking people out of the secured order: “Our administration demands that we ensure that…, therefore we need a certificate and a personal declaration from you, where you…” Many would end up outside the secured order, which hardly secures any order. And because the trouble-makers are defined by contrived criteria, which may be implemented in automated administration systems, these systems will not only risk making systematic mistakes in their encounters with real people. They will also invite cheating with the systems.

So how do we take responsibility for the future in a way that is responsible in practice? Let us first calm down. We have pointed out that it is impossible not to take responsibility! Just breathing means taking responsibility for the future, or cooking breakfast, or steering the car. Taking responsibility is so natural that no one needs to take responsibility for it. But how do we take responsibility for something as dynamic as research and innovation? They are already in the future, it seems, or at least at the forefront. How can we place the responsibility for a brave new world in the present moment, which seems to be in the past already from the beginning? Does not responsibility have to be just as future oriented, just as much at the forefront, since research and innovation are constantly moving towards the future, where they make the future different from the already past present moment?

Once again, the concepts are pushed beyond their breaking point. Anyone who reads this post carefully can, however, note a hopeful contradiction. I have pointed out that it is impossible to secure the future already now, from the beginning. Simultaneously, I point out that it is in the present moment that our responsibility for the future lies. It is only here that we take responsibility for the future, in practice. How can I be so illogical?

The answer is that the first remark is directed at our intellectual tendency to push the notions of time and responsibility beyond their limits, when we fear the future and wish that we could control it right now. The second remark reminds us of how calmly the concepts of time and responsibility work in practice, when we take responsibility for the future. The first remark thus draws a line for the intellect, which hysterically wants to control the future totally and already from the beginning. The second remark opens up the practice of taking responsibility in each moment.

When we take responsibility for the future, we learn from history as it appears in current memory, as I have already indicated. The experiences from the pandemic make it possible at present to take responsibility for the future in a different way than before. The not always positive experiences of artificial intelligence make it possible at present to take better responsibility for future robotics. The strange thing, then, is that taking responsibility presupposes that things sometimes go wrong and that we are interested in the failures. Otherwise we would have nothing to learn from when preparing responsibly for the future. It is really obvious. Responsibility is possible only in a world that is not fully secured from the beginning, a world where the undesirable happens. Life is contradictory. We can never purify security according to the one-sided demands of the intellect, for security presupposes the uncertain and the undesirable.

Against this philosophical background, I would like to recommend an article in the Journal of Responsible Innovation, which discusses responsible research and innovation in a major European research project, the Human Brain Project (HBP): From responsible research and innovation to responsibility by design. The article describes how one has tried to be foresighted and take responsibility for the dynamic research and innovation within the project. The article reflects not least on the question of how to continue to be responsible even when the project ends, within the European research infrastructure that is planned to be the project’s product: EBRAINS.

The authors are well aware that specific regulated approaches easily become a source of problems when they encounter the new and unforeseen. Responsibility for the future cannot be regulated. It cannot be reduced to contrived criteria and regulations. One of the most important conclusions is that responsibility from the beginning needs to be an integral part of research and innovation, rather than an external framework. Responsibility for the future requires flexibility, openness, anticipation, engagement and reflection. But what is all that?

Personally, I want to say that it is partly about accepting the basic ambiguity of life. If we never have the courage to soar in uncertainty, but always demand security and nothing but security, we will definitely undermine security. By being sincerely interested in the uncertain and the undesirable, responsibility can become an integral part of research and innovation.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Bernd Carsten Stahl, Simisola Akintoye, Lise Bitsch, Berit Bringedal, Damian Eke, Michele Farisco, Karin Grasenick, Manuel Guerrero, William Knight, Tonii Leach, Sven Nyholm, George Ogoh, Achim Rosemann, Arleen Salles, Julia Trattnig & Inga Ulnicane. From responsible research and innovation to responsibility by design. Journal of Responsible Innovation. (2021) DOI: 10.1080/23299460.2021.1955613

This post in Swedish

Approaching future issues
