A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: neuroethics

AI narratives from the Global North

The way we develop, adopt, regulate and accept artificial intelligence is embedded in our societies and cultures. Our narratives about intelligent machines take on a flavour of the art, literature and imaginations of the people who live today, and of those who came before us. But some of us are missing from the stories that are told about thinking machines. A recent paper about forgotten African AI narratives and the future of AI in Africa shines a light on some of these missing narratives.

In the paper, Damian Eke and George Ogoh point to the fact that how artificial intelligence is developed, adopted, regulated and accepted is hugely influenced by socio-cultural, ethical, political, media and historical narratives. But most of the stories we tell about intelligent machines are imagined and conceptualised in the Global North. The paper raises the question: is this a problem? And if so, in what way? When machine narratives put the emphasis on technology neutrality, that becomes a problem that goes beyond AI.

What happens when Global North narratives set the agenda for research and innovation also in the Global South, and what happens more specifically to the agenda for artificial intelligence? The impact is difficult to quantify. But when historical, philosophical, socio-cultural and political narratives from Africa are missing, we need to understand why and what it might imply. Damian Eke and George Ogoh provide a list of reasons why this is important. One is concern about the state of STEM education (science, technology, engineering and mathematics) in many African countries. Another is the well-documented issue of epistemic injustice: unfair discrimination against people because of prejudices about their knowledge. The dominance of Global North narratives could lead to devaluing the expertise of Africans in the tech community. This brings us to the point of the argument, which is that African socio-cultural, ethical and political contexts and narratives are absent from the global debate about responsible AI.

The paper makes the case for including African AI narratives not only in the research and development of artificial intelligence, but also in the ethics and governance of technology more broadly. Such inclusion would help counter epistemic injustice. If we fail to include narratives from the South in the AI discourse, the development can never be truly global. Moreover, excluding African AI narratives will limit our understanding of how different cultures in Africa conceptualise AI, and we will miss an important perspective on how people across the world perceive the risks and benefits of machine learning and AI-powered technology. Nor will we understand the many ways in which stories, art, literature and imaginations globally shape those perceptions.

If we want to develop an “AI for good”, it needs to be good for Africa and other parts of the Global South. According to Damian Eke and George Ogoh, it is possible to create a more meaningful and responsible narrative about AI. That requires that we identify and promote people-centred narratives, and that we anchor AI ethics for Africa in African ethical principles, such as ubuntu. But the key for African countries to participate in the AI landscape is a greater focus on STEM education and research. The authors end their paper with a call to improve the diversity of voices in the global discourse about AI. Culturally sensitive and inclusive AI applications would benefit us all, for epistemic injustice is not just a geographical problem. Our view of whose knowledge has value is shaped by a broad variety of prejudices.

Damian Eke and George Ogoh are both actively contributing to the Human Brain Project’s work on responsible research and innovation. The Human Brain Project is a European Flagship project providing in-depth understanding of the complex structure and function of the human brain, using interdisciplinary approaches.

Do you want to learn more? Read the article here: Forgotten African AI Narratives and the future of AI in Africa.

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects

Eke D, Ogoh G, Forgotten African AI Narratives and the future of AI in Africa, International Review of Information Ethics, 2022;31(08).

We want to be just

Does the brain make room for free will?

The question of whether we have free will has been debated throughout the ages and everywhere in the world. Can we influence our future or is it predetermined? If everything is predetermined and we lack free will, why should we act responsibly and by what right do we hold each other accountable?

There have been different ideas about what predetermines the future and excludes free will. People have talked about fate and about the gods. Today, we rather imagine that it is about necessary causal relationships in the universe. It seems that the strict determinism of the material world must preclude the free will that we humans perceive ourselves to have. If we really had free will, we think, then nature would have to give us a space of our own to decide in. A causal gap where nature does not determine everything according to its laws, but allows us to act according to our will. But this seems to contradict our scientific world view.

In an article in the journal Intellectica, Kathinka Evers at CRB examines the plausibility of this choice between two extreme positions: either strict determinism that excludes free will, or free will that excludes determinism.

Kathinka Evers approaches the problem from a neuroscientific perspective. This particular perspective has historically tended to support one of the positions: strict determinism that excludes free will. How can the brain make room for free will, if our decisions are the result of electrochemical processes and of evolutionarily developed programs? Is it not right there, in the brain, that our free will is thwarted by material processes that give us no space to act?

Some authors who have written about free will from a neuroscientific perspective have at times explained away freedom as the brain’s user illusion: a necessary illusion, a fictional construct. Some have argued that since social groups function best when we as individuals assume ourselves to be responsible actors, we must, after all, keep this old illusion alive. Free will is a fiction that works and is needed in society!

This attitude is unsound, says Kathinka Evers. We cannot build our societies on assumptions that contradict our best knowledge. It would be absurd to hold people responsible for actions that they in fact have no ability to influence. At the same time, she agrees that the notion of free will is socially important. But if we are to retain the notion, it must be consistent with our knowledge of the brain.

One of the main points of the article is that our knowledge of the brain could actually provide some room for free will. The brain could function beyond the opposition between indeterminism and strict determinism, some neuroscientific theories suggest. This does not mean that there would be uncaused neural events. Rather, a determinism is proposed where the relationship between cause and effect is variable and contingent, not invariable and necessary, as we commonly assume. As far as I understand, the point is that the brain has been shown to function much more independently, actively and flexibly than the image of it as a kind of programmed machine suggests. Different incoming nerve signals can stabilize different neural patterns of connections in the brain, which support the same behavioural ability. And the same incoming nerve signal can stabilize different patterns of connections that result in the same behavioural ability. Despite great variation in how individuals’ neural patterns of connections are stabilized, the same common abilities are supported. This model of the brain is thus deterministic, while being characterized by variability. It describes a kind of kaleidoscopically variable causality in the brain between incoming signals and resulting behaviours and abilities.
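
The article itself contains no simulations, but the many-to-one idea sketched above can be pictured in a small toy example of my own (purely illustrative, not Kathinka Evers’ model): different incoming signals, starting from different random connection patterns, stabilize different patterns, yet every stabilized pattern supports the same behavioural ability.

```python
import numpy as np

rng = np.random.default_rng(0)

def stabilize_connections(incoming_signal):
    """Hypothetical learning rule: nudge random initial connections until
    the network maps the signal onto the target behaviour (output close to 1)."""
    weights = rng.normal(size=incoming_signal.shape)
    for _ in range(1000):
        output = np.tanh(weights @ incoming_signal)
        weights += 0.1 * (1.0 - output) * incoming_signal  # simple error correction
    return weights

# Three different incoming signals...
signals = [rng.normal(size=8) for _ in range(3)]
# ...stabilize three different patterns of connections...
patterns = [stabilize_connections(s) for s in signals]
# ...which nevertheless all support the same "ability": output close to 1.
for pattern, signal in zip(patterns, signals):
    print(np.round(pattern, 2), "->", round(float(np.tanh(pattern @ signal)), 2))
```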

Kathinka Evers thus hypothetically suggests that this variability in the brain, if real, could provide empirical evidence that free will is compatible with determinism.

Read the philosophically exciting article here: Variable determinism in social applications: translating science to society

Although Kathinka Evers suggests that a certain amount of free will could be compatible with what we know about the brain, she emphasizes that neuroscience gives us increasingly detailed knowledge about how we are conditioned by inherited programs, for example, during adolescence, as well as by our conditions and experiences in childhood. We should, after all, be cautiously restrained in praising and blaming each other, she concludes, with a reference to the Stoic Epictetus, one of the philosophers who thought about free will and who rather emphasized freedom from the notion of a free will.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Evers Kathinka (2021/2). Variable Determinism in Social Applications: Translating Science to Society. In Monier Cyril & Khamassi Mehdi (Eds), Liberty and cognition, Intellectica, 75, pp.73-89.

This post in Swedish

We like challenging questions

Artificial intelligence: augmenting intelligence in humans or creating human intelligence in machines?

Sometimes you read articles at the intersection of philosophy and science that contain truly exciting visionary thoughts, which are at the same time difficult to fully understand and assess. The technical elaboration of the thoughts grows as you read, and in the end you do not know if you are capable of thinking independently about the ideas, or if they concern new scientific findings and trends that you lack the expertise to judge.

Today I dare to recommend the reading of such an article. This post must, of course, be short. But the fundamental ideas in the article are so interesting that I hope some readers of this post will also become readers of the article and make a serious attempt to understand it.

What is the article about? It is about an alternative approach to the highest aims and claims in artificial intelligence. Instead of trying to create machines that can do what humans can do, machines with higher-level capacities such as consciousness and morality, the article focuses on the possibility of creating machines that augment the intelligence of already conscious, morally thinking humans. However, this idea is not entirely new. It has existed for over half a century in, for example, cybernetics. So what is new in the article?

Something I myself was struck by was the compassionate voice in the article, which is otherwise not prominent in the AI literature. The article focuses not on creating super-smart problem solvers, but on strengthening our connections with each other and with the world in which we live. The examples that are given in the article are about better moral considerations for people far away, better predictions of natural disasters in a complex climate, and about restoring social contacts in people suffering from depression or schizophrenia.

But perhaps the most original idea in the article is the suggestion that the development of these human self-augmenting machines would draw inspiration from how the brain already maintains contact with its environment. Here one should keep in mind that we are dealing with mathematical models of the brain and with innovative ways of thinking about how the brain interacts with the environment.

It is tempting to see the brain as an isolated organ. But the brain, via the senses and nerve pathways, is in constant dynamic exchange with the body and the world. You would not experience the world if the world did not constantly make new imprints in your brain and you did not constantly act on those imprints. This intense interactivity on multiple levels and time scales aims to maintain a stable and comprehensible contact with the surrounding world. The way of thinking in the article reminds me of the concept of a “digital twin,” which I previously blogged about. But here it is the brain that appears to be a neural twin of the world. The brain resembles a continuously updated neural mirror image of the world, which it simultaneously continuously changes.

Here, however, I find it difficult to properly understand and assess the thoughts in the article, especially regarding the mathematical model that is supposed to describe the “adaptive dynamics” of the brain. But as I understand it, the article suggests the possibility of recreating a similar dynamic in intelligent machines, which could enhance our ability to see complex patterns in our environment and be in contact with each other. A little poetically, one could perhaps say that it is about strengthening our neural twinship with the world. A kind of neural-digital twinship with the environment? A digitally augmented neural twinship with the world?

I dare not say more here about the visionary article. Maybe I have already taken too many poetic liberties? I hope that I have at least managed to make you interested in reading the article and assessing it for yourself: Augmenting Human Selves Through Artificial Agents – Lessons From the Brain.

Well, maybe one concluding remark. I mentioned the difficulty of sometimes understanding and assessing visionary ideas that are formulated at the intersection of philosophy and science. Is not that difficulty itself an example of how our contact with the world can sometimes weaken? However, I do not know if I would have been helped by digital intelligence augmentation that quickly took me through the philosophical difficulties that can arise during reading. Some questions seem essentially to require time: that you stop and think!

Giving yourself time to think is a natural way to deepen your contact with reality, as philosophers have known for millennia.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Northoff G, Fraser M, Griffiths J, Pinotsis DA, Panangaden P, Moran R and Friston K (2022) Augmenting Human Selves Through Artificial Agents – Lessons From the Brain. Front. Comput. Neurosci. 16:892354. doi: 10.3389/fncom.2022.892354

This post in Swedish

We recommend readings

An ethical strategy for improving the healthcare of brain-damaged patients

How can we improve the clinical care of brain-damaged patients? Individual clinicians, professional and patient associations, and other relevant stakeholders are struggling with this huge challenge.

A crucial step towards a better treatment of these very fragile patients is the elaboration and adoption of agreed-upon recommendations for their clinical treatment, both in emergency and intensive care settings. These recommendations should cover different aspects, from diagnosis to prognosis and rehabilitation plan. Both Europe and the US have issued relevant guidelines on Disorders of Consciousness (DoCs) in order to make clinical practice consistent and ultimately more beneficial to patients.

Nevertheless, these documents risk becoming ineffective or not having sufficient impact if they are not complemented with a clear strategy for operationalizing them. In other words, it is necessary to develop an adequate translation of the guidelines into actual clinical practice.

In a recent article that I wrote with Arleen Salles, we argue that ethics plays a crucial role in elaborating and implementing this strategy. The application of the guidelines is ethically very relevant, as it can directly impact the patients’ well-being, their right to the best possible care, communication between clinicians and family members, and overall shared decision-making. Failure to apply the guidelines in an ethically sound manner may inadvertently lead to unequal and unfair treatment of certain patients.

To illustrate, both documents recommend integrating behavioural and instrumental approaches to improve the diagnostic accuracy of DoCs (such as vegetative state/unresponsive wakefulness syndrome, minimally conscious state, and cognitive-motor dissociation). This recommendation is commendable, but not easy to follow because of a number of shortcomings and limitations in the actual clinical settings where patients with DoCs are diagnosed and treated. For instance, not all “ordinary,” non-research oriented hospitals have the necessary financial, human, and technical resources to afford the dual approach recommended by the guidelines. The implementation of the guidelines is arguably a complex process, involving several actors at different levels of action (from the administration to the clinical staff, from the finances to the therapy, etc.). Therefore, it is crucial to clearly identify “who is responsible for what” at each level of the implementation process.

For this reason, we propose that a strategy be built up to operationalize the guidelines, based on a clarification of the notion of responsibility. We introduce a Distributed Responsibility Model (DRM), which frames responsibility as multi-level and multi-dimensional. The main tenet of DRM is a shift from an individualistic to a modular understanding of responsibility, where several agents share professional and/or moral obligations across time. Moreover, specific responsibilities are assigned depending on the different areas of activity. In this way, each agent is assigned a specific autonomy in relation to their field of activity, and the mutual interaction between different agents is clearly defined. As a result, DRM promotes trust between the various agents.

Neither the European nor the US guidelines explicitly address the issue of implementation in terms of responsibility. We argue that this is a problem, because in situations of scarce resources and financial and technological constraints, it is important to explicitly conceptualize responsibility as a distributed ethical imperative that involves several actors. This will make it easier to identify possible failures at different levels and to implement adequate corrective action.

In short, we identify three main levels of responsibility: institutional, clinical, and interpersonal. At the institutional level, responsibility refers to the obligations of the relevant institution or organization (such as the hospital or the research centre). At the clinical level, responsibility refers to the obligations of the clinical staff. At the interpersonal level, responsibility refers to the involvement of different stakeholders with individual patients (more specifically, institutions, clinicians, and families/surrogates).

Our proposal in the article is thus to combine these three levels, as formalized in DRM, in order to operationalize the guidelines. This can help reduce the gap between the recommendations and actual clinical practice.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, Michele; Salles, Arleen. American and European Guidelines on Disorders of Consciousness: Ethical Challenges of Implementation, Journal of Head Trauma Rehabilitation: April 13, 2022. doi: 10.1097/HTR.0000000000000776

We want solid foundations

How can we detect consciousness in brain-damaged patients?

Detecting consciousness in brain-damaged patients can be a huge challenge and the results are often uncertain or misinterpreted. In a previous post on this blog I described six indicators of consciousness that I introduced together with a neuroscientist and another philosopher. Those indicators were originally elaborated with animals and AI systems in mind. Our question was: what capacities (deducible from behavior and performance, or from relevant cerebral underpinnings) make it reasonable to attribute consciousness to these non-human agents? In the same post, I mentioned that we were engaged in a multidisciplinary exploration of the clinical relevance of selected indicators, specifically testing them on patients with Disorders of Consciousness (DoCs, for instance, Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State, Cognitive-Motor Dissociation). While this multidisciplinary work is still in progress, we recently published an ethical reflection on the clinical relevance of the indicators of consciousness, taking DoCs as a case study.

To recapitulate, indicators of consciousness are conceived as particular capacities that can be deduced from the behavior or cognitive performance of a subject and that serve as a basis for a reasonable inference about the level of consciousness of the subject in question. Importantly, the neural correlates of the relevant behavior or cognitive performance may also make it possible to deduce the indicators of consciousness. This makes the indicators relevant to patients with DoCs, who are often unable to behave or to communicate overtly. Responses in the brain can be used to deduce the indicators of consciousness in these patients.

On the basis of this relevance, we illustrate how the different indicators of consciousness might be applied to patients with DoCs, with the final goal of contributing to improving the assessment of their residual conscious activity. In fact, a still astonishing rate of misdiagnosis affects this clinical population. It is estimated that up to 40 % of patients with DoCs are wrongly diagnosed as being in Vegetative State/Unresponsive Wakefulness Syndrome, while they are actually in a Minimally Conscious State. The difference between these diagnoses is not trivial, since they have importantly different prognostic implications, which raises a huge ethical problem.

We also argue for the need to recognize and explore the specific quality of the consciousness possibly retained by patients with DoCs. Because of the devastating damage to their brains, it is likely that their residual consciousness is very different from that of healthy subjects, who are usually assumed as the reference standard in diagnostic classification. To illustrate, while consciousness in healthy subjects is characterized by several distinct sensory modalities (for example, seeing, hearing and smelling), it is possible that in patients with DoCs, conscious contents (if any) are very limited in sensory modalities. These limitations may be evaluated based on the extent of the brain damage and on the patients’ residual behaviors (for instance, sniffing for smelling). Also, consciousness in healthy subjects is characterized by both dynamics and stability: it includes both dynamic changes and short-term stabilization of contents. Again, in the case of patients with DoCs, it is likely that their residual consciousness is very unstable and flickering, without any capacity for stabilization. If we approach patients with DoCs without acknowledging that consciousness is like a spectrum that accommodates different possible shapes and grades, we exclude a priori the possibility of recognizing the peculiarity of the consciousness possibly retained by these patients.

The indicators of consciousness we introduced offer a potential help to identify the specific conscious abilities of these patients. While in this paper we argue for the rationale behind the clinical use of these indicators, and for their relevance to patients with DoCs, we also acknowledge that they open up new lines of research with concrete application to patients with DoCs. As already mentioned, this more applied work is in progress and we are confident of being able to present relevant results in the weeks to come.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, M., Pennartz, C., Annen, J. et al. Indicators and criteria of consciousness: ethical implications for the care of behaviourally unresponsive patients. BMC Med Ethics 23, 30 (2022). https://doi.org/10.1186/s12910-022-00770-3

We have a clinical perspective

Fact resistance, human nature and contemplation

Sometimes we all resist facts. I saw a cyclist slip on the icy road. When I asked if she was all right, she was on her feet in an instant and denied everything: “I did not fall!” It is human to deny facts. They can hurt and be disturbing.

What are we resisting? The usual answer is that fact-resistant individuals or groups resist facts about the world around us, such as statistics on violent crime, on vaccine side effects, on climate change or on the spread of disease. It then becomes natural to offer resistance to fact resistance by demanding more rigour in the field of knowledge. People should learn to turn more rigorously to the world they live in! The problem is that fact-resistant attitudes do just that. They are almost bewitched by the world and by the causes of what are perceived as outrageous problems in it. And now we too are bewitched by fact resistance and speculate about the causes of this outrageous problem.

Of course, we believe that our opposition is justified. But who does not think so? Legitimate resistance is met by legitimate resistance, and soon the conflict escalates around its double spiral of legitimacy. The possibility of resolving it is blocked by the conflict itself, because all parties are equally legitimate opponents of each other. Everyone hears their own inner voices warning them against acknowledging their mistakes, against acknowledging their uncertainty, against acknowledging their human resistance to reality, as when we fall off the bike and wish it had never happened. The opposing side would immediately seize the opportunity! Soon, our mistake is a scandal on social media. So we do as the person who slipped on the icy road, we deny everything without thinking: “I was not wrong, I had my own facts!” We ignore the fact that life thereby becomes a lie, because our inner voices warn us against acknowledging our uncertainty. We have the right to be recognized, our voices insist, at least as an alternative to the “established view.”

Conflicts give us no time for reflection. Yet, there is really nothing stopping us from sitting down, in the midst of conflict, and resolving it within ourselves. When we give ourselves time to think for ourselves, we are freer to acknowledge our uncertainty and examine our spirals of thought. Of course, this philosophical self-examination does not resolve the conflict between legitimate opponents which escalates around us as increasingly impenetrable and real. It only resolves the conflict within ourselves. But perhaps our thoughtful philosophical voice still gives a hint of how, just by allowing us to soar in uncertainty, we already see the emptiness of the conflict and are free from it?

If we more often dared to soar in uncertainty, if it became more permissible to say “I do not know,” if we listened more attentively to thoughtful voices instead of silencing them with loud knowledge claims, then perhaps fact resistance would also decrease. Perhaps fact resistance is not least resistance to an inner fact. To a single inner fact. What fact? Our insecurity as human beings, which we do not permit ourselves. But if you allow yourself to slip on the icy road, then you do not have to deny that you did!

A more thoughtful way of being human should be possible. We shape the societies that shape us.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

We care about communication

How can neuroethics and AI ethics join forces?

As I already wrote on this blog, there has been an explosion of AI in recent years. AI affects so many aspects of our lives that it is virtually impossible to avoid interacting with it. Since AI has such an impact, it must be examined from an ethical point of view, for the very basic reason that it can be developed and/or used for both good and evil.

In fact, AI ethics is becoming increasingly popular nowadays. As it is a fairly young discipline, even though it has roots in, for example, digital and computer ethics, the question of its status and methodology remains open. To simplify the debate, the main trend is to conceive of AI ethics in terms of practical ethics, for example, with a focus on the impact of AI on traditional practices in education, work, healthcare and entertainment, among others. In addition to this practically oriented analysis, there is also attention to the impact of AI on the way we understand our society and ourselves as part of it.

In this debate about the identity of AI ethics, the need for a closer collaboration with neuroethics has been briefly pointed out, but so far no systematic reflection has been made on this need. In a new article, I propose, together with Kathinka Evers and Arleen Salles, an argument to justify the need for closer collaboration between neuroethics and AI ethics. In a nutshell, even though they both have specific identities and their topics do not completely overlap, we argue that neuroethics can complement AI ethics for both content-related and methodological reasons.

Some of the issues raised by AI are related to fundamental questions that neuroethics has explored since its inception. Think, for example, of topics such as intelligence: what does it mean to be intelligent? In what sense can a machine be qualified as an intelligent agent? Could this be a misleading use of words? And what ethical implications can this linguistic habit have, for example, on how we attribute responsibility to machines and to humans? Another issue that is increasingly gaining ground in AI ethics literature, as I wrote on this blog, is the conceivability and the possibility of artificial consciousness. Neuroethics has worked extensively on both intelligence and consciousness, combining applied and fundamental analyses, which can serve as a source of relevant information for AI ethics.

In addition to the above content-related reasons, neuroethics can also provide AI ethics with a methodological model. To illustrate, the kind of conceptual clarification performed in fundamental neuroethics can enrich the identification and assessment of the practical ethical issues raised by AI. More specifically, neuroethics can provide a three-step model of analysis to AI ethics: 1. Conceptual relevance: can specific notions, such as autonomy, be attributed to AI? 2. Ethical relevance: are these specific notions ethically salient (i.e., do they require ethical evaluation)? 3. Ethical value: what is the ethical significance and the related normative implications of these specific notions?

This three-step approach is a promising methodology for ethical reflection about AI which avoids the trap of anthropocentric self-projection, a risk that actually affects both the philosophical reflection on AI and its technical development.

In this way, neuroethics can contribute to avoiding both hypes and disproportionate worries about AI, which are among the biggest challenges facing AI ethics today.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, M., Evers, K. & Salles, A. On the Contribution of Neuroethics to the Ethics and Regulation of Artificial Intelligence. Neuroethics 15, 4 (2022). https://doi.org/10.1007/s12152-022-09484-0

We transcend disciplinary borders

Images of good and evil artificial intelligence

As Michele Farisco has pointed out on this blog, artificial intelligence (AI) often serves as a projection screen for our self-images as human beings. Sometimes also as a projection screen for our images of good and evil, as you will soon see.

In AI and robotics, autonomy is often sought in the sense that the artificial intelligence should be able to perform its tasks optimally without human guidance. Like a self-driving car, which safely takes you to your destination without you having to steer, accelerate or brake. Another form of autonomy that is often sought is that artificial intelligence should be self-learning and thus be able to improve itself and become more powerful without human guidance.

Philosophers have discussed whether AI can be autonomous even in another sense, which is associated with human reason. According to this picture, we can as autonomous human beings examine our final goals in life and revise them if we deem that new knowledge about the world motivates it. Some philosophers believe that AI cannot do this, because the final goal, or utility function, would make it irrational to change the goal. The goal is fixed. The idea of such stubbornly goal-oriented AI can evoke worrying images of evil AI running amok among us. But the idea can also evoke reassuring images of good AI that reliably supports us.

Worried philosophers have imagined an AI that has the ultimate goal of making ordinary paper clips. This AI is assumed to be self-improving. It is therefore becoming increasingly intelligent and powerful when it comes to its goal of manufacturing paper clips. When the raw materials run out, it learns new ways to turn the earth’s resources into paper clips, and when humans try to prevent it from destroying the planet, it learns to destroy humanity. When the planet is wiped out, it travels into space and turns the universe into paper clips.

Philosophers who issue warnings about “evil” super-intelligent AI also express hopes for “good” super-intelligent AI. Suppose we could give self-improving AI the goal of serving humanity. Without getting tired, it would develop increasingly intelligent and powerful ways of serving us, until the end of time. Unlike the god of religion, this artificial superintelligence would hear our prayers and take ever-smarter action to help us. It would probably sooner or later learn to prevent earthquakes and our climate problems would soon be gone. No theodicy in the world could undermine our faith in this artificial god, whose power to protect us from evil is ever-increasing. Of course, it is unclear how the goal of serving humanity can be defined. But given the opportunity to finally secure the future of humanity, some hopeful philosophers believe that the development of human-friendly self-improving AI should be one of the most essential tasks of our time.

I read all this in a well-written article by Wolfhart Totschnig, who questions the rigid goal orientation associated with autonomous AI in the scenarios above. His most important point is that rigidly goal-oriented AI, which runs amok in the universe or saves humanity from every predicament, is not even conceivable. Outside its domain, the goal loses its meaning. The goal of a self-driving car to safely take the user to the destination has no meaning outside the domain of road traffic. Domain-specific AI can therefore not be generalized to the world as a whole, because the utility function loses its meaning outside the domain, long before the universe is turned into paper clips or the future of humanity is secured by an artificially good god.

This is, of course, an important philosophical point about goals and meaning, about specific domains and the world as a whole. The critique helps us to more realistically assess the risks and opportunities of future AI, without being bewitched by our images. At the same time, I get the impression that Totschnig continues to use AI as a projection screen for human self-images. He argues that future AI may well revise its ultimate goals as it develops a general understanding of the world. The weakness of the above scenarios was that they projected today’s domain-specific AI, not the general intelligence of humans. We then do not see the possibility of a genuinely human-like AI that self-critically reconsiders its final goals when new knowledge about the world makes it necessary. Truly human-equivalent AI would have full autonomy.

Projecting human self-images on future AI is not just a tendency, as far as I can judge, but a norm that governs the discussion. According to this norm, the wrong image is projected in the scenarios above. An image of today’s machines, not of our general human intelligence. Projecting the right self-image on future AI thus appears as an overall goal. Is the goal meaningful or should it be reconsidered self-critically?

These are difficult issues and my impression of the philosophical discussion may be wrong. If you want to judge for yourself, read the article: Fully autonomous AI.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Totschnig, W. Fully Autonomous AI. Sci Eng Ethics 26, 2473–2485 (2020). https://doi.org/10.1007/s11948-020-00243-z

This post in Swedish

We like critical thinking

Digital twins, virtual brains and the dangers of language

A new computer simulation technology has begun to be introduced, for example, in the manufacturing industry. The computer simulation is called a digital twin, a name that challenges me to bring to life for the reader what something that sounds so imaginative can be in reality.

The most realistic explanation I can find actually comes from Harry Potter’s world. Do you remember the map of Hogwarts, which not only shows all the rooms and corridors, but also the steps in real time of those who sneak around the school? A similar map can be easily created in a computer environment by connecting the map in the computer to sensors in the floor of the building that the map depicts. Immediately you have an interactive digital map of the building that is automatically updated and shows people’s movements in it. Imagine further that the computer simulation can make calculations that predict crowds that exceed the authorities’ recommendations, and that it automatically sends out warning messages via a speaker system. As far as I understand, such an interactive digital map can be called a digital twin for an intelligent house.

Of course, this is a revolutionary technology. The architect’s drawing in a computer program gets extended life in both the production and maintenance of the building. The digital simulation is connected to sensors that update the simulation with current data on relevant factors in the construction process and thereafter in the finished building. The building gets a digital twin that, during the building’s entire life cycle, automatically contacts maintenance technicians when the sensors show that the washing machines are starting to wear out or that the air is not circulating properly.

The scope of use for digital twins is huge. The point of them, as I understand it, is not that they are “exact virtual copies of reality,” whatever that might mean. The point is that the computer simulation is linked to the simulated object in a practically relevant way. Sensors automatically update the simulation with relevant data, while the simulation automatically updates the simulated object in relevant ways. At the same time, users, manufacturers, maintenance technicians and other actors are kept updated and can easily monitor the object’s current status, opportunities and risks, wherever they are in the world.
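
As a reader’s illustration of that loop (not taken from any real digital-twin platform; the sensor readings, threshold and alert channel are all assumptions), the sense-update-act cycle could look roughly like this in code:

```python
from dataclasses import dataclass, field

@dataclass
class BuildingTwin:
    """Digital model of a building, kept in sync with its floor sensors."""
    occupancy: dict = field(default_factory=dict)  # room -> number of people
    max_per_room: int = 50                         # hypothetical crowd recommendation

    def update_from_sensors(self, sensor_readings: dict) -> None:
        # Sensors automatically update the simulation with current data.
        self.occupancy.update(sensor_readings)

    def check_and_alert(self, speaker_system) -> None:
        # The simulation acts back on the building it mirrors,
        # for example by warning about crowds that exceed recommendations.
        for room, count in self.occupancy.items():
            if count > self.max_per_room:
                speaker_system.announce(f"Please spread out: {room} is overcrowded.")

class Speakers:
    def announce(self, message: str) -> None:
        print(message)

# One iteration of the twin's sense-update-act cycle.
twin = BuildingTwin()
twin.update_from_sensors({"lecture hall": 73, "library": 12})
twin.check_and_alert(Speakers())
```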

The European flagship project Human Brain Project plans to develop digital twins of human brains by building virtual brains in a computer environment. In a new article, the philosophers Kathinka Evers and Arleen Salles, who are both working in the project, examine the enormous challenges involved in developing digital twins of living human brains. Is it even conceivable?

The authors compare types of objects that can have digital twins. It can be artefacts such as buildings and cars, or natural inanimate phenomena such as the bedrock at a mine. But it could also be living things such as the heart or the brain. The comparisons in the article show that the brain stands out in several ways, all of which make it unclear whether it is reasonable to talk about digital twins of human brains. Would it be more appropriate to talk about digital cousins?

The brain is astronomically complex and despite new knowledge about it, it is highly opaque to our search for knowledge. How can we talk about a digital twin of something that is as complex as a galaxy and as unknown as a black hole? In addition, the brain is fundamentally dynamically interactive. It is connected not only with the body but also with culture, society and the world around it, with which it develops in uninterrupted interaction. The brain almost merges with its environment. Does that imply that a digital twin would have to be a twin of the brain-body-culture-society-world, that is, a digital twin of everything?

No, of course not. The aim of the project is to find specific medical applications of the new computer simulation technology. By developing digital twins of certain aspects of certain parts of patients’ brains, it is hoped that one can improve and individualize, for example, surgical procedures for diseases such as epilepsy. Just as the map from Harry Potter’s world shows people’s steps in real time, the digital twin of the brain could follow the spread of certain nerve impulses in certain parts of the patient’s brain. This can open up new opportunities to monitor, diagnose, predict and treat diseases such as epilepsy.

Should we avoid the term digital twin when talking about the brain? Yes, it would probably be wiser to talk about digital siblings or digital cousins, argue Kathinka Evers and Arleen Salles. Although experts in the field understand its technical use, the term “digital twin” is linguistically risky when we talk about human brains. It easily leads the mind astray. We imagine that the digital twin must be an exact copy of a human’s whole brain. This risks creating unrealistic expectations and unfounded fears about the development. History shows that language also contains other dangers. Words come with normative expectations that can have ethical and social consequences that may not have been intended. Talking about a digital twin of a mining drill is probably no major linguistic danger. But when it comes to the brains of individual people, the talk of digital twins can become a new linguistic arena where we reinforce prejudices and spread fears.

After reading some popular scientific explanations of digital twins, I would like to add that caution may be needed also in connection with industrial applications. After all, the digital twin of a mining drill is not an “exact virtual copy of the real drill” in some absolute sense, right down to the movements of individual atoms. The digital twin is a copy in the practical sense that the application makes relevant. Sometimes it is enough to copy where people put their feet down, as in Harry Potter’s world, whose magic unexpectedly helps us understand the concept of a digital twin more realistically than many verbal explanations do. Explaining words with the help of other words is not always clarifying, if all the words steer thought in the same direction. The words “copy” and “replica” lead our thinking just as right and just as wrong as the word “twin” does.

If you want to better understand the challenges of creating digital twins of human brains and the importance of conceptual clarity concerning the development, read the philosophically elucidatory article: Epistemic Challenges of Digital Twins & Virtual Brains: Perspectives from Fundamental Neuroethics.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Evers, Kathinka & Salles, Arleen (2021). Epistemic Challenges of Digital Twins & Virtual Brains: Perspectives from Fundamental Neuroethics. SCIO: Revista de Filosofía, 21, 27-53. doi: 10.46583/scio_2021.21.846

This post in Swedish

Minding our language

Inspired

What does it mean to be inspired by someone? Think of those albums where artists lovingly pay tribute to a great musician by making their own interpretations of the songs. These interpretations often express deep gratitude for the inspiration received from the musician. We can feel similar gratitude to inspiring people in many different areas.

Why are we inspired by inspiring people? Here is a tempting picture. The person who inspires us has something that we lack. To be inspired is to want what the inspiring person has: “I also want to be able to…”; “I want to be as good as…” and so on. That is why we imitate those who inspire us. That is why we train hard. By imitating, by practicing, the inspiring person’s abilities can be transferred to us who lack them.

This could be called a pneumatic picture of inspiration. The inspiring one is, so to speak, an air tank with overpressure. The rest of us are tanks with negative pressure. The pressure difference causes the inspiration. By imitating the inspiring person, the pressure difference is evened out. The pressure migrates from the inspiring to the inspired. We inhale the air that flows from the tank with overpressure.

This picture is certainly partly correct, but it is hardly the whole truth about inspiration. I am not a musician. There is a big difference in pressure between me and any musician. Why does this pressure difference not cause inspiration? Why do I not start imitating musicians, training hard so that some of the musicians’ overpressure is transferred to me?

The pneumatic picture is not the whole truth, other pictures of inspiration are possible. Here is one. Maybe inspiration is not aroused by difference, not by the fact that we lack what the inspiring person has. Perhaps inspiration is aroused by similarity, by the fact that we sense a deep affinity with the one who inspires us. When we are inspired, we recognize ourselves in the one who inspires us. We discover something we did not know about ourselves. Seeds that we did not know existed in us begin to sprout, when the inspiring person makes us aware that we have the same feeling, the same passion, the same creativity… At that moment, the inspiration is aroused in us.

In this alternative picture of inspiration, there is no transfer of abilities from the inspiring one to the inspired ones. Rather, the abilities grow spontaneously in the inspired ones themselves, when they sense their affinity with the inspiring one. In the inspiring person, this growth has already taken place. Creativity has had time to develop and take shape, so that the rest of us can recognize ourselves in it. This alternative image of inspiration also provides an alternative image of human history in different areas. We are familiar with historical representations of how predecessors inspired their successors, as if the abilities of the predecessors were transferred horizontally in time. In the alternative picture, history is not just horizontal. Above all, it has a vertical depth dimension in each of us. Growing takes place vertically in each new generation, much like seeds sprout in the earth and grow towards the sky. History is, in this alternative image, a series of vertical growths, where it is difficult to distinguish the living creativity in the depth dimension from the imitation on the surface.

Why am I writing a post about inspiration? Apart from the fact that it is inspiring to think about something as vital as inspiration, I want to show how we make pictures of facts without noticing it. We do not see that they are actually just pictures, which could be replaced by completely different pictures. I learned this from the philosopher Ludwig Wittgenstein, who inspired me to examine philosophical questions myself: questions which surprisingly often arise because we are captured in our images of things. Our captivity in certain images prevents us from seeing other possibilities and obvious facts.

In addition, I want to show that it really makes a difference if we are caught in our pictures of things or open to the possibility of completely different pictures. It has been a long time since I wrote about ape language research on this blog, but the attempt to teach apes human language is an example of what a huge difference it can make, if we free ourselves from a picture that prevents us from seeing the possibility of other pictures.

Attempts to teach apes human language were based on the first picture, which highlights the difference between the one who inspires and the one who is inspired. It was thought that because apes lack the language skills that we humans have, there is only one way to teach apes human language. We need to transfer the language skills horizontally to the apes, by training them. This supposedly only way failed so clearly, and the failure was so well documented, that only a few researchers were subsequently open to the results of a markedly more successful, at least as well-documented experiment, which was based on the alternative picture of inspiration.

In the alternative experiment, the researchers saw an opportunity that the first picture made it difficult to see. If apes and humans live together daily in a closely united group, so that they have opportunities to sense affinities with each other, then language seeds that we did not know existed in apes could be inspired to sprout and grow spontaneously in the apes themselves. Vertically within the apes, rather than through horizontal transmission, as when humans train animals. In fact, this alternative experiment was so successful that it resulted in a series of spontaneous language growths in apes. As time went on, new-born apes were inspired not only by the humans in the group, but also by the older apes whose linguistic creativity had taken shape.

If you want to read more about this unexpected possibility of inspiration between species, which suggests unexpected affinities, as when humans are inspired by each other, you will find a book reference below. I wrote the book a long time ago with William M. Fields and Sue Savage-Rumbaugh. Both have inspired me – for which I am deeply grateful – for example, in this blog post with its alternative picture of inspiration. I mention the book again because I hope that the time is ripe for philosophers, psychologists, anthropologists, educationalists, linguists, neuroscientists and many others to be inspired by the unexpected possibility of human-inspired linguistic creativity in our non-human relatives.

To finally connect the threads of music and ape language research, I can tell you that two great musicians, Paul McCartney and Peter Gabriel, have visited the language-inspired apes. Both of them played music with the apes and Peter Gabriel and Panbanisha even created a song together. Can we live without inspiration?

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Segerdahl, P., Fields, W. & Savage-Rumbaugh, S. 2005. Kanzi’s Primal Language. The Cultural Initiation of Primates into Language. Palgrave Macmillan

Segerdahl, P. 2017. Can an Ape Become Your Co-Author? Reflections on Becoming as a Presupposition of Teaching. In: A Companion to Wittgenstein on Education. Pedagogical Investigations. Peters, M. A. and Stickney, J. (Eds.). Singapore: Springer, pp. 539-553

This post in Swedish

We write about apes
