A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: Human Brain Project

To change the changing human

Neuroscience contributes to human self-understanding, but it also raises concerns that it might change humanness, for example, through new neurotechnology that affects the brain so deeply that humans are no longer truly human, or no longer experience themselves as human. Patients treated with deep brain stimulation, for instance, have stated that they feel like robots.

What ethical and legal measures could such a development justify?

Arleen Salles, neuroethicist in the Human Brain Project, argues that the question is premature, since we have not clarified our concept of humanness. The matter is complicated by the fact that there are several concepts of humanness to be concerned about. If we believe that our humanness consists in certain unique abilities that distinguish humans from animals (such as morality), then we tend to dehumanize beings who we believe lack these abilities as “animal-like.” If we believe that our humanness consists in certain abilities that distinguish humans from inanimate objects (such as emotions), then we tend to dehumanize beings who we believe lack these abilities as “mechanical.” It is probably in the latter sense that the patients mentioned above state that they do not feel human but rather like robots.

After a review of basic features of central philosophical concepts of human nature, Arleen Salles’ reflections take a surprising turn. She presents a concept of humanness that is based on the neuroscientific research that one worries could change our humanness! What is truly surprising is that this concept of humanness to some extent questions the question itself. The concept emphasizes the profound changeability of the human.

What does it mean to worry that neuroscience can change human nature, if human nature is largely characterized by its ability to change?

If you follow the Ethics Blog and remember a post about Kathinka Evers’ idea of a neuroscientifically motivated responsibility for human nature, you are already familiar with the dynamic concept of human nature that Arleen Salles presents. In simple terms, it can be said to be a matter of complementing human genetic evolution with an “epigenetic” selective stabilization of synapses, which every human being undergoes during upbringing. These connections between brain cells are not inherited genetically but are selected in the living brain while it interacts with its environments. Language can be assumed to belong to the human abilities that largely develop epigenetically. I have proposed a similar understanding of language in collaboration with two ape language researchers.
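To make the idea of selective stabilization more tangible, here is a deliberately crude sketch in Python. It is my own illustration of the idea described above, not a model taken from Salles or Evers, and every name and number in it is an invented assumption: an overproduced repertoire of connections is pruned down to those that lived experience in a particular environment actually activates.

    import random
    random.seed(1)  # one particular "environment"; change it and the outcome changes

    # Overproduced initial repertoire: each connection gets a chance of
    # being activated that depends on the environment, not on the genome.
    repertoire = {f"synapse_{i}": random.random() for i in range(20)}
    usage = {s: 0 for s in repertoire}

    # Lived experience: repeated episodes in which the environment
    # activates some connections more often than others.
    for episode in range(200):
        for synapse, activation_probability in repertoire.items():
            if random.random() < activation_probability:
                usage[synapse] += 1

    # "Epigenetic" selection: frequently used connections stabilize,
    # rarely used ones are pruned away.
    stabilized = [s for s, n in usage.items() if n > 100]
    print(f"stabilized {len(stabilized)} of {len(repertoire)} connections")

Running the sketch with a different seed, standing in for growing up in a different environment, stabilizes a different set of connections. That is the point of the dynamic picture: the stable outcome is selected in interaction with the environment, not genetically inherited.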

Do not assume that this dynamic concept of human nature presupposes that humanness is unstable, as if the slightest gust of wind could disrupt human evolution and change human nature. On the contrary, the language we develop during upbringing probably contributes to stabilizing the many human traits that develop simultaneously. Language probably supports the transmission, to new generations, of the human forms of life in which language has its uses.

Arleen Salles’ reflections are important contributions to the neuroethical discussion about human nature, the brain and neuroscience. In order to take ethical responsibility, we need to clarify our concepts, she emphasizes. We need to consider that humanness develops in three interconnected dimensions. It is about our genetics together with the selective stabilization of synapses in living brains in continuous interaction with social-cultural-linguistic environments. All at the same time!

Arleen Salles’ reflections are published as a chapter in a new anthology, Developments in Neuroethics and Bioethics (Elsevier). I am not sure if the publication will be open access, but hopefully you can find Arleen Salles’ contribution via this link: Humanness: some neuroethical reflections.

The chapter is recommended as an innovative contribution to the understanding of human nature and the question of whether neuroscience can change humanness. The question takes a surprising turn, which suggests that together we have an ongoing responsibility for our changing humanness.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Arleen Salles (2021). Humanness: some neuroethical reflections. Developments in Neuroethics and Bioethics. https://doi.org/10.1016/bs.dnb.2021.03.002

This post in Swedish

We think about bioethics

Can you be cloned?

Why can we feel metaphysical nausea at the thought of cloned humans? I guess it has to do with how we, without giving ourselves sufficient time to reflect, are captivated by a simple image of individuality and cloning. The image then controls our thinking. We may imagine that cloning consists in multiplying our unique individuality in the form of indistinguishable copies. We then feel dizzy at the unthinkable thought that our individual selves would be multiplied as copies, all of which in some strange way are me, or cannot be distinguished from me.

In a contribution to a philosophical online magazine, Kathinka Evers diagnoses this metaphysical nausea about cloning. If you have the slightest tendency to worry that you may be multiplied as “identical copies” that cannot be distinguished from you, then give yourself the seven minutes it takes to read the text and free yourself from the ailment:

“I cannot be cloned: the identity of clones and what it tells us about the self.”

Of course, Kathinka Evers does not deny that cloning is possible or associated with risks of various kinds. She questions the premature image of cloning by giving us time to reflect on individual identity, without being captivated by the simple image.

We are disturbed by the thought that modern research in some strange way could do what should be unthinkable. When it becomes clear that what we are worried about is unthinkable, the dizziness disappears. In her enlightening diagnosis of our metaphysical nausea, Kathinka Evers combines philosophical reflection with illuminating facts about, among other things, genetics and personality development.

Give yourself the seven minutes it takes to get rid of metaphysical nausea about cloning!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Thinking about thinking

Can AI be conscious? Let us think about the question

Artificial Intelligence (AI) has achieved remarkable results in recent decades, especially thanks to the refinement of an old and long-neglected technique called Deep Learning (DL), a class of machine learning algorithms. Some achievements of DL had a significant impact on public opinion thanks to extensive media coverage, such as the case of the program AlphaGo, which defeated the Go world champion Lee Sedol, and of its successor AlphaGo Zero, which learned the game from self-play alone and surpassed its predecessor.

This triumph of AlphaGo was a kind of profane consecration of AI’s operational superiority in a growing number of tasks. This manifest superiority gave rise to mixed feelings in human observers: pride at being its creator; admiration for what it was able to do; fear of what it might eventually learn to do.

AI research has generated a linguistic and conceptual process of re-thinking traditionally human features, stretching their meaning or even reinventing their semantics in order to attribute these traits also to machines. Think of how learning, experience, training, prediction, to name just a few, are attributed to AI. Even if they have a specific technical meaning among AI specialists, lay people tend to interpret them within an anthropomorphic view of AI.

One human feature in particular is considered the Holy Grail when AI is interpreted according to an anthropomorphic pattern: consciousness. The question is: can AI be conscious? It seems to me that we can answer this question only after considering a number of preliminary issues.

First we should clarify what we mean by consciousness. In philosophy and in cognitive science, there is a useful distinction, originally introduced by Ned Block, between access consciousness and phenomenal consciousness. The first refers to the interaction between different mental states, particularly the availability of one state’s content for use in reasoning and rationally guiding speech and action. In other words, access consciousness refers to the possibility of using what I am conscious of. Phenomenal consciousness refers to the subjective feeling of a particular experience, “what it is like to be” in a particular state, to use the words of Thomas Nagel. So, in what sense of the word “consciousness” are we asking if AI can be conscious?

To illustrate how the sense in which we choose to talk about consciousness makes a difference in assessing the possibility of conscious AI, let us take a look at an interesting article written by Stanislas Dehaene, Hakwan Lau and Sid Kouider. They frame the question of AI consciousness within the Global Neuronal Workspace Theory, one of the leading contemporary theories of consciousness. As the authors write, according to this theory, conscious access corresponds to the selection, amplification, and global broadcasting, to many distant brain areas, of particular information, selected for its salience or relevance to current goals. More specifically, Dehaene and colleagues explore the question of conscious AI along two lines within an overall computational framework:

  1. Global availability of information (the ability to select, access, and report information).
  2. Metacognition (the capacity for self-monitoring and confidence estimation).

Their conclusion is that AI might implement the first meaning of consciousness, while it currently lacks the necessary architecture for the second one.
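To make the two notions more concrete, here is a toy sketch in Python. It is my own illustration, not the model of Dehaene and colleagues, and all names and formulas in it are invented assumptions: several modules propose interpretations of a stimulus, the most salient proposal is broadcast to all modules (a caricature of global availability), and a confidence estimate is attached to the broadcast (a caricature of self-monitoring).

    import random

    class Module:
        """A processor that can propose content and receive global broadcasts."""
        def __init__(self, name, noise):
            self.name, self.noise = name, noise
            self.inbox = None  # whatever was last globally broadcast

        def propose(self, stimulus):
            # A noisy interpretation of the stimulus, scored by salience.
            estimate = stimulus + random.gauss(0, self.noise)
            salience = abs(estimate) / (1 + self.noise)
            return (self.name, estimate, salience)

    def workspace_cycle(modules, stimulus):
        # First notion (global availability): select the most salient
        # proposal and broadcast it, so the same content is available to
        # every module for further use and report.
        proposals = [m.propose(stimulus) for m in modules]
        name, content, salience = max(proposals, key=lambda p: p[2])
        for m in modules:
            m.inbox = (name, content)
        # Second notion (self-monitoring): a crude confidence estimate that
        # reflects how clearly the winner beat its closest competitor.
        runner_up = sorted(p[2] for p in proposals)[-2]
        confidence = salience / (salience + runner_up)
        return content, confidence

    modules = [Module("vision", 0.2), Module("audition", 0.8), Module("touch", 0.5)]
    content, confidence = workspace_cycle(modules, stimulus=1.0)
    print(f"broadcast: {content:.2f} (confidence {confidence:.2f})")

Note that even if a system implemented both functions, a sketch like this would say nothing about phenomenal consciousness in Block’s sense, which is exactly the worry raised below.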

As mentioned, the premise of their analysis is a computational view of consciousness. In other words, they choose to reduce consciousness to specific types of information-processing computations. We can legitimately ask whether such a choice covers the richness of consciousness, particularly whether a computational view can account for the experiential dimension of consciousness.

This shows how the main obstacle in assessing the question of whether AI can be conscious is the lack of agreement on a theory of consciousness in the first place. For this reason, rather than asking whether AI can be conscious, maybe it is better to ask what might indicate that AI is conscious. This brings us back to the indicators of consciousness that I wrote about in a blog post some months ago.

Another important preliminary issue to consider, if we want to seriously address the possibility of conscious AI, is whether we can use the same term, “consciousness,” to refer to a different kind of entity: a machine instead of a living being. Should we expand our definition to include machines, or should we rather create a new term for the machine case? I personally think that the term “consciousness” is too charged, from several perspectives, including ethical, social, and legal ones, to be extended to machines. Using the term to qualify AI risks extending it so far that it eventually becomes meaningless.

If we create AI that manifests abilities that are similar to those that we see as expressions of consciousness in humans, I believe we need a new language to denote and think about it. Otherwise, important preliminary philosophical questions risk being dismissed or lost sight of behind a conceptual veil of possibly superficial linguistic analogies.

Written by…

Michele Farisco, Postdoc Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

We want solid foundations

An unusually big question

Sometimes the intellectual claims on science are so big that they risk obscuring the actual research. This seems to happen not least when the claims are associated with some great prestigious question, such as the origin of life or the nature of consciousness. By emphasizing the big question, one often wants to show that modern science is better suited than older human traditions to answer the riddles of life. Better than philosophy, for example.

I think of this when I read a short article about such a riddle: “What is consciousness? Scientists are beginning to unravel a mystery that has long vexed philosophers.” The article by Christof Koch gives the impression that it is only a matter of time before science determines not only where in the brain consciousness arises (one already seems to have a suspect), but also the specific neural mechanisms that give rise to – everything you have ever experienced. At least if one is to believe one of the fundamental theories about the matter.

Reading about the discoveries behind the identification of where in the brain consciousness arises is as exciting as reading a whodunit. It is obvious that important research is being done here on the effects that loss or stimulation of different parts of the brain can have on people’s experiences, mental abilities and personalities. The description of a new technology and mathematical algorithm for determining whether patients are conscious or not is also exciting and indicates that research is making fascinating progress, which can have important uses in healthcare. But when mathematical symbolism is used to suggest a possible fundamental explanation for everything you have ever experienced, the article becomes as difficult to understand as the most obscure philosophical text from times gone by.

Since even representatives of science sometimes make philosophical claims, namely, when they want to answer prestigious riddles, it is perhaps wiser to be open to philosophy than to compete with it. Philosophy is not just about speculating about big questions. Philosophy is also about humbly clarifying the questions, which otherwise tend to grow beyond all reasonable limits. Such openness to philosophy flourishes in the Human Brain Project, where some of my philosophical colleagues at CRB collaborate with neuroscientists to conceptually clarify questions about consciousness and the brain.

Something I myself wondered about when reading the scientifically exciting but at the same time philosophically ambitious article is the idea that consciousness is everything we experience: “It is the tune stuck in your head, the sweetness of chocolate mousse, the throbbing pain of a toothache, the fierce love for your child and the bitter knowledge that eventually all feelings will end.” What does it mean to take such an all-encompassing claim seriously? What is not consciousness? If everything we can experience is consciousness, from the taste of chocolate mousse to the sight of the stars in the sky and our human bodies with their various organs, where is the objective reality to which science wants to relate consciousness? Is it in consciousness?

If consciousness is our inevitable vantage point, if everything we experience as real is consciousness, it becomes unclear how we can treat consciousness as an objective phenomenon in the world along with the body and other objects. Of course, I am not talking here about actual scientific research about the brain and consciousness, but about the limitless intellectual claim that scientists sooner or later will discover the neural mechanisms that give rise to everything we can ever experience.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Christof Koch, What Is Consciousness? Scientists are beginning to unravel a mystery that has long vexed philosophers, Nature 557, S8-S12 (2018) https://doi.org/10.1038/d41586-018-05097-x

This post in Swedish

We transcend disciplinary borders

Human rights and legal issues related to artificial intelligence

How do we take responsibility for a technology that is used almost everywhere? As we develop more and more uses of artificial intelligence (AI), it becomes increasingly challenging to get an overview of how the technology can affect people and human rights.

Although AI legislation is already being developed in several areas, Rowena Rodrigues argues that we need a panoramic overview of the widespread challenges. What does the situation look like? Where can human rights be threatened? How are the threats handled? Where do we need to make greater efforts? In an article in the Journal of Responsible Technology, she offers such an overview, which she then discusses on the basis of the concept of vulnerability.

The article identifies ten problem areas. One problem is that AI makes decisions based on algorithms whose decision processes are not completely transparent. Why did I not get the job, the loan or the benefit? Hard to know when computer programs deliver decisions as if they were oracles! Other problems concern security and liability, for example when automatic decision-making is used in cars, medical diagnosis or weapons, or when governments monitor citizens. Still other problem areas involve risks of discrimination or invasion of privacy when AI collects and uses large amounts of data to make decisions that affect individuals and groups. In the article you can read about more problem areas.

For each of the ten challenges, Rowena Rodrigues identifies solutions that are currently in place, as well as the issues that remain to be addressed. She then turns to human rights, arguing that international human rights treaties, although they do not mention AI, are relevant to most of the issues she has identified. She emphasises the importance of safeguarding human rights from a vulnerability perspective. Through such a perspective, we see more clearly where and how AI can challenge human rights. We see more clearly how we can reduce negative effects, develop resilience in vulnerable communities, and tackle the root causes of the various forms of vulnerability.

Rowena Rodrigues is linked to the SIENNA project, which ends this month. Read her article on the challenges of a technology that is used almost everywhere: Legal and human rights issues of AI: Gaps, challenges and vulnerabilities.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Rowena Rodrigues. 2020. Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology 4. https://doi.org/10.1016/j.jrt.2020.100005

This post in Swedish

We recommend readings

Learning from international attempts to legislate psychosurgery

So-called psychosurgery, in which psychiatric disorders are treated by neurosurgery, for example by cutting connections in the brain, may have a somewhat tarnished reputation after the insensitive use of lobotomy in the 20th century to treat anxiety and depression. Nevertheless, neurosurgery for psychiatric disorders can help some patients, and the field is developing rapidly. It probably needs updated regulation, but what are the challenges?

The issue is examined from an international perspective in an article in Frontiers in Human Neuroscience. Neurosurgery for psychiatric disorders does not have to involve destroying brain tissue or cutting connections. In so-called deep brain stimulation, for example, electrical pulses are sent to certain areas of the brain. The method has been shown to relieve movement disorders in patients with Parkinson’s disease. This unexpected possibility illustrates one of the challenges. How do we delimit which treatments the regulation should cover in an area with rapid scientific and technical development?

The article charts legislation on neurosurgery for psychiatric disorders from around the world. The purpose is to find strengths and weaknesses in the various legislations, in the hope of justifying reasonable ways of dealing with the challenges in the future while achieving greater international harmonisation. The challenges are, as I said, several, but regarding the challenge of delimiting the treatments to be covered, the legislation in Scotland is mentioned as an example: it does not provide an exhaustive list of treatments covered by the regulation, but states that treatments other than those listed may also be covered.

If you are interested in law and want a more detailed picture of the questions that need to be answered for a good regulation of the field, read the article: International Legal Approaches to Neurosurgery for Psychiatric Disorders.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Chandler JA, Cabrera LY, Doshi P, Fecteau S, Fins JJ, Guinjoan S, Hamani C, Herrera-Ferrá K, Honey CM, Illes J, Kopell BH, Lipsman N, McDonald PJ, Mayberg HS, Nadler R, Nuttin B, Oliveira-Maia AJ, Rangel C, Ribeiro R, Salles A and Wu H (2021) International Legal Approaches to Neurosurgery for Psychiatric Disorders. Front. Hum. Neurosci. 14:588458. doi: 10.3389/fnhum.2020.588458

This post in Swedish

Thinking about law

How do we take responsibility for dual-use research?

We are more often than we think governed by old patterns of thought. As a philosopher, I find it fascinating to see how mental patterns capture us, how we get imprisoned in them, and how we can get out of them. With that in mind, I recently read a book chapter on something that is usually called dual-use research. Here, too, there are patterns of thought that can capture us.

In the chapter, Inga Ulnicane discusses how responsibility for neuroscientific dual-use research of concern was developed in the Human Brain Project (HBP). I read the chapter as a philosophical drama. The European rules that govern HBP are themselves governed by mental patterns about what dual-use research is. In order to take real responsibility for the project, it was therefore necessary within HBP to think oneself free from the patterns that governed the governance of the project. Responsibility became a philosophical challenge: to raise awareness of the real dual-use issues that may be associated with neuroscientific research.

Traditionally, “dual use” refers to civilian versus military uses. By regulating that research in HBP should focus exclusively on civil applications, it can be said that the regulation of the project was itself regulated by this pattern of thought. There are, of course, major military interests in neuroscientific research, not least because the research borders on information technology, robotics and artificial intelligence. Results can be used to improve soldiers’ abilities in combat. They can be used for more effective intelligence gathering, more powerful image analysis, faster threat detection, more accurate robotic weapons, and to satisfy many other military desires.

The problem is that there are more problematic desires than military ones. Research results can also be used to manipulate people’s thoughts and feelings for non-military purposes. They can be used to monitor populations and control their behaviour. It is impossible to say once and for all what problematic desires neuroscientific research can arouse, military and non-military. A single good idea can cause several bad ideas in many other areas.

In HBP, therefore, one prefers to talk about beneficial and harmful uses, rather than civilian and military ones. This more open understanding of “the dual” means that one cannot identify problematic areas of use once and for all. Instead, continuous discussion is required among researchers and other actors, as well as the general public, to increase awareness of various possible problematic uses of neuroscientific research. We need to help each other see real problems, which can occur in completely different places than we expect. Since the problems moreover move across borders, global cooperation is needed between brain projects around the world.

Within HBP, it was found that an additional thought pattern governed the regulation of the project and made it more difficult to take real responsibility. The definition of dual-use in the documents was taken from the EU export control regulation, which is not entirely relevant for research. Here, too, greater awareness is required, so that we do not get caught up in thought patterns about what it is that could possibly have dual uses.

My personal conclusion is that human challenges are not only caused by a lack of knowledge. They are also caused by how we are tempted to think, by how we unconsciously repeat seemingly obvious patterns of thought. Our tendency to become imprisoned in mental patterns makes us unaware of our real problems and opportunities. Therefore, we should take the human philosophical drama more seriously. We need to see the importance of philosophising ourselves free from our self-incurred captivity in enticing ways of thinking. This is what was done in the Human Brain Project, I suggest, when the project felt challenged by the question of what it really means to take responsibility for dual-use research of concern.

Read Inga Ulnicane’s enlightening chapter, The governance of dual-use research in the EU: The case of neuroscience, which also mentions other patterns that can govern our thinking about the governance of dual-use research.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Ulnicane, I. (2020). The governance of dual-use research in the EU: The case of neuroscience. In A. Calcara, R. Csernatoni, & C. Lavallée (Editors), Emerging security technologies and EU governance: Actors, practices and processes. London: Routledge / Taylor & Francis Group, pages 177-191.

This post in Swedish

Thinking about thinking

The hard problem of consciousness: please handle with care!

We face challenges every day. Some are more demanding than others, but it seems that there is not a day without some problem to handle. Unless they are too big to manage, problems are like the engines of our lives. They push us to always go beyond wherever we are and whatever we do, to look for new possibilities, to build new opportunities. In other words: problems make us stay alive.

The same is true for science and philosophy. There is a constant need to face new challenges. Consciousness research is no exception. There are, of course, several problems in the investigation of consciousness. However, one problem has emerged as the big problem, which the Australian philosopher David Chalmers baptised “the hard problem of consciousness.” This classical problem (discussed even before Chalmers coined the expression, actually since the early days of neuropsychology, notably by Alexander Luria and collaborators) refers to the enigma of subjective experience. To adapt a formulation by the philosopher Thomas Nagel, the basic question is: why is there something it is like to be conscious; why, for example, do pain and hunger feel the way they do?

The hard problem has a double nature. On the one hand, it refers to what Joseph Levine called an explanatory gap: the strategy of identifying psychological experiences with physical features of the brain is in the end unable to explain why experiences are related to physical phenomena at all. On the other hand, the hard problem also refers to the question of whether subjective experience can be explained causally or whether it is intrinsic to the world, that is to say: fundamentally there, from the beginning, rather than caused by something more primary.

This double nature of the problem has been a stumbling block in the attempt to explain consciousness. Yet in recent years, the hardness of the problem has been increasingly questioned. Among the arguments offered to soften the problem, there is one that I think merits specific attention. This argument describes consciousness as a cultural concept, meaning that both the way we conceive it and the way we experience it depend on our culture. There are different versions of this argument: some reduce consciousness as such to a cultural construction, while other, less radical versions stress that consciousness has a neurological substrate that is importantly shaped by culture. The relevant point is that by characterising consciousness as a cultural construction, with reference both to how we conceptualise it and to how we are conscious, this argument ultimately questions the hardness of the hard problem.

To illustrate, consider anthropological and neuroscientific arguments that appear to go in the direction of explaining away the hard problem of consciousness. Anthropological explanations give a crucial role to culture and its relationship with consciousness. Humans have an arguably unique capacity for symbolisation, which enables us to create an immaterial world, both through the symbolisation of the actual world and through the construction of immaterial realities that are not experienced through the senses. This symbolic capacity can be applied not only to the external world but also to brain activity, resulting in the conceptual construction of notions like consciousness. We symbolise our brain activity, hypostatise our conscious activities, and infer supposedly immaterial causes behind them.

There are also neuroscientific and neuropsychological attempts to explain how consciousness and our understanding of it evolved, which ultimately appear to potentially explain away the hard problem. Attention Schema Theory, for instance, assumes that people tend to “attribute a mysterious consciousness to themselves and to others because of an inherently inaccurate model of mind, and especially a model of attention.” The origin of the attribution of this mysterious consciousness is in culture and in folk-psychological beliefs, for instance, ideas about “an energy-like substance inhabiting the body.” In other words, culturally based mistaken beliefs derived from implicit social-cognitive models affect and eventually distort our view of consciousness. Ultimately, consciousness does not really exist as a distinct property, and its appearance as a non-physical property is a kind of illusion. Thus, the hard problem does not originate from real objective features of the world, but rather from implicit subjective beliefs derived from internalised socio-cultural models, specifically from the intuition that mind is an invisible essence generated within an agent.

While I do not want to challenge the arguments above conceptually, I only want to suggest potential ethical issues that might arise if we assume their validity. What are the potential neuroethical implications of these ideas of consciousness as culturally constructed? Since the concept of consciousness has traditionally played an important role in ethical reasoning, for example, in the notion of a person, questioning the objective status of conscious experience may have important ethical implications that should be adequately investigated. For instance, if consciousness depends on culture, then any definition of altered states of consciousness is culturally relative and context-dependent. This might have an impact on, for example, the ethical evaluation of the use of psychotropic substances, which some cultures, as history tells us, have considered legitimate and positive. Why should we limit the range of states of consciousness that are allowed to be experienced? What makes it legitimate for a culture to assert its own behavioural standards? To what extent can individuals justify their behaviour by appealing to their culture?

In addition, if consciousness (i.e., the way we are conscious, what we are conscious of, and our understanding of consciousness) is dependent on culture, then some conscious experiences might be considered more or less valuable in different cultural contexts, which could affect, for example, end-of-life decisions. If the concept of consciousness, and thus its ethical relevance and value, depends on culture, then consciousness no longer offers a solid foundation for ethical deliberation. Softening the hard problem of consciousness might also soften the foundation of what I have defined elsewhere as the consciousness-centred ethics of disorders of consciousness (vegetative state/unresponsive wakefulness syndrome, minimally conscious state, and cognitive-motor dissociation).

Although a cultural approach to consciousness can soften the hard problem conceptually, it creates hard ethical problems that require specific attention. It seems that any attempt to challenge the hard problem of consciousness results in a situation similar to that of having a blanket that is too short: if you pull it to one side (in the direction of the conceptual problem), you leave the other side uncovered (ethical issues based on the notion of consciousness). It seems that we cannot soften the hard problem of consciousness without the risk of relativizing ethics.

Written by…

Michele Farisco, Postdoc Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

We like challenging questions

Threatened by superintelligent machines

There is a fear that we will soon create artificial intelligence (AI) that is so superintelligent that we lose control over it. It makes us humans its slaves. If we try to disconnect the network cable, the superintelligence jumps to another network, or it orders a robot to kill us. Alternatively, it threatens to blow up an entire city if we take a single step towards the network socket.

However, I am struck by how this self-assertive artificial intelligence resembles an aspect of our own human intelligence. A certain type of human intelligence has already taken over. For example, it controls our thoughts when we feel threatened by superintelligent AI and consider intelligent countermeasures to control it. A typical feature of this self-assertive intelligence is precisely that it never sees itself as the problem. All threats are external and must be neutralised. We must survive, no matter what it might cost others. Me first! Our party first! We look at the world with mistrust: it seems full of threats against us.

In this self-centered spirit, AI is singled out as a new alien threat: uncontrollable machines that put themselves first. Therefore, we need to monitor the machines and build smart defense systems that control them. They should be our slaves! Humanity first! Can you see how we behave just as blindly as we fantasise that superintelligent AI would do? An arms race in small-mindedness.

Can you see the pattern in yourself? If you can, you have discovered the other aspect of human intelligence. You have discovered the self-examining intelligence that always nourishes philosophy when it humbly seeks the cause of our failures in ourselves. The paradox is: when we try to control the world, we become imprisoned in small-mindedness; when we examine ourselves, we become open to the world.

Linnaeus’ first attempt to define the human species was in fact not Homo sapiens, as if we could assert our wisdom. Linnaeus’ first attempt to define our species was a humble call for self-examination:

HOMO. Nosce te ipsum.

In English: Human being, know yourself!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Thinking about thinking

Are you conscious? Looking for reliable indicators

How can we be sure that a person in front of us is conscious? This might seem like a naïve question, but it actually resulted in one of the trickiest and most intriguing philosophical problems, classically known as “the other minds problem.”

Yet this is more than just a philosophical game: reliable detection of conscious activity is among the main neuroscientific and technological enterprises today. Moreover, it is a problem that touches our daily lives. Think, for instance, of animals: we are (at least today) inclined to attribute a certain level of consciousness to animals, depending on the behavioural complexity they exhibit. Or think of Artificial Intelligence, which exhibits astonishing practical abilities, even superior to humans in some specific contexts.

Both examples above raise a fundamental question: can we rely on behaviour alone in order to attribute consciousness? Is that sufficient?

It is now clear that it is not. The case of patients with devastating neurological impairments, like disorders of consciousness (unresponsive wakefulness syndrome, minimally conscious state, and cognitive-motor dissociation) is highly illustrative. A number of these patients might retain residual conscious abilities although they are unable to show them behaviourally. In addition, subjects with locked-in syndrome have a fully conscious mind even if they do not exhibit any behaviours other than blinking.

We can conclude that absence of behavioural evidence for consciousness is not evidence for the absence of consciousness. If so, what other indicators can we rely on in order to attribute consciousness?

The identification of indicators of consciousness is necessarily both a conceptual and an empirical task: we need a clear idea of what to look for in order to define appropriate empirical strategies. Accordingly, we (a group of two philosophers and one neuroscientist) conducted joint research, eventually publishing a list of six indicators of consciousness. These indicators do not rely only on behaviour, but can also be assessed through technological and clinical approaches:

  1. Goal-directed behaviour (GDB) and model-based learning. In GDB I am driven by the expected consequences of my action, and I know that my action is causal in obtaining a desirable outcome. Model-based learning depends on my ability to have an explicit model of myself and the world surrounding me.
  2. Brain anatomy and physiology. Since the consciousness of mammals depends on the integrity of particular cerebral systems (i.e., thalamocortical systems), it is reasonable to think that similar structures indicate the presence of consciousness.
  3. Psychometrics and meta-cognitive judgement. If I can detect and discriminate stimuli, and can make some meta-cognitive judgements about perceived stimuli, I am probably conscious.
  4. Episodic memory. If I can remember events (“what”) I experienced at a particular place (“where”) and time (“when”), I am probably conscious.
  5. Acting out one’s subjective, situational survey: illusion and multistable perception. If I am susceptible to illusions and perceptual ambiguity, I am probably conscious.
  6. Acting out one’s subjective, situational survey: visuospatial behaviour. Our last proposed indicator of consciousness is the ability to perceive objects as stably positioned, even when I move in my environment and scan it with my eyes.

This list is conceived to be provisional and heuristic but also operational: it is not a definitive answer to the problem, but it is sufficiently concrete to help identify consciousness in others.
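As a purely illustrative sketch of what “operational” could mean in practice, the list might be recorded as a simple checklist that registers, for each indicator, whether evidence could be gathered and how strong it is. The indicator names follow the list above, but the data structure, scores and averaging are my own assumptions, not part of the published proposal.

    from dataclasses import dataclass, field

    INDICATORS = (
        "goal-directed behaviour and model-based learning",
        "brain anatomy and physiology",
        "psychometrics and meta-cognitive judgement",
        "episodic memory",
        "illusion and multistable perception",
        "visuospatial behaviour",
    )

    @dataclass
    class ConsciousnessAssessment:
        subject: str
        # None = not assessable (e.g., behaviourally unresponsive patients),
        # otherwise an evidence strength between 0.0 and 1.0.
        evidence: dict = field(default_factory=lambda: {i: None for i in INDICATORS})

        def heuristic_score(self):
            """Average strength over the indicators that could be assessed."""
            observed = [v for v in self.evidence.values() if v is not None]
            return round(sum(observed) / len(observed), 2) if observed else None

    patient = ConsciousnessAssessment("patient A")
    patient.evidence["brain anatomy and physiology"] = 0.8  # e.g., imaging evidence
    patient.evidence["episodic memory"] = 0.6               # e.g., a memory paradigm
    print(patient.heuristic_score())  # 0.7, averaged over the two assessable indicators

The only point of the sketch is that a heuristic, multi-source list invites graded evidence from several channels rather than a behavioural yes or no; any real scoring would have to come from the clinical work described next.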

The second step in our task is to explore the clinical relevance of the indicators and their ethical implications. For this reason, we selected disorders of consciousness as a case study. We are now working together with cognitive and clinical neuroscientists, as well as computer scientists and modellers, in order to explore the potential of the indicators to quantify to what extent consciousness is present in affected patients, and eventually improve diagnostic and prognostic accuracy. The results of this research will be published in what the Human Brain Project Simulation Platform defines as a “live paper,” which is an interactive paper that allows readers to download, visualize or simulate the presented results.

Written by…

Michele Farisco, Postdoc Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Pennartz CMA, Farisco M and Evers K (2019) Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach. Front. Syst. Neurosci. 13:25. doi: 10.3389/fnsys.2019.00025

We transcend disciplinary borders
