A blog from the Centre for Research Ethics & Bioethics (CRB)


Consciousness and complexity: theoretical challenges for a practically useful idea

Contemporary research on consciousness is ambiguous, like the two-faced god Janus. On the one hand, it has achieved impressive practical results. We can today detect conscious activity in the brain for a number of purposes, including better therapeutic approaches to people affected by disorders of consciousness such as coma, vegetative state and minimally conscious state. On the other hand, the field is marked by a deep controversy about methodology and basic definitions. As a result, we still lack an overarching theory of consciousness, that is to say, a theoretical account that scholars agree upon.

Developing a common theoretical framework is recognized as increasingly crucial to understanding consciousness and assessing related issues, such as emerging ethical issues. The challenge is to find a common ground among the various experimental and theoretical approaches. A strong candidate that is attracting increasing consensus is the notion of complexity. The basic idea is that consciousness can be explained as a particular kind of neural information processing. The idea of associating consciousness with complexity was originally suggested by Giulio Tononi and Gerald Edelman in a 1998 paper titled Consciousness and Complexity. Since then, several papers have explored its potential as the key to a common understanding of consciousness.
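For readers who want to see the technical core of this proposal, the underlying measure can be sketched in formulas. The reconstruction below follows the neural complexity measure that Tononi and Edelman build on (introduced together with Olaf Sporns in earlier work); it is my sketch of that measure, not a quotation from the 1998 paper:

    C_N(X) = \sum_{k=1}^{n/2} \left\langle \mathrm{MI}\left( X_j^k ;\, X \setminus X_j^k \right) \right\rangle_j ,
    \qquad
    \mathrm{MI}(A;B) = H(A) + H(B) - H(A,B)

Here X is a neural system of n units, X_j^k is the j-th subset of k units, H is entropy, MI is mutual information, and the angle brackets average over all subsets of size k. C_N is high only when the system is differentiated (subsets can take many different states, giving high entropy) and at the same time integrated (subsets carry information about the rest of the system).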

Despite the increasing popularity of the notion, there are some theoretical challenges that need to be faced, particularly concerning the supposed explanatory role of complexity. These challenges are not only philosophically relevant. They might also affect the scientific reliability of complexity and the legitimacy of invoking this concept in the interpretation of emerging data and in the elaboration of scientific explanations. In addition, the theoretical challenges have a direct ethical impact, because an unreliable conceptual assumption may lead to misplaced ethical choices. For example, we might wrongly assume that a patient with low complexity is not conscious, or vice versa, and consequently make medical decisions that are inappropriate to the actual clinical condition.

The claimed explanatory power of complexity is challenged in two main ways: semantically and logically. Let us take a quick look at both.

Semantic challenges arise from the fact that complexity is such a general and open-ended concept. It lacks a shared definition among different people and different disciplines. This open-ended generality and lack of definition can be a barrier to a common scientific use of the term, which may impact its explanatory value in relation to consciousness. In the landmark paper by Tononi and Edelman, complexity is defined as the coexistence of integration (conscious experience is unified) and differentiation (we can experience a large number of different states). It is important to recognise that this technical definition of complexity refers only to the state of consciousness, not to its contents. This means that complexity-related measures can give us relevant information about the level of consciousness, yet they remain silent about the corresponding contents and their phenomenology. This is an ethically salient point, since the dimensions of consciousness that appear most relevant to making ethical decisions are those related to subjective positive and negative experiences. For instance, while it is generally considered ethically neutral how we treat a machine, it is considered ethically wrong to cause negative experiences to other humans or to animals.

Logical challenges concern the justification for invoking complexity in explanations of consciousness. This justification usually takes one of two alternative forms: either bottom-up (from data to theory) or top-down (from phenomenology to physical structure). Both raise specific issues.

Bottom-up: Starting from empirical data indicating that particular brain structures or functions correlate to particular conscious states, relevant theoretical conclusions are inferred. More specifically, since the brains of subjects that are manifestly conscious exhibit complex (integrated and differentiated) patterns, we are supposedly justified in inferring that complexity indexes consciousness. This conclusion is a sound inference to the best explanation, but the fact that a conscious state correlates with a complex brain pattern in healthy subjects does not justify its generalisation to all possible conditions (for example, disorders of consciousness), and it does not logically imply that complexity is a necessary and/or sufficient condition for consciousness.

Top-down: Starting from certain characteristics of personal experience, we are supposedly justified in inferring corresponding characteristics of the underlying physical brain structure. More specifically, if some conscious experience is complex in the technical sense of being both integrated and differentiated, we are supposedly justified in inferring that the correlated brain structures must be complex in the same technical sense. This conclusion does not seem logically justified unless we start from the assumption that consciousness and the corresponding physical brain structures must be similarly structured. Otherwise it is logically possible that conscious experience is complex while the corresponding brain structure is not, and vice versa. In other words, it does not appear justified to infer that since our conscious experience is integrated and differentiated, the corresponding brain structure must be integrated and differentiated. This is a possibility, but not a necessity.

The theoretical challenges above do not negate the practical utility of complexity as a relevant measure in specific clinical contexts, for example, to quantify residual consciousness in patients with disorders of consciousness. What is at stake is the explanatory status of the notion. Even if we question complexity as a key factor in explaining consciousness, we can still acknowledge that complexity is practically relevant and useful, for example, in the clinic. In other words, while complexity as an explanatory category raises serious conceptual challenges that remain to be faced, at the practical level it represents one of the most promising tools that we have to date for improving the detection of consciousness and for implementing effective therapeutic strategies.

I assume that Giulio Tononi and Gerald Edelman were hoping that their theory about the connection between consciousness and complexity would finally erase the embarrassing ambiguity of consciousness research, but the deep theoretical challenges suggest that we have to live with the resemblance to the two-faced god Janus for a while longer.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Tononi, G. and G. M. Edelman. 1998. Consciousness and complexity. Science 282(5395): 1846-1851.


Can AI be conscious? Let us think about the question

Artificial Intelligence (AI) has achieved remarkable results in recent decades, especially thanks to the refinement of an old and long-neglected technology called Deep Learning (DL), a class of machine learning algorithms. Some achievements of DL had a significant impact on public opinion thanks to important media coverage, like the cases of the program AlphaGo, which defeated the Go world champion Lee Sedol, and its successor AlphaGo Zero.

This triumph of AlphaGo was a kind of profane consecration of AI’s operational superiority in an increasing number of tasks. This manifest superiority of AI gave rise to mixed feelings in human observers: pride at being its creator, admiration of what it was able to do, and fear of what it might eventually learn to do.

AI research has generated a linguistic and conceptual process of re-thinking traditionally human features, stretching their meaning or even reinventing their semantics in order to attribute these traits also to machines. Think of how learning, experience, training, prediction, to name just a few, are attributed to AI. Even if they have a specific technical meaning among AI specialists, lay people tend to interpret them within an anthropomorphic view of AI.

One human feature in particular is considered the Holy Grail when AI is interpreted according to an anthropomorphic pattern: consciousness. The question is: can AI be conscious? It seems to me that we can answer this question only after considering a number of preliminary issues.

First we should clarify what we mean by consciousness. In philosophy and in cognitive science, there is a useful distinction, originally introduced by Ned Block, between access consciousness and phenomenal consciousness. The first refers to the interaction between different mental states, particularly the availability of one state’s content for use in reasoning and rationally guiding speech and action. In other words, access consciousness refers to the possibility of using what I am conscious of. Phenomenal consciousness refers to the subjective feeling of a particular experience, “what it is like to be” in a particular state, to use the words of Thomas Nagel. So, in what sense of the word “consciousness” are we asking if AI can be conscious?

To illustrate how the sense in which we choose to talk about consciousness makes a difference in the assessment of the possibility of conscious AI, let us take a look at an interesting article written by Stanislas Dehaene, Hakwan Lau and Sid Kouider. They frame the question of AI consciousness within the Global Neuronal Workspace Theory, one of the leading contemporary theories of consciousness. As the authors write, according to this theory, conscious access corresponds to the selection, amplification, and global broadcasting, to many distant brain areas, of particular information, selected for its salience or relevance to current goals. More specifically, Dehaene and colleagues explore the question of conscious AI along two lines within an overall computational framework:

  1. Global availability of information (the ability to select, access, and report information)
  2. Metacognition (the capacity for self-monitoring and confidence estimation).

Their conclusion is that AI might implement the first meaning of consciousness, while it currently lacks the necessary architecture for the second one.
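To make these two meanings concrete, here is a deliberately minimal toy sketch in Python. It is my own illustration, not Dehaene and colleagues’ model: the names Signal, Workspace and select_and_broadcast are invented for the example. “Global availability” is modelled as one salient signal winning a competition and being broadcast to all modules; “metacognition” as a crude confidence estimate attached to that selection.

    # Toy sketch of the two computational notions discussed above
    # (illustrative only, not Dehaene et al.'s model).
    from dataclasses import dataclass

    @dataclass
    class Signal:
        source: str      # which processor produced the signal
        content: str     # what the signal says
        salience: float  # relevance to current goals, 0..1

    class Workspace:
        def __init__(self, modules):
            self.modules = modules  # names of consumer processes

        def select_and_broadcast(self, signals):
            # 1. "Global availability": one signal wins the competition
            # and its content becomes reportable to every module.
            winner = max(signals, key=lambda s: s.salience)
            report = {module: winner.content for module in self.modules}
            # 2. "Metacognition": a self-monitoring confidence estimate,
            # here crudely read off the salience margin over the runner-up.
            ranked = sorted((s.salience for s in signals), reverse=True)
            confidence = ranked[0] - (ranked[1] if len(ranked) > 1 else 0.0)
            return report, confidence

    ws = Workspace(["memory", "speech", "action"])
    signals = [Signal("vision", "red light ahead", 0.9),
               Signal("audition", "radio chatter", 0.4)]
    report, confidence = ws.select_and_broadcast(signals)
    print(report, round(confidence, 2))

The sketch also hints at why the second capacity is the harder one: a single subtraction is trivially easy to compute, but nothing guarantees that such a confidence estimate tracks reality, which is roughly where Dehaene and colleagues locate the gap in current AI.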

As mentioned, the premise of their analysis is a computational view of consciousness. In other words, they choose to reduce consciousness to specific types of information-processing computations. We can legitimately ask whether such a choice covers the richness of consciousness, particularly whether a computational view can account for the experiential dimension of consciousness.

This shows how the main obstacle in assessing the question whether AI can be conscious is a lack of agreement about a theory of consciousness in the first place. For this reason, rather than asking whether AI can be conscious, maybe it is better to ask what might indicate that AI is conscious. This brings us back to the indicators of consciousness that I wrote about in a blog post some months ago.

Another important preliminary issue to consider, if we want to seriously address the possibility of conscious AI, is whether we can use the same term, “consciousness,” to refer to a different kind of entity: a machine instead of a living being. Should we expand our definition to include machines, or should we rather create a new term to denote it? I personally think that the term “consciousness” is too charged, from several different perspectives, including ethical, social, and legal perspectives, to be extended to machines. Using the term to qualify AI risks extending it so far that it eventually becomes meaningless.

If we create AI that manifests abilities that are similar to those that we see as expressions of consciousness in humans, I believe we need a new language to denote and think about it. Otherwise, important preliminary philosophical questions risk being dismissed or lost sight of behind a conceptual veil of possibly superficial linguistic analogies.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.


An unusually big question

Sometimes the intellectual claims on science are so big that they risk obscuring the actual research. This seems to happen not least when the claims are associated with some great prestigious question, such as the origin of life or the nature of consciousness. By emphasizing the big question, one often wants to show that modern science is better suited than older human traditions to answer the riddles of life. Better than philosophy, for example.

I think of this when I read a short article about such a riddle: “What is consciousness? Scientists are beginning to unravel a mystery that has long vexed philosophers.” The article by Christof Koch gives the impression that it is only a matter of time before science determines not only where in the brain consciousness arises (one already seems to have a suspect), but also the specific neural mechanisms that give rise to – everything you have ever experienced. At least if one is to believe one of the fundamental theories about the matter.

Reading about the discoveries behind the identification of where in the brain consciousness arises is as exciting as reading a whodunit. It is obvious that important research is being done here on the effects that loss or stimulation of different parts of the brain can have on people’s experiences, mental abilities and personalities. The description of a new technology and mathematical algorithm for determining whether patients are conscious or not is also exciting and indicates that research is making fascinating progress, which can have important uses in healthcare. But when mathematical symbolism is used to suggest a possible fundamental explanation for everything you have ever experienced, the article becomes as difficult to understand as the most obscure philosophical text from times gone by.
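The “mathematical algorithm” mentioned is, I take it, the perturbational complexity index, popularly known as “zap and zip”: the brain is perturbed with a magnetic pulse and the compressibility of the electrical response is measured. The Python sketch below is my own illustration of the compression idea only, using an LZ78-style parse of a binarized signal; it is not the published clinical pipeline.

    # Toy illustration of the compression idea behind "zap and zip"
    # complexity measures (my sketch, not the published clinical pipeline).
    # A binarized brain response is scored by how many distinct phrases an
    # LZ78-style parse needs: stereotyped activity compresses well and
    # scores low, diverse activity compresses poorly and scores near 1.
    import random
    from math import log2

    def lz_phrase_count(bits: str) -> int:
        seen, phrase, count = set(), "", 0
        for b in bits:
            phrase += b
            if phrase not in seen:   # a new phrase ends here
                seen.add(phrase)
                count += 1
                phrase = ""
        return count + (1 if phrase else 0)  # unfinished tail counts once

    def normalized_complexity(bits: str) -> float:
        n = len(bits)
        # n / log2(n) approximates the phrase count of a random string
        return lz_phrase_count(bits) * log2(n) / n

    random.seed(0)
    flat = "0" * 1024                                          # stereotyped
    noisy = "".join(random.choice("01") for _ in range(1024))  # diverse
    print(round(normalized_complexity(flat), 2))   # low score
    print(round(normalized_complexity(noisy), 2))  # close to 1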

Since even representatives of science sometimes make philosophical claims, namely, when they want to answer prestigious riddles, it is perhaps wiser to be open to philosophy than to compete with it. Philosophy is not just about speculating about big questions. Philosophy is also about humbly clarifying the questions, which otherwise tend to grow beyond all reasonable limits. Such openness to philosophy flourishes in the Human Brain Project, where some of my philosophical colleagues at CRB collaborate with neuroscientists to conceptually clarify questions about consciousness and the brain.

Something I myself wondered about when reading the scientifically exciting but at the same time philosophically ambitious article, is the idea that consciousness is everything we experience: “It is the tune stuck in your head, the sweetness of chocolate mousse, the throbbing pain of a toothache, the fierce love for your child and the bitter knowledge that eventually all feelings will end.” What does it mean to take such an all-encompassing claim seriously? What is not consciousness? If everything we can experience is consciousness, from the taste of chocolate mousse to the sight of the stars in the sky and our human bodies with their various organs, where is the objective reality to which science wants to relate consciousness? Is it in consciousness?

If consciousness is our inevitable vantage point, if everything we experience as real is consciousness, it becomes unclear how we can treat consciousness as an objective phenomenon in the world along with the body and other objects. Of course, I am not talking here about actual scientific research about the brain and consciousness, but about the limitless intellectual claim that scientists sooner or later will discover the neural mechanisms that give rise to everything we can ever experience.


Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Christof Koch, What Is Consciousness? Scientists are beginning to unravel a mystery that has long vexed philosophers, Nature 557, S8-S12 (2018) https://doi.org/10.1038/d41586-018-05097-x



The hard problem of consciousness: please handle with care!

We face challenges every day. Some are more demanding than others, but it seems that there is not a day without some problem to handle. Unless they are too big to manage, problems are like the engines of our lives. They push us to always go beyond wherever we are and whatever we do, to look for new possibilities, to build new opportunities. In other words: problems make us stay alive.

The same is true for science and philosophy. There is a constant need to face new challenges. Consciousness research is no exception. There are, of course, several problems in the investigation of consciousness. However, one problem has emerged as the big problem, which the Australian philosopher David Chalmers baptised “the hard problem of consciousness.” This classical problem (discussed even before Chalmers coined this expression, actually since the early days of neuropsychology, notably by Alexander Luria and collaborators) refers to the enigma of subjective experience. To adapt a formulation by the philosopher Thomas Nagel, the basic question is: why do we have experiences of what it is like to be conscious, for example, why do we experience that pain and hunger feel the way they do?

The hard problem has a double nature. On the one hand, it refers to what Joseph Levine called an explanatory gap: the strategy of identifying psychological experiences with physical features of the brain is in the end unable to explain why experiences are related to physical phenomena at all. On the other hand, the hard problem also refers to the question whether subjective experience can be explained causally or whether it is intrinsic to the world, that is to say: fundamentally there, from the beginning, rather than caused by something more primary.

This double nature of the problem has been a stumbling block in attempts to explain consciousness. Yet in recent years, the hardness of the problem has been increasingly questioned. Among the arguments invoked to soften the problem, there is one that I think merits specific attention. This argument describes consciousness as a cultural concept, meaning that both the way we conceive it and the way we experience it depend on our culture. There are different versions of this argument: some reduce consciousness as such to a cultural construction, while other, less radical versions stress that consciousness has a neurological substrate that is importantly shaped by culture. The relevant point is that by characterising consciousness as a cultural construction, with reference both to how we conceptualise it and to how we are conscious, this argument ultimately questions the hardness of the hard problem.

To illustrate, consider anthropological and neuroscientific arguments that appear to go in the direction of explaining away the hard problem of consciousness. Anthropological explanations give a crucial role to culture and its relationship with consciousness. Humans have an arguably unique capacity of symbolisation, which enables us to create an immaterial world both through the symbolisation of the actual world and through the construction of immaterial realities that are not experienced through the senses. This human symbolic capacity can be applied not only to the external world, but also to brain activity, resulting in the conceptual construction of notions like consciousness. We symbolise our brain activity, hypostatise our conscious activities, and infer supposedly immaterial causes behind them.

There are also neuroscientific and neuropsychological attempts to explain how consciousness and our understanding of it evolved, which ultimately appear to potentially explain away the hard problem. Attention Schema Theory, for instance, assumes that people tend to “attribute a mysterious consciousness to themselves and to others because of an inherently inaccurate model of mind, and especially a model of attention.” The origin of the attribution of this mysterious consciousness is in culture and in folk-psychological beliefs, for instance, ideas about “an energy-like substance inhabiting the body.” In other words, culturally based mistaken beliefs derived from implicit social-cognitive models affect and eventually distort our view of consciousness. Ultimately, consciousness does not really exist as a distinct property, and its appearance as a non-physical property is a kind of illusion. Thus, the hard problem does not originate from real objective features of the world, but rather from implicit subjective beliefs derived from internalised socio-cultural models, specifically from the intuition that mind is an invisible essence generated within an agent.

While I do not want to challenge the arguments above conceptually, I want to point out potential ethical issues that might arise if we assume their validity. What are the potential neuroethical implications of these ideas of consciousness as culturally constructed? Since the concept of consciousness traditionally played an important role in ethical reasoning, for example, in the notion of a person, questioning the objective status of conscious experience may have important ethical implications that should be adequately investigated. For instance, if consciousness depends on culture, then any definition of altered states of consciousness is culturally relative and context-dependent. This might have an impact on, for example, the ethical evaluation of the use of psychotropic substances, which in some cultures, as history tells us, has been considered legitimate and positive. Why should we limit the range of states of consciousness that are allowed to be experienced? What makes it legitimate for a culture to assert its own behavioural standards? To what extent can individuals justify their behaviour by appealing to their culture?

In addition, if consciousness (i.e., the way we are conscious, what we are conscious of, and our understanding of consciousness) is dependent on culture, then some conscious experiences might be considered more or less valuable in different cultural contexts, which could affect, for example, end-of-life decisions. If the concept of consciousness, and thus its ethical relevance and value, depends on culture, then consciousness no longer offers a solid foundation for ethical deliberation. Softening the hard problem of consciousness might also soften the foundation of what I defined elsewhere as the consciousness-centred ethics of disorders of consciousness (vegetative states, unresponsive wakefulness states, minimally conscious states, and cognitive-motor dissociation).

Although a cultural approach to consciousness can soften the hard problem conceptually, it creates hard ethical problems that require specific attention. It seems that any attempt to challenge the hard problem of consciousness results in a situation similar to that of having a blanket that is too short: if you pull it to one side (in the direction of the conceptual problem), you leave the other side uncovered (ethical issues based on the notion of consciousness). It seems that we cannot soften the hard problem of consciousness without the risk of relativizing ethics.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.


Are you conscious? Looking for reliable indicators

How can we be sure that a person in front of us is conscious? This might seem like a naïve question, but it actually resulted in one of the trickiest and most intriguing philosophical problems, classically known as “the other minds problem.”

Yet this is more than just a philosophical game: reliable detection of conscious activity is among the main neuroscientific and technological enterprises today. Moreover, it is a problem that touches our daily lives. Think, for instance, of animals: we are (at least today) inclined to attribute a certain level of consciousness to animals, depending on the behavioural complexity they exhibit. Or think of Artificial Intelligence, which exhibits astonishing practical abilities, even superior to humans in some specific contexts.

Both examples above raise a fundamental question: can we rely on behaviour alone in order to attribute consciousness? Is that sufficient?

It is now clear that it is not. The case of patients with devastating neurological impairments, like disorders of consciousness (unresponsive wakefulness syndrome, minimally conscious state, and cognitive-motor dissociation) is highly illustrative. A number of these patients might retain residual conscious abilities although they are unable to show them behaviourally. In addition, subjects with locked-in syndrome have a fully conscious mind even if they do not exhibit any behaviours other than blinking.

We can conclude that absence of behavioural evidence for consciousness is not evidence for the absence of consciousness. If so, what other indicators can we rely on in order to attribute consciousness?

The identification of indicators of consciousness is necessarily both a conceptual and an empirical task: we need a clear idea of what to look for in order to define appropriate empirical strategies. Accordingly, we (a group of two philosophers and one neuroscientist) conducted joint research, eventually publishing a list of six indicators of consciousness. These indicators do not rely on behaviour alone, but can also be assessed through technological and clinical approaches:

  1. Goal directed behaviour (GDB) and model-based learning. In GDB I am driven by expected consequences of my action, and I know that my action is causal for obtaining a desirable outcome. Model-based learning depends on my ability to have an explicit model of myself and the world surrounding me.
  2. Brain anatomy and physiology. Since the consciousness of mammals depends on the integrity of particular cerebral systems (i.e., thalamocortical systems), it is reasonable to think that similar structures indicate the presence of consciousness.
  3. Psychometrics and meta-cognitive judgement. If I can detect and discriminate stimuli, and can make some meta-cognitive judgements about perceived stimuli, I am probably conscious.
  4. Episodic memory. If I can remember events (“what”) I experienced at a particular place (“where”) and time (“when”), I am probably conscious.
  5. Acting out one’s subjective, situational survey: illusion and multistable perception. If I am susceptible to illusions and perceptual ambiguity, I am probably conscious.
  6. Acting out one’s subjective, situational survey: visuospatial behaviour. If I perceive objects as stably positioned even when I move in my environment and scan it with my eyes, I am probably conscious.

This list is conceived to be provisional and heuristic but also operational: it is not a definitive answer to the problem, but it is sufficiently concrete to help identify consciousness in others.
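To illustrate the operational ambition, here is a hypothetical encoding of the list as a simple data structure in Python. It is my own sketch, not part of the published paper: the point is merely that the indicators invite graded, partial evidence rather than a yes/no verdict.

    # Hypothetical encoding of the six indicators as a checklist
    # (my sketch, not part of Pennartz, Farisco and Evers 2019).
    # Each indicator is scored present (True), absent (False) or
    # untested (None); the summary reports accumulated evidence,
    # not a yes/no verdict on consciousness.
    from typing import Dict, Optional

    INDICATORS = [
        "goal-directed behaviour and model-based learning",
        "brain anatomy and physiology",
        "psychometrics and meta-cognitive judgement",
        "episodic memory",
        "susceptibility to illusions and multistable perception",
        "visuospatial behaviour",
    ]

    def summarize(scores: Dict[str, Optional[bool]]) -> str:
        present = sum(1 for i in INDICATORS if scores.get(i) is True)
        untested = sum(1 for i in INDICATORS if scores.get(i) is None)
        return f"{present}/{len(INDICATORS)} indicators present, {untested} untested"

    # Example: a patient assessed with non-behavioural methods only
    scores: Dict[str, Optional[bool]] = {i: None for i in INDICATORS}
    scores["brain anatomy and physiology"] = True
    scores["psychometrics and meta-cognitive judgement"] = True
    print(summarize(scores))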

The second step in our task is to explore the clinical relevance of the indicators and their ethical implications. For this reason, we selected disorders of consciousness as a case study. We are now working together with cognitive and clinical neuroscientists, as well as computer scientists and modellers, in order to explore the potential of the indicators to quantify to what extent consciousness is present in affected patients, and ultimately to improve diagnostic and prognostic accuracy. The results of this research will be published in what the Human Brain Project Simulation Platform defines as a “live paper,” an interactive paper that allows readers to download, visualize or simulate the presented results.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Pennartz CMA, Farisco M and Evers K (2019) Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach. Front. Syst. Neurosci. 13:25. doi: 10.3389/fnsys.2019.00025


Anthropomorphism in AI can limit scientific and technological development

Anthropomorphism almost seems inscribed in research on artificial intelligence (AI). Ever since the beginning of the field, machines have been portrayed in terms that normally describe human abilities, such as understanding and learning. The emphasis is on similarities between humans and machines, while differences are downplayed. Like when it is claimed that machines can perform the same psychological tasks that humans perform, such as making decisions and solving problems, with the supposedly insignificant difference that machines do it “automated.”

You can read more about this in an enlightening discussion of anthropomorphism in and around AI, written by Arleen Salles, Kathinka Evers and Michele Farisco, all at CRB and the Human Brain Project. The article is published in AJOB Neuroscience.

The article draws particular attention to so-called brain-inspired AI research, where technology development draws inspiration from what we know about the functioning of the brain. Here, close relationships are emphasized between AI and neuroscience: bonds that are considered to be decisive for developments in both fields of research. Neuroscience needs inspiration from AI research, it is claimed, just as AI research needs inspiration from brain research.

The article warns that this idea of a close relationship between the two fields presupposes an anthropomorphic interpretation of AI. In fact, brain-inspired AI multiplies the conceptual double exposures by projecting not only psychological but also neuroscientific concepts onto machines. AI researchers talk about artificial neurons, synapses and neural networks in computers, as if they incorporated artificial brain tissue into the machines.

An overlooked risk of anthropomorphism in AI, according to the authors, is that it can conceal essential characteristics of the technology that make it fundamentally different from human intelligence. In fact, anthropomorphism risks limiting scientific and technological development in AI, since it binds AI to the human brain as a privileged source of inspiration. Anthropomorphism can also entice brain research to uncritically use AI as a model for how the brain works.

Of course, the authors do not deny that AI and neuroscience mutually support each other and should cooperate. However, in order for cooperation to work well, and not limit scientific and technological development, philosophical thinking is also needed. We need to clarify conceptual differences between humans and machines, brains and computers. We need to free ourselves from the tendency to exaggerate similarities, which can be more verbal than real. We also need to pay attention to deep-rooted differences between humans and machines, and learn from the differences.

Anthropomorphism in AI risks encouraging irresponsible research communication, the authors further write. This is because exaggerated hopes (hype) seem intrinsic to the anthropomorphic language. By talking about computers in psychological and neurological terms, it sounds as if these machines already essentially functioned as human brains. The authors speak of an anthropomorphic hype around neural network algorithms.

Philosophy can thus also contribute to responsible research communication about artificial intelligence. Such communication draws attention to exaggerated claims and hopes inscribed in the anthropomorphic language of the field. It counteracts the tendency to exaggerate similarities between humans and machines, which rarely go as deep as the projected words make it sound.

In short, differences can be as important and instructive as similarities. Not only in philosophy, but also in science, technology and responsible research communication.


Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Arleen Salles, Kathinka Evers & Michele Farisco (2020) Anthropomorphism in AI, AJOB Neuroscience, 11:2, 88-95, DOI: 10.1080/21507740.2020.1740350



Artificial intelligence and living consciousness

The Ethics Blog will publish several posts on artificial intelligence in the future. Today, I just want to make a little observation of something remarkable.

The last century was marked by fear of human consciousness. Our mind seemed as mystic as the soul, as superfluous in a scientific age as God. In psychology, behaviorism flourished, which defined psychological words in terms of bodily behavior that could be studied scientifically in the laboratory. Our living consciousness was treated as a relic from bygone superstitious ages.

What is so remarkable about artificial intelligence? Suddenly, one seems to idolize consciousness. One wallows in previously sinful psychological words, at least when one talks about what computers and robots can do. These machines can see and hear; they can think and speak. They can even learn by themselves.

Does this mean that the fear of consciousness has ceased? Hardly, because when artificial intelligence employs psychological words such as seeing and hearing, thinking and understanding, the words cease to be psychological. The idea of computer “learning,” for example, is a technical term that computer experts define in their laboratories.

When artificial intelligence embellishes machines with psychological words, then, one repeats how behaviorism defined mind in terms of something else. Psychological words take on new machine meanings that overshadow the meanings the words have among living human beings.

Remember this next time you wonder if robots might become conscious. The development exhibits fear of consciousness. Therefore, what you are wondering is not if robots can become conscious. You wonder if your own consciousness can be superstition. Remarkable, right?


Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.



Neuroethics as foundational

As neuroscience expands, the need for ethical reflection also expands. A new field has emerged, neuroethics, which celebrated its 15th anniversary last year. This was noted in the journal AJOB Neuroscience through an article about the area’s current and future challenges.

In one of the published comments, three researchers from the Human Brain Project and CRB emphasize the importance of basic conceptual analysis in neuroethics. The new field of neuroethics is more than just a kind of ethical mediator between neuroscience and society. Neuroethics can and should contribute to the conceptual self-understanding of neuroscience, according to Arleen Salles, Kathinka Evers and Michele Farisco. Without such self-understanding, the ethical challenges become unclear, sometimes even imaginary.

Foundational conceptual analysis can sound stiff. However, if I understand the authors, it is just the opposite. Conceptual analysis is needed to make concepts agile when habitual thinking has made them stiff. One example is the habitual thinking that facts about the brain can be connected with moral concepts, so that, for example, brain research can explain to us what it “really” means to be morally responsible for our actions. Such habitual thinking about the role of the brain in human life may suggest purely imaginary ethical concerns about the expansion of neuroscience.

Another example the authors give is the external perspective on consciousness in neuroscience. Neuroscience does not approach consciousness from a first-person perspective, but from a third-person perspective. Neuroscience may need to be reminded of this and similar conceptual limitations, to better understand the models that one develops of the brain and human consciousness, and the conclusions that can be drawn from the models.

Conceptual neuroethics is needed to free concepts from intellectual deadlocks arising with the expansion of neuroscience. Thus, neuroethics can contribute to deepening the self-understanding of neuroscience as a science with both theoretical and practical dimensions. At least that is how I understand the spirit of the authors’ comment in AJOB Neuroscience.


Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Emerging Issues Task Force, International Neuroethics Society (2019) Neuroethics at 15: The Current and Future Environment for Neuroethics, AJOB Neuroscience, 10:3, 104-110, DOI: 10.1080/21507740.2019.1632958

Arleen Salles, Kathinka Evers & Michele Farisco (2019) The Need for a Conceptual Expansion of Neuroethics, AJOB Neuroscience, 10:3, 126-128, DOI: 10.1080/21507740.2019.1632972



An extended concept of consciousness and an ethics of the whole brain

When we visit a newly operated patient, we probably wonder: Has she regained consciousness? The question is important to us. If the answer is yes then she is among us, we can socialize. If the answer is negative then she is absent, it is not possible to socialize. We can only wait and hope that she returns to us.

Michele Farisco at CRB proposes in a new dissertation a more extensive concept of consciousness. According to this concept, we are conscious without interruption, basically, as long as the brain lives. This sounds controversial. It appears insensitive to the enormous importance that someone’s being conscious or not has for us in everyday life.

Maybe I should explain right away that it is not about changing our usual ways of speaking of consciousness. Rather, Michele Farisco suggests a new neuroscientific concept of consciousness. Science sometimes needs to use familiar words in unfamiliar ways. For example, biology cannot speak of humans and animals as an opposition, as we usually do. For biology, the human is one of the animals. Just as biology extends the concept of an animal to us humans, Michele Farisco extends the concept of consciousness to the entire living brain.

Why can an extended concept of consciousness be reasonable in neuroscience? A simple answer is that the brain continues to be active, even when in the ordinary sense we lose consciousness and the ability to socialize. The brain continues to interact with the signals from the body and from the environment. Neural processes that keep us alive continue, albeit in modified forms. The seemingly lifeless body in the hospital bed is a poor picture of the unconscious brain. It may be very active. In fact, some types of brain processes are extra prominent at rest, when the brain does not respond to external stimuli.

Additional factors support an extended neuroscientific concept of consciousness. One is that even when we are conscious in the usual sense, many brain processes happen unconsciously. These processes often do the same work that conscious processes do, or support conscious processes, or are shaped by conscious processes. When we look neuroscientifically at the brain, our black and white opposition between conscious and unconscious becomes difficult to discern. It may be more reasonable to speak of continuities, of levels of the same consciousness, which always is inherent in the living brain.

In short, neuroscience may gain from not adopting our ordinary concept of consciousness, which makes such an opposition between conscious and unconscious. The difference that is absolute when we visit a newly operated patient – is she conscious or not? – is not as black and white when we study the brain.

Does Michele Farisco propose that neuroscience should make no difference whatsoever between what we commonly call conscious and unconscious, between being present and absent? No, of course not. Neuroscience must continue to explore that difference. However, we can understand the difference as a modification of the same basic consciousness, of the same basic brain activity. Neuroscience needs to study differences without falling victim to a black and white opposition. Much like biology needs to study differences between humans and other animals, even when it extends the concept of an animal to the human.

The point, then, is that neuroscience needs to be open to both difference and continuity. Michele Farisco proposes a neuroscientific distinction between aware and unaware consciousness. It captures both aspects, the difference and the continuity.

Michele Farisco’s extended concept of consciousness also has ethical consequences. It can motivate an ethics of the whole brain, not just of the conscious brain, in the usual sense. The question is no longer, merely, whether the patient is conscious or not. The question is at what level the patient is conscious. We may need to consider ethically even unconscious brains and brain processes, in the ordinary sense. For example, by talking calmly near the patient, even though she does not seem to hear, or by playing music that the patient usually appreciates.

Perhaps we should not settle for waiting and hoping that the patient will return to us. The brain is already here. At several levels, this brain may continue to socialize, even though the patient does not seem to respond.

If you want to know more about Michele Farisco’s extended concept of consciousness and his ethics of the whole brain, read the dissertation that he recently defended. You can also read about new technological opportunities to communicate with patients suffering from severe disorders of consciousness, and about new opportunities to diagnose such disorders.

Pär Segerdahl

Farisco, Michele. 2019. Brain, consciousness and disorders of consciousness at the intersection of neuroscience and philosophy. (Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Medicine 1597.) Uppsala: Acta Universitatis Upsaliensis.



Neuroethical reflection in the Human Brain Project

The emergence of several national-level brain initiatives and the priority given to neuroscientific research make it important to examine the values underpinning the research, and to address the ethical, social, legal, philosophical, and regulatory issues that it raises.

Neuroscientific insights allow us to understand more about the human brain: about its dynamic nature and about its disorders. These insights also provide the basis for potentially manipulating the brain through neurotechnology and pharmacotherapy. Research in neuroscience thus raises multiple concerns: from questions about the ethical significance of natural and engineered neural circuitry, to the issue of how a biological model or a neuroscientific account of brain disease might impact individuals, communities, and societies at large; from how to protect human brain data to how to determine and guard against possible misuses of neuroscientific findings.

Furthermore, the development and applications of neuro-technology to alleviate symptoms or even enhance the human brain raise further concerns, such as their potential impact on the personality, agency, and autonomy of some users. Indeed, some empirical findings appear to even challenge long held conceptions about who we are, the capacity to choose freely, consciousness, and moral responsibility.

Neuroethics is the field of study devoted to examining these critical issues. Unfortunately, it has sometimes been reduced to a subfield of applied ethics understood as a merely procedural approach. However, in our understanding, neuroethics is methodologically much richer. It is concerned not just with using ethical theory to address normative issues about right and wrong, but notably with providing needed conceptual clarification of the relevant neuroscientific and philosophical notions. Only by having conceptual clarity about the challenges presented will we be able to address and adequately manage them.

So understood, neuroethics plays a key role in the Human Brain Project (HBP). The HBP is a European Community Flagship Project of Information and Computing Technologies (ICT). It proposes that to achieve a fuller understanding of the brain, it is necessary to integrate the massive volumes of both already available data and new data coming from labs around the world. Expected outcomes include the creation and operation of an ICT infrastructure for neuroscience and brain related research in medicine and computing. The goal is to achieve a multilevel understanding of the brain (from genes to cognition), its diseases and the effects of drugs (allowing early diagnoses and personalised treatments), and to capture the brain’s computational capabilities.

The HBP is funded by the European Commission in the framework of the EU’s Horizon 2020 research-funding programme. The programme promotes responsible research and innovation (RRI). RRI is generally understood as an interactive process that engages social actors, researchers, and innovators who must be mutually responsive and work towards the ethical permissibility of the relevant research and its products. The goal is to ensure that scientific progress and innovation are responsible and sustainable: that they increase individual and societal flourishing and maximize the common good.

To develop, broaden, and enhance RRI within the project, the HBP established the Ethics and Society subproject. Ethics and Society is structured around a number of RRI activities such as foresight analysis (to identify at an early stage ethical and social concerns), citizens’ engagement (to promote involvement with different points of view and to strengthen public dialogue), and ethics support (to carry out research in applied ethics and to develop principles and mechanisms that ensure that ethical issues raised by research subprojects are communicated and managed and that HBP researchers comply with ethical codes and legal norms).

Neuroethical reflection plays a key role in this integration of social, scientific, and ethical inquiry. Notably, in the HBP such reflection includes conceptual and philosophical analysis. Insofar as it does, neuroethics aims to offer more than assistance to neuroscientists and social scientists in identifying the social, political, and cultural components of the research. Via conceptual analysis, neuroethics attempts to open a productive space within the HBP for examining the relevant issues, carrying out self-critical analysis, and providing the necessary background to examine potential impacts and issues raised. Neuroethical reflection in the HBP does not exclusively focus on ethical applications and normative guidance. Rather, it takes as a starting point the view that the full range of issues raised by neuroscience cannot be adequately dealt with without also focusing on the construction of knowledge, the meaning of the relevant notions, and the legitimacy of the various interpretations of relevant scientific findings.

At present, the importance of neuroethics is not in question. It is a key concern of the International Brain Initiative, and the different international brain projects are trying to integrate neuroethics into their research in different ways. What continues to be unique to neuroethics in the HBP, however, is its commitment to the idea that making progress in addressing the host of ethical, social, legal, regulatory and philosophical issues raised by brain research to a great extent depends on a conceptual neuroethical approach. It enables constructive critical alertness and a thought-out methodology that can achieve both substantial scientific ground and conceptual clarity.

If you want to read more, see below a list of publications on which this post is based.

Arleen Salles

Global Neuroethics Summit Delegates. Neuroethics Questions to Guide Ethical Research in the International Brain Initiatives. Neuron. 2018.

Evers K, Salles A, Farisco M. Theoretical Framing for Neuroethics: The Need for a Conceptual Approach. In: Racine E, Aspler J, editors. Debates About Neuroethics. Springer; 2017.

Salles A, Evers K. Social Neuroscience and Neuroethics: A Fruitful Synergy. In: Ibanez A, Sedeno L, Garcia A, editors. Social Neuroscience and Social Science: The Missing Link. Springer; 2017. p. 531-46.

Farisco M, Salles A, Evers K. Neuroethics: A Conceptual Approach. Camb Q Healthc Ethics. 2018;27(4):717-27.

Salles A, Evers K, Farisco M. Neuroethics and Philosophy in Responsible Research and Innovation: The Case of the Human Brain Project. Neuroethics. 2018.

Salles A, Bjaalie JG, Evers K, Farisco M, Fothergill BT, Guerrero M, et al. The Human Brain Project: Responsible Brain Research for the Benefit of Society. Neuron. 2019;101(3):380-4.
