A blog from the Centre for Research Ethics & Bioethics (CRB)


A charming idea about consciousness

Some ideas have such charm that you only need to hear them once to feel immediately that they are probably true: “there must be some grain of truth in it.” Conspiracy theories and urban myths probably spread in part because of how they charm susceptible human minds by ringing true. It is said that even some states of illness spread because the idea of the illness has such a strong impact on many of us. In some cases, we only need to hear about the diagnosis to start showing the symptoms, and perhaps even to receive the treatment. But even the idea of diseases spread by ideas has charm, so we should be on our guard.

The temptation to fall for the charm of certain ideas naturally also exists in academia. At the same time, philosophy and science are characterized by self-critical examination of ideas that may sound so attractive that we do not notice the lack of examination. As long as the ideas are limited hypotheses that can in principle be tested, it is relatively easy to correct one’s hasty belief in them. But sometimes these charming ideas consist of grand hypotheses about elusive phenomena that no one knows how to test. People can be so convinced by such ideas that they predict that future science just needs to fill in the details. A dangerous rhetoric to get caught up in, which also has its charm.

Last year I wrote a blog post about a theory at the border between science and philosophy that I would like to characterize as both grand and charming. This is not to say that the theory must be false, just that in our time it may sound immediately convincing. The theory is an attempt to explain an elusive “phenomenon” that perplexes science, namely the nature of consciousness. Many feel that if we could explain consciousness on purely scientific grounds, it would be an enormously significant achievement.

The theory claims that consciousness is a certain mathematically defined form of information processing. Associating consciousness with information is timely, we are immediately inclined to listen. What type of information processing would consciousness be? The theory states that consciousness is integrated information. Integration here refers not only to information being stored as in computers, but to all this diversified information being interconnected and forming an organized whole, where all parts are effectively available globally. If I understand the matter correctly, you can say that the integrated information of a system is the amount of generated information that exceeds the information generated by the parts. The more information a system manages to integrate, the more consciousness the system has.

What, then, is so charming about the idea that consciousness is integrated information? Well, the idea might seem to fit with how we experience our conscious lives. At this moment you are experiencing multitudes of different sensory impressions, filled with details of various kinds. Visual impressions are mixed with impressions from the other senses. At the same time, however, these sensory impressions are integrated into a unified experience from a single viewpoint, your own. The mathematical theory of information processing where diversification is combined with integration of information may therefore sound attractive as a theory of consciousness. We may be inclined to think: Perhaps it is because the brain processes information in this integrative way that our conscious lives are characterized by a personal viewpoint and all impressions are organized as an ego-centred subjective whole. Consciousness is integrated information!

It becomes even more enticing when it turns out that the theory, called Integrated Information Theory (IIT), contains a calculable measure (Phi) of the amount of integrated information. If the theory is correct, one would be able to quantify consciousness and assign different systems different values of Phi, corresponding to different amounts of consciousness. Here the idea becomes charming in yet another way. For if you want to explain consciousness scientifically, it sounds like a virtue if the theory enables you to quantify how much consciousness a system generates. The desire to explain consciousness scientifically can make us extra receptive to the idea, which is a bit deceptive.
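The idea of a whole generating more information than its parts can be made concrete with a toy calculation. The sketch below does not compute IIT’s actual Phi, which involves searching over partitions of a system’s cause-effect structure and is computationally demanding; it uses a simpler, related quantity (multi-information, or total correlation): the sum of the parts’ entropies minus the entropy of the whole, which is zero exactly when the units are statistically independent.

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (in bits) of the empirical distribution of samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def multi_information(states):
    """Toy 'integration' measure: sum of the parts' entropies minus the
    entropy of the whole. Zero if and only if the units are independent."""
    n_units = len(states[0])
    whole = entropy([tuple(s) for s in states])
    parts = sum(entropy([s[i] for s in states]) for i in range(n_units))
    return parts - whole

# Two coupled binary units that are always equal...
coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
# ...versus two units that vary independently of each other.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(multi_information(coupled))      # 1.0 bit: parts 1+1, whole 1
print(multi_information(independent))  # 0.0 bits: parts 1+1, whole 2
```

The coupled pair scores one bit because its joint state is more ordered than its parts viewed in isolation would suggest; the independent pair scores zero. This illustrates the “whole exceeds the parts” intuition only; the measure says nothing, of course, about whether such a system is conscious, which is exactly the question at issue.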

In an article in Behavioral and Brain Sciences, Björn Merker, Kenneth Williford and David Rudrauf examine the theory of consciousness as integrated information. The review is detailed and comprehensive. It is followed up by comments from other researchers, and ends with the authors’ response. What the three authors try to show in the article is that even if the brain does integrate information in the sense of the theory, the identification of consciousness with integrated information is mistaken. What the theory describes is efficient network organization, rather than consciousness. Phi is a measure of network efficiency, not of consciousness. What the authors examine in particular is that charming feature I just mentioned: the theory can seem to “fit” with how we experience our conscious lives from a unified ego-centric viewpoint. It is true that integrated information constitutes a “unity” in the sense that many things are joined in a functionally organized way. But that “unity” is hardly the same “unity” that characterizes consciousness, where the unity is your own point of view on your experiences. Effective networks can hardly be said to have a “viewpoint” from a subjective “ego-centre” just because they integrate information. The identification of features of our conscious lives with the basic concepts of the theory is thus hasty, tempting though it may be.

The authors do not deny that the brain integrates information in accordance with the theory. The theory mathematically describes an efficient way to process information in networks with limited energy resources, something that characterizes the brain, the authors point out. But if consciousness is identified with integrated information, then many other systems that process information in the same efficient way would also be conscious. Not only other biological systems besides the brain, but also artifacts such as some large-scale electrical power grids and social networks. Proponents of the theory seem to accept this, but we have no independent reason to suppose that systems other than the brain would have consciousness. Why then insist that other systems are also conscious? Well, perhaps because one is already attracted by the association between the basic concepts of the theory and the organization of our conscious experiences, as well as by the possibility of quantifying consciousness in different systems. The latter may sound like a scientific virtue. But if the identification is false from the beginning, then the virtue appears rather as a departure from science. The theory might flood the universe with consciousness. At least that is how I understand the gist of the article.

Anyone who feels the allure of the theory that consciousness is integrated information should read the careful examination of the idea: The integrated information theory of consciousness: A case of mistaken identity.

The last word has certainly not been said and even charming ideas can turn out to be true. The problem is that the charm easily becomes the evidence when we are under the influence of the idea. Therefore, I believe that the careful discussion of the theory of consciousness as integrated information is urgent. The article is an excellent example of the importance of self-critical examination in philosophy and science.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Merker, B., Williford, K., & Rudrauf, D. (2022). The integrated information theory of consciousness: A case of mistaken identity. Behavioral and Brain Sciences, 45, E41. doi:10.1017/S0140525X21000881

This post in Swedish

We like critical thinking

Consciousness and complexity: theoretical challenges for a practically useful idea

Contemporary research on consciousness is ambiguous, like the double-faced god Janus. On the one hand, it has achieved impressive practical results. We can today detect conscious activity in the brain for a number of purposes, including better therapeutic approaches to people affected by disorders of consciousness such as coma, vegetative state and minimally conscious state. On the other hand, the field is marked by a deep controversy about methodology and basic definitions. As a result, we still lack an overarching theory of consciousness, that is to say, a theoretical account that scholars agree upon.

Developing a common theoretical framework is recognized as increasingly crucial to understanding consciousness and assessing related issues, such as emerging ethical issues. The challenge is to find common ground among the various experimental and theoretical approaches. A strong candidate that is achieving increasing consensus is the notion of complexity. The basic idea is that consciousness can be explained as a particular kind of neural information processing. The idea of associating consciousness with complexity was originally suggested by Giulio Tononi and Gerald Edelman in a 1998 paper titled Consciousness and Complexity. Since then, several papers have explored its potential as a key to a common understanding of consciousness.

Despite the increasing popularity of the notion, there are some theoretical challenges that need to be faced, particularly concerning the supposed explanatory role of complexity. These challenges are not only philosophically relevant. They might also affect the scientific reliability of complexity and the legitimacy of invoking this concept in the interpretation of emerging data and in the elaboration of scientific explanations. In addition, the theoretical challenges have a direct ethical impact, because an unreliable conceptual assumption may lead to misplaced ethical choices. For example, we might wrongly assume that a patient with low complexity is not conscious, or vice versa, and consequently make medical decisions that are inappropriate to the actual clinical condition.

The claimed explanatory power of complexity is challenged in two main ways: semantically and logically. Let us take a quick look at both.

Semantic challenges arise from the fact that complexity is such a general and open-ended concept. It lacks a shared definition among different people and different disciplines. This open-ended generality and lack of definition can be a barrier to a common scientific use of the term, which may impact its explanatory value in relation to consciousness. In the landmark paper by Tononi and Edelman, complexity is defined as the sum of integration (conscious experience is unified) and differentiation (we can experience a large number of different states). It is important to recognise that this technical definition of complexity refers only to the state of consciousness, not to its contents. This means that complexity-related measures can give us relevant information about the level of consciousness, yet they remain silent about the corresponding contents and their phenomenology. This is an ethically salient point, since the dimensions of consciousness that appear most relevant to making ethical decisions are those related to subjective positive and negative experiences. For instance, while it is generally considered ethically neutral how we treat a machine, it is considered ethically wrong to cause negative experiences to other humans or to animals.

Logical challenges arise about the justification for referring to complexity in explaining consciousness. This justification usually takes one of two forms: it is either bottom-up (from data to theory) or top-down (from phenomenology to physical structure). Both raise specific issues.

Bottom-up: Starting from empirical data indicating that particular brain structures or functions correlate to particular conscious states, relevant theoretical conclusions are inferred. More specifically, since the brains of subjects that are manifestly conscious exhibit complex patterns (integrated and differentiated patterns), we are supposed to be justified in inferring that complexity indexes consciousness. This conclusion is a sound inference to the best explanation, but the fact that a conscious state correlates with a complex brain pattern in healthy subjects does not justify its generalisation to all possible conditions (for example, disorders of consciousness), and it does not logically imply that complexity is a necessary and/or sufficient condition for consciousness.

Top-down: Starting from certain characteristics of personal experience, we are supposed to be justified in inferring corresponding characteristics of the underlying physical brain structure. More specifically, if some conscious experience is complex in the technical sense of being both integrated and differentiated, we are supposed to be justified in inferring that the correlated brain structures must be complex in the same technical sense. This conclusion does not seem logically justified unless we start from the assumption that consciousness and the corresponding physical brain structures must be similarly structured. Otherwise it is logically possible that conscious experience is complex while the corresponding brain structure is not, and vice versa. In other words, it does not appear justified to infer that since our conscious experience is integrated and differentiated, the corresponding brain structure must be integrated and differentiated. This is a possibility, but not a necessity.

The abovementioned theoretical challenges do not deny the practical utility of complexity as a relevant measure in specific clinical contexts, for example, to quantify residual consciousness in patients with disorders of consciousness. What is at stake is the explanatory status of the notion. Even if we question complexity as a key factor in explaining consciousness, we can still acknowledge that complexity is practically relevant and useful, for example, in the clinic. In other words, while complexity as an explanatory category raises serious conceptual challenges that remain to be faced, complexity represents at the practical level one of the most promising tools that we have to date for improving the detection of consciousness and for implementing effective therapeutic strategies.

I assume that Giulio Tononi and Gerald Edelman were hoping that their theory about the connection between consciousness and complexity finally would erase the embarrassing ambiguity of consciousness research, but the deep theoretical challenges suggest that we have to live with the resemblance to the double-faced god Janus for a while longer.

Written by…

Michele Farisco, Postdoctoral Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Tononi, G. and G. M. Edelman. 1998. Consciousness and complexity. Science 282(5395): 1846-1851.

We like critical thinking

To change the changing human

Neuroscience contributes to human self-understanding, but it also raises concerns that it might change humanness, for example, through new neurotechnology that affects the brain so deeply that humans are no longer truly human, or no longer experience themselves as human. Patients treated with deep brain stimulation, for example, can state that they feel like robots.

What ethical and legal measures could such a development justify?

Arleen Salles, neuroethicist in the Human Brain Project, argues that the question is premature, since we have not clarified our concept of humanness. The matter is complicated by the fact that there are several concepts of humanness to be concerned about. If we believe that our humanness consists of certain unique abilities that distinguish humans from animals (such as morality), then we tend to dehumanize beings who we believe lack these abilities as “animal-like.” If we believe that our humanness consists of certain abilities that distinguish humans from inanimate objects (such as emotions), then we tend to dehumanize beings who we believe lack these abilities as “mechanical.” It is probably in the latter sense that the patients above state that they do not feel human but rather like robots.

After a review of basic features of central philosophical concepts of human nature, Arleen Salles’ reflections take a surprising turn. She presents a concept of humanness that is based on the neuroscientific research that one worries could change our humanness! What is truly surprising is that this concept of humanness to some extent questions the question itself. The concept emphasizes the profound changeability of the human.

What does it mean to worry that neuroscience can change human nature, if human nature is largely characterized by its ability to change?

If you follow the Ethics Blog and remember a post about Kathinka Evers’ idea of a neuroscientifically motivated responsibility for human nature, you are already familiar with the dynamic concept of human nature that Arleen Salles presents. In simple terms, it can be said to be a matter of complementing human genetic evolution with an “epigenetic” selective stabilization of synapses, which every human being undergoes during upbringing. These connections between brain cells are not inherited genetically but are selected in the living brain while it interacts with its environments. Language can be assumed to belong to the human abilities that largely develop epigenetically. I have proposed a similar understanding of language in collaboration with two ape language researchers.

Do not assume that this dynamic concept of human nature presupposes that humanness is unstable. As if the slightest gust of wind could disrupt human evolution and change human nature. On the contrary, the language we develop during upbringing probably contributes to stabilizing the many human traits that develop simultaneously. Language probably supports the transmission to new generations of the human forms of life where language has its uses.

Arleen Salles’ reflections are important contributions to the neuroethical discussion about human nature, the brain and neuroscience. In order to take ethical responsibility, we need to clarify our concepts, she emphasizes. We need to consider that humanness develops in three interconnected dimensions. It is about our genetics together with the selective stabilization of synapses in living brains in continuous interaction with social-cultural-linguistic environments. All at the same time!

Arleen Salles’ reflections are published as a chapter in a new anthology, Developments in Neuroethics and Bioethics (Elsevier). I am not sure if the publication will be open access, but hopefully you can find Arleen Salles’ contribution via this link: Humanness: some neuroethical reflections.

The chapter is recommended as an innovative contribution to the understanding of human nature and the question of whether neuroscience can change humanness. The question takes a surprising turn, suggesting that we all share an ongoing responsibility for our changing humanness.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Arleen Salles (2021). Humanness: some neuroethical reflections. Developments in Neuroethics and Bioethics. https://doi.org/10.1016/bs.dnb.2021.03.002

This post in Swedish

We think about bioethics

An unusually big question

Sometimes the intellectual claims on science are so big that they risk obscuring the actual research. This seems to happen not least when the claims are associated with some great prestigious question, such as the origin of life or the nature of consciousness. By emphasizing the big question, one often wants to show that modern science is better suited than older human traditions to answer the riddles of life. Better than philosophy, for example.

I think of this when I read a short article about such a riddle: “What is consciousness? Scientists are beginning to unravel a mystery that has long vexed philosophers.” The article by Christof Koch gives the impression that it is only a matter of time before science determines not only where in the brain consciousness arises (one already seems to have a suspect), but also the specific neural mechanisms that give rise to – everything you have ever experienced. At least if one is to believe one of the fundamental theories about the matter.

Reading about the discoveries behind the identification of where in the brain consciousness arises is as exciting as reading a whodunit. It is obvious that important research is being done here on the effects that loss or stimulation of different parts of the brain can have on people’s experiences, mental abilities and personalities. The description of a new technology and mathematical algorithm for determining whether patients are conscious or not is also exciting and indicates that research is making fascinating progress, which can have important uses in healthcare. But when mathematical symbolism is used to suggest a possible fundamental explanation for everything you have ever experienced, the article becomes as difficult to understand as the most obscure philosophical text from times gone by.

Since even representatives of science sometimes make philosophical claims, namely, when they want to answer prestigious riddles, it is perhaps wiser to be open to philosophy than to compete with it. Philosophy is not just about speculating about big questions. Philosophy is also about humbly clarifying the questions, which otherwise tend to grow beyond all reasonable limits. Such openness to philosophy flourishes in the Human Brain Project, where some of my philosophical colleagues at CRB collaborate with neuroscientists to conceptually clarify questions about consciousness and the brain.

Something I myself wondered about when reading the scientifically exciting but at the same time philosophically ambitious article, is the idea that consciousness is everything we experience: “It is the tune stuck in your head, the sweetness of chocolate mousse, the throbbing pain of a toothache, the fierce love for your child and the bitter knowledge that eventually all feelings will end.” What does it mean to take such an all-encompassing claim seriously? What is not consciousness? If everything we can experience is consciousness, from the taste of chocolate mousse to the sight of the stars in the sky and our human bodies with their various organs, where is the objective reality to which science wants to relate consciousness? Is it in consciousness?

If consciousness is our inevitable vantage point, if everything we experience as real is consciousness, it becomes unclear how we can treat consciousness as an objective phenomenon in the world along with the body and other objects. Of course, I am not talking here about actual scientific research about the brain and consciousness, but about the limitless intellectual claim that scientists sooner or later will discover the neural mechanisms that give rise to everything we can ever experience.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Christof Koch, What Is Consciousness? Scientists are beginning to unravel a mystery that has long vexed philosophers, Nature 557, S8-S12 (2018) https://doi.org/10.1038/d41586-018-05097-x

This post in Swedish

We transcend disciplinary borders

How do we take responsibility for dual-use research?

We are more often than we think governed by old patterns of thought. As a philosopher, I find it fascinating to see how mental patterns capture us, how we get imprisoned in them, and how we can get out of them. With that in mind, I recently read a book chapter on something that is usually called dual-use research. Here, too, there are patterns of thought that can capture us.

In the chapter, Inga Ulnicane discusses how responsibility for neuroscientific dual-use research of concern was developed within the Human Brain Project (HBP). I read the chapter as a philosophical drama. The European rules that govern HBP are themselves governed by mental patterns about what dual-use research is. In order to take real responsibility for the project, it was therefore necessary within HBP to think oneself free from the patterns that governed the governance of the project. Responsibility became a philosophical challenge: to raise awareness of the real dual-use issues that may be associated with neuroscientific research.

Traditionally, “dual use” refers to civilian versus military uses. By regulating that research in HBP should focus exclusively on civil applications, it can be said that the regulation of the project was itself regulated by this pattern of thought. There are, of course, major military interests in neuroscientific research, not least because the research borders on information technology, robotics and artificial intelligence. Results can be used to improve soldiers’ abilities in combat. They can be used for more effective intelligence gathering, more powerful image analysis, faster threat detection, more accurate robotic weapons, and to satisfy many other military desires.

The problem is that there are more problematic desires than military ones. Research results can also be used to manipulate people’s thoughts and feelings for non-military purposes. They can be used to monitor populations and control their behaviour. It is impossible to say once and for all what problematic desires neuroscientific research can arouse, military and non-military. A single good idea can cause several bad ideas in many other areas.

Within HBP, one therefore prefers to talk about beneficial and harmful uses, rather than civilian and military ones. This more open understanding of “the dual” means that one cannot identify problematic areas of use once and for all. Instead, continuous discussion is required among researchers and other actors as well as the general public to increase awareness of various possible problematic uses of neuroscientific research. We need to help each other see real problems, which can occur in completely different places than we expect. Since the problems moreover move across borders, global cooperation is needed between brain projects around the world.

Within HBP, it was found that an additional thought pattern governed the regulation of the project and made it more difficult to take real responsibility. The definition of dual-use in the documents was taken from the EU export control regulation, which is not entirely relevant for research. Here, too, greater awareness is required, so that we do not get caught up in thought patterns about what it is that could possibly have dual uses.

My personal conclusion is that human challenges are not only caused by a lack of knowledge. They are also caused by how we are tempted to think, by how we unconsciously repeat seemingly obvious patterns of thought. Our tendency to become imprisoned in mental patterns makes us unaware of our real problems and opportunities. Therefore, we should take the human philosophical drama more seriously. We need to see the importance of philosophising ourselves free from our self-incurred captivity in enticing ways of thinking. This is what was done in the Human Brain Project, I suggest, when it felt challenged by the question of what it really means to take responsibility for dual-use research of concern.

Read Inga Ulnicane’s enlightening chapter, The governance of dual-use research in the EU. The case of neuroscience, which also mentions other patterns that can govern our thinking about governance of dual-use research.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Ulnicane, I. (2020). The governance of dual-use research in the EU: The case of neuroscience. In A. Calcara, R. Csernatoni, & C. Lavallée (Editors), Emerging security technologies and EU governance: Actors, practices and processes. London: Routledge / Taylor & Francis Group, pages 177-191.

This post in Swedish

Thinking about thinking

The hard problem of consciousness: please handle with care!

We face challenges every day. Some are more demanding than others, but it seems that there is not a day without some problem to handle. Unless they are too big to manage, problems are like the engines of our lives. They push us to always go beyond wherever we are and whatever we do, to look for new possibilities, to build new opportunities. In other words: problems make us stay alive.

The same is true for science and philosophy. There is a constant need to face new challenges. Consciousness research is no exception. There are, of course, several problems in the investigation of consciousness. However, one problem has emerged as the big problem, which the Australian philosopher David Chalmers baptised “the hard problem of consciousness.” This classical problem (discussed even before Chalmers coined this expression, actually since the early days of neuropsychology, notably by Alexander Luria and collaborators) refers to the enigma of subjective experience. To adapt a formulation by the philosopher Thomas Nagel, the basic question is: why do we have experiences of what it is like to be conscious, for example, why do we experience that pain and hunger feel the way they do?

The hard problem has a double nature. On the one hand, it refers to what Joseph Levine called an explanatory gap: the strategy of identifying psychological experiences with physical features of the brain is ultimately unable to explain why experiences are related to physical phenomena at all. On the other hand, the hard problem also refers to the question whether subjective experience can be explained causally or whether it is intrinsic to the world, that is to say: fundamentally there, from the beginning, rather than caused by something more primary.

This double nature of the problem has been a stumbling block in the attempt to explain consciousness. Yet in recent years, the hardness of the problem has been increasingly questioned. Among the arguments advanced to soften the problem, there is one that I think merits specific attention. This argument describes consciousness as a cultural concept, meaning that both the way we conceive it and the way we experience it depend on our culture. There are different versions of this argument: some reduce consciousness as such to a cultural construction, while other, less radical arguments stress that consciousness has a neurological substrate that is importantly shaped by culture. The relevant point is that by characterising consciousness as a cultural construction, with reference both to how we conceptualise it and how we are conscious, this argument ultimately questions the hardness of the hard problem.

To illustrate, consider anthropological and neuroscientific arguments that appear to go in the direction of explaining away the hard problem of consciousness. Anthropological explanations give a crucial role to culture and its relationship with consciousness. Humans have an arguably unique capacity for symbolisation, which enables us to create an immaterial world both through the symbolisation of the actual world and through the construction of immaterial realities that are not experienced through the senses. This human symbolic capacity can be applied not only to the external world, but also to brain activity, resulting in the conceptual construction of notions like consciousness. We symbolise our brain activity, hypostatise our conscious activities, and infer supposedly immaterial causes behind them.

There are also neuroscientific and neuropsychological attempts to explain how consciousness and our understanding of it evolved, attempts which ultimately appear capable of explaining away the hard problem. Attention Schema Theory, for instance, assumes that people tend to “attribute a mysterious consciousness to themselves and to others because of an inherently inaccurate model of mind, and especially a model of attention.” The origin of the attribution of this mysterious consciousness lies in culture and in folk-psychological beliefs, for instance, ideas about “an energy-like substance inhabiting the body.” In other words, culturally based mistaken beliefs derived from implicit social-cognitive models affect and eventually distort our view of consciousness. Ultimately, consciousness does not really exist as a distinct property, and its appearance as a non-physical property is a kind of illusion. Thus, the hard problem does not originate from real objective features of the world, but rather from implicit subjective beliefs derived from internalised socio-cultural models, specifically from the intuition that mind is an invisible essence generated within an agent.

Without conceptually challenging the arguments above, I here only want to suggest potential ethical issues that might arise if we assume their validity. What are the potential neuroethical implications of these ideas of consciousness as culturally constructed? Since the concept of consciousness has traditionally played an important role in ethical reasoning, for example, in the notion of a person, questioning the objective status of conscious experience may have important ethical implications that should be adequately investigated. For instance, if consciousness depends on culture, then any definition of altered states of consciousness is culturally relative and context-dependent. This might have an impact on, for example, the ethical evaluation of the use of psychotropic substances, which some cultures, as history tells us, have considered legitimate and positive. Why should we limit the range of states of consciousness that are allowed to be experienced? What makes it legitimate for a culture to assert its own behavioural standards? To what extent can individuals justify their behaviour by appealing to their culture?

In addition, if consciousness (i.e., the way we are conscious, what we are conscious of, and our understanding of consciousness) is dependent on culture, then some conscious experiences might be considered more or less valuable in different cultural contexts, which could affect, for example, end-of-life decisions. If the concept of consciousness, and thus its ethical relevance and value, depends on culture, then consciousness no longer offers a solid foundation for ethical deliberation. Softening the hard problem of consciousness might also soften the foundation of what I defined elsewhere as the consciousness-centred ethics of disorders of consciousness (vegetative states, unresponsive wakefulness states, minimally conscious states, and cognitive-motor dissociation).

Although a cultural approach to consciousness can soften the hard problem conceptually, it creates hard ethical problems that require specific attention. Any attempt to challenge the hard problem of consciousness results in a situation similar to that of having a blanket that is too short: if you pull it to one side (in the direction of the conceptual problem), you leave the other side uncovered (ethical issues based on the notion of consciousness). It seems that we cannot soften the hard problem of consciousness without the risk of relativising ethics.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

We like challenging questions

Are you conscious? Looking for reliable indicators

How can we be sure that a person in front of us is conscious? This might seem like a naïve question, but it actually resulted in one of the trickiest and most intriguing philosophical problems, classically known as “the other minds problem.”

Yet this is more than just a philosophical game: reliable detection of conscious activity is among the main neuroscientific and technological enterprises today. Moreover, it is a problem that touches our daily lives. Think, for instance, of animals: we are (at least today) inclined to attribute a certain level of consciousness to animals, depending on the behavioural complexity they exhibit. Or think of Artificial Intelligence, which exhibits astonishing practical abilities, even superior to humans in some specific contexts.

Both examples above raise a fundamental question: can we rely on behaviour alone in order to attribute consciousness? Is that sufficient?

It is now clear that it is not. The case of patients with devastating neurological impairments, like disorders of consciousness (unresponsive wakefulness syndrome, minimally conscious state, and cognitive-motor dissociation), is highly illustrative. A number of these patients might retain residual conscious abilities although they are unable to show them behaviourally. In addition, subjects with locked-in syndrome have a fully conscious mind even though they do not exhibit any behaviours other than blinking.

We can conclude that absence of behavioural evidence for consciousness is not evidence for the absence of consciousness. If so, what other indicators can we rely on in order to attribute consciousness?

The identification of indicators of consciousness is necessarily both a conceptual and an empirical task: we need a clear idea of what to look for in order to define appropriate empirical strategies. Accordingly, we (a group of two philosophers and one neuroscientist) conducted joint research, eventually publishing a list of six indicators of consciousness. These indicators do not rely on behaviour alone, but can also be assessed through technological and clinical approaches:

  1. Goal-directed behaviour (GDB) and model-based learning. In GDB I am driven by the expected consequences of my action, and I know that my action is causal for obtaining a desirable outcome. Model-based learning depends on my ability to form an explicit model of myself and the world surrounding me.
  2. Brain anatomy and physiology. Since the consciousness of mammals depends on the integrity of particular cerebral systems (i.e., thalamocortical systems), it is reasonable to think that similar structures indicate the presence of consciousness.
  3. Psychometrics and meta-cognitive judgement. If I can detect and discriminate stimuli, and can make some meta-cognitive judgements about perceived stimuli, I am probably conscious.
  4. Episodic memory. If I can remember events (“what”) I experienced at a particular place (“where”) and time (“when”), I am probably conscious.
  5. Acting out one’s subjective, situational survey: illusion and multistable perception. If I am susceptible to illusions and perceptual ambiguity, I am probably conscious.
  6. Acting out one’s subjective, situational survey: visuospatial behaviour. Our last proposed indicator of consciousness is the ability to perceive objects as stably positioned, even when I move in my environment and scan it with my eyes.

This list is conceived to be provisional and heuristic, but also operational: it is not a definitive answer to the problem, but it is sufficiently concrete to help identify consciousness in others.

The second step in our task is to explore the clinical relevance of the indicators and their ethical implications. For this reason, we selected disorders of consciousness as a case study. We are now working together with cognitive and clinical neuroscientists, as well as computer scientists and modellers, in order to explore the potential of the indicators to quantify the extent to which consciousness is present in affected patients, and eventually to improve diagnostic and prognostic accuracy. The results of this research will be published in what the Human Brain Project Simulation Platform defines as a “live paper,” an interactive paper that allows readers to download, visualize or simulate the presented results.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Pennartz CMA, Farisco M and Evers K (2019) Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach. Front. Syst. Neurosci. 13:25. doi: 10.3389/fnsys.2019.00025

We transcend disciplinary borders

Ethically responsible robot development

Development of new technologies sometimes draws inspiration from nature. How do plants and animals solve the problem? An example is robotics, where one wants to develop better robots based on what neuroscience knows about the brain. How does the brain solve the problem?

Neuroscience, in turn, sees new opportunities to test hypotheses about the brain by simulating them in robots. Perhaps one can simulate how areas of the brain interact in patients with Parkinson’s disease, to understand how their tremor and other difficulties are caused.

Neuroscience-inspired robotics, so-called neurorobotics, is still at an early stage. This makes neurorobotics an excellent area for being ethically and socially more proactive than we have been in previous technological developments. That is, we can already begin to identify possible ethical and social problems surrounding technological development and counteract them before they arise. For example, we cannot close our eyes to gender and equality issues, but must continuously reflect on how our own social and cultural patterns are reflected in the technology we develop. We need to open our eyes to our own blind spots!

You can read more about this ethical shift in technology development in an article in Science and Engineering Ethics (with Manuel Guerrero from CRB as one of the authors). The shift is called Responsible Research and Innovation, and is exemplified in the article by ongoing work in the European research project, Human Brain Project.

It is not only neuroscientists and technology experts who collaborate in this project to develop neurorobotics. Scholars from the humanities and social sciences are also involved in the work. The article itself is an example of this broad collaboration. However, the implementation of responsible research and development is also at an early stage. It still needs to find more concrete forms of work that make it possible not only to anticipate ethical and social problems and reflect on them, but also to act and intervene to influence scientific and technological development.

Ethics is thus shifting from being a framework built around research and development to being increasingly integrated into research and development. Read the article if you want to think about this transition to a more reflective and responsible technological development.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Aicardi, C., Akintoye, S., Fothergill, B.T. et al. Ethical and Social Aspects of Neurorobotics. Sci Eng Ethics 26, 2533–2546 (2020). https://doi.org/10.1007/s11948-020-00248-8

This post in Swedish

Approaching future issues

We shape the societies that shape us: our responsibility for human nature

Visionary academic texts are rare – texts that shed light on how research can contribute to the perennial human issues. In an article in the philosophical journal Theoria, however, Kathinka Evers opens up a novel visionary perspective on neuroscience and tragic aspects of the human condition.

For millennia, sensitive thinkers have been concerned about human nature. Undoubtedly, we humans create prosperity and security for ourselves. However, like no other animal, we also have an unfortunate tendency to create misery for ourselves (and other life forms). The 20th century was extreme in both directions. What is the mechanism behind our peculiar, large-scale, self-injurious behavior as a species? Can it be illuminated and changed?

As I read her, Kathinka Evers asks essentially this big human question. She does so based on the current neuroscientific view of the brain, which she argues motivates a new way of understanding and approaching the mechanism of our species’ self-injurious behavior. An essential feature of the neuroscientific view is that the human brain is designed to never be fully completed. Just as we have a unique self-injurious tendency as a species, we are born with uniquely incomplete brains. These brains are under construction for decades and need good care throughout this time. They are not formed passively, but actively, by finding more or less felicitous ways of functioning in the societies to which we expose ourselves.

Since our brains shape our societies, one could say that we build the societies that build us, in a continual cycle. The brain sits right in the middle of this sensitive interaction between humans and their societies. With its creative variability, the human brain makes many deterministic claims about genetics and our “innate” nature problematic. Why are we humans the way we are? Partly because we create the societies that create us as we are. For millennia, we have generated ourselves through the societies that we have built, ignorant of the hyper-interactive organ in the middle of the process. It is always behind our eyes.

Kathinka Evers’ point is that our current understanding of the brain as inherently active, dynamic and variable, gives us a new responsibility for human nature. She expresses the situation technically as follows: neuroscientific knowledge gives us a naturalistic responsibility to be epigenetically proactive. If we know that our active and variable brains support a cultural evolution beyond our genetic heritage, then we have a responsibility to influence evolution by adapting our societies to what we know about the strengths and weaknesses of our brains.

The notion of a neuroscientific responsibility to design societies that shape human nature in desired ways may sound like a call for a new form of social engineering. However, Kathinka Evers develops the notion of this responsibility in the context of a conscientious review of similar tendencies in our history, tendencies that have often revolved around genetics. The aim of epigenetic proaction is not to support ideologies that have already decided what a human being should be like. Rather, it is about allowing knowledge about the brain to inspire social change, where we would otherwise ignorantly risk recreating human misery. Of course, such knowledge presupposes collaboration between the natural, social and human sciences, in conjunction with free philosophical inquiry.

The article mentions juvenile violence as an example. In some countries, there is a political will to convict juvenile delinquents as if they were adults and even place them in adult prisons. Today, we know that during puberty, the brain is in a developmental crisis where important neural circuits change dramatically. Young brains in crisis need special care. However, in these cases they risk ending up in just the kind of social environments that we can predict will create more misery.

Knowledge about the brain can thus motivate social changes that reduce the peculiar self-injuring behavior of humanity, a behavior that has worried sensitive thinkers for millennia. Neuroscientific self-awareness gives us a key to the mechanism behind the behavior and a responsibility to use it.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Kathinka Evers. 2020. The Culture‐Bound Brain: Epigenetic Proaction Revisited. Theoria. doi:10.1111/theo.12264

This post in Swedish

We like challenging questions

Ethical frameworks for research

The phrase “ethical framework” evokes the idea of something rigid and separating, like a fence around a garden. The research that emerges within the framework is dynamic and constantly new. To ensure safety, however, it is placed in an ethical framework that sets clear boundaries for what researchers are allowed to do in their work.

That this is an oversimplified picture is clear after reading an inventive discussion of ethical frameworks in neuroscientific research projects, such as the Human Brain Project. The article is written by Arleen Salles and Michele Farisco at CRB and is published in AJOB Neuroscience.

The article questions not only the image of ethical frameworks as static boundaries for dynamic research activities. Inspired by ideas within so-called responsible research and innovation (RRI), the image that research can be separated from ethics and society is also questioned.

Researchers tend to regard research as their own concern. However, there are tendencies towards increasing collaboration, not only across disciplinary boundaries, but also with stakeholders such as patients, industry and various forms of extra-scientific expertise. These tendencies make research an increasingly dispersed, common concern: not only in retrospect, in the form of applications, which presupposes that the research effort can be separated, but already when research is initiated, planned and carried out.

This could sound threatening, as if foreign powers were influencing the free search for truth. Nevertheless, there may also be something hopeful in the development. To see the hopeful aspect, however, we need to free ourselves from the image of ethical frameworks as static boundaries, separate from dynamic research.

With examples from the Human Brain Project, Arleen Salles and Michele Farisco try to show how ethical challenges in neuroscience projects cannot always be controlled in advance, through declared principles, values and guidelines. Even ethical work is dynamic and requires living, intelligent attention. The authors also try to show how ethical attention reaches all the way into the neuroscientific issues, concepts and working conditions.

When research on the human brain is unaware of its own cultural and societal conditions and takes them for granted, relevant questions may go unasked, and research results may not always have the validity they are assumed to have.

We thus have good reasons to see ethical and societal reflections as living parts of neuroscience, rather than as rigid frameworks around it.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Arleen Salles & Michele Farisco (2020) Of Ethical Frameworks and Neuroethics in Big Neuroscience Projects: A View from the HBP, AJOB Neuroscience, 11:3, 167-175, DOI: 10.1080/21507740.2020.1778116

This post in Swedish

We like real-life ethics
