A blog from the Centre for Research Ethics & Bioethics (CRB)

Author: Michele Farisco

Consciousness and complexity: theoretical challenges for a practically useful idea

Contemporary research on consciousness is ambiguous, like the double-faced god Janus. On the one hand, it has achieved impressive practical results. We can today detect conscious activity in the brain for a number of purposes, including better therapeutic approaches to people affected by disorders of consciousness such as coma, vegetative state and minimally conscious state. On the other hand, the field is marked by a deep controversy about methodology and basic definitions. As a result, we still lack an overarching theory of consciousness, that is to say, a theoretical account that scholars agree upon.

Developing a common theoretical framework is increasingly recognized as crucial to understanding consciousness and assessing related issues, such as emerging ethical issues. The challenge is to find common ground among the various experimental and theoretical approaches. A strong candidate that is attracting increasing consensus is the notion of complexity. The basic idea is that consciousness can be explained as a particular kind of neural information processing. The idea of associating consciousness with complexity was originally suggested by Giulio Tononi and Gerald Edelman in a 1998 paper titled Consciousness and Complexity. Since then, several papers have explored its potential as a key to a common understanding of consciousness.

Despite the increasing popularity of the notion, there are some theoretical challenges that need to be faced, particularly concerning the supposed explanatory role of complexity. These challenges are not only philosophically relevant. They might also affect the scientific reliability of complexity and the legitimacy of invoking this concept in the interpretation of emerging data and in the elaboration of scientific explanations. In addition, the theoretical challenges have a direct ethical impact, because an unreliable conceptual assumption may lead to misplaced ethical choices. For example, we might wrongly assume that a patient with low complexity is not conscious, or vice versa, and consequently make medical decisions that are inappropriate to the actual clinical condition.

The claimed explanatory power of complexity is challenged in two main ways: semantically and logically. Let us take a quick look at both.

Semantic challenges arise from the fact that complexity is such a general and open-ended concept. It lacks a shared definition across different people and different disciplines. This open-ended generality and lack of definition can be a barrier to a common scientific use of the term, which may impact its explanatory value in relation to consciousness. In the landmark paper by Tononi and Edelman, complexity is defined as the sum of integration (conscious experience is unified) and differentiation (we can experience a large number of different states). It is important to recognise that this technical definition of complexity refers only to the state of consciousness, not to its contents. This means that complexity-related measures can give us relevant information about the level of consciousness, yet they remain silent about the corresponding contents and their phenomenology. This is an ethically salient point, since the dimensions of consciousness that appear most relevant to making ethical decisions are those related to subjective positive and negative experiences. For instance, while it is generally considered ethically neutral how we treat a machine, it is considered ethically wrong to cause negative experiences to other humans or to animals.
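Tononi and Edelman’s technical notion has inspired practical proxies. Clinical measures such as the perturbational complexity index, for example, estimate how compressible the brain’s response to a stimulus is, using Lempel-Ziv compression: a signal that is both integrated and differentiated resists compression. The underlying intuition can be sketched in a few lines of Python; this toy phrase-counting function is an illustration only, not a clinical measure:

```python
def lz_complexity(signal: str) -> int:
    """Count the distinct phrases in a simple Lempel-Ziv parsing of a
    binary string: regular signals parse into few phrases, irregular
    (differentiated) signals into many."""
    phrases, current = set(), ""
    for bit in signal:
        current += bit
        if current not in phrases:
            phrases.add(current)
            current = ""
    # a leftover partial phrase still counts as one more
    return len(phrases) + (1 if current else 0)

# A flat, undifferentiated signal is highly compressible...
print(lz_complexity("0" * 16))
# ...while an irregular one parses into more phrases.
print(lz_complexity("0110100110010111"))
```

Real measures apply far more sophisticated versions of this intuition to recorded brain activity, but the contrast between a monotonous and an irregular signal captures the basic idea.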

Logical challenges concern the justification for invoking complexity in explanations of consciousness. This justification usually takes one of two forms: it is either bottom-up (from data to theory) or top-down (from phenomenology to physical structure). Both raise specific issues.

Bottom-up: Starting from empirical data indicating that particular brain structures or functions correlate with particular conscious states, relevant theoretical conclusions are inferred. More specifically, since the brains of subjects who are manifestly conscious exhibit complex (integrated and differentiated) patterns, we are supposedly justified in inferring that complexity indexes consciousness. This conclusion is a sound inference to the best explanation, but the fact that a conscious state correlates with a complex brain pattern in healthy subjects does not justify generalising it to all possible conditions (for example, disorders of consciousness), and it does not logically imply that complexity is a necessary and/or sufficient condition for consciousness.

Top-down: Starting from certain characteristics of personal experience, we are supposedly justified in inferring corresponding characteristics of the underlying physical brain structure. More specifically, if some conscious experience is complex in the technical sense of being both integrated and differentiated, we are supposedly justified in inferring that the correlated brain structures must be complex in the same technical sense. This conclusion does not seem logically justified unless we start from the assumption that consciousness and the corresponding physical brain structures must be similarly structured. Otherwise it is logically possible that conscious experience is complex while the corresponding brain structure is not, and vice versa. In other words, it does not appear justified to infer that since our conscious experience is integrated and differentiated, the corresponding brain structure must be integrated and differentiated. This is a possibility, but not a necessity.

The abovementioned theoretical challenges do not deny the practical utility of complexity as a relevant measure in specific clinical contexts, for example, to quantify residual consciousness in patients with disorders of consciousness. What is at stake is the explanatory status of the notion. Even if we question complexity as a key factor in explaining consciousness, we can still acknowledge that complexity is practically relevant and useful, for example, in the clinic. In other words, while complexity as an explanatory category raises serious conceptual challenges that remain to be faced, complexity represents at the practical level one of the most promising tools that we have to date for improving the detection of consciousness and for implementing effective therapeutic strategies.

I assume that Giulio Tononi and Gerald Edelman were hoping that their theory about the connection between consciousness and complexity would finally dispel the embarrassing ambiguity of consciousness research, but the deep theoretical challenges suggest that we have to live with the resemblance to the double-faced god Janus for a while longer.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Tononi, G. and G. M. Edelman. 1998. Consciousness and complexity. Science 282(5395): 1846-1851.


Can AI be conscious? Let us think about the question

Artificial Intelligence (AI) has achieved remarkable results in recent decades, especially thanks to the refinement of an old and long-neglected technology called Deep Learning (DL), a class of machine learning algorithms. Some achievements of DL had a significant impact on public opinion thanks to broad media coverage, like the case of the program AlphaGo, which defeated the Go world champion Lee Sedol, and its even stronger successor AlphaGo Zero.

This triumph of AlphaGo was a kind of profane consecration of AI’s operational superiority in an increasing number of tasks. This manifest superiority of AI gave rise to mixed feelings in human observers: the pride of being its creator; the admiration of what it was able to do; the fear of what it might eventually learn to do.

AI research has generated a linguistic and conceptual process of re-thinking traditionally human features, stretching their meaning or even reinventing their semantics in order to attribute these traits also to machines. Think of how learning, experience, training, prediction, to name just a few, are attributed to AI. Even if they have a specific technical meaning among AI specialists, lay people tend to interpret them within an anthropomorphic view of AI.

One human feature in particular is considered the Holy Grail when AI is interpreted according to an anthropomorphic pattern: consciousness. The question is: can AI be conscious? It seems to me that we can answer this question only after considering a number of preliminary issues.

First we should clarify what we mean by consciousness. In philosophy and in cognitive science, there is a useful distinction, originally introduced by Ned Block, between access consciousness and phenomenal consciousness. The first refers to the interaction between different mental states, particularly the availability of one state’s content for use in reasoning and rationally guiding speech and action. In other words, access consciousness refers to the possibility of using what I am conscious of. Phenomenal consciousness refers to the subjective feeling of a particular experience, “what it is like to be” in a particular state, to use the words of Thomas Nagel. So, in what sense of the word “consciousness” are we asking if AI can be conscious?

To illustrate how the sense in which we choose to talk about consciousness makes a difference in the assessment of the possibility of conscious AI, let us take a look at an interesting article written by Stanislas Dehaene, Hakwan Lau and Sid Kouider. They frame the question of AI consciousness within the Global Neuronal Workspace Theory, one of the leading contemporary theories of consciousness. As the authors write, according to this theory, conscious access corresponds to the selection, amplification, and global broadcasting of particular information, selected for its salience or relevance to current goals, to many distant brain areas. More specifically, Dehaene and colleagues explore the question of conscious AI along two lines within an overall computational framework:

  1. Global availability of information (the ability to select, access, and report information).
  2. Metacognition (the capacity for self-monitoring and confidence estimation).

Their conclusion is that AI might implement the first meaning of consciousness, while it currently lacks the necessary architecture for the second one.
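The computational framing adopted by Dehaene and colleagues can be caricatured in a few lines of code: several modules propose content, the most salient item wins the competition, and the winner is broadcast to all modules. The following is only a toy sketch of the “global availability” idea, with invented module names, and not the authors’ actual model:

```python
# Toy sketch of "global availability": the most salient proposal is
# selected and broadcast to every module. Hypothetical illustration only.

from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    inbox: list = field(default_factory=list)

    def receive(self, content):
        self.inbox.append(content)

def global_broadcast(proposals, modules):
    """Select the proposal with the highest salience and make it
    globally available to all modules."""
    winner = max(proposals, key=lambda p: p["salience"])
    for m in modules:
        m.receive(winner["content"])
    return winner["content"]

modules = [Module("vision"), Module("speech"), Module("memory")]
proposals = [
    {"content": "red light ahead", "salience": 0.9},
    {"content": "background hum", "salience": 0.2},
]
print(global_broadcast(proposals, modules))  # prints "red light ahead"
```

The second line of inquiry, metacognition, would require something the sketch entirely lacks: a model of the system’s own states and a confidence estimate about them, which is precisely the architecture the authors find missing in current AI.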

As mentioned, the premise of their analysis is a computational view of consciousness. In other words, they choose to reduce consciousness to specific types of information-processing computations. We can legitimately ask whether such a choice covers the richness of consciousness, particularly whether a computational view can account for the experiential dimension of consciousness.

This shows how the main obstacle in assessing the question whether AI can be conscious is a lack of agreement about a theory of consciousness in the first place. For this reason, rather than asking whether AI can be conscious, maybe it is better to ask what might indicate that AI is conscious. This brings us back to the indicators of consciousness that I wrote about in a blog post some months ago.

Another important preliminary issue to consider, if we want to seriously address the possibility of conscious AI, is whether we can use the same term, “consciousness,” to refer to a different kind of entity: a machine instead of a living being. Should we expand our definition to include machines, or should we rather create a new term to denote it? I personally think that the term “consciousness” is too charged, from several different perspectives, including ethical, social, and legal perspectives, to be extended to machines. Using the term to qualify AI risks extending it so far that it eventually becomes meaningless.

If we create AI that manifests abilities that are similar to those that we see as expressions of consciousness in humans, I believe we need a new language to denote and think about it. Otherwise, important preliminary philosophical questions risk being dismissed or lost sight of behind a conceptual veil of possibly superficial linguistic analogies.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.


The hard problem of consciousness: please handle with care!

We face challenges every day. Some are more demanding than others, but it seems that there is not a day without some problem to handle. Unless they are too big to manage, problems are like the engines of our lives. They push us to always go beyond wherever we are and whatever we do, to look for new possibilities, to build new opportunities. In other words: problems make us stay alive.

The same is true for science and philosophy. There is a constant need to face new challenges. Consciousness research is no exception. There are, of course, several problems in the investigation of consciousness. However, one problem has emerged as the big problem, which the Australian philosopher David Chalmers baptised “the hard problem of consciousness.” This classical problem (discussed even before Chalmers coined the expression, in fact since the early days of neuropsychology, notably by Alexander Luria and collaborators) refers to the enigma of subjective experience. To adapt a formulation by the philosopher Thomas Nagel, the basic question is: why is there something it is like to be conscious, for example, why do pain and hunger feel the way they do?

The hard problem has a double nature. On the one hand, it refers to what Joseph Levine called an explanatory gap: the strategy of identifying psychological experiences with physical features of the brain is in the end unable to explain why experiences are related to physical phenomena at all. On the other hand, the hard problem also refers to the question whether subjective experience can be explained causally or whether it is intrinsic to the world, that is to say, fundamentally there from the beginning rather than caused by something more primary.

This double nature of the problem has been a stumbling block in attempts to explain consciousness. Yet in recent years, the hardness of the problem has been increasingly questioned. Among the arguments offered to soften the problem, there is one that I think merits specific attention. This argument describes consciousness as a cultural concept, meaning that both the way we conceive it and the way we experience it depend on our culture. There are different versions of this argument: some reduce consciousness as such to a cultural construction, while other, less radical versions stress that consciousness has a neurological substrate that is importantly shaped by culture. The relevant point is that by characterising consciousness as a cultural construction, with reference both to how we conceptualise it and to how we are conscious, this argument ultimately questions the hardness of the hard problem.

To illustrate, consider anthropological and neuroscientific arguments that appear to go in the direction of explaining away the hard problem of consciousness. Anthropological explanations give a crucial role to culture and its relationship with consciousness. Humans have an arguably unique capacity of symbolisation, which enables us to create an immaterial world both through the symbolisation of the actual world and through the construction of immaterial realities that are not experienced through the senses. This human symbolic capacity can be applied not only to the external world, but also to brain activity, resulting in the conceptual construction of notions like consciousness. We symbolise our brain activity, hypostatise our conscious activities, and infer supposedly immaterial causes behind them.

There are also neuroscientific and neuropsychological attempts to explain how consciousness and our understanding of it evolved, which ultimately appear to potentially explain away the hard problem. Michael Graziano’s Attention Schema Theory, for instance, assumes that people tend to “attribute a mysterious consciousness to themselves and to others because of an inherently inaccurate model of mind, and especially a model of attention.” The origin of the attribution of this mysterious consciousness is in culture and in folk-psychological beliefs, for instance, ideas about “an energy-like substance inhabiting the body.” In other words, culturally based mistaken beliefs derived from implicit social-cognitive models affect and eventually distort our view of consciousness. Ultimately, consciousness does not really exist as a distinct property, and its appearance as a non-physical property is a kind of illusion. Thus, the hard problem does not originate from real objective features of the world, but rather from implicit subjective beliefs derived from internalised socio-cultural models, specifically from the intuition that mind is an invisible essence generated within an agent.

While I do not want to conceptually challenge the arguments above, I here only suggest potential ethical issues that might arise if we assume the validity of those arguments. What are the potential neuroethical implications of these ideas of consciousness as culturally constructed? Since the concept of consciousness traditionally played an important role in ethical reasoning, for example, in the notion of a person, questioning the objective status of conscious experience may have important ethical implications that should be adequately investigated. For instance, if consciousness depends on culture, then any definition of altered states of consciousness is culturally relative and context-dependent. This might have an impact on, for example, the ethical evaluation of the use of psychotropic substances, which for some cultures, as history tells us, can be considered legitimate and positive. Why should we limit the range of states of consciousness that are allowed to be experienced? What makes it legitimate for a culture to assert its own behavioural standards? To what extent can individuals justify their behaviour by appealing to their culture? 

In addition, if consciousness (i.e., the way we are conscious, what we are conscious of, and our understanding of consciousness) is dependent on culture, then some conscious experiences might be considered more or less valuable in different cultural contexts, which could affect, for example, end-of-life decisions. If the concept of consciousness, and thus its ethical relevance and value, depends on culture, then consciousness no longer offers a solid foundation for ethical deliberation. Softening the hard problem of consciousness might also soften the foundation of what I defined elsewhere as the consciousness-centred ethics of disorders of consciousness (vegetative states, unresponsive wakefulness states, minimally conscious states, and cognitive-motor dissociation).

Although a cultural approach to consciousness can soften the hard problem conceptually, it creates hard ethical problems that require specific attention. It seems that any attempt to challenge the hard problem of consciousness results in a situation similar to that of having a blanket that is too short: if you pull it to one side (in the direction of the conceptual problem), you leave the other side uncovered (ethical issues based on the notion of consciousness). It seems that we cannot soften the hard problem of consciousness without the risk of relativizing ethics.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.


Are you conscious? Looking for reliable indicators

How can we be sure that a person in front of us is conscious? This might seem like a naïve question, but it actually points to one of the trickiest and most intriguing philosophical problems, classically known as “the other minds problem.”

Yet this is more than just a philosophical game: reliable detection of conscious activity is among the main neuroscientific and technological enterprises today. Moreover, it is a problem that touches our daily lives. Think, for instance, of animals: we are (at least today) inclined to attribute a certain level of consciousness to animals, depending on the behavioural complexity they exhibit. Or think of Artificial Intelligence, which exhibits astonishing practical abilities, even superior to humans in some specific contexts.

Both examples above raise a fundamental question: can we rely on behaviour alone in order to attribute consciousness? Is that sufficient?

It is now clear that it is not. The case of patients with devastating neurological impairments, like disorders of consciousness (unresponsive wakefulness syndrome, minimally conscious state, and cognitive-motor dissociation) is highly illustrative. A number of these patients might retain residual conscious abilities although they are unable to show them behaviourally. In addition, subjects with locked-in syndrome have a fully conscious mind even if they do not exhibit any behaviours other than blinking.

We can conclude that absence of behavioural evidence for consciousness is not evidence for the absence of consciousness. If so, what other indicators can we rely on in order to attribute consciousness?

The identification of indicators of consciousness is necessarily both a conceptual and an empirical task: we need a clear idea of what to look for in order to define appropriate empirical strategies. Accordingly, we (a group of two philosophers and one neuroscientist) conducted joint research that eventually resulted in a published list of six indicators of consciousness. These indicators do not rely on behaviour alone, but can also be assessed through technological and clinical approaches:

  1. Goal-directed behaviour (GDB) and model-based learning. In GDB I am driven by the expected consequences of my action, and I know that my action is causal for obtaining a desirable outcome. Model-based learning depends on my ability to have an explicit model of myself and the world surrounding me.
  2. Brain anatomy and physiology. Since the consciousness of mammals depends on the integrity of particular cerebral systems (i.e., thalamocortical systems), it is reasonable to think that similar structures indicate the presence of consciousness.
  3. Psychometrics and meta-cognitive judgement. If I can detect and discriminate stimuli, and can make some meta-cognitive judgements about perceived stimuli, I am probably conscious.
  4. Episodic memory. If I can remember events (“what”) I experienced at a particular place (“where”) and time (“when”), I am probably conscious.
  5. Acting out one’s subjective, situational survey: illusion and multistable perception. If I am susceptible to illusions and perceptual ambiguity, I am probably conscious.
  6. Acting out one’s subjective, situational survey: visuospatial behaviour. If I perceive objects as stably positioned even when I move in my environment and scan it with my eyes, I am probably conscious.

This list is conceived to be provisional and heuristic but also operational: it is not a definitive answer to the problem, but it is sufficiently concrete to help identify consciousness in others.

The second step in our task is to explore the clinical relevance of the indicators and their ethical implications. For this reason, we selected disorders of consciousness as a case study. We are now working together with cognitive and clinical neuroscientists, as well as computer scientists and modellers, in order to explore the potential of the indicators to quantify to what extent consciousness is present in affected patients, and eventually improve diagnostic and prognostic accuracy. The results of this research will be published in what the Human Brain Project Simulation Platform defines as a “live paper,” which is an interactive paper that allows readers to download, visualize or simulate the presented results.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Pennartz CMA, Farisco M and Evers K (2019) Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach. Front. Syst. Neurosci. 13:25. doi: 10.3389/fnsys.2019.00025


Drug addiction as a mental and social disorder

Can the brain sciences help us to better understand and handle urgent social problems like drug addiction? Can they even help us understand how social disorder creates disorderly, addicted brains?

If, as seems to be the case, addiction has a strong cerebral base, then it follows that knowing the brain is the key to finding effective treatments for addiction. Yet, what aspects of the brain should be particularly investigated? In a recent article, co-authored with the philosopher Kathinka Evers and the neuroscientist Jean-Pierre Changeux, I suggest that we need to focus on both aware and unaware processes in the brain, trying to figure out how these are affected by environmental influences, and how they eventually affect individual behavior.

There is no doubt that drug addiction is one of the most urgent emergencies in contemporary society. Think, for instance, of the opioid crisis in the US. It has become a kind of social plague, affecting millions of people. How was that possible? What are the causes of such a disaster? Of course, several factors contributed to the present crisis. We suggest, however, that certain external factors influenced brain processes on an unaware level, inviting addictive behavior.

To give an example, one of the causes of the opioid crisis seems to be the false assumption that opioid drugs do not cause addiction. Taking this view of opioid drugs was an unfortunate choice, we argue, likely favored by the financial interests of pharmaceutical companies. It affected not only physicians’ aware opinions, but also their unaware views on opioid drugs, and eventually their inclination to prescribe them. But that is not all. Since there is a general disposition to trust medical doctors’ opinions and choices, the original false assumption that opioid drugs do not cause addiction spread and affected also public opinion, especially at the unaware level. In other words, we think that there is a social responsibility for the increase in drug addiction, if not in ethical terms, at least in terms of public policies.

This is just an example of how external factors contribute to a personal disposition to use potentially addictive drugs. Of course, the factors involved in creating addiction are multifarious and not limited to false views about the risk of addiction associated with certain drugs.

More generally, we argue that in addition to the internal bases of addiction in the central nervous system, socio-economic status modulates, through unaware processing, what can be described as a person’s subjective “global well-being,” raising in some individuals the need for additional rewards in the brain. In the light of the impact of external factors, we argue that some people are particularly vulnerable to the pressures of the political and socio-economic capitalist system, and that this stressful condition, which has both aware and unaware components, is one of the main causes of addiction. For this reason, we conclude that addiction is not only a medical and mental disorder, but also a social disorder.

Michele Farisco

Farisco M, Evers K and Changeux J-P (2018) Drug Addiction: From Neuroscience to Ethics. Front. Psychiatry 9:595. doi: 10.3389/fpsyt.2018.00595

Searching for consciousness needs conceptual clarification

We can hardly think of ourselves as living persons without referring to consciousness. In fact, we normally define ourselves through two features of our life: we are awake (the level of our consciousness is more than zero), and we are aware of something (our consciousness is not empty).

Since it is quite intuitive that our brains are necessary for us to be conscious, it is tempting to conclude that looking at what is going on in the brain is enough to understand consciousness. But empirical investigation alone is not enough.

Neuroscientific methods to investigate consciousness and its disorders have developed massively in the last decades. The scientific and clinical advancements that have resulted are impressive. But while the ethical and clinical impacts of these advancements are often debated and studied, there is little conceptual analysis.

I think of one example in particular, namely, the neuroscience of disorders of consciousness. These are states where a person’s consciousness is more or less severely damaged. Most commonly, we think of patients in vegetative state, who exhibit levels of consciousness without any content. But it could also be a minimally conscious state with fluctuating levels and contents of consciousness.

How can we explain these complex conditions? Empirical science is usually expected to be authoritative and to help settle very important issues, such as the presence of consciousness. But this scientific knowledge is basically inferential: it is grounded in the comparative assessment of residual consciousness in brain-damaged patients.

Because of its inferential nature, neuroscience here takes the form of inductive reasoning: it infers the presence of consciousness from data extracted by neurotechnology. This is done by comparing data from brain-damaged patients with data from healthy individuals. Yet this induction is valid only on the basis of a prior definition of consciousness, a definition made within an implicit or explicit theoretical framework. Thus a conceptual assessment of consciousness, within a well-developed conceptual framework, is crucial, and it will affect the inference of consciousness from empirical data.

When it comes to disorders of consciousness, there is still no adequate conceptual analysis of the complexity of consciousness: its levels, modes and degrees. Neuroscience often takes for granted a functionalist account in which consciousness is assumed to be equivalent to cognition, or at least to be based in cognition. Yet findings from comatose patients suggest that this is not the case. Instead, consciousness seems to be grounded in the phenomenal functions of the brain as they relate to resting-state activity.

For empirical neuroscience to be able to contribute to an understanding of consciousness, neuroscientists need input from philosophy. Take the case of communication with speechless patients through neurotechnology (Conversations with seemingly unconscious patients), or the prospective simulation of the brain (The challenge to simulate the brain) for example: here scientists can give philosophers empirical data that need to be considered in order to develop a well-founded conceptual framework within which consciousness can be defined.

The alleged autonomy of empirical science as source of objective knowledge is problematic. This is the reason why philosophy needs to collaborate with scientists in order to conceptually refine their research methods. On the other hand, dialogue with science is essential for philosophy to be meaningful.

We need a conceptual strategy for clarifying the theoretical framework of neuroscientific inferences. This is what we are trying to do in our CRB neuroethics group as part of the Human Brain Project (Neuroethics and Neurophilosophy).

Michele Farisco


We want solid foundations - the Ethics Blog

The challenge to simulate the brain

Michele Farisco

Is it possible to create a computer simulation of the human brain? Perhaps, perhaps not. But right now a group of scientists is trying. It is not only the need for sufficient computing power that makes this difficult: there are some very real philosophical challenges too.

Computer simulation of the brain is one of the most ambitious goals of the European Human Brain Project. As a philosopher, I am part of a group that looks at the philosophical and ethical issues, such as: What is the impact of neuroscience on social practice, particularly on clinical practice? What are the conceptual underpinnings of neuroscientific investigation and its impact on traditional ideas, like the human subject, free will, and moral agency? If you follow the Ethics Blog, you might have heard of our work before (“Conversations with seemingly unconscious patients”; “Where is consciousness?”).

One of the questions we ask ourselves is: What is a simulation in general, and what is a brain simulation in particular? Roughly, the idea is to create an object that resembles the functional and, if possible, also the structural characteristics of the brain, in order to improve our understanding of it and our ability to predict its future development. Simulating the brain could be defined as an attempt to develop a mathematical model of the cerebral functional architecture and load it onto a computer, in order to artificially reproduce the brain’s functioning. But why should we reproduce brain functioning?
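To make the idea of "a mathematical model loaded onto a computer" concrete, here is a deliberately minimal sketch: a leaky integrate-and-fire neuron, a standard textbook model of a single neuron. This is my own toy illustration, not code from the Human Brain Project; all names and parameter values are assumptions chosen for clarity.

```python
# A minimal sketch of "loading a mathematical model onto a computer":
# a leaky integrate-and-fire neuron (textbook model, illustrative only).

def simulate_lif(current, dt=0.001, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_threshold=-0.050):
    """Integrate dV/dt = (v_rest - V + current) / tau for one second
    and count the spikes emitted when V crosses the threshold."""
    v = v_rest
    spikes = 0
    for _ in range(int(1.0 / dt)):
        v += dt * (v_rest - v + current) / tau  # Euler integration step
        if v >= v_threshold:                    # threshold crossed: spike
            spikes += 1
            v = v_reset                         # reset membrane potential
    return spikes

quiet_spikes = simulate_lif(current=0.005)   # weak input drive: no spikes
driven_spikes = simulate_lif(current=0.030)  # strong input drive: spiking
print(quiet_spikes, driven_spikes)
```

Even this single-neuron toy shows the pattern the post describes: a differential equation (the model) is discretized and run on a computer to reproduce behavior over time. Scaling this up to billions of interacting neurons is what makes whole-brain simulation so ambitious.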

I can see three reasons: describing, explaining and predicting cerebral activities. The implications are huge. In clinical practice with neurological and psychiatric patients, simulating the damaged brain could help us understand it better and predict its future developments, and also refine current diagnostic and prognostic criteria.

Great promises, but also great challenges, lie ahead of us! Let me now turn to the challenges that I believe can be envisaged from a philosophical and conceptual perspective.

A model is in some respects simplified and arbitrary: the selection of parameters to include depends on the goals the model is built for. This is particularly challenging when the object being simulated is characterized by a high degree of complexity.

The main method used for building models of the brain is “reverse engineering.” This method includes two main steps: dissecting a functional system at the physical level into component parts or subsystems, and then reconstructing the system virtually. Yet the brain hardly seems decomposable into independent modules with linear interactions. The brain rather appears as a complex, integrated system whose components interact non-linearly: their relationship cannot be described as a direct proportionality, and their relative change is not governed by a constant multiplier. To complicate things further, the brain is not completely definable by algorithmic methods, which means that it can show unpredicted behavior. And to make it even more complex, the relationship between the brain’s subcomponents itself affects the behavior of those subcomponents.
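The point about non-proportionality can be illustrated with a toy contrast (my own illustration, not a brain model): in a linear system, doubling the input exactly doubles the response; when two components interact non-linearly, the response to one component depends on the state of the other and no constant multiplier relates input to output.

```python
# Toy contrast between linear and non-linear coupling (illustrative only).

def linear_response(x):
    return 3.0 * x  # direct proportionality: doubling x doubles the output

def nonlinear_response(x, y=1.0):
    # The two "components" x and y interact multiplicatively and the
    # response saturates, so the output is not proportional to x and
    # depends on the state of y.
    return (x * y) / (1.0 + x * y)

print(linear_response(2.0), linear_response(4.0))        # doubles: 6.0 -> 12.0
print(nonlinear_response(2.0), nonlinear_response(4.0))  # does not double
```

In the linear case the system can be analyzed one component at a time; in the non-linear case decomposing it into independent parts, as reverse engineering assumes, loses exactly the interaction that produces the behavior.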

The brain is a holistic system, and despite being deterministic it is still not totally predictable. Simulating it is hardly conceivable. But even if it should prove possible, I am afraid that a new “artificial” brain would have limited practical utility: for instance, a prospective general simulation of the brain risks losing the specific characteristics of the particular brain under treatment.

Furthermore, it is impossible to simulate “the brain” simply because such an entity doesn’t exist. We have billions of different brains in the world. They are not completely similar, even if they are comparable. Abstracting from such diversity is the major limitation of brain simulation. Perhaps it would be possible to overcome this limitation by using a “general” brain simulation as a template to simulate “particular” brains. But maybe this would be even harder to conceive and realize.

Brain simulation is indeed one of the most promising contemporary scientific enterprises, but it needs a specific conceptual investigation in order to clarify its underlying philosophy and to avoid misinterpretations and disproportionate expectations, not least among lay people.

If you want to know more, I recommend having a look at a report of our publications so far.

Michele Farisco

We like challenging questions - the Ethics Blog

Where is consciousness?

 

Michele Farisco

Would it be possible to use brain imaging techniques to detect consciousness and then “read” directly in people’s brains what they want or do not want? Could one, for example, ask a severely brain-injured patient for consent to some treatment, and then obtain an answer through a brain scan?

Together with the philosopher Kathinka Evers and the neuroscientist Steven Laureys, I recently investigated ethical and clinical issues arising from this prospective “cerebral communication.”

Our brains are so astonishingly complex! The challenge is how to handle this complexity. To do that we need to develop our conceptual apparatus and create what we would like to call a “fundamental” neuroethics. Sound research needs solid theory, and in line with this I would like to comment upon the conceptual underpinnings of this ongoing endeavor of developing a “fundamental” neuroethics.

The assumption that visualizing activity in a certain brain area can mean reading the conscious intention of the scanned subject presupposes that consciousness can be identified with particular brain areas. While both science and philosophy widely accept that consciousness is a feature of the brain, recent developments in neuroscience call into question the attempt to relate consciousness to specific areas of the brain.

Tricky logical puzzles arise here. The so-called “mereological fallacy” is the error of attributing properties of the whole (the living human person) to its parts (the brain). In our case, a special kind of mereological fallacy risks being committed: attributing features of the whole (the brain) to its parts (the areas visualized as more active in the scan). Consciousness is a feature of the whole brain: the mere fact that a particular area is more active than others does not imply conscious activity.

Reverse inference is another logical pitfall: the fact that a study reveals that a particular cerebral area, say A, is more active during a specific task, say T, does not imply that activity in A always results in T, nor that T always presupposes A.
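The pitfall can be made vivid with Bayes’ theorem and some purely hypothetical numbers (my own illustration): even if area A is almost always active during task T, observing activity in A may barely support the conclusion that T is being performed, because A may also be active during many other tasks.

```python
# Hypothetical numbers (illustrative only) showing why reverse inference
# fails: high P(A active | task T) does not entail high P(task T | A active).

p_T = 0.01              # prior probability that the subject is doing task T
p_A_given_T = 0.95      # area A is almost always active during T
p_A_given_not_T = 0.30  # but A is also often active during other tasks

# Bayes' theorem: P(T | A) = P(A | T) * P(T) / P(A)
p_A = p_A_given_T * p_T + p_A_given_not_T * (1 - p_T)
p_T_given_A = p_A_given_T * p_T / p_A

print(round(p_T_given_A, 3))  # ≈ 0.031: seeing A active barely supports T
```

The asymmetry between the two conditional probabilities is exactly the gap that reverse inference illegitimately jumps over.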

In short, we should avoid the conceptual temptation to view consciousness according to the so-called “homunculus theory,” as an entity located in a particular cerebral area. This is implausible: consciousness does not reside in specific brain regions, but is rather equivalent to the activity of the brain as a whole.

But where, then, is consciousness? To put it roughly, it is nowhere and everywhere in the brain. Consciousness is a feature of the brain, and the brain is more than the sum of its parts: it is an open system, whose structure and function can be influenced by external factors, which in turn affects our consciousness. Brain and consciousness are continually changing, in deep relationship with the external environment.

We address these issues in more detail in a forthcoming book that Kathinka Evers and I are editing, involving leading researchers both in neuroscience and in philosophy.

Michele Farisco

We want solid foundations - the Ethics Blog

 

Neuroethics: new wine in old bottles?

Michele Farisco

Neuroscience increasingly raises philosophical, ethical, legal and social problems concerning old issues which are now approached in a new way: consciousness, freedom, responsibility and the self are today investigated in a new light by so-called neuroethics.

Neuroethics was conceived as a field deserving its own name at the beginning of the 21st century. Yet philosophy is much older, and its interest in “neuroethical” issues can be traced back to its very origins.

What is “neuroethics”? Is it a new way of doing ethics, or a new way of thinking about ethics? Is it a sub-field of bioethics, or does it stand as a discipline in its own right? Is it only a practical discipline, or also a conceptual one?

I would like to suggest that neuroethics – besides the classical division between “ethics of neuroscience” and “neuroscience of ethics” – above all needs to be developed as a conceptual assessment of what neuroscience is telling us about our nature. The progress of neuroscientific investigation has been impressive in recent years, and in light of the huge investments in this field (e.g., the European Human Brain Project and the American BRAIN Initiative) we can bet that new striking discoveries will be made in the coming decades.

For millennia, philosophers were interested in exploring what was generally referred to as human nature, and particularly the mind as one of its essential dimensions. Two avenues have been traditionally developed within the general conception of mind: a non-materialistic and idealistic approach (the mind is made of a special stuff non-reducible to the brain); and a materialistic approach (the mind is no more than a product or a property of the brain).

Both interpretations assume a dualistic theoretical framework: the human being is constituted by two completely different dimensions, which have completely different properties, with no interrelations between them or, at most, a relationship mediated solely by an external element. Such a dualistic approach to human identity is increasingly criticized by contemporary neuroscience, which is revealing the plastic and dynamic nature of the human brain, and consequently of the human mind.

This example illustrates, in my view, that neuroethics is above all a philosophical discipline with a peculiar interdisciplinary status: it can be a privileged field where philosophy and science collaborate in order to conceptually cross the wall that has been built between them.

Michele Farisco

We transgress disciplinary borders - the Ethics Blog