A blog from the Centre for Research Ethics & Bioethics (CRB)

Author: Michele Farisco

Searching for consciousness needs conceptual clarification

We can hardly think of ourselves as living persons without referring to consciousness. In fact, we normally define ourselves through two features of our life: we are awake (the level of our consciousness is more than zero), and we are aware of something (our consciousness is not empty).

Since it seems intuitive that our brains are necessary for us to be conscious, it is tempting to think that looking at what is going on in the brain is enough to understand consciousness. But empirical investigation alone is not enough.

Neuroscientific methods for investigating consciousness and its disorders have developed massively in recent decades. The resulting scientific and clinical advances are impressive. But while the ethical and clinical impacts of these advances are often debated and studied, there is little conceptual analysis.

I think of one example in particular, namely, the neuroscience of disorders of consciousness. These are states where a person’s consciousness is more or less severely damaged. Most commonly, we think of patients in a vegetative state, who exhibit a level of consciousness without any content. But there is also the minimally conscious state, with fluctuating levels and contents of consciousness.

How can we explain these complex conditions? Empirical science is usually supposed to be authoritative and to help us assess very important issues, such as consciousness. Yet such scientific knowledge is basically inferential: it is grounded in the comparative assessment of residual consciousness in brain-damaged patients.

Because of its inferential nature, neuroscience takes the form of inductive reasoning: it infers the presence of consciousness from data extracted by neurotechnology, comparing data from brain-damaged patients with data from healthy individuals. Yet this induction is valid only on the basis of a prior definition of consciousness, a definition made within an implicit or explicit theoretical framework. A conceptual assessment of consciousness, defined within a well-developed conceptual framework, is therefore crucial, and it will affect the inference of consciousness from empirical data.

When it comes to disorders of consciousness, there is still no adequate conceptual analysis of the complexity of consciousness: its levels, modes and degrees. Neuroscience often takes for granted a functionalist account in which consciousness is assumed to be equivalent to cognition, or at least to be based in cognition. Yet findings from comatose patients suggest that this is not the case. Instead, consciousness seems to be grounded in the phenomenal functions of the brain as they are related to its resting-state activity.

For empirical neuroscience to be able to contribute to an understanding of consciousness, neuroscientists need input from philosophy. Take the case of communication with speechless patients through neurotechnology (Conversations with seemingly unconscious patients), or the prospective simulation of the brain (The challenge to simulate the brain) for example: here scientists can give philosophers empirical data that need to be considered in order to develop a well-founded conceptual framework within which consciousness can be defined.

The alleged autonomy of empirical science as a source of objective knowledge is problematic. This is why philosophy needs to collaborate with scientists in order to conceptually refine their research methods. On the other hand, dialogue with science is essential for philosophy to be meaningful.

We need a conceptual strategy for clarifying the theoretical framework of neuroscientific inferences. This is what we are trying to do in our CRB neuroethics group as part of the Human Brain Project (Neuroethics and Neurophilosophy).

Michele Farisco

This post in Swedish

We want solid foundations - the Ethics Blog

The challenge to simulate the brain

Is it possible to create a computer simulation of the human brain? Perhaps, perhaps not. But right now, a group of scientists is trying. It is not only finding enough computing power that makes it difficult: there are some very real philosophical challenges too.

Computer simulation of the brain is one of the most ambitious goals of the European Human Brain Project. As a philosopher, I am part of a group that looks at the philosophical and ethical issues, such as: What is the impact of neuroscience on social practice, particularly on clinical practice? What are the conceptual underpinnings of neuroscientific investigation and its impact on traditional ideas, like the human subject, free will, and moral agency? If you follow the Ethics Blog, you might have heard of our work before (“Conversations with seemingly unconscious patients”; “Where is consciousness?”).

One of the questions we ask ourselves is: What is a simulation in general, and what is a brain simulation in particular? Roughly, the idea is to create an object that resembles the functional and, if possible, also the structural characteristics of the brain, in order to improve our understanding of it and our ability to predict its future development. Simulating the brain could be defined as an attempt to develop a mathematical model of the cerebral functional architecture and to load it onto a computer in order to artificially reproduce the brain’s functioning. But why should we reproduce brain functioning?

I can see three reasons: describing, explaining and predicting cerebral activities. The implications are huge. In clinical practice with neurological and psychiatric patients, simulating the damaged brain could help us understand it better and predict its future developments, and also refine current diagnostic and prognostic criteria.
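To make the idea of “a mathematical model loaded onto a computer” concrete, here is a deliberately tiny sketch: a leaky integrate-and-fire neuron, one of the simplest textbook models of neural activity. All names and parameter values are illustrative assumptions of mine, not anything drawn from the Human Brain Project’s actual models, which are vastly more complex.

```python
# Toy illustration: a leaky integrate-and-fire neuron, one of the simplest
# mathematical models of neural activity. Parameter values are illustrative.

def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-0.070,
                 v_threshold=-0.050, v_reset=-0.070, resistance=1e8):
    """Simulate membrane voltage over time; return spike times (seconds)."""
    v = v_rest
    spikes = []
    for step, current in enumerate(input_current):
        # dv/dt = (-(v - v_rest) + R * I) / tau, integrated with Euler's method
        v += dt * (-(v - v_rest) + resistance * current) / tau
        if v >= v_threshold:          # threshold crossed: the neuron "fires"
            spikes.append(step * dt)
            v = v_reset               # membrane potential resets after a spike
    return spikes

# One second of constant 0.3 nA input produces a regular spike train.
spike_times = simulate_lif([0.3e-9] * 1000)
print(len(spike_times))
```

The point of the sketch is only to show what “reproducing functioning” means at the smallest scale: a differential equation, discretized and stepped forward in time. The gulf between this and a whole brain is precisely where the challenges discussed below arise.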

Great promises, but also great challenges ahead of us! Let me now turn to the challenges that I believe can be envisaged from a philosophical and conceptual perspective.

A model is always in some respects simplified and arbitrary: the selection of parameters to include depends on the goals for which the model is built. This is particularly challenging when the object being simulated is characterized by a high degree of complexity.

The main method used for building models of the brain is “reverse engineering.” This method includes two main steps: dissecting a functional system at the physical level into component parts or subsystems, and then reconstructing the system virtually. Yet the brain hardly seems decomposable into independent modules with linear interactions. It rather appears to be a complex, integrated system in which the relationships between components are non-linear: they cannot be described as a direct proportionality, and the relative change of one component is not related to another by a constant multiplier. To complicate things further, the brain is not completely definable by algorithmic methods, which means that it can show unpredicted behavior. And to make it even more complex: the relationships between the brain’s subcomponents affect the behavior of those subcomponents themselves.

The brain is a holistic system and, despite being deterministic, it is still not totally predictable. Simulating it is hardly conceivable. But even if it were possible, I am afraid that a new “artificial” brain would have limited practical utility: for instance, a prospective general simulation of the brain risks losing the specific characteristics of the particular brain under treatment.

Furthermore, it is impossible to simulate “the brain” simply because such an entity doesn’t exist. We have billions of different brains in the world. They are not completely similar, even if they are comparable. Abstracting from such diversity is the major limitation of brain simulation. Perhaps it would be possible to overcome this limitation by using a “general” brain simulation as a template to simulate “particular” brains. But maybe this would be even harder to conceive and realize.

Brain simulation is indeed one of the most promising contemporary scientific enterprises, but it needs a specific conceptual investigation in order to clarify its underlying philosophy and to avoid misinterpretations and disproportionate expectations, not only among lay people.

If you want to know more, I recommend having a look at a report of our publications so far.

Michele Farisco

We like challenging questions - the Ethics Blog

Where is consciousness?

Would it be possible to use brain imaging techniques to detect consciousness and then “read” directly in people’s brains what they want or do not want? Could one, for example, ask a severely brain injured patient for consent to some treatment, and then obtain an answer through a brain scan?

Together with the philosopher Kathinka Evers and the neuroscientist Steven Laureys, I recently investigated ethical and clinical issues arising from this prospective “cerebral communication.”

Our brains are so astonishingly complex! The challenge is how to handle this complexity. To do that we need to develop our conceptual apparatus and create what we would like to call a “fundamental” neuroethics. Sound research needs solid theory, and in line with this I would like to comment upon the conceptual underpinnings of this ongoing endeavor of developing a “fundamental” neuroethics.

The assumption that visualizing activity in a certain brain area can mean reading the conscious intention of the scanned subject presupposes that consciousness can be identified with particular brain areas. While both science and philosophy widely accept that consciousness is a feature of the brain, recent developments in neuroscience problematize relating consciousness to specific areas of the brain.

Tricky logical puzzles arise here. The so-called “mereological fallacy” is the error of attributing properties of the whole (the living human person) to its parts (the brain). In our case a special kind of mereological fallacy risks being committed: attributing features of the whole (the brain) to its parts (those visualized as more active in the scan). Consciousness is a feature of the whole brain: the mere fact that a particular area is more active than others does not imply conscious activity.

Reverse inference is another logical pitfall: the fact that a study reveals that a particular cerebral area, say A, is more active during a specific task, say T, does not imply that activity in A always results in T, nor that T always presupposes A.
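The gap behind reverse inference can be made explicit with Bayes’ rule. The numbers below are invented for illustration, not taken from any study: even if area A is active in 90% of scans during task T, activation of A may be weak evidence for T when T is rare and A is also active during other tasks.

```python
# Toy numbers (illustrative, not from any study): why high P(activation | task)
# does not imply high P(task | activation).

def reverse_inference(p_act_given_task, p_task, p_act_given_other):
    """P(task | activation) via Bayes' rule over two hypotheses."""
    p_other = 1.0 - p_task
    p_act = p_act_given_task * p_task + p_act_given_other * p_other
    return p_act_given_task * p_task / p_act

# A is active in 90% of scans during task T, but T occurs in only 10% of
# scans, and A is also active in 40% of scans involving other tasks.
p = reverse_inference(p_act_given_task=0.9, p_task=0.1, p_act_given_other=0.4)
print(round(p, 2))  # prints 0.2: observing activity in A is weak evidence for T
```

The arithmetic is trivial, but it shows why “A lights up during T” licenses no direct conclusion in the other direction.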

In short, we should avoid the conceptual temptation to view consciousness according to the so-called “homunculus theory”: as an entity placed in a particular cerebral area. This is unlikely: consciousness does not reside in specific brain regions, but is rather equivalent to the activity of the brain as a whole.

But where is consciousness? To put it roughly, it is nowhere and everywhere in the brain. Consciousness is a feature of the brain and the brain is more than the sum of its parts: it is an open system, where external factors can influence its structure and function, which in turn affects our consciousness. Brain and consciousness are continually changing in deep relationships with the external environment.

We address these issues in more detail in a forthcoming book that Kathinka Evers and I are editing, involving leading researchers in both neuroscience and philosophy.

Michele Farisco

We want solid foundations - the Ethics Blog

Neuroethics: new wine in old bottles?

Neuroscience increasingly raises philosophical, ethical, legal and social problems concerning old issues which are now approached in a new way: consciousness, freedom, responsibility and the self are today investigated in a new light by so-called neuroethics.

Neuroethics was conceived as a field deserving its own name at the beginning of the 21st century. Yet philosophy is much older, and its interest in “neuroethical” issues can be traced back to its very origins.

What is “neuroethics”? Is it a new way of doing ethics, or a new way of thinking about ethics? Is it a sub-field of bioethics? Or does it stand as a discipline in its own right? Is it a merely practical discipline, or also a conceptual one?

I would like to suggest that neuroethics – besides the classical division between “ethics of neuroscience” and “neuroscience of ethics” – above all needs to be developed as a conceptual assessment of what neuroscience is telling us about our nature. Progress in neuroscientific investigation has been impressive in recent years, and in the light of huge investments in this field (e.g., the European Human Brain Project and the American BRAIN Initiative) we can bet that new striking discoveries will be made in the coming decades.

For millennia, philosophers were interested in exploring what was generally referred to as human nature, and particularly the mind as one of its essential dimensions. Two avenues have been traditionally developed within the general conception of mind: a non-materialistic and idealistic approach (the mind is made of a special stuff non-reducible to the brain); and a materialistic approach (the mind is no more than a product or a property of the brain).

Both interpretations assume a dualistic theoretical framework: the human being is constituted by two completely different dimensions, which have completely different properties and no interrelations, or at most a relationship mediated by an external element. Such a dualistic approach to human identity is increasingly criticized by contemporary neuroscience, which is revealing the plastic and dynamic nature of the human brain and consequently of the human mind.

This example illustrates in my view that neuroethics above all is a philosophical discipline with a peculiar interdisciplinary status: it can be a privileged field where philosophy and science collaborate in order to conceptually cross the wall which has been built between them.

Michele Farisco

We transgress disciplinary borders - the Ethics Blog
