The challenge to simulate the brain

October 7, 2015

Is it possible to create a computer simulation of the human brain? Perhaps, perhaps not. But right now, a group of scientists is trying. It is not only finding enough computer power that makes it difficult: there are some very real philosophical challenges too.

Computer simulation of the brain is one of the most ambitious goals of the European Human Brain Project. As a philosopher, I am part of a group that looks at the philosophical and ethical issues, such as: What is the impact of neuroscience on social practice, particularly on clinical practice? What are the conceptual underpinnings of neuroscientific investigation and its impact on traditional ideas, like the human subject, free will, and moral agency? If you follow the Ethics Blog, you might have heard of our work before (“Conversations with seemingly unconscious patients”; “Where is consciousness?”).

One of the questions we ask ourselves is: What is a simulation in general, and what is a brain simulation in particular? Roughly, the idea is to create an object that resembles the functional and, if possible, also the structural characteristics of the brain, in order to improve our understanding of the brain and our ability to predict its future development. Simulating the brain could be defined as an attempt to develop a mathematical model of the cerebral functional architecture and to load it onto a computer in order to artificially reproduce its functioning. But why should we reproduce brain functioning?

I can see three reasons: describing, explaining and predicting cerebral activities. The implications are huge. In clinical practice with neurological and psychiatric patients, simulating the damaged brain could help us understand it better and predict its future developments, and also refine current diagnostic and prognostic criteria.
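At a vastly smaller scale, the core idea – turn a mathematical model of neural function into a program and run it – can be sketched with a single textbook neuron. The leaky integrate-and-fire model below is a standard classroom simplification, not one of the Human Brain Project's actual models, and all parameter values are purely illustrative.

```python
# Minimal leaky integrate-and-fire neuron: a toy instance of "loading a
# mathematical model of neural function onto a computer".
# dV/dt = (-(V - V_rest) + R*I) / tau ; spike and reset when V >= V_thresh.

def simulate_lif(current, dt=0.1, t_max=100.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
                 r=10.0, tau=10.0):
    """Integrate the membrane potential with Euler's method.

    current: constant input current (nA); dt and t_max in ms.
    Returns the list of spike times (ms).
    """
    v = v_rest
    spikes = []
    for step in range(int(t_max / dt)):
        dv = (-(v - v_rest) + r * current) / tau
        v += dv * dt
        if v >= v_thresh:          # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset            # reset the membrane potential
    return spikes

# A stronger input current makes the model neuron fire more often.
weak = simulate_lif(current=1.6)
strong = simulate_lif(current=3.0)
print(len(weak), len(strong))
```

Even this one-equation model already exhibits the three uses named above: it describes a firing pattern, explains it via a membrane mechanism, and predicts how the pattern changes when the input changes.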

Great promises, but also great challenges lie ahead of us! Let me now turn to the challenges that I believe can be envisaged from a philosophical and conceptual perspective.

A model is in some respects simplified and arbitrary: the selection of parameters to include depends on the goals of the model to be built. This is particularly challenging when the object being simulated is characterized by a high degree of complexity.

The main method used for building models of the brain is “reverse engineering.” This method includes two main steps: dissecting a functional system at the physical level into component parts or subsystems, and then reconstructing the system virtually. Yet the brain hardly seems decomposable into independent modules with linear interactions. The brain rather appears to be a complex, integrated system, and the relationships between its components are non-linear: a change in one component is not related to a change in another by a constant multiplier, so their interactions cannot be described as direct proportionality. To complicate things further, the brain is not completely definable by algorithmic methods, which means that it can show unpredicted behavior. And to make it even more complex: the relationships between the brain’s subcomponents affect the behavior of those subcomponents themselves.
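The contrast between linear and non-linear coupling can be made concrete with a toy two-component system (purely illustrative, not a brain model). In the linear case, doubling the input exactly doubles the response; in the coupled non-linear case the two components modulate each other, so the response of each part depends on the state of the whole and no constant multiplier relates input to output.

```python
import math

def linear_response(x):
    # Linear system: output is proportional to input (constant multiplier).
    return 3.0 * x

def coupled_response(x, steps=100, dt=0.01):
    # Toy non-linear system of two mutually coupled components a and b:
    #   da/dt = -a + tanh(b + x)
    #   db/dt = -b + tanh(2 * a)
    # Neither component can be solved in isolation from the other.
    a = b = 0.0
    for _ in range(steps):
        da = -a + math.tanh(b + x)
        db = -b + math.tanh(2.0 * a)
        a += da * dt
        b += db * dt
    return a

# Doubling the input exactly doubles a linear response...
print(linear_response(2.0) / linear_response(1.0))   # ratio is 2.0
# ...but not the response of the coupled non-linear system.
print(coupled_response(2.0) / coupled_response(1.0))
```

The point of the sketch is only the failure of proportionality: decomposing such a system into its two equations and studying each alone would misdescribe how either behaves in the coupled whole.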

The brain is a holistic system, and despite being deterministic it is still not totally predictable. Simulating it is hardly conceivable. But even if it should prove possible, I am afraid that a new “artificial” brain would have limited practical utility: for instance, a prospective general simulation of the brain risks losing the specific characteristics of the particular brain under treatment.

Furthermore, it is impossible to simulate “the brain” simply because such an entity doesn’t exist. We have billions of different brains in the world. They are not completely similar, even if they are comparable. Abstracting from such diversity is the major limitation of brain simulation. Perhaps it would be possible to overcome this limitation by using a “general” brain simulation as a template to simulate “particular” brains. But maybe this would be even harder to conceive and realize.

Brain simulation is indeed one of the most promising contemporary scientific enterprises, but it needs a specific conceptual investigation in order to clarify the philosophy that inspires it and to avoid misinterpretations and disproportionate expectations – among lay people, but not only among them.

If you want to know more, I recommend having a look at a report of our publications so far.

Michele Farisco

We like challenging questions - the Ethics Blog


Our publications on neuroethics and philosophy of the brain

June 30, 2015

At CRB, an international, multidisciplinary research group works with ethical and philosophical questions that are associated with the neuroscientific exploration of the human mind and brain.

As part of the European Human Brain Project, they approach not only ethical questions that arise, or may arise, with the development and practical application of neuroscience. They also more fundamentally explore philosophical questions about, for example, the concepts of consciousness, human identity, and the self.

In order to give an overview of their extensive work, we recently compiled a report of their articles, books and book chapters. It is available online:

The report also contains abstracts of all the publications. – Have a look at the compilation; I’m sure you will find it fascinating!

I might add that we recently updated similar reports on our work in biobank ethics and in nursing ethics:

Here too you’ll find abstracts of our interesting publications in these fields.

Pär Segerdahl

Approaching future issues - the Ethics Blog


Where is consciousness?

May 26, 2015


Would it be possible to use brain imaging techniques to detect consciousness and then “read” directly in people’s brains what they want or do not want? Could one, for example, ask a severely brain-injured patient for consent to some treatment, and then obtain an answer through a brain scan?

Together with the philosopher Kathinka Evers and the neuroscientist Steven Laureys, I recently investigated ethical and clinical issues arising from this prospective “cerebral communication.”

Our brains are so astonishingly complex! The challenge is how to handle this complexity. To do that we need to develop our conceptual apparatus and create what we would like to call a “fundamental” neuroethics. Sound research needs solid theory, and in line with this I would like to comment upon the conceptual underpinnings of this ongoing endeavor of developing a “fundamental” neuroethics.

The assumption that visualizing activity in a certain brain area can mean reading the conscious intention of the scanned subject presupposes that consciousness can be identified with particular brain areas. While both science and philosophy widely accept that consciousness is a feature of the brain, recent developments in neuroscience call into question the identification of consciousness with specific areas of the brain.

Tricky logical puzzles arise here. The so-called “mereological fallacy” is the error of attributing properties of the whole (the living human person) to its parts (the brain). In our case, a special kind of mereological fallacy risks being committed: attributing features of the whole (the brain) to its parts (the areas visualized as more active in the scan). Consciousness is a feature of the whole brain: the mere fact that a particular area is more active than others does not imply conscious activity.

The reverse inference is another tricky logical pitfall: the fact that a study reveals that a particular cerebral area, say A, is more active during a specific task, say T, does not imply that activity in A always results in T, nor that T always presupposes A.
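In probabilistic terms, reverse inference confuses P(task | activation) with P(activation | task). A toy Bayes' rule calculation – all the base rates and likelihoods below are invented for illustration – shows how an area can be reliably active during a task and yet be weak evidence for that task, simply because the area is also often active during other mental activity.

```python
def posterior(p_activation_given_task, p_task, p_activation_given_not_task):
    """Bayes' rule: P(task | activation)."""
    p_not_task = 1.0 - p_task
    p_activation = (p_activation_given_task * p_task
                    + p_activation_given_not_task * p_not_task)
    return p_activation_given_task * p_task / p_activation

# Invented numbers: area A is almost always active during task T...
p_a_given_t = 0.95
# ...but T is rare among mental tasks, and A also lights up often otherwise.
p_t = 0.05
p_a_given_not_t = 0.40

# P(T | A) comes out low despite P(A | T) = 0.95.
print(round(posterior(p_a_given_t, p_t, p_a_given_not_t), 3))
```

With these (made-up) numbers the posterior is about 0.11: observing activation in A, on its own, would be very poor grounds for concluding that the subject is performing T.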

In short, we should avoid the conceptual temptation to view consciousness according to the so-called “homunculus theory”: as an entity located in a particular cerebral area. Consciousness does not reside in specific brain regions, but is rather equivalent to the activity of the brain as a whole.

But where is consciousness? To put it roughly, it is nowhere and everywhere in the brain. Consciousness is a feature of the brain and the brain is more than the sum of its parts: it is an open system, where external factors can influence its structure and function, which in turn affects our consciousness. Brain and consciousness are continually changing in deep relationships with the external environment.

We address these issues in more detail in a forthcoming book that Kathinka Evers and I are editing, involving leading researchers in both neuroscience and philosophy:

Michele Farisco

We want solid foundations - the Ethics Blog



Neuroethics: new wine in old bottles?

April 7, 2015

Neuroscience increasingly raises philosophical, ethical, legal and social problems concerning old issues that are now approached in a new way: consciousness, freedom, responsibility and the self are today investigated in a new light by so-called neuroethics.

Neuroethics was conceived as a field deserving its own name at the beginning of the 21st century. Yet philosophy is much older, and its interest in “neuroethical” issues can be traced back to its very origins.

What is “neuroethics”? Is it a new way of doing ethics, or a new way of thinking about ethics? Is it a sub-field of bioethics, or does it stand as a discipline in its own right? Is it only a practical discipline, or also a conceptual one?

I would like to suggest that neuroethics – beyond the classical division between “ethics of neuroscience” and “neuroscience of ethics” – above all needs to be developed as a conceptual assessment of what neuroscience is telling us about our nature. The progress of neuroscientific investigation has been impressive in recent years, and in the light of the huge investments in this field (e.g., the European Human Brain Project and the American BRAIN Initiative) we can bet that striking new discoveries will be made in the coming decades.

For millennia, philosophers were interested in exploring what was generally referred to as human nature, and particularly the mind as one of its essential dimensions. Two avenues have traditionally been developed within the general conception of the mind: a non-materialistic, idealistic approach (the mind is made of a special stuff, not reducible to the brain); and a materialistic approach (the mind is no more than a product or property of the brain).

Both interpretations assume a dualistic theoretical framework: the human being is constituted by two completely different dimensions, which have completely different properties and either no interrelations or, at most, a relationship mediated solely by an external element. Such a dualistic approach to human identity is increasingly criticized by contemporary neuroscience, which is revealing the plastic and dynamic nature of the human brain, and consequently of the human mind.

This example illustrates, in my view, that neuroethics is above all a philosophical discipline with a peculiar interdisciplinary status: it can be a privileged field where philosophy and science collaborate in order to conceptually cross the wall that has been built between them.

Michele Farisco

We transgress disciplinary borders - the Ethics Blog


How can the brain be computer simulated?

October 29, 2014

Pär Segerdahl, Associate Professor of Philosophy and editor of The Ethics Blog

A computer-simulated human brain – that undoubtedly sounds like science fiction. But the EU flagship project, the Human Brain Project, actually has computer simulation of the brain as an objective.

What will be accomplished during the ten years that the project is financed will presumably be simulations of more limited brain functions (often in the mouse brain). But the proud objective to simulate the human brain has now been formulated in a serious research project.

But what does “computer simulation of the brain” mean?

In an article in the journal Neuron, Kathinka Evers and Yadin Dudai discuss the meaning of simulating the brain. Kathinka Evers from CRB leads the philosophical research in the EU project, and Yadin Dudai is a neuroscientist from the Weizmann Institute of Science who also works in the project.

The article combines philosophical and scientific vantage points to clarify the type of simulation that is relevant in neuroscience and what goals it may have. Several of the questions in the article are relevant also for the simulation of more limited brain functions – for example, the question whether the ability to make a computer simulation of a brain function means that you understand it.

The most thought-provoking questions, however, concern the big (but distant) goal to simulate a whole human brain. Is it possible in principle, given that the brain is embedded in the body and is in constant interaction with it? Is it possible, given that the brain interacts not only with the body but also with a social environment?

Does simulating the brain require that one also simulates the brain’s interaction with the body and the social context in which it operates? Kathinka Evers thinks so. The attempt to simulate the brain is too limited if one does not start out from the fact that the brain is in constant interaction with an environment that constantly changes it.

The brain must be understood (and simulated) as an “experienced brain.”

Suppose that one day one manages to simulate an experienced human brain in intensive interaction with a bodily and social environment. Has one then simulated a brain so well that one has created consciousness?

The questions in the article are many and breathtaking – read it!

Pär Segerdahl

We like challenging questions - the Ethics Blog


Conversations with seemingly unconscious patients

September 23, 2014

Research and technology change us: they change the way we live, speak and think. One area of research that will change us in the future is brain research. Here are some remarkable discoveries about seemingly unconscious patients; discoveries that we still don’t know how to make intelligible or relate to.

A young woman survived a car accident but sustained such serious injuries that she was judged to be in a vegetative state, without consciousness. When sentences were spoken to her and her neural responses were measured through fMRI, however, it was discovered that her brain responded equivalently to conscious control subjects’ brains. Was she conscious, although she appeared not to be?

To get more clarity the research team asked the woman to perform two different mental tasks. The first task was to imagine that she was playing tennis; the other that she visited her house. Once again the measured brain activation was equivalent to that of the conscious control subjects.

She is not the only case. Similar responses have been measured in other patients who, according to international guidelines, were unconscious. Some have learned to respond appropriately to yes/no questions, such as “Is your mother’s name Yolande?” They respond by mentally performing different tasks – say, imagining squeezing their right hand for “yes” and moving all their toes for “no.” Their neural responses are then measured.
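The decision rule such a protocol relies on can be sketched very roughly. Everything in the sketch below is hypothetical – the function name, the two activation measures, and the margin are invented for illustration, and real studies use full fMRI classification pipelines rather than two numbers – but it captures the logic: the patient is told in advance which imagined task means which answer, and the answer is read off from which task-specific activation pattern dominates.

```python
def decode_answer(hand_activation, toes_activation, margin=0.2):
    """Hypothetical yes/no decoder (all names and the margin are invented).

    Convention agreed with the patient beforehand:
      imagining squeezing the right hand -> "yes"
      imagining moving all the toes      -> "no"
    Returns "yes", "no", or "inconclusive" when neither signal clearly
    dominates -- the clinically cautious default.
    """
    if hand_activation - toes_activation > margin:
        return "yes"
    if toes_activation - hand_activation > margin:
        return "no"
    return "inconclusive"

# "Is your mother's name Yolande?" -- three hypothetical measurements:
print(decode_answer(0.9, 0.1))   # clear hand imagery
print(decode_answer(0.2, 0.8))   # clear toes imagery
print(decode_answer(0.5, 0.45))  # no clear winner
```

The “inconclusive” branch matters: as the discussion of informed consent below makes clear, what is ethically at stake is precisely how much weight such a decoded answer can bear.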

There is already technology that connects brain and computer, and people learn to use these “neuro-prosthetics” without using their muscles. This raises the question whether, in the future, one may be able to communicate with some patients who today would be diagnosed as unconscious.

– Should one then begin to ask these patients about informed consent for different treatments?

Here at CRB, researchers are working with such neuroethical issues within a big European research effort: the Human Brain Project. Within this project, Kathinka Evers leads the work on ethical and societal implications of brain research, and Michele Farisco is writing his (second) thesis in the project, supervised by Kathinka.

Michele Farisco’s thesis deals with disorders of consciousness. I just read an exciting book chapter that Michele authored with Kathinka and Steven Laureys (one of the neuroscientists in the field):

They present developments in the field and discuss the possibility of informed consent from some seemingly unconscious patients. They point out that informed consent has meaning only if there is a relationship between doctor/researcher and patient, which requires communication. This condition may be met if the technology evolves and people learn to use it.

But it is still unclear, they argue, whether all requirements for informed consent can be satisfied. In order to give informed consent, patients must understand what they agree to. This is usually checked by asking patients to describe in their own words what the doctor/researcher communicated, which cannot be done through yes/no communication via neuroimaging. Furthermore, the patient must understand that the information applies to him or her at a certain time, and it is unclear whether these patients, who are detached from the course of everyday life and have suffered serious brain injury, have that understanding. Finally, the patient must be emotionally able to evaluate different alternatives. Whether this condition is met is also unclear.

It may seem early to discuss ethical issues related to discoveries that we don’t even know how to make intelligible. I think on the contrary that it can pave the way for emerging intelligibility. A personal reflection explains what I mean.

It is tempting to think that neuroscience must first determine whether the patients above are unconscious or not, by answering “the big question” how consciousness arises and becomes disturbed or inhibited in the brain. Only then can we understand these remarkable discoveries, and only then can practical applications and ethical implications be developed.

My guess is that practical technological applications, and human responses to their use, are rather the venues for the intelligibility that is required for further scientific development. A brain does not give consent, but perhaps a seemingly unconscious patient with a neuro-prosthesis does. How future technology-supported communication with such patients takes shape – how it works in practice and changes what we can meaningfully do, say and think – will guide future research. It is on this science-and-technology-supported playing field that we might be able to ask and determine what we thought neuroscience had to determine beforehand, and on its own, by answering a “big question.”

After all, isn’t it on this playing field that we now begin to ask if some seemingly unconscious patients are conscious?

Ethics does not always run behind research, developing its “implications.” Perhaps neuro-ethics and neuroscience walk hand in hand. Perhaps neuroscience needs neuro-ethics.

Pär Segerdahl

In dialogue with patients


Human and animal: where is the frontline?

January 7, 2013

Yesterday I read Lars Hertzberg’s thoughtful blog, Language is things we do. His latest post drew my attention to a militant humanist, Raymond Tallis (who resembles another militant humanist, Roger Scruton).

Tallis published Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity. He summarizes his book in this presentation on YouTube.

Tallis gesticulates violently. As if he were a Knight of the Human Kingdom, he defends humanity against an invasion of foreign neuroscientific and biological terms. Such bio-barbarian discourses reduce us to the same level of organic life as that of the brutes, living far away from civilization, in the rainforest and on the savannah.

Tallis promises to restore our former glory. Courageously, he states what every sane person must admit: WE are not like THEM.

Tallis is right that there is an intellectual invasion of biological discourses, led by generals like Richard Dawkins and Daniel Dennett. There is a need to defend oneself. – But how? Who would I be defending? Who am I, as a human? And where do I find the front line?

The notions of human life that Tallis defends are the ordinary ones belonging to everyday language. I have the impression, though, that Tallis fails to see the material practices involved in language use. Instead, he abstracts and reifies these notions as if they denoted a sublime and self-contained sphere: a uniquely human subjectivity; one that hopefully will be explained in the future, when the proper civilized terms of human intentionality are discovered. – We just have not found them yet.

Only a future genius of human subjectivity can reveal the truth about consciousness. Peace in the Human Kingdom will be restored, after the wars of modernity and bio-barbarism.

Here are two examples of how Tallis reifies the human world as a nature-transcendent sphere:

  • “We have stepped out of our organic body.”
  • “The human world transcends the organism Homo sapiens as it was delivered by Darwinian evolution hundreds of thousands of years ago.”

Once upon a time we were just animals. Then we discovered how to make a human world out of mere animal lives. – Is this a fairy tale?

Let us leave this fantasy and return to the forms of language use that Tallis abstracts and reifies. A striking fact immediately appears: Tallis is happy to use bio-barbarian discourse to describe animal lives, as if such terms literally applied to animals. He uncritically accepts that animal eating can be reduced to “exhibiting feeding behavior,” while humans are said to “dine together.”

The fact, then, is that Tallis does not see any need to pay closer attention to the lives of animals, or to defend animals against the bio-barbarism that he fights as a Knight of the Human Kingdom.

This may make you think that Tallis at least succeeds in restoring human glory; that he fails only on the animal front (being, after all, a humanist). But he fails to pay attention also to what is human. Since he abstracts and reifies the notions of human life, his dualistic vision combines bio-barbarian jargon about animals with phantasmagoric reifications of what is human.

The front line is in language. It arises in a failure to speak attentively.

When talking about animals is taken as seriously as talking about humans, we foster forms of sensitivity to human-animal relations that are crushed in Raymond Tallis’ militant combination of bio-barbarian discourses for animals with fantasy-like elevations of a “uniquely human world.”

The human/animal dichotomy does not reflect how the human world transcends the animal organism. It reflects how humanism fails to speak responsibly.

Pär Segerdahl

Minding our language - the Ethics Blog

