A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: consciousness

Searching for consciousness needs conceptual clarification

We can hardly think of ourselves as living persons without referring to consciousness. In fact, we normally define ourselves through two features of our life: we are awake (the level of our consciousness is more than zero), and we are aware of something (our consciousness is not empty).

Since it is quite intuitive that our brains are necessary for us to be conscious, it is tempting to think that looking at what is going on in the brain is enough to understand consciousness. But empirical investigation alone is not enough.

Neuroscientific methods for investigating consciousness and its disorders have developed massively over recent decades. The scientific and clinical advances that have resulted are impressive. But while the ethical and clinical impacts of these advances are often debated and studied, there is little conceptual analysis.

I think of one example in particular, namely the neuroscience of disorders of consciousness. These are states where a person’s consciousness is more or less severely damaged. Most commonly, we think of patients in a vegetative state, who exhibit levels of consciousness without any content. But it could also be a minimally conscious state, with fluctuating levels and contents of consciousness.

How can we explain these complex conditions? Empirical science is usually supposed to be authoritative and to help us assess very important issues, such as consciousness. Such scientific knowledge is basically inferential: it is grounded in the comparative assessment of residual consciousness in brain-damaged patients.

But because of its inferential nature, neuroscience takes the form of inductive reasoning: it infers the presence of consciousness from data extracted by neurotechnology. This is done by comparing data from brain-damaged patients with data from healthy individuals. Yet this induction is valid only on the basis of a prior definition of consciousness, a definition made within an implicit or explicit theoretical framework. A conceptual assessment of consciousness, carried out within a well-developed theoretical framework, is therefore crucial: it shapes how consciousness is inferred from empirical data.

When it comes to disorders of consciousness, there is still no adequate conceptual analysis of the complexity of consciousness: its levels, modes and degrees. Neuroscience often takes for granted a functionalist account of consciousness, in which consciousness is assumed to be equivalent to cognition, or at least to be based in cognition. Yet findings from comatose patients suggest that this is not the case. Instead, consciousness seems to be grounded in the phenomenal functions of the brain, as these are related to the brain’s resting-state activity.

For empirical neuroscience to be able to contribute to an understanding of consciousness, neuroscientists need input from philosophy. Take, for example, communication with speechless patients through neurotechnology (Conversations with seemingly unconscious patients), or the prospective simulation of the brain (The challenge to simulate the brain): here scientists can give philosophers empirical data that need to be considered in order to develop a well-founded conceptual framework within which consciousness can be defined.

The alleged autonomy of empirical science as a source of objective knowledge is problematic. This is why philosophy needs to collaborate with scientists in order to conceptually refine their research methods. On the other hand, dialogue with science is essential for philosophy to be meaningful.

We need a conceptual strategy for clarifying the theoretical framework of neuroscientific inferences. This is what we are trying to do in our CRB neuroethics group as part of the Human Brain Project (Neuroethics and Neurophilosophy).

Michele Farisco

This post in Swedish

We want solid foundations - the Ethics Blog

Resignation syndrome in refugee children – a new hypothesis

There has been much discussion about the so-called “apathetic children” in families seeking asylum in Sweden. You read that right: in Sweden, not in other countries. By all accounts, these children are genuinely ill. They do not simulate their total lack of willpower, their inability to eat, speak and move. They are in a life-threatening condition and show no reactions even to painful stimuli. But why do we have so many cases in Sweden and not in other countries?

Several hundred cases have been reported, which in 2014 led the Swedish National Board of Health and Welfare to introduce a new diagnosis: resignation syndrome. The “Swedish” syndrome appears to be a mystery, almost like a puzzle to crack. There are asylum-seeking families all around the world: why does this syndrome occur to such an extent in a single country?

If you want to think more about this puzzling question, I recommend a new article in Frontiers in Behavioral Neuroscience, with Karl Sallin (PhD student at CRB) as first author. The article is long and technical, but for those interested, it is well worth the effort. It documents what is known about the syndrome and suggests a new hypothesis.

A common explanation of the syndrome is that it is a reaction to stress and depression. The explanation sounds intuitively reasonable, considering these children’s experiences. But if it were true, the syndrome should also occur in other countries. The mystery remains.

Another explanation is that the mother attempts to manage her trauma, her depression and her needs by projecting her problems onto the child. The child, who experiences the mother as its only safety, adapts unconsciously and exhibits the symptoms that the mother treats the child as if it had. This explanation may also seem reasonable, especially considering another peculiarity of the syndrome: it does not affect unaccompanied refugee children, only children who arrive with their families. But the problem remains: traumatized refugee families exist all around the world. So why is the syndrome common only in Sweden?

Now to Sallin’s hypothesis in the article. The hypothesis has two parts: one about the disease or diagnosis itself, and one about the cause of the disease, which may also explain the peculiar distribution.

After a review of symptoms and treatment response, Sallin suggests that we are not dealing with a new disease. The introduced diagnosis, “resignation syndrome,” is therefore inappropriate. We are dealing with a known diagnosis: catatonia, which is characterized by the same loss of motor skills. The children moreover seem to retain awareness, even though their immobility makes them seem unconscious. When they recover, they can often recall events that occurred while they were ill. They just cannot activate any motor skills. The catatonia hypothesis can be tested, Sallin suggests, by trying treatments with known responses in catatonic patients, and by performing PET scans of the brain.

The question then is: Why does catatonia arise only in refugee children in Sweden? That question brings us to the second part of the hypothesis, which has some similarities with the theory that the mother psychologically influences the child to exhibit symptoms: to really have them, not only simulate them!

Here we might make a comparison with placebo and nocebo effects. If it is believed that a pill will have a certain impact on health – positive or negative – the effect can be produced even if the pill contains only a medically inactive substance. Probably, electromagnetic hypersensitivity is a phenomenon of this kind, having psychological causes: a nocebo effect.

The article enumerates cases where it can be suspected that catatonia-like conditions are caused psychologically: unexpected, unexplained sudden death after cancer diagnosis; death epidemics in situations of war and captivity characterized by hopelessness; acute or prolonged death after the utterance of magic death spells (known from several cultures).

The hypothesis is that life-threatening catatonia in refugee children is caused psychologically, in a certain cultural environment. Alternatively, one could say that catatonia is caused in the meeting between certain cultures and Swedish conditions, since it is more common in children from certain parts of the world. We are dealing with a culture-bound psychogenesis.

Sallin compares this with an outbreak of “hysteria” during the latter part of the 1800s, in connection with Jean-Martin Charcot’s famous demonstrations of hysterical patients, when colorful symptom descriptions circulated in the press. Charcot first suggested that hysteria had organic causes. But when he later began to talk about psychological factors behind the symptoms, the number of cases of hysteria dropped.

(Perhaps I should point out that Sallin emphasizes that psychological causes are not to be understood in terms of a mind/body dualism.)

It remains to be examined exactly how the encounter with Swedish conditions contributes to psychologically caused catatonia in children in certain refugee families. But if I understand Sallin correctly, he thinks that the spread of symptom descriptions through mass media, and the ongoing practice of treating “children with resignation syndrome,” might be essential in this context.

If this is true, it creates an ethical problem mentioned in the article. There is no alternative to offering these children treatment: they cannot survive without tube feeding. But offering treatment also causes new cases.

Yes, these children must, of course, be offered care. But maybe Sallin, just by proposing psychological causes of the symptoms, has already contributed to reducing the number of cases in the future. Assuming that his hypothesis of a culture-bound psychogenesis is true, of course.

What a fascinating interplay between belief and truth!

Pär Segerdahl

Sallin, K., Lagercrantz, H., Evers, K., Engström, I., Hjern, A., Petrovic, P., Resignation Syndrome: Catatonia? Culture-Bound? Frontiers in Behavioral Neuroscience, 29 January 2016

This post in Swedish

We like challenging questions - the ethics blog

The challenge to simulate the brain

Is it possible to create a computer simulation of the human brain? Perhaps, perhaps not. But right now, a group of scientists is trying. And it is not only finding enough computing power that makes it difficult: there are some very real philosophical challenges too.

Computer simulation of the brain is one of the most ambitious goals of the European Human Brain Project. As a philosopher, I am part of a group that looks at the philosophical and ethical issues, such as: What is the impact of neuroscience on social practice, particularly on clinical practice? What are the conceptual underpinnings of neuroscientific investigation and its impact on traditional ideas, like the human subject, free will, and moral agency? If you follow the Ethics Blog, you might have heard of our work before (“Conversations with seemingly unconscious patients”; “Where is consciousness?”).

One of the questions we ask ourselves is: What is a simulation in general, and what is a brain simulation in particular? Roughly, the idea is to create an object that resembles the functional and, if possible, also the structural characteristics of the brain, in order to improve our understanding of it and our ability to predict its future development. Simulating the brain could be defined as an attempt to develop a mathematical model of the cerebral functional architecture and to load it onto a computer in order to artificially reproduce its functioning. But why should we reproduce brain functioning?

I can see three reasons: describing, explaining and predicting cerebral activities. The implications are huge. In clinical practice with neurological and psychiatric patients, simulating the damaged brain could help us understand it better and predict its future developments, and also refine current diagnostic and prognostic criteria.
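To give a concrete feel for what it can mean, in its very simplest form, to develop a mathematical model of neural functioning and load it onto a computer, here is a toy sketch (my own illustration, far removed from the large-scale models developed in the Human Brain Project): a single simulated neuron, described by the standard leaky integrate-and-fire equation.

    # Toy example: one leaky integrate-and-fire neuron, among the simplest
    # mathematical models of neural activity, simulated with Euler steps.
    # Parameters and values are illustrative only.

    def simulate_lif(input_current=1.5, t_max=100.0, dt=0.1,
                     tau=10.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
        """Return the spike times (ms) of a neuron driven by a constant current."""
        v = v_rest
        spike_times = []
        for step in range(int(t_max / dt)):
            # Membrane equation: dV/dt = (-(V - V_rest) + I) / tau
            v += dt * (-(v - v_rest) + input_current) / tau
            if v >= v_threshold:
                spike_times.append(step * dt)  # record a spike
                v = v_reset                    # reset the membrane potential
        return spike_times

    print(simulate_lif())  # spike times of the toy neuron over 100 ms

Scaling up from one idealized equation like this to billions of interacting, plastic neurons is precisely where the conceptual and practical challenges begin.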

Great promises, but also great challenges ahead of us! Let me now turn to the challenges that I believe can be envisaged from a philosophical and conceptual perspective.

A model is in some respects simplified and arbitrary: the selection of parameters to include depends on the goals of the model to be built. This is particularly challenging when the object being simulated is characterized by a high degree of complexity.

The main method used for building models of the brain is “reverse engineering.” This method includes two main steps: dissecting a functional system at the physical level into component parts or subsystems, and then reconstructing the system virtually. Yet the brain hardly seems decomposable into independent modules with linear interactions. The brain rather appears to be a complex, integrated system whose components interact non-linearly: their relationship cannot be described as a direct proportionality, and changes in one component are not related to changes in another by a constant multiplier. To complicate things further, the brain is not completely definable by algorithmic methods, which means that it can show unpredicted behavior. And to make it even more complex: the relationship between the brain’s subcomponents affects the behavior of those subcomponents.
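To make the notion of linearity concrete, here is a minimal formal gloss (my own illustration, not taken from the Human Brain Project’s material). A system S is linear if it satisfies superposition and scaling:

    \[
      S(x_1 + x_2) = S(x_1) + S(x_2), \qquad S(k\,x) = k\,S(x)
    \]

That is, the response to combined inputs is just the sum of the separate responses, and doubling an input doubles the output. A nonlinear system violates these identities, which is why the behavior of the whole brain cannot be reconstructed simply by adding up the behavior of independently analysed parts.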

The brain is a holistic system, and despite being deterministic it is still not totally predictable. Simulating it is hardly conceivable. But even if it should prove possible, I am afraid that a new “artificial” brain would have limited practical utility: for instance, a prospective general simulation of the brain risks losing the specific characteristics of the particular brain under treatment.

Furthermore, it is impossible to simulate “the brain” simply because such an entity doesn’t exist. We have billions of different brains in the world. They are not completely similar, even if they are comparable. Abstracting from such diversity is the major limitation of brain simulation. Perhaps it would be possible to overcome this limitation by using a “general” brain simulation as a template to simulate “particular” brains. But maybe this would be even harder to conceive and realize.

Brain simulation is indeed one of the most promising contemporary scientific enterprises, but it needs a specific conceptual investigation in order to clarify the philosophy that inspires it and to avoid misinterpretations and disproportionate expectations, not only among lay people.

If you want to know more, I recommend having a look at a report of our publications so far.

Michele Farisco

We like challenging questions - the ethics blog

Our publications on neuroethics and philosophy of the brain

At CRB, an international, multidisciplinary research group works with ethical and philosophical questions that are associated with the neuroscientific exploration of the human mind and brain.

As part of the European Human Brain Project, they approach not only ethical questions that arise, or may arise, with the development and practical application of neuroscience. They also more fundamentally explore philosophical questions about, for example, the concepts of consciousness, human identity, and the self.

In order to give an overview of their extensive work, we recently compiled a report of their articles, books and book chapters. It is available online.

The report also contains abstracts of all the publications. Have a look at the compilation; I’m sure you will find it fascinating!

I might add that we recently updated similar reports on our work in biobank ethics and in nursing ethics.

Here too you’ll find abstracts of our interesting publications in these fields.

Pär Segerdahl

Approaching future issues - the Ethics Blog

Where is consciousness?

 

Would it be possible to use brain imaging techniques to detect consciousness and then “read” directly in people’s brains what they want or do not want? Could one, for example, ask a severely brain-injured patient for consent to some treatment, and then obtain an answer through a brain scan?

Together with the philosopher Kathinka Evers and the neuroscientist Steven Laureys, I recently investigated ethical and clinical issues arising from this prospective “cerebral communication.”

Our brains are so astonishingly complex! The challenge is how to handle this complexity. To do that, we need to develop our conceptual apparatus and create what we would like to call a “fundamental” neuroethics. Sound research needs solid theory, and in line with this I would like to comment on the conceptual underpinnings of this ongoing endeavor.

The assumption that visualizing activity in a certain brain area can mean reading the conscious intention of the scanned subject presupposes that consciousness can be identified with particular brain areas. While both science and philosophy widely accept that consciousness is a feature of the brain, recent developments in neuroscience problematize relating consciousness to specific areas of the brain.

Tricky logical puzzles arise here. The so-called “mereological fallacy” is the error of attributing properties of the whole (the living human person) to its parts (the brain). In our case, a special kind of mereological fallacy risks being committed: attributing features of the whole (the brain) to its parts (those visualized as more active in the scan). Consciousness is a feature of the whole brain: the mere fact that a particular area is more active than others does not imply conscious activity.

Reverse inference is another subtle logical pitfall: the fact that a study reveals that a particular cerebral area, say A, is more active during a specific task, say T, does not imply that activity in A always indicates T, nor that T always presupposes A.
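One way to see why such inferences are weak (my own illustration, in standard probability notation rather than anything from the original post) is Bayes’ theorem:

    \[
      P(T \mid A) \;=\; \frac{P(A \mid T)\,P(T)}{P(A)}
    \]

A study can show that P(A | T) is high, that is, that area A is reliably active during task T. But P(T | A), the probability that T (or a conscious state) is present given that A is active, also depends on how often A is active during other tasks and states. Unless A is activated selectively by T, observing activity in A tells us little about T.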

In short, we should avoid the conceptual temptation to view consciousness according to the so-called “homunculus theory”: as an entity located in a particular cerebral area. This is unlikely: consciousness does not reside in specific brain regions, but is rather equivalent to the activity of the brain as a whole.

But where is consciousness? To put it roughly, it is nowhere and everywhere in the brain. Consciousness is a feature of the brain and the brain is more than the sum of its parts: it is an open system, where external factors can influence its structure and function, which in turn affects our consciousness. Brain and consciousness are continually changing in deep relationships with the external environment.

We address these issues in more detail in a forthcoming book that Kathinka Evers and I are editing, involving leading researchers in both neuroscience and philosophy.

Michele Farisco

We want solid foundations - the Ethics Blog

 

Neuroethics: new wine in old bottles?

Neuroscience increasingly raises philosophical, ethical, legal and social problems concerning old issues that are now approached in a new way: consciousness, freedom, responsibility and the self are today investigated in a new light by so-called neuroethics.

Neuroethics was conceived as a field deserving its own name at the beginning of the 21st century. Yet philosophy is much older, and its interest in “neuroethical” issues can be traced back to its very origins.

What is “neuroethics”? Is it a new way of doing ethics, or a new way of thinking about ethics? Is it a sub-field of bioethics? Or does it stand as a discipline in its own right? Is it merely a practical discipline, or also a conceptual one?

I would like to suggest that neuroethics – besides the classical division between “ethics of neuroscience” and “neuroscience of ethics” – above all needs to be developed as a conceptual assessment of what neuroscience is telling us about our nature. Progress in neuroscientific investigation has been impressive in recent years, and in the light of huge investments in this field (e.g., the European Human Brain Project and the American BRAIN Initiative) we can bet that new striking discoveries will be made in the coming decades.

For millennia, philosophers have been interested in exploring what is generally referred to as human nature, and particularly the mind as one of its essential dimensions. Two avenues have traditionally been developed within the general conception of mind: a non-materialistic and idealistic approach (the mind is made of a special stuff that is not reducible to the brain), and a materialistic approach (the mind is no more than a product or a property of the brain).

Both interpretations assume a dualistic theoretical framework: the human being is constituted from two completely different dimensions, which have completely different properties with no interrelations between them, or, at most, a relationship mediated solely by an external element. Such a dualistic approach to human identity is increasingly criticized by contemporary neuroscience, which is showing the plastic and dynamic nature of the human brain and consequently of the human mind.

This example illustrates in my view that neuroethics above all is a philosophical discipline with a peculiar interdisciplinary status: it can be a privileged field where philosophy and science collaborate in order to conceptually cross the wall which has been built between them.

Michele Farisco

We transgress disciplinary borders - the Ethics Blog

How can the brain be computer simulated?

A computer-simulated human brain – that undoubtedly sounds like science fiction. But the EU flagship project, the Human Brain Project, actually has computer simulation of the brain as an objective.

What will be accomplished during the ten years that the project is financed will presumably be simulations of more limited brain functions (often in the mouse brain). But the proud objective to simulate the human brain has now been formulated in a serious research project.

But what does “computer simulation of the brain” mean?

In an article in the journal Neuron, Kathinka Evers and Yadin Dudai discuss the meaning of simulation of the brain. Kathinka Evers from CRB leads the philosophical research in the EU project, and Yadin Dudai is a neuroscientist from the Weizmann Institute of Science who also works in the project.

The article combines philosophical and scientific vantage points to clarify the type of simulation that is relevant in neuroscience and what goals it may have. Several of the questions in the article are also relevant to the simulation of more limited brain functions. For example, the question whether the ability to make a computer simulation of a brain function means that you understand it.

The most thought-provoking questions, however, concern the big (but distant) goal to simulate a whole human brain. Is it possible in principle, given that the brain is embedded in the body and is in constant interaction with it? Is it possible, given that the brain interacts not only with the body but also with a social environment?

Does simulating the brain require that one also simulates the brain’s interaction with the body and the social context in which it operates? Kathinka Evers thinks so. The attempt to simulate the brain is too limited if one does not start out from the fact that the brain is in constant interaction with an environment that constantly changes it.

The brain must be understood (and simulated) as an “experienced brain.”

Suppose that one day one manages to simulate an experienced human brain in intensive interaction with a bodily and social environment. Has one then simulated a brain so well that one created consciousness?

The questions in the article are many and breathtaking – read it!

Pär Segerdahl

We like challenging questions - the ethics blog

Conversations with seemingly unconscious patients

Research and technology change us: they change the way we live, speak and think. One area of research that will change us in the future is brain research. Here are some remarkable discoveries about some seemingly unconscious patients; discoveries that we still don’t know how to make intelligible or relate to.

A young woman survived a car accident but suffered such serious injuries that she was judged to be in a vegetative state, without consciousness. When sentences were spoken to her and her neural responses were measured through fMRI, however, it was discovered that her brain responded in the same way as the brains of conscious control subjects. Was she conscious, although she appeared to be completely unconscious?

To get more clarity, the research team asked the woman to perform two different mental tasks. The first task was to imagine that she was playing tennis; the other, to imagine that she was visiting her house. Once again, the measured brain activation was equivalent to that of the conscious control subjects.

She is not the only case. Similar responses have been measured in other patients who, according to international guidelines, were unconscious. Some have learned to respond appropriately to yes/no questions, such as “Is your mother’s name Yolande?” They respond by mentally performing different tasks – let’s say, imagining squeezing their right hand for “yes” and moving all their toes for “no.” Their neural responses are then measured.
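To illustrate the bare logic of such yes/no communication, here is a toy sketch (my own, with invented names, numbers and thresholds; the real studies rely on proper statistical analysis of fMRI or EEG data, not a single threshold):

    # Toy sketch: attribute "yes", "no" or "uncertain" from activation levels in
    # two regions of interest (ROIs), one associated with each imagery task.
    # All names and values are invented for illustration only.

    def decode_answer(hand_imagery_activation: float,
                      toe_imagery_activation: float,
                      margin: float = 1.0) -> str:
        """Return 'yes' if the hand-imagery ROI clearly dominates,
        'no' if the toe-imagery ROI clearly dominates, else 'uncertain'."""
        difference = hand_imagery_activation - toe_imagery_activation
        if difference > margin:
            return "yes"
        if difference < -margin:
            return "no"
        return "uncertain"

    # Example: activation values (arbitrary units) measured while a question is asked.
    print(decode_answer(hand_imagery_activation=3.2, toe_imagery_activation=0.8))  # -> yes
    print(decode_answer(hand_imagery_activation=0.9, toe_imagery_activation=1.1))  # -> uncertain

The decision structure is the point: an answer is attributed only when one imagery pattern clearly dominates; otherwise no answer is recorded.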

There is already technology that connects brain and computer. People learn to use these “neuro-prosthetics” without using their muscles. This raises the question whether, in the future, one may be able to communicate with some patients who today would be diagnosed as unconscious.

– Should one then begin to ask these patients about informed consent for different treatments?

Here at CRB, researchers are working on such neuro-ethical issues within a big European research effort: the Human Brain Project. Within this project, Kathinka Evers leads the work on ethical and societal implications of brain research, and Michele Farisco writes his (second) thesis in the project, supervised by Kathinka.

Michele Farisco’s thesis deals with disorders of consciousness. I just read an exciting book chapter that Michele authored with Kathinka and Steven Laureys (one of the neuroscientists in the field).

They present developments in the field and discuss the possibility of informed consent from some seemingly unconscious patients. They point out that informed consent has meaning only if there is a relationship between doctor/researcher and patient, which requires communication. This condition may be met if the technology evolves and people learn to use it.

But it is still unclear, they argue, whether all requirements for informed consent are satisfied. In order to give informed consent, patients must understand what they agree to. This is usually checked by asking patients to describe in their own words what the doctor/researcher communicated, which cannot be done through yes/no communication via neuroimaging. Furthermore, the patient must understand that the information applies to him or her at a certain time, and it is unclear whether these patients, who are detached from the course of everyday life and have suffered serious brain injury, have that understanding. Finally, the patient must be emotionally able to evaluate different alternatives. Whether this condition is satisfied is also unclear.

It may seem early to discuss ethical issues related to discoveries that we don’t even know how to make intelligible. I think on the contrary that it can pave the way for emerging intelligibility. A personal reflection explains what I mean.

It is tempting to think that neuroscience must first determine whether the patients above are unconscious or not, by answering “the big question” how consciousness arises and becomes disturbed or inhibited in the brain. Only then can we understand these remarkable discoveries, and only then can practical applications and ethical implications be developed.

My guess is that practical technological applications, and human responses to their use, are rather the venues for the intelligibility that is required for further scientific development. A brain does not give consent, but perhaps a seemingly unconscious patient with a neuro-prosthesis does. How future technology-supported communication with such patients takes shape – how it works in practice and changes what we can meaningfully do, say and think – will guide future research. It is on this science-and-technology supported playing field that we might be able to ask and determine what we thought neuroscience had to determine beforehand, and on its own, by answering a “big question.”

After all, isn’t it on this playing field that we now begin to ask if some seemingly unconscious patients are conscious?

Ethics does not always run behind research, developing its “implications.” Perhaps neuro-ethics and neuroscience walk hand in hand. Perhaps neuroscience needs neuro-ethics.

Pär Segerdahl

In dialogue with patients

Human and animal: where is the frontline?

Yesterday I read Lars Hertzberg’s thoughtful blog, Language is things we do. His latest post drew my attention to a militant humanist, Raymond Tallis (who resembles another militant humanist, Roger Scruton).

Tallis published Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity. He summarizes his book in this presentation on YouTube.

Tallis gesticulates violently. As if he were a Knight of the Human Kingdom, he defends humanity against an invasion of foreign neuroscientific and biological terms. Such bio-barbarian discourses reduce us to the same level of organic life as that of the brutes, living far away from civilization, in the rainforest and on the savannah.

Tallis promises to restore our former glory. Courageously, he states what every sane person must admit: WE are not like THEM.

Tallis is right that there is an intellectual invasion of biological discourses, led by generals like Richard Dawkins and Daniel Dennett. There is a need for a defense. – But how? Who would I be defending? Who am I, as a human? And where do I find the front line?

The notions of human life that Tallis defends are the ordinary ones belonging to everyday language. I have the impression, though, that Tallis fails to see the material practices involved in language use. Instead, he abstracts and reifies these notions as if they denoted a sublime and self-contained sphere: a uniquely human subjectivity; one that hopefully will be explained in the future, when the proper civilized terms of human intentionality are discovered. – We just have not found them yet.

Only a future genius of human subjectivity can reveal the truth about consciousness. Peace in the Human Kingdom will be restored, after the wars of modernity and bio-barbarism.

Here are two examples of how Tallis reifies the human world as a nature-transcendent sphere:

  • “We have stepped out of our organic body.”
  • “The human world transcends the organism Homo sapiens as it was delivered by Darwinian evolution hundreds of thousands of years ago.”

Once upon a time we were just animals. Then we discovered how to make a human world out of mere animal lives. – Is this a fairy tale?

Let us leave this fantasy and return to the forms of language use that Tallis abstracts and reifies. A striking fact immediately appears: Tallis is happy to use bio-barbarian discourse to describe animal lives, as if such terms literally applied to animals. He uncritically accepts that animal eating can be reduced to “exhibiting feeding behavior,” while humans are said to “dine together.”

The fact, then, is that Tallis does not see any need to pay closer attention to the lives of animals, or to defend animals against the bio-barbarism that he fights as a Knight of the Human Kingdom.

This may make you think that Tallis at least succeeds in restoring human glory; that he fails only on the animal front (being, after all, a humanist). But he also fails to pay attention to what is human. Since he abstracts and reifies the notions of human life, his dualistic vision combines bio-barbarian jargon about animals with phantasmagoric reifications of what is human.

The front line is in language. It arises in a failure to speak attentively.

When talking about animals is taken as seriously as talking about humans, we foster forms of sensitivity to human-animal relations that are crushed in Raymond Tallis’ militant combination of bio-barbarian discourses for animals with fantasy-like elevations of a “uniquely human world.”

The human/animal dichotomy does not reflect how the human world transcends the animal organism. It reflects how humanism fails to speak responsibly.

Pär Segerdahl

Minding our language - the Ethics Blog

Interview with Kathinka Evers

One of my colleagues here at CRB, Kathinka Evers, recently returned from Barcelona, where she participated in the lecture series, The Origins of the Human Mind:

PS: Why did you participate in this series?

KE: I was invited by the Centre for Contemporary Culture to present the rise of neuroethics and my views on informed materialism.

PS: Why were you invited to talk on these issues?

KE: My last book was recently translated into Spanish (Cuando la materia se despierta), and it has attracted interest amongst philosophers and neuroscientists in the Spanish-speaking world. In that book, I extend a materialist theory of mind, called “informed materialism,” to neuroethical perspectives, discussing, for example, free will, self-conceptions and personal responsibility.

PS: In a previous blog post I commented upon Roger Scruton’s critical attitude to neuroscientific analyses of subjects that traditionally belong to the social and human sciences. What’s your opinion on his criticism?

KE: Contemporary neuroscience can enrich numerous areas of social science. But the reverse is also true. The brain is largely the result of socio-cultural influences. Understanding the brain also involves understanding its embodiment in a social context. The social and neurobiological perspectives dynamically interact in our development of a deeper understanding of the human mind, of consciousness, and of human identity.

PS: Do you mean that the criticism presupposes a one-sided view of the development of neuroscience?

KE: I suspect that the criticism is not well-informed, scientifically, since it fails to take this neuro-cultural symbiosis into account. But it is not uncommon for philosophers to take a rather defensive position against neuroscientific attempts to enter philosophical domains.

PS: Was this tension noticeable at the meeting in Barcelona?

KE: Not really. Rather, the debate focused on how interdisciplinary collaborations have at last achieved what the theoretical isolationism of the twentieth century – when philosophy of mind was purely a priori and empirical brain science refused to study consciousness – failed to achieve: the human brain is finally beginning to understand itself and its own mind.

Kathinka Evers has developed a course in neuroethics and is currently drafting a new book (in English) on brain and mind.

Pär Segerdahl

We transgress disciplinary borders - the Ethics Blog
