A blog from the Centre for Research Ethics & Bioethics (CRB)


Interview with Kathinka Evers

One of my colleagues here at CRB, Kathinka Evers, recently returned from Barcelona, where she participated in the lecture series, The Origins of the Human Mind:

PS: Why did you participate in this series?

KE: I was invited by the Centre for Contemporary Culture to present the rise of neuroethics and my views on informed materialism.

PS: Why were you invited to talk on these issues?

KE: My last book was recently translated into Spanish (Cuando la materia se despierta), and it has attracted interest amongst philosophers and neuroscientists in the Spanish-speaking world. In that book, I extend a materialist theory of mind, called “informed materialism,” to neuroethical perspectives, discussing, for example, free will, self-conceptions and personal responsibility.

PS: In a previous blog post I commented upon Roger Scruton’s critical attitude to neuroscientific analyses of subjects that traditionally belong to the social and human sciences. What’s your opinion on his criticism?

KE: Contemporary neuroscience can enrich numerous areas of social science. But the reverse is also true. The brain is largely the result of socio-cultural influences. Understanding the brain also involves understanding its embodiment in a social context. The social and neurobiological perspectives dynamically interact in our development of a deeper understanding of the human mind, of consciousness, and of human identity.

PS: Do you mean that the criticism presupposes a one-sided view of the development of neuroscience?

KE: I suspect that the criticism is not well-informed, scientifically, since it fails to take this neuro-cultural symbiosis into account. But it is not uncommon for philosophers to take a rather defensive position against neuroscientific attempts to enter philosophical domains.

PS: Was this tension noticeable at the meeting in Barcelona?

KE: Not really. Rather, the debate focused on how interdisciplinary collaborations have at last achieved what the theoretical isolationism of the twentieth century – when philosophy of mind was purely a priori and empirical brain science refused to study consciousness – failed to achieve: the human brain is finally beginning to understand itself and its own mind.

Kathinka Evers has developed a course in neuroethics and is currently drafting a new book (in English) on brain and mind.

Pär Segerdahl

We transgress disciplinary borders - the Ethics Blog

Do I have a self?

Viewing neuroscience as a box opener is tempting. The box conceals the human mind; opening the box reveals it.

According to this image, neuroscience uncovers reality. It lays bare the truth about our taken for granted notions of mind: about our concepts of ‘self,’ ‘will,’ ‘belief,’ ‘intention’… Neuroscience reveals the underlying facts about us humans.

How exciting… and how terrifying! What will they find in the box? And what will they not find? Will they find my ‘self’ there – the entity that is me and that writes these words?

What if they don’t find my ‘self’ in the box! What if my ‘self’ turns out to be an illusion! Can they engineer one for me instead? My life would be so desolate without ‘me.’

But neuroscientists are clever. They control what’s in the box. They surely will be able to enhance my brain and create the ‘self’ that didn’t exist in the first place.

Ideas like these are discussed in a mind-boggling interview entitled,

What strikes me about the neurophilosophical discussion is that it does NOT question the notion of the self. The notion is discussed as if it were self-evident to all of us, as some sort of ‘entity.’ The notion is supposed to be present in ordinary (culturally shaped) self-understanding. What is lacking is the evidence for the notion of ‘the self.’

You’ve guessed where the evidence is hiding: it’s in the box!

Neuroscientists opening the box threaten to disclose that the brain is naked. It might not be garmented in a ‘self’ or in a ‘free will.’ Perhaps these ‘entities’ in the box were just illicit reifications of modes of speech present in everyday discourse.

But what is ‘reification’?

Is it not precisely the image of ‘the box’ concealing the realities of mind?

If the tempting ‘box’ image supplies the model of reification – the very form of reification – isn’t the notion that neuroscience, by opening the box, is exposing reifications in ordinary discourse a whirling dance with the same reifying tendency that it is supposed to expose?

The ‘box’ mode of thinking is a simplified use of psychological nouns and verbs as if they referred to ‘entities’ and ‘processes’ in a hidden realm. It is difficult to resist such simplified linguistic imagery.

I’m convinced that neuroscience is making important discoveries that will challenge our self-understanding. But I question the ‘box’ image of these developments as an oversimplification of the very modes of speech it purports to let us transcend.

Pär Segerdahl

Minding our language - the Ethics Blog

Can neuroscience modernize human self-understanding?

Tearing down old buildings and erecting new ones on the basis of modern science and technology – we are constantly doing it in our cities. But can similar ambitions to get rid of the old, to modernize, be realized even more thoroughly, with regard to us and the human condition?

Can we tear down “traditional” human self-understanding – the language we use when we reflect on life in literature, in philosophy, and in the humanities – and replace it by new neuroscientific terms?

Earlier this spring, the philosopher Roger Scruton published an essay in the Spectator where he eloquently attacks claims that neuroscience can and should replace the humanities by a set of brave new “neuro”-disciplines, like neuroethics, neuroaesthetics, and neuromusicology.

Not only will these purported new “sciences” fail to create the understanding that traditional ethics, aesthetics, and musicology helped us towards (for example, of Bach’s music). They will even fail to achieve the scientific explanations that would justify the brave new “neuro”-prefix.

In order for there to be explanations at all, there must first of all be questions. What characterizes the purported “neuro”-sciences, however, is their lack of questions, Scruton remarks.

“Neuro-explanation” typically is no more than translation into neuro-jargon. The aim is neither understanding nor explanation, but the ideological one of replacing the traditional by the new, at any cost.

The result of these extreme modernization ambitions running amok in human self-understanding, Scruton claims, and I agree with him, is nonsense: neurononsense.

Yet, something worries me in Scruton’s essay. He almost seems to purify human self-understanding, or the human condition, as if it were a higher sphere that should not be affected by changing times, at least not if they are modern.

I agree that neuroscience cannot explain the human condition. I agree that it cannot replace human self-understanding. But it can change the human condition and challenge our self-understanding. It already does.

Science and technology cannot be abstracted from the human condition. We are continually becoming “modernized” by, for example, neuroscientific developments. These changing conditions are real, and not merely nonsense or jargon. They occur everywhere, not merely among intellectuals or academics. And they reach all the way to our language.

Neuroscience certainly cannot replace the humanities. But it can challenge the humanities to reflect on changed human conditions.

When attempts in the human sciences to understand modern human conditions focus on neuroscience, the prefix “neuro-” could denote a more responsible form of intellectual work than the one Scruton rightly criticizes. It could denote work that feels the challenge of neuroscientific developments and takes it seriously.

Here at CRB, Kathinka Evers works to develop such a responsible form of neuroethics: one that does not translate ethics into neuro-jargon, but sees neuroscientific findings about the brain as a philosophical challenge to understand and clarify, very often in opposition to the temptation of jargon.

Pär Segerdahl

Approaching future issues - the Ethics Blog

Introspective genomics and the significance of one

As a philosopher, I am familiar with the image of the solitary thinker who studies the human mind through introspective examination of his own. A recent article in the journal Cell reminds me of that image, but in unexpected “genomic” guise.

To achieve statistical significance, medical researchers typically engage large numbers of research subjects. The paper in Cell, however, has only one research subject: the lead author of the paper, Michael Snyder.

Snyder and colleagues studied how his body functioned molecularly and genetically over a 14-month period. Samples from Snyder were taken on 20 separate occasions. A personal “omics profile” was made by integrating information about his genomic sequence with other molecular patterns gathered from the samples, as these patterns changed over time.

Early results indicated that Snyder was genetically disposed to type 2 diabetes. Strangely enough, the disease began to develop during the course of the study. Snyder could follow in detail how two virus infections and the diabetes developed molecularly and genetically in his body.

Snyder changed his lifestyle to handle his diabetes. When he informed his life-insurance company about the disease, however, his premiums became dramatically more expensive.

The introspective paper illustrates the potential usefulness, as well as the risks, of what has been dubbed “personalized medicine.” Here I want to speculate, though, on how this new paradigm in medicine challenges scientific and intellectual ideals.

When philosophers introspectively studied the human mind, they took for granted that what they found within themselves was shared by all humans. The general could be found completely instantiated in the particular.

The particular was for philosophers no more than a mirror of the general. What they saw in the mirror was not the individual mirror (it was intellectually invisible). What they saw in the mirror was a reflection of the general (and only the general was intellectually visible).

That simple image of the relation between the particular and the general was discarded with Darwin’s theory of the origin of species. A species has no essence shared by all individuals. Therefore, to achieve scientific generality about what is human, you cannot rely on one human subject only. You need many subjects, and statistics, to achieve intellectual vision of general facts.

A noteworthy feature of the paper under discussion is that we seem partly to have returned to the era of introspective research. We return to it, however, without the discarded notion of the particular as mirror of the general.

New molecular techniques seem to open up the study of what previously were merely individual cases without significance in themselves. For personalized medicine, each subject unfolds as a universe; as a world full of significant processes.

By studying the “genomic universe” of one person and following it over a period of time, Snyder and colleagues could discern processes that would have been invisible if they had superimposed data from several distinct research subjects.

This new significance of the particular is fascinating and novel from an intellectual perspective. Has the traditional contempt for the particular case been overcome in personalized medicine?

Speaking personally as a philosopher, I cannot avoid seeing this aspect of personalized medicine as congenial with certain philosophical tendencies.

I am thinking of tendencies to investigate (and compare) particular cases without magnifying them on a wall of philosophical abstraction, as if only the general was intellectually visible. I am thinking of serious attempts to overcome the traditional contempt for the particular case.

We seem to have arrived at a new conception of one and many; at a new conception of the particular case as visible and worthy of study.

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

After-birth abortion as a logical scale exercise

How should one respond when ethicists publish arguments in favor of infanticide?

In the current issue of the Journal of Medical Ethics, two philosophers argue that what they call “after-birth abortion” should be permissible in all cases where abortion is (even when the newborn is healthy).

Not surprisingly, soon after BioEdge covered the article, the news spread on the internet… and the authors of the article unfortunately even received death threats.

If you know the spirit of much current academic philosophy, you will not be surprised to know that the authors defended themselves by statements like:

  • “This was a theoretical and academic article.”
  • “I’m not in favour of infanticide. I’m just using logical arguments.”
  • “It was intended for an academic community.”
  • “I don’t think people outside bioethics should learn anything from this article.”

The editor of JME, Julian Savulescu, defended the decision to publish by emphasizing that JME “supports sound rational argument.”

In a similar vein, the philosopher John Harris, who developed basically the same rational considerations in support of infanticide, felt a need to clarify his position. He never defended infanticide as a policy proposal. What did he do, then?

He engaged in “intellectual discussions.”

What I find remarkable is how some of our most significant human ideals – logic and rationality – seem to have acquired a technical and esoteric meaning for at least some professional philosophers.

Traditionally, if you build on logic and rationality, then your intellectual considerations ought to concern the whole of humanity. Your conclusions deserve to be taken seriously by anyone with an interest in the matter.

The article on after-birth abortion, however, is JUST using logical arguments. It is ONLY presenting a sound rational argument. It is MERELY an intellectual discussion. To me, this sounds like a contradiction in terms.

Moreover, because of this “merely” logical nature of the argument, it concerns no one except a select part of the academic community.

Still, logic and rationality are awe-inspiring ideals with a long human history. Philosophers draw heavily on the prestige of these ideals when they explain the seriousness of their arguments in a free liberal society.

When people in this free society are troubled by the formal reasoning, however, some philosophers seem surprised by this unwelcome attention from “outsiders” and explain that it is only a logical scale exercise, composed years ago by eminent philosophers like Singer, Tooley and Harris, before academic journals were accessible on the internet.

I repeat my question: how should one respond when ethicists publish what they present as “rational arguments” in favor of infanticide?

My answer is that one should take them seriously when they explain that one shouldn’t take their logical conclusions too seriously. Still, there is reason for concern, because the ideals they approach so technically are prestigious notions with a binding character for most of us.

Many people think they should listen carefully when arguments are logical and rational.

Moreover, JME is not a purely philosophical journal. It is read by people with real and practical concerns. They are probably unaware that many professional philosophers, who seem to be discussing real issues, are only doing logical scale exercises.

This mechanized approach to the task of thinking, presented on days with better self-confidence as the epitome of what it means to be “serious and well-reasoned,” is what ought to concern us. It is problematic even when conclusions are less sensational.

Pär Segerdahl

Following the news - the ethics blog

Trapped in our humanity?

Being human, can I think nonhuman thoughts? Can the world I perceive be anything but a human world?

These philosophical questions arise when I read Cora Diamond’s and Bernard Williams’ humanistic portrayals of our relations to animals.

A certain form of “human self-centeredness” is often deemed unavoidable in philosophy. If I talk about a dog as being nervous, for example, I use language. But since this language is my language, and I am human, the dog’s “nervousness” would seem to have its ultimate reference point in my humanity.

When I try these thoughts, they make it look as if we, in some almost occult sense, were trapped in our humanity. The more we reach out toward other bodily beings, the more entrenched we become in our own spirituality. Language may open up an entire world for us. But since language is human, it makes each of us a solipsistic being that cannot but experience a fundamentally human world.

Believing in an “ineliminable white or male understanding of the world” would be prejudiced, Williams writes. But our humanity cannot, of course, be eliminated as if it were an old prejudice. Therefore: “A concern for nonhuman animals is indeed a proper part of human life, but we can acquire it, cultivate it, and teach it only in terms of our understanding of ourselves.”

Similar thoughts appear in Diamond’s notion that the kind of moral response to animals that can motivate vegetarianism (such as her own) is an “extension to animals of modes of thinking characteristic of our responses to human beings.”

Perhaps I misunderstand them. But the idea seems to be that we become human primarily with other humans, and only thereafter relate to a “nonhuman” world on the basis of the more primordial human one. Humanism, in such philosophical form, could be called: the idea of “humanistic immanence.”

What is valuable in the idea of humanistic immanence is what it has in common with all good philosophy: the self-critical occupation with our own thinking. What I find more questionable is what appears to be an unfounded assumption: that we become human primarily with other humans (a purification of what is human).

One does not have to be a “post-humanist” to make the following observation:

  • “… in the lives of many people animals occupy a place which is, in certain respects, as central as that occupied by other human beings. In particular, certain animals have a quite fundamental place in the lives of many young children; and a child’s use of the words ‘pain’, ‘fear’ and so on may be acquired as immediately in connection with the pet cat as in connection with human beings.” (David Cockburn)

Consider, in the light of this observation, Diamond’s important idea that, “we learn what a human being is in – among other ways – sitting at a table where WE eat THEM.” Take this notion of human becoming to ape language research, where apes and humans meet daily over food and have conversations that may concern such matters as what to eat, who eats what, and who eats with whom.

What happens when humans share food with apes, sitting down on the ground rather than around a dinner table? What happens to our “humanity” and to their “animality”? What happens to us as men and women when apes communicate attitudes to how humans of different sex and age should behave? What happens to our moral understanding when apes view some visitors as bad and urge their human friends to bite them?

My (human) notion of nervousness may in part have developed through interaction with our sensitive Great Dane when I was a child. What I learned through these interactions may only thereafter have been extended to human nervousness.

I am human and so is my language. But the manner in which I became human (and acquired language) transcends, I want to say, the purified human sphere of “humanistic immanence.”

My ineliminable humanity already is more than human. What are the consequences for philosophy?

Pär Segerdahl

The Ethics Blog - Thinking about thinking
