A blog from the Centre for Research Ethics & Bioethics (CRB)


Genetic exceptionalism and unforgivingness

What fuels the tendency to view genetic information as exceptionally private and sensitive? Is information about an individual’s genetic disposition for eye color more sensitive than the fact that he has blue eyes?

In Rethinking Informed Consent in Bioethics, Neil C. Manson and Onora O’Neill make heroic efforts against an avalanche of arguments for genetic exceptionalism. For each argument meant to reveal how uniquely private, how exceptionally sensitive, and how extraordinarily risky genetic information is, Manson and O’Neill find elucidating examples, analogies and comparisons that cool down tendencies to exaggerate genetic information as incomparably dangerous.

What fuels the exceptionalism that Manson and O’Neill fight? They suggest that it has to do with metaphors that tempt us to reify information; temptations that, for various reasons, are intensified when we think about DNA. Once again, their analysis is clarifying.

Another form of genetic exceptionalism strikes me, however; one that has less to do with information. I’m thinking of GMO exceptionalism. For thousands of years, humans have improved plants and animals by breeding them. This traditional way of modifying organisms is not without environmental risks. When analogous risks appear with GMO, however, they tend to change meaning and come to be seen as extraordinary risks, revealing the ineradicable riskiness of genetic manipulation.

Why are we prepared to embrace traditionally modified organisms, TMO, when basically the same risks with GMO make us want to exterminate every genetically manipulated bastard?

Unforgivingness. I believe that this all-too-familiar emotional response drives genetic exceptionalism, and many other forms of exceptionalism.

Consider the response of becoming unforgiving. Yesterday we laughed with our friend. Today we learn that he spread rumors about us. His familiar smile immediately acquires a different meaning. Yesterday it was shared joy. Today it is an ugly mask hiding an intrinsically untrustworthy individual who must be put in quarantine forever. Every trait of character turns into a defect of character. The whole person becomes an objection; an exception among humans.

Manson and O’Neill are right when they analyze a tendency to reify information in genetic exceptionalism. But I want to suggest that what fuels this tendency, what makes us more than willing to yield to the temptation, is an emotional state of mind that also produces many other forms of exceptionalism.

We need to acknowledge the emotional dimension of philosophical and ethical thinking. We don’t think well when we are unforgiving towards our subject matter. We think dogmatically and unjustly.

In their efforts to think well about genetic information, Manson and O’Neill can be understood as doing forgiveness work.

They calm us down and patiently show us that our friend, although he sometimes does wrong, is not that intrinsically bad character we want to see him as, when we are in our unfortunate unforgiving state of mind.

We are helped towards a state of mind where we can think more freely and justly about the risks and benefits of genetics.

Pär Segerdahl

We want to be just - the Ethics Blog

What is philosophy?

Someone asked me what philosophy is. I answered by trying to pinpoint the most frequently used word when one philosophizes.

What does a philosopher most often say? I believe he or she most often says, “But…”:

  • “But is that really true?”
  • “But shouldn’t then…?”
  • “But can’t one imagine that…?”
  • “But how can anyone know such a thing?”
  • Etc.

Always some unexpected obstacle! Just at the moment when your reasoning seems entirely spotless, an annoying “but…?” knocks you to the ground and you have to start all over again.

Confronted with our spontaneous reasoning, a philosopher’s head soon fills with objections. Perplexing questions lead into unknown territory. Maps must be drawn whose need we never anticipated. A persistently repeated “but…?” reveals challenges for which we lack preparedness.

But the goal is not that of interminably objecting. Objecting and being perplexed are not intrinsic values.

Rather the contrary. The accumulation of objections is a precondition for the very goal of philosophizing: putting an END to the annoying objections.

Philosophy is a fight with one’s own objections; the goal is to silence them.

But if that is so, what point can philosophy have? An activity that first raises annoying objections, and then tries to silence them: what’s that good for!?

Try to reason about what “consent to future research” means. Then you’ll probably notice that you soon start repeating “but…?” with regard to your own attempts to reason well. Your objections will annoy you and spur you to think even more clearly. You will draw maps whose need you had not anticipated.

Even if we would prefer never to go astray, we do go astray. It belongs to being human. THEN we see the point of persistently asking “but…?”; THEN we see the purpose of crisscrossing confusing aspects of life until we survey them, haunted by objections from an unyielding form of sincerity.

When we finally manage to silence our irritating objections, philosophy has made itself as superfluous as a map would be when we cross our own street…

…until we go astray again.

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

Who, or what, becomes human?

Our long childhood and dependence on parental care seem to leave no doubt about it: we are not born as humans, we become human.

I want to highlight a particularly tempting metaphor for this process of “becoming human” – the metaphor of:

  • “Order out of chaos.”

According to this metaphor, human infancy is abundantly rich in possibilities; so abundant, in fact, that it is a formless chaos – a “blooming, buzzing confusion,” as William James characterized the infant’s experience of being alive.

To acquire recognizable human form, the child’s inner chaos must be tamed through the disciplining efforts of parents and society at large (the metaphor suggests). The child’s formlessly rich inner life must be narrowed down, hardened, made boring… until, finally, it becomes another obedient member of society.

Society does not acknowledge a real human subject until the norms of “being human” are confidently repeated: as if the child would easily slip back into its more original state of blooming, buzzing confusion the moment the reiteration of the social norms of humanity ceased.

The “order out of chaos” metaphor makes life and growth look like death and atrophy. To become human means aborting limitless possibilities and gradually turning into that tragic effect of social forces that we know as “the mature adult.”

Perhaps the intriguing topic of the “deconstruction of the subject” is nothing but rigorous faithfulness to the logic of this tempting metaphor? If becoming human is anything like what the metaphor presents it as, then “no one” becomes human, strictly speaking, for before the disciplined human is formed, there is nameless chaos and no recognizable human subject.

But how can the proto-human chaos – I mean, the child – be so responsive to its non-chaotic parents that it reduces its inner chaos and becomes… human? Isn’t that responsiveness already a form of life, a way of being human?

Dare we entertain the hypothesis that the newborn already is active, and that her metamorphoses throughout life require her own creative participation?

I believe we need another understanding of human becoming than that of “order out of chaos.” – Or is human life really a form of colonization of the child?

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

Neither innate nor learned

A child begins to speak; to say that it is hungry, or does not want to sleep. Where was the child’s language hiding before it began to speak? Did the child invent it?

Certainly not, experts on language development would insist. A child cannot create language. Language exists before the child starts to speak. All that is happening during language development is that language is being transported to the child.

The big question is: transported from where? There seem to be only two alternatives:

  1. Language is innate. It is prepared in our neural structures. When the child hears its parents speak, these structures are stimulated and soon start supporting the child’s own speech.
  2. Language is learned. It exists in society. Children have social learning skills; through these skills, language is transported from the social environment to the young pupil, soon supporting the child’s own speech.

These are the alternatives, then. Language is either inside or outside the newborn. Language development is either a process of “externalization” or a process of “internalization” of language. There can be no third alternative.

I have written about the ape Kanzi, who was raised by a human mother. I’ve written about him both on The Ethics Blog and in the book, Kanzi’s Primal Language. This bonobo and his half-sister Panbanisha developed language in a manner that does not clearly correspond to any of these two alternatives.

Since it is hardly credible that human language is innate in apes, ape language researchers typically try to teach apes language. These attempts fail.

Kanzi’s human mother, Sue Savage-Rumbaugh, avoided teaching Kanzi. Instead, she simply spoke to him, as parents do, in a shared Pan/Homo culture. As a result of this humanlike cultural rearing, he developed language as nativists believe only human children do: spontaneously, without the parent having to play the social role of a teacher.

The humble purpose of this blog post is to introduce the idea that we have to think more carefully about human changeability than we have done so far. We tend to think that human changes either lie dormant in our nature or are taught to us by society.

Kanzi entices us to think differently.

Spontaneous language development in a nonhuman suggests that being reared in culture is more than simply a matter of internalizing social norms. Being reared in culture means participating in the culture: a more creative and masterful role than that of a mere pupil.

I believe we are caught in an adult/child dichotomy. The creative role of the child becomes invisible because the dichotomy categorically portrays her as a novice, as a pupil, as a learner… as a vacuous not-yet-adult-human.

Perhaps, if we manage to liberate ourselves from this dichotomy, we can see the possibility that language – together with much else in human life – is neither innate nor learned.

Pär Segerdahl

Understanding enculturated apes - the Ethics Blog

Absolute limits of a modern world?

A certain form of ethical thinking would like to draw absolute limits to human activity. The limits are often said to be natural: nature is replacing God as ultimate moral authority.

Nature is what we believe we still can believe in, when we no longer believe in God.

God thus moves into the human embryo. As its nature, as its potential to develop into a complete human being, he continues to lay down new holy commandments.

The irony is that this attempt to formulate nature’s commandments relies on the same forms of human activity that one wants to delimit. Without human embryo research, no one would know of the existence of the embryo: no one could speculate about its “moral status” and derive moral commandments from it.

This dependence on modern research activities threatens the attempt to discover absolute moral authority in nature. Modern research has disassociated itself from the old speculative ambition to stabilize scientific knowledge as a system. Our present notion of “the embryo” will be outdated tomorrow.

Anyone attempting to speculate about the nature of the embryo – inevitably relying on the existence of embryo research – will have to acknowledge the possibility that these speculations are already obsolete.

The changeability of the modern world thus haunts and destabilizes the tendency to find absolute moral authority in nature.

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

Interview with Kathinka Evers

One of my colleagues here at CRB, Kathinka Evers, recently returned from Barcelona, where she participated in the lecture series, The Origins of the Human Mind:

PS: Why did you participate in this series?

KE: I was invited by the Centre for Contemporary Culture to present the rise of neuroethics and my views on informed materialism.

PS: Why were you invited to talk on these issues?

KE: My latest book was recently translated into Spanish (Cuando la materia se despierta), and it has attracted interest amongst philosophers and neuroscientists in the Spanish-speaking world. In that book, I extend a materialist theory of mind, called “informed materialism,” to neuroethical perspectives, discussing, for example, free will, self-conceptions and personal responsibility.

PS: In a previous blog post I commented upon Roger Scruton’s critical attitude to neuroscientific analyses of subjects that traditionally belong to the social and human sciences. What’s your opinion on his criticism?

KE: Contemporary neuroscience can enrich numerous areas of social science. But the reverse is also true. The brain is largely the result of socio-cultural influences. Understanding the brain also involves understanding its embodiment in a social context. The social and neurobiological perspectives dynamically interact in our development of a deeper understanding of the human mind, of consciousness, and of human identity.

PS: Do you mean that the criticism presupposes a one-sided view of the development of neuroscience?

KE: I suspect that the criticism is not well-informed, scientifically, since it fails to take this neuro-cultural symbiosis into account. But it is not uncommon for philosophers to take a rather defensive position against neuroscientific attempts to enter philosophical domains.

PS: Was this tension noticeable at the meeting in Barcelona?

KE: Not really. Rather, the debate focused on how interdisciplinary collaborations have at last achieved what the theoretical isolationism of the twentieth century – when philosophy of mind was purely a priori and empirical brain science refused to study consciousness – failed to achieve: the human brain is finally beginning to understand itself and its own mind.

Kathinka Evers has developed a course in neuroethics and is currently drafting a new book (in English) on brain and mind.

Pär Segerdahl

We transgress disciplinary borders - the Ethics Blog

Do I have a self?

Viewing neuroscience as a box opener is tempting. The box conceals the human mind; opening the box reveals it.

According to this image, neuroscience uncovers reality. It lays bare the truth about our taken for granted notions of mind: about our concepts of ‘self,’ ‘will,’ ‘belief,’ ‘intention’… Neuroscience reveals the underlying facts about us humans.

How exciting…, and how terrifying! What will they find in the box? And what will they not find? Will they find my ‘self’ there – the entity that is me and that writes these words?

What if they don’t find my ‘self’ in the box! What if my ‘self’ turns out to be an illusion! Can they engineer one for me instead? My life would be so desolate without ‘me.’

But neuroscientists are clever. They control what’s in the box. They surely will be able to enhance my brain and create the ‘self’ that didn’t exist in the first place.

Ideas like these are discussed in a mind-boggling interview.

What strikes me about the neurophilosophical discussion is that it does NOT question the notion of the self. The notion is discussed as if it were self-evident to all of us that the self is some sort of ‘entity.’ The notion is supposed to be present in ordinary (culturally shaped) self-understanding. All that is lacking is the evidence for ‘the self.’

You’ve guessed where the evidence is hiding: it’s in the box!

Neuroscientists opening the box threaten to disclose that the brain is naked. It might not be garmented in a ‘self’ or in a ‘free will.’ The assumption that these ‘entities’ exist in the box was perhaps just an illicit reification of modes of speech present in everyday discourse.

But what is ‘reification’?

Is it not precisely the image of ‘the box’ concealing the realities of mind?

If the tempting ‘box’ image supplies the model of reification – the very form of reification – isn’t the notion that neuroscience, by opening the box, is exposing reifications in ordinary discourse a whirling dance with the same reifying tendency that it is supposed to expose?

The ‘box’ mode of thinking is a simplified use of psychological nouns and verbs as if they referred to ‘entities’ and ‘processes’ in a hidden realm. It is difficult to resist such simplified linguistic imagery.

I’m convinced that neuroscience is making important discoveries that will challenge our self-understanding. But I question the ‘box’ image of these developments: it oversimplifies the very modes of speech it seems to let us transcend.

Pär Segerdahl

Minding our language - the Ethics Blog

Can neuroscience modernize human self-understanding?

Tearing down old buildings and erecting new ones on the basis of modern science and technology – we are constantly doing it in our cities. But can similar ambitions to get rid of the old, to modernize, be realized even more thoroughly, with regard to us and the human condition?

Can we tear down “traditional” human self-understanding – the language we use when we reflect on life in literature, in philosophy, and in the humanities – and replace it by new neuroscientific terms?

Earlier this spring, the philosopher Roger Scruton published an essay in the Spectator where he eloquently attacks claims that neuroscience can and should replace the humanities by a set of brave new “neuro”-disciplines, like neuroethics, neuroaesthetics, and neuromusicology.

Not only will these purported new “sciences” fail to create the understanding that traditional ethics, aesthetics, and musicology helped us towards (for example, of Bach’s music). They will even fail to achieve the scientific explanations that would justify the brave new “neuro”-prefix.

In order for there to be explanations at all, there must first of all be questions. What characterizes the purported “neuro”-sciences, however, is their lack of questions, Scruton remarks.

“Neuro-explanation” typically is no more than translation into neuro-jargon. The aim is neither understanding nor explanation, but the ideological one of replacing the traditional by the new, at any cost.

The result of these extreme modernization ambitions running amok in human self-understanding, Scruton claims, and I agree with him, is nonsense: neurononsense.

Yet, something worries me in Scruton’s essay. He almost seems to purify human self-understanding, or the human condition, as if it were a higher sphere that should not be affected by changing times, at least not if they are modern.

I agree that neuroscience cannot explain the human condition. I agree that it cannot replace human self-understanding. But it can change the human condition and challenge our self-understanding. It already does.

Science and technology cannot be abstracted from the human condition. We are continually becoming “modernized” by, for example, neuroscientific developments. These changing conditions are real, and not merely nonsense or jargon. They occur everywhere, not merely among intellectuals or academics. And they reach all the way to our language.

Neuroscience certainly cannot replace the humanities. But it can challenge the humanities to reflect on changed human conditions.

When attempts in the human sciences to understand modern human conditions focus on neuroscience, the prefix “neuro-” could denote a more responsible form of intellectual work than the one Scruton rightly criticizes. It could denote work that feels the challenge of neuroscientific developments and takes it seriously.

Here at CRB, Kathinka Evers works to develop such a responsible form of neuroethics: one that does not translate ethics into neuro-jargon, but sees neuroscientific findings about the brain as a philosophical challenge to understand and clarify, very often in opposition to the temptation of jargon.

Pär Segerdahl

Approaching future issues - the Ethics Blog

Introspective genomics and the significance of one

As a philosopher, I am familiar with the image of the solitary thinker who studies the human mind through introspective study of his own. A recent article in the journal Cell reminds me of that image, but in unexpected “genomic” guise.

To achieve statistical significance, medical researchers typically engage large numbers of research subjects. The paper in Cell, however, has only one research subject: the lead author of the paper, Michael Snyder.

Snyder and colleagues studied how his body functioned molecularly and genetically over a 14-month period. Samples from Snyder were taken on 20 separate occasions. A personal “omics profile” was made by integrating information about his genomic sequence with other molecular patterns gathered from the samples, as these patterns changed over time.

Early results indicated that Snyder was genetically disposed to type 2 diabetes. Strangely enough, the disease began to develop during the course of the study. Snyder could follow in detail how two virus infections and the diabetes developed molecularly and genetically in his body.

Snyder changed his life style to handle his diabetes. When he informed his life-insurance company about the disease, however, his premiums became dramatically more expensive.

The introspective paper illustrates the potential usefulness, as well as the risks, of what has been dubbed “personalized medicine.” Here I want to speculate, though, on how this new paradigm in medicine challenges scientific and intellectual ideals.

When philosophers introspectively studied the human mind, they took for granted that what they found within themselves was shared by all humans. The general could be found completely instantiated in the particular.

The particular was for philosophers no more than a mirror of the general. What they saw in the mirror was not the individual mirror (it was intellectually invisible). What they saw in the mirror was a reflection of the general (and only the general was intellectually visible).

That simple image of the relation between the particular and the general was discarded with Darwin’s theory of the origin of species. A species has no essence shared by all individuals. Therefore, to achieve scientific generality about what is human, you cannot rely on one human subject only. You need many subjects, and statistics, to achieve intellectual vision of general facts.

A noteworthy feature of the paper under discussion is that we seem partly to have returned to the era of introspective research. We return to it, however, without the discarded notion of the particular as mirror of the general.

New molecular techniques seem to open up the study of what previously were simply individual cases without significance in themselves. For personalized medicine, each subject unfolds as a universe; as a world full of significant processes.

By studying the “genomic universe” of one person and following it over a period of time, Snyder and colleagues could discern processes that would have been invisible if they had superimposed data from several distinct research subjects.

This new significance of the particular is fascinating and novel from an intellectual perspective. Has the traditional contempt for the particular case been overcome in personalized medicine?

Speaking personally as a philosopher, I cannot avoid seeing this aspect of personalized medicine as congenial with certain philosophical tendencies.

I am thinking of tendencies to investigate (and compare) particular cases without magnifying them on a wall of philosophical abstraction, as if only the general was intellectually visible. I am thinking of serious attempts to overcome the traditional contempt for the particular case.

We seem to have arrived at a new conception of one and many; at a new conception of the particular case as visible and worthy of study.

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

After-birth abortion as a logical scale exercise

How should one respond when ethicists publish arguments in favor of infanticide?

In the current issue of the Journal of Medical Ethics, two philosophers argue that what they call “after-birth abortion” should be permissible in all cases where abortion is (even when the newborn is healthy).

Not surprisingly, soon after BioEdge covered the article, the news spread on the internet… and the authors unfortunately even received death threats.

If you know the spirit of much current academic philosophy, you will not be surprised to know that the authors defended themselves by statements like:

  • “This was a theoretical and academic article.”
  • “I’m not in favour of infanticide. I’m just using logical arguments.”
  • “It was intended for an academic community.”
  • “I don’t think people outside bioethics should learn anything from this article.”

The editor of JME, Julian Savulescu, defended the decision to publish by emphasizing that JME “supports sound rational argument.”

In a similar vein, the philosopher John Harris, who developed basically the same rational considerations in support of infanticide, felt a need to clarify his position. He never defended infanticide as a policy proposal. – What did he do, then?

He engaged in “intellectual discussions.”

What I find remarkable is how some of our most significant human ideals – logic and rationality – seem to have acquired a technical and esoteric meaning for at least some professional philosophers.

Traditionally, if you build on logic and rationality, then your intellectual considerations ought to concern the whole of humanity. Your conclusions deserve to be taken seriously by anyone with an interest in the matter.

The article on after-birth abortion, however, is JUST using logical arguments. It is ONLY presenting a sound rational argument. It is MERELY an intellectual discussion. To me, this sounds like a contradiction in terms.

Moreover, because of this “merely” logical nature of the argument, it concerns no one except a select part of the academic community.

Still, logic and rationality are awe-inspiring ideals with a long human history. Philosophers draw heavily on the prestige of these ideals when they explain the seriousness of their arguments in a free liberal society.

When people in this free society are troubled by the formal reasoning, however, some philosophers seem surprised by the unwelcome attention from “outsiders.” They explain that it is only a logical scale exercise, composed years ago by eminent philosophers like Singer, Tooley and Harris, before academic journals were accessible on the internet.

I repeat my question: how should one respond when ethicists publish what they present as “rational arguments” in favor of infanticide?

My answer is that one should take them seriously when they explain that one shouldn’t take their logical conclusions too seriously. Still, there is reason for concern, because the ideals they approach so technically are prestigious notions with a binding character for most of us.

Many people think they should listen carefully when arguments are logical and rational.

Moreover, JME is not a purely philosophical journal. It is read by people with real and practical concerns. They are probably unaware that many professional philosophers, who seem to be discussing real issues, are only doing logical scale exercises.

This mechanized approach to the task of thinking, presented on days with better self-confidence as the epitome of what it means to be “serious and well-reasoned,” is what ought to concern us. It is problematic even when conclusions are less sensational.

Pär Segerdahl

Following the news - the Ethics Blog
