A research blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: philosophy

Human and animal: where is the frontline?

Yesterday I read Lars Hertzberg’s thoughtful blog, Language is things we do. His latest post drew my attention to a militant humanist, Raymond Tallis (who resembles another militant humanist, Roger Scruton).

Tallis published Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity. He summarizes his book in this presentation on YouTube.

Tallis gesticulates violently. As if he were a Knight of the Human Kingdom, he defends humanity against an invasion of foreign neuroscientific and biological terms. Such bio-barbarian discourses reduce us to the same level of organic life as that of the brutes, living far away from civilization, in the rainforest and on the savannah.

Tallis promises to restore our former glory. Courageously, he states what every sane person must admit: WE are not like THEM.

Tallis is right that there is an intellectual invasion of biological discourses, led by generals like Richard Dawkins and Daniel Dennett. There is a need for a defense. – But how? Who would I be defending? Who am I, as a human? And where do I find the front line?

The notions of human life that Tallis defends are the ordinary ones belonging to everyday language. I have the impression, though, that Tallis fails to see the material practices involved in language use. Instead, he abstracts and reifies these notions as if they denoted a sublime and self-contained sphere: a uniquely human subjectivity; one that hopefully will be explained in the future, when the proper civilized terms of human intentionality are discovered. – We just have not found them yet.

Only a future genius of human subjectivity can reveal the truth about consciousness. Peace in the Human Kingdom will be restored, after the wars of modernity and bio-barbarism.

Here are two examples of how Tallis reifies the human world as a nature-transcendent sphere:

  • “We have stepped out of our organic body.”
  • “The human world transcends the organism Homo sapiens as it was delivered by Darwinian evolution hundreds of thousands of years ago.”

Once upon a time we were just animals. Then we discovered how to make a human world out of mere animal lives. – Is this a fairy tale?

Let us leave this fantasy and return to the forms of language use that Tallis abstracts and reifies. A striking fact immediately appears: Tallis is happy to use bio-barbarian discourse to describe animal lives, as if such terms literally applied to animals. He uncritically accepts that animal eating can be reduced to “exhibiting feeding behavior,” while humans are said to “dine together.”

The fact, then, is that Tallis does not see any need to pay closer attention to the lives of animals, or to defend animals against the bio-barbarism that he fights as a Knight of the Human Kingdom.

This may make you think that Tallis at least succeeds in restoring human glory; that he fails only on the animal front (being, after all, a humanist). But he fails to pay attention also to what is human. Since he abstracts and reifies the notions of human life, his dualistic vision combines bio-barbarian jargon about animals with phantasmagoric reifications of what is human.

The front line is in language. It arises in a failure to speak attentively.

When talking about animals is taken as seriously as talking about humans, we foster forms of sensitivity to human-animal relations that are crushed in Raymond Tallis’ militant combination of bio-barbarian discourses for animals with fantasy-like elevations of a “uniquely human world.”

The human/animal dichotomy does not reflect how the human world transcends the animal organism. It reflects how humanism fails to speak responsibly.

Pär Segerdahl

Minding our language - the Ethics Blog

Moral tipping points

Yesterday, I read a thought-provoking article about biosecurity. It suggested novel ways of thinking about infectious diseases. According to traditional thinking, infectious diseases strike us from outside. Therefore, we protect ourselves from such external threats by building more effective borders. We secure pure healthy spaces and protect these spaces from impure, diseased ones.

The alternative thinking is less geometrically oriented and does not make a sharp distinction between “pure” and “diseased” spaces. Here is an illustration. If I understood the article right, a certain microbe, Campylobacter, is typically present in the microbial flora of farmed chickens. This bacterium does not become a health threat until there is a balance shift in the chickens’ intense relations with their farm circumstances.

Campylobacter “infection” in chickens, then, does not necessarily occur from outside, since the microbe is always present, but through balance shifts at what the authors called “tipping points.”

I was struck by the notion of tipping points. They remind me of processes of moral change:

It is well-known, to most of us at least, that our moral perceptions sometimes undergo dramatic change. Consider the following example, discussed in our CRB seminar series earlier this autumn: sex disambiguation surgery on newborns, when their sex cannot be unequivocally determined by a doctor.

Our present social circumstances are such that being a boy or a girl, being a man or a woman, is profoundly significant. Being neither, or both, is being in trouble. Legally, for example, you must be male or female, and that’s only one aspect of the demand.

If we live in happy balance with these circumstances, sex disambiguation surgery might strike us as a blessing. Through surgery, the child is “helped” towards becoming unambiguously boy or girl. This is of such importance that “correction surgery” can be allowed even on newborns that haven’t yet developed their way of being in the world. Early surgery might even be preferable.

If, on the other hand, there is a balance shift; if we open ourselves to the possibility that present circumstances can be troublesome and changed – must we legally be male or female? – a tipping point may occur where the helpful correction of a bodily deformation can start to look like… genital mutilation performed to adapt newborns to our culture’s heterosexual norms and dualistic beliefs.

The new ideas may appear foreign to the old ones, as if they came from outside: what have we been reading lately? But they need not be as foreign as they appear and they need not enter our thinking “from outside.” Moral thinking is in dynamic relationships with our circumstances: if these relationships shift, so may our moral perceptions.

At moral tipping points what previously was perceived as “helping” may suddenly look like “mutilating.” What previously was “reality” may turn into “culture” and further into “norms and beliefs.” Changes at moral tipping points can be dramatic, which fools us into thinking that the new ideas necessarily entered our territory from another moral space. But they emerged right here, in our exchanges with our own circumstances.

Why is this important? I think it suggests paths beyond the age-old relativism-versus-absolutism controversy.

We habitually view opposed moralities as distinct; simply distinct. You have one view on the matter; I have another. When I heard about tipping points, it struck me that opposed moral views often are dynamically connected: one view becomes the other at the tipping point.

Thinking in terms of tipping points can negotiate some sort of peace between standpoints that otherwise are exaggerated as if they belonged to opposed metaphysics.

Someone who speaks of male and female as realities is not necessarily in the grips of the metaphysics of substance, as Judith Butler supposes, but may speak from the point of view of being in untroubled balance with present circumstances.

Someone who speaks of male and female as produced by norms is not necessarily in the grips of relativistic anti-metaphysical doctrines, as realist philosophers would suppose, but may speak at a tipping point where the balance with present circumstances shifted and became troubled.

My proposed tipping point negotiation of peace between apparently foreign moral views and stances does not make the opposition less real; it only avoids certain intellectualist exaggerations and purifications of it.

Moral language functions differently when the circumstances are untroubled compared to when they are troubled. Moral thinking is in dynamic relationships with the world (and with how we inhabit it).

Pär Segerdahl

The Ethics Blog - Thinking about thinking

Logical laws and ethical principles: appendices to human reasoning

We tend to view logical laws and ethical principles as foundational: as more basic than ordinary discourse, and “making possible” logical and ethical reasoning. They set us on the right intellectual path, so to speak, on the most fundamental level.

I want to suggest another possibility: logical laws and ethical principles are derived from ordinary discourse. They constitute a schematic, ideal image of what it means to make truth claims, or ethical claims, in our language. They don’t make the claims and forms of reasoning possible, however, but reflect their familiar presence in daily discourse.

Consider the logical law of non-contradiction, which states that a proposition and its negation cannot both be true simultaneously. Does this law implicitly set us on the path of non-contradictory talk, from morning to night? Or does it have another function?

Here is an alternative way of thinking about this “law of thought”:

The impression that others contradict themselves is not uncommon. When this occurs, we become uncertain what they actually say. We ask for clarifications until the sense of contradiction disappears. Not until it disappears do we recognize that something is being said.

The law of non-contradiction reflects this general feature of language. As such a reflection, however, it is derived from language and doesn’t function as a foundation of human truth-telling.

I want to make a similar proposal for ethical principles. Ethical principles – for example, of beneficence or respect for persons – reflect how people already view certain aspects of life as morally important and use them as reasons.

Ethical principles don’t “make” these aspects of life moral reasons. They just highlight, in semi-bureaucratic language, the fact that they are such reasons for people.

Consider this way of reasoning, which is perfectly in order as it stands:

  • (A) “I helped you; therefore you should help me.”

This moral reasoning is familiar to all of us. Its presence could be acknowledged in the form of an ethical principle, P; a Principle of Reciprocity (“Sacrifices require services in return” etc.).

According to the view I want to leave behind, the fact that I helped you doesn’t constitute a reason until it is linked to the ethical principle P:

  • (B) “I helped you; according to Principle P, you therefore should help me.”

Ethicists typically reason the latter way, (B). That is alright too, as long as we are aware of its derived nature and don’t believe that (B) uncovers the hidden form of (A).

Ethical principles summarize, in semi-legislative language, how humans already reason morally. They function as appendices to moral reasoning; not as its backbone.

Why do we need to be aware of the derived nature of ethical principles? Because when we genuinely don’t know how to reason morally – when there are no convincing arguments of kind (A) – it is tempting to use the principles to extrapolate moral arguments of kind (B)… appendices to claims that no one makes.

Viewing ethical principles as foundational, we’re almost forced to turn to them for guidance when we are in genuine moral uncertainty. But perhaps we should rather turn to the real-life features that are at stake. Perhaps we should focus our attention on them, try to understand them better, engage with them… and wait for them to become moral reasons for us in ways we might not be able to anticipate.

As a result of this open-ended process of attentive and patient moral thinking, ethicists may discover a need for new ethical principles to reflect how forms of moral reasoning change in the process, because new aspects of life became moral reasons for us when we attended to them.

Consider as an example the ethical problem whether incidental findings about individual participants in biobank research should be returned to them. At this very moment, ethicists are working hard to help biobankers solve this genuinely difficult problem. They do it by exploring how our present canon of ethical principles might apply to the case.

Is that not a little bit like consulting a phrase book when you discover that you have nothing to say?

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

Genetic exceptionalism and unforgivingness

What fuels the tendency to view genetic information as exceptionally private and sensitive? Is information about an individual’s genetic disposition for eye color more sensitive than the fact that he has blue eyes?

In Rethinking Informed Consent in Bioethics, Neil C. Manson and Onora O’Neill make heroic efforts against an avalanche of arguments for genetic exceptionalism. For each argument meant to reveal how uniquely private, how exceptionally sensitive, and how extraordinarily risky genetic information is, Manson and O’Neill find elucidating examples, analogies and comparisons that cool down tendencies to exaggerate genetic information as incomparably dangerous.

What fuels the exceptionalism that Manson and O’Neill fight? They suggest that it has to do with metaphors that tempt us to reify information; temptations that, for various reasons, are intensified when we think about DNA. Once again, their analysis is clarifying.

Another form of genetic exceptionalism strikes me, however; one that has less to do with information. I’m thinking of GMO exceptionalism. For thousands of years, humans improved plants and animals by breeding them. This traditional way of modifying organisms is not without environmental risks. When analogous risks appear with GMO, however, they tend to change meaning and come to be seen as extraordinary risks, revealing the ineradicable riskiness of genetic manipulation.

Why are we prepared to embrace traditionally modified organisms, TMO, when basically the same risks with GMO make us want to exterminate every genetically manipulated bastard?

Unforgivingness. I believe that this all-too familiar emotional response drives genetic exceptionalism, and many other forms of exceptionalism.

Consider the response of becoming unforgiving. Yesterday we laughed with our friend. Today we learn that he spread rumors about us. His familiar smile immediately acquires a different meaning. Yesterday it was shared joy. Today it is an ugly mask hiding an intrinsically untrustworthy individual who must be put in quarantine forever. Every trait of character turns into a defect of character. The whole person becomes an objection; an exception among humans.

Manson and O’Neill are right when they analyze a tendency to reify information in genetic exceptionalism. But I want to suggest that what fuels this tendency, what makes us more than willing to yield to the temptation, is an emotional state of mind that also produces many other forms of exceptionalism.

We need to acknowledge the emotional dimension of philosophical and ethical thinking. We don’t think well when we are unforgiving towards our subject matter. We think dogmatically and unjustly.

In their efforts to think well about genetic information, Manson and O’Neill can be understood as doing forgiveness work.

They calm us down and patiently show us that our friend, although he sometimes does wrong, is not that intrinsically bad character we want to see him as, when we are in our unfortunate unforgiving state of mind.

We are helped towards a state of mind where we can think more freely and justly about the risks and benefits of genetics.

Pär Segerdahl

We want to be just - the Ethics Blog

What is philosophy?

Someone asked me what philosophy is. I answered by trying to pinpoint the most frequently used word when one philosophizes.

What does a philosopher most often say? I believe he or she most often says, “But…”:

  • “But is that really true?”
  • “But shouldn’t then…?”
  • “But can’t one imagine that…?”
  • “But how can anyone know such a thing?”
  • Etc.

Always some unexpected obstacle! Just at the moment when your reasoning seems entirely spotless, an annoying “but…?” knocks you to the ground and you have to start all over again.

Confronted with our spontaneous reasoning, a philosopher’s head soon fills with objections. Perplexing questions lead into unknown territory. Maps must be drawn the need of which we never anticipated. A persistently repeated “but…?” reveals challenges for which we lack preparedness.

But the goal is not that of interminably objecting. Objecting and being perplexed are not intrinsic values.

Rather the contrary. The accumulation of objections is a precondition for philosophizing to have a goal at all: that of putting an END to the annoying objections.

Philosophy is a fight with one’s own objections; the goal is to silence them.

But if that is so, what point can philosophy have? An activity that first raises annoying objections, and then tries to silence them: what’s that good for!?

Try to reason about what “consent to future research” means. Then you’ll probably notice that you soon start repeating “but…?” with regard to your own attempts to reason well. Your objections will annoy you and spur you to think even more clearly. You will draw maps the need of which you had not anticipated.

Even if we would prefer never to go astray, we do go astray. It belongs to being human. THEN we see the point of persistently asking “but…?”; THEN we see the purpose of crisscrossing confusing aspects of life until we survey them, haunted by objections from an unyielding form of sincerity.

When we finally manage to silence our irritating objections, philosophy has made itself as superfluous as a map would be when we cross our own street…

…until we go astray again.

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

Who, or what, becomes human?

Our long childhood and dependence on parental care seem to leave no doubt about it: we are not born as humans, we become human.

I want to highlight a particularly tempting metaphor for this process of “becoming human” – the metaphor of:

  • “Order out of chaos.”

According to this metaphor, human infancy is abundantly rich in possibilities; so abundant, in fact, that it is a formless chaos – a “blooming, buzzing confusion,” as William James characterized the infant’s experience of being alive.

To acquire recognizable human form, the child’s inner chaos must be tamed through the disciplining efforts of parents and society at large (the metaphor suggests). The child’s formlessly rich inner life must be narrowed down, hardened, made boring… until, finally, it becomes another obedient member of society.

Society does not acknowledge a real human subject until the norms of “being human” are confidently repeated: as if the child would easily slip back into its more original state of blooming, buzzing confusion the moment the reiteration of the social norms of humanity ceased.

The “order out of chaos” metaphor makes life and growth look like death and atrophy. To become human means aborting limitless possibilities and gradually turning into that tragic effect of social forces that we know as “the mature adult.”

Perhaps the intriguing topic of the “deconstruction of the subject” is nothing but rigorous faithfulness to the logic of this tempting metaphor? If becoming human is anything like what the metaphor presents it as, then “no one” becomes human, strictly speaking, for before the disciplined human is formed, there is nameless chaos and no recognizable human subject.

But how can the proto-human chaos – I mean, the child – be so responsive to its non-chaotic parents that it reduces its inner chaos and becomes… human? Isn’t that responsiveness already a form of life, a way of being human?

Dare we entertain the hypothesis that the newborn already is active, and that her metamorphoses throughout life require her own creative participation?

I believe we need another understanding of human becoming than that of “order out of chaos.” – Or is human life really a form of colonization of the child?

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

Neither innate nor learned

A child begins to speak; to say that it is hungry, or does not want to sleep. Where was the child’s language hiding before it began to speak? Did the child invent it?

Certainly not, experts on language development would insist. A child cannot create language. Language exists before the child starts to speak. All that is happening during language development is that language is being transported to the child.

The big question is: transported from where? There seem to be only two alternatives:

  1. Language is innate. It is prepared in our neural structures. When the child hears its parents speak, these structures are stimulated and soon start supporting the child’s own speech.
  2. Language is learned. It exists in society. Children have social learning skills; through these skills, language is transported from the social environment to the young pupil, soon supporting the child’s own speech.

These are the alternatives, then. Language is either inside or outside the newborn. Language development is either a process of “externalization” or a process of “internalization” of language. There can be no third alternative.

I have written about the ape Kanzi, who was raised by a human mother. I’ve written about him both on The Ethics Blog and in the book, Kanzi’s Primal Language. This bonobo and his half-sister Panbanisha developed language in a manner that does not clearly correspond to any of these two alternatives.

Since it is hardly credible that human language is innate in apes, ape language researchers typically try to teach apes language. These attempts fail.

Kanzi’s human mother, Sue Savage-Rumbaugh, avoided teaching Kanzi. Instead, she simply spoke to him, as parents do, in a shared Pan/Homo culture. As a result of this humanlike cultural rearing, he developed language as nativists believe only human children do: spontaneously, without the parent having to play the social role of a teacher.

The humble purpose of this blog post is to introduce the idea that we have to think more carefully about human changeability than we have done so far. We tend to think that human changes are either lying dormant in our nature or are taught to us by society.

Kanzi entices us to think differently.

Spontaneous language development in a nonhuman suggests that being reared in culture is more than simply a matter of internalizing social norms. Being reared in culture means participating in the culture: a more creative and masterful role than that of a mere pupil.

I believe we are caught in an adult/child dichotomy. The creative role of the child becomes invisible because the dichotomy categorically portrays her as a novice, as a pupil, as a learner… as a vacuous not-yet-adult-human.

Perhaps, if we manage to liberate ourselves from this dichotomy, we can see the possibility that language – together with much else in human life – is neither innate nor learned.

Pär Segerdahl

Understanding enculturated apes - the Ethics Blog

Absolute limits of a modern world?

A certain form of ethical thinking would like to draw absolute limits to human activity. The limits are often said to be natural: nature is replacing God as ultimate moral authority.

Nature is what we believe we still can believe in, when we no longer believe in God.

God thus moves into the human embryo. As its nature, as its potential to develop into a complete human being, he continues to lay down new holy commandments.

The irony is that this attempt to formulate nature’s commandments relies on the same forms of human activity that one wants to delimit. Without human embryo research, no one would know of the existence of the embryo: no one could speculate about its “moral status” and derive moral commandments from it.

This dependence on modern research activities threatens the attempt to discover absolute moral authority in nature. Modern research has disassociated itself from the old speculative ambition to stabilize scientific knowledge as a system. Our present notion of “the embryo” will be outdated tomorrow.

Anyone attempting to speculate about the nature of the embryo – inevitably relying on the existence of embryo research – will have to acknowledge the possibility that these speculations already are obsolete.

The changeability of the modern world thus haunts and destabilizes the tendency to find absolute moral authority in nature.

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

Interview with Kathinka Evers

One of my colleagues here at CRB, Kathinka Evers, recently returned from Barcelona, where she participated in the lecture series, The Origins of the Human Mind:

PS: Why did you participate in this series?

KE: I was invited by the Centre for Contemporary Culture to present the rise of neuroethics and my views on informed materialism.

PS: Why were you invited to talk on these issues?

KE: My last book was recently translated into Spanish (Cuando la materia se despierta), and it has attracted interest amongst philosophers and neuroscientists in the Spanish-speaking world. In that book, I extend a materialist theory of mind, called “informed materialism,” to neuroethical perspectives, discussing, for example, free will, self-conceptions and personal responsibility.

PS: In a previous blog post I commented upon Roger Scruton’s critical attitude to neuroscientific analyses of subjects that traditionally belong to the social and human sciences. What’s your opinion on his criticism?

KE: Contemporary neuroscience can enrich numerous areas of social science. But the reverse is also true. The brain is largely the result of socio-cultural influences. Understanding the brain also involves understanding its embodiment in a social context. The social and neurobiological perspectives dynamically interact in our development of a deeper understanding of the human mind, of consciousness, and of human identity.

PS: Do you mean that the criticism presupposes a one-sided view of the development of neuroscience?

KE: I suspect that the criticism is not well-informed, scientifically, since it fails to take this neuro-cultural symbiosis into account. But it is not uncommon for philosophers to take a rather defensive position against neuroscientific attempts to enter philosophical domains.

PS: Was this tension noticeable at the meeting in Barcelona?

KE: Not really. Rather, the debate focused on how interdisciplinary collaborations have at last achieved what the theoretical isolationism of the twentieth century – when philosophy of mind was purely a priori and empirical brain science refused to study consciousness – failed to achieve: the human brain is finally beginning to understand itself and its own mind.

Kathinka Evers has developed a course in neuroethics and is currently drafting a new book (in English) on brain and mind.

Pär Segerdahl

We transgress disciplinary borders - the Ethics Blog

Do I have a self?

Viewing neuroscience as a box opener is tempting. The box conceals the human mind; opening the box reveals it.

According to this image, neuroscience uncovers reality. It lays bare the truth about our taken for granted notions of mind: about our concepts of ‘self,’ ‘will,’ ‘belief,’ ‘intention’… Neuroscience reveals the underlying facts about us humans.

How exciting…, and how terrifying! What will they find in the box? And what will they not find? Will they find my ‘self’ there – the entity that is me and that writes these words?

What if they don’t find my ‘self’ in the box! What if my ‘self’ turns out to be an illusion! Can they engineer one for me instead? My life would be so desolate without ‘me.’

But neuroscientists are clever. They control what’s in the box. They surely will be able to enhance my brain and create the ‘self’ that didn’t exist in the first place.

Ideas like these are discussed in a mind-boggling interview.

What strikes me about the neurophilosophical discussion is that it does NOT question the notion of the self. The notion is discussed as if it were self-evident to all of us, as some sort of ‘entity.’ The notion is supposed to be present in ordinary (culturally shaped) self-understanding. What is lacking is the evidence for the notion of ‘the self.’

You’ve guessed where the evidence is hiding: it’s in the box!

Neuroscientists opening the box threaten to disclose that the brain is naked. It might not be garmented in a ‘self’ or in a ‘free will.’ Perhaps these ‘entities’ in the box were just illicit reifications of modes of speech present in everyday discourse.

But what is ‘reification’?

Is it not precisely the image of ‘the box’ concealing the realities of mind?

If the tempting ‘box’ image supplies the model of reification – the very form of reification – isn’t the notion that neuroscience, by opening the box, is exposing reifications in ordinary discourse a whirling dance with the same reifying tendency that it is supposed to expose?

The ‘box’ mode of thinking is a simplified use of psychological nouns and verbs as if they referred to ‘entities’ and ‘processes’ in a hidden realm. It is difficult to resist such simplified linguistic imagery.

I’m convinced that neuroscience is making important discoveries that will challenge our self-understanding. But I question the ‘box’ image of these developments: it oversimplifies the very modes of speech it purports to let us transcend.

Pär Segerdahl

Minding our language - the Ethics Blog
