Being humans when we are animals

March 25, 2015

Pär Segerdahl

Most people know that humans are animals, a primate species. Still, it is difficult to apply that knowledge directly to oneself: “I’m an animal”; “My parents are apes.”

– Can you say it without feeling embarrassed and slightly dizzy?

In a recent paper I explore this difficulty of “bringing home” an easily cited scientific fact:

Why does the scientific “fact” crumble when we apply it directly to ourselves?

I approach this difficulty philosophically. We cannot run ahead of ourselves, but I believe that’s what we attempt if we approach the difficulty theoretically. Say, by theorizing the contrast between humans and animals as an absolute presupposition of human language that science cannot displace.

Such a theory would be as easy to cite as the “fact” and wouldn’t touch our difficulty, the dizziness we feel.

Instead, I explore a personal experience. When I visited a laboratory for ape language research, an ape named Panbanisha told me to be QUIET and later called me a MONSTER. Being reprimanded by an ape made me dizzy about my humanness and about her animality.

How did the dizziness arise? After spending some time with the apes, the vertigo disappeared. How did it disappear?

That’s investigated in the paper by asking further questions, and by recollecting aspects of the meeting with Panbanisha to which those questions drew my attention. The paper offers a philosophical alternative to theory.

Trust your uncertainty and follow your questions!

Pär Segerdahl

Understanding enculturated apes - the ethics blog

Is it human fan club mentality?

February 26, 2014

PÄR SEGERDAHL Associate Professor of Philosophy and editor of The Ethics Blog

Philosophers often put humans on display as beings that have some unique quality, like rationality or conceptual powers. And conversely they present animals as beings that lack that quality.

What comparison underlies such a notion of “human positivity” and “animal negativity”?

One could suspect that the dualism arises through a human-centered comparison. As if intellectual football fans treated football as the sport with which all sports are to be compared, which would turn football into the sport that has the unique qualities of full-fledged sport, while all other sports are grouped together as hollow sports that lack what football has.

One could thus suspect that philosophy implicitly employs a human standard for its comparisons, as if philosophy was a human fan club, busy to secure power and exclusive membership rights.

I have my doubts, though, since football can be surveyed in a way that human life cannot be. It is hardly possible to place “us” at the center, since we don’t know who “we” are in the way that football fans know what football is.

Whatever is placed at the center, it will have to be an idealization; not actual human lives.

This implies that the philosophical dualism might be unjust not only to animals, but also to humans who breathe and talk and live independently of philosophical ideals and claims about their essence.

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

Beware of the vanity of “autonomy”

November 26, 2013

Important words easily become totalitarian. They begin by communicating some humanly important point, so we listen with attention. But then it is as if the words suffered from vanity and assumed that our attention was directed at them, not at what they were used to say.

Over time, the words become like grammatical codes of importance in human life.

A word that underwent such a process in bioethics is autonomy. It was first used to communicate an urgency, namely, that patients and research participants must be respected. They have a right to information about what is about to happen, and to decide whether they want to undergo some treatment or participate in some experiment.

Patients and research participants have this understandable right to autonomy.

But as the word was used to communicate this urgency, the importance seemed to move into the word itself. If patients have a right to “autonomy,” mustn’t autonomy be a valuable trait that can be supported and increased?

Is autonomy perhaps even the most valuable aspect of the human: our characteristic when we are in our most rational state as rational animals? Perhaps autonomy is human essence?

From having been a comprehensible right, autonomy assumed the appearance of a supremely important value to be constantly sought, like a holy grail.

The question arose: should we restrict people’s freedom to make their own choices, if those choices threaten their future autonomy?

We occasionally do disrespect people’s choices: for their sake. What I’m blogging about today is the tendency to replace “for their sake” with “for the sake of future autonomy.”

A new article in the Journal of Medicine and Philosophy deals with the question. You find the article by clicking the link below:

The article is written by Manne Sjöstrand, Stefan Eriksson, Niklas Juth and Gert Helgesson. They criticize the idea of a paternalistic policy to restrict people’s freedom in order to support their future autonomy.

The authors choose to argue from the opponent’s point of view. They thus start out from the interpretation of autonomy as a supremely important value, and then try to show that such a policy becomes self-defeating. Future autonomy would be threatened by such a policy, much like the dictatorship of the proletariat never liberated humans but chained them to a totalitarian order.

The article is well argued and should alert those enchanted by the word “autonomy” to the need to check their claims.

Even though the article does not disenchant the concept of autonomy through the philosophical humor that I described in a previous post, I was struck by the tragicomedy of claiming that the ultimate reason why healthcare staff should not comply with a patient’s request for help to die is that… assisted death would destroy the patient’s autonomy.

Pär Segerdahl

Minding our language - the Ethics Blog

Humorous and comical thinkers

November 5, 2013

In my experience of reading philosophy, it is striking that some thinkers crack really good jokes. They are humorous, and I laugh with them. Others are comical in their unyielding seriousness: it is difficult not to make jokes about them.

Humor is not exactly what you think of when you think of philosophy. Hardly anyone reads philosophy to get a good laugh, and neither do I. But when philosophizing, joking surprisingly often lies just around the corner.

Those unexpected jokes often pinpoint the really sensitive issues.

Philosophy approaches you with such extreme demands. Demands for absolute certainty; demands for complete universality; demands for vantage points so primordial that they don’t even belong to life, but “precede” all tying of shoelaces and other trivialities that people are busy doing without reflecting.

The need to joke arises under the pressure of these demands.

The contrast between the absolute demands and the life that you nonetheless live becomes comical. You can then either persist in making the demands even more rigorously, becoming a comical thinker, or you can become a humorous thinker who cracks jokes under the pressure of the demands – to return you to life.

In this spirit, Derrida made the following joke of the absolutely certain human vantage point that Descartes thought he found in his cogito ergo sum:

  • “I breathe therefore I am,” as such, does not produce any certainty. By contrast, “I think that I am breathing” is always certain and indubitable, even if I am mistaken. And therefore I can deduce “therefore I am” from “I think that I am breathing.”

“Even if I am mistaken”: even if I am dead. Derrida’s joke opens up Cartesian certainty to doubt. Absolute certainty about my human essence that is compatible with my no longer being alive: how can it be “what I am”!?

Wittgenstein said that he could imagine a serious and good philosophical work that consisted entirely of jokes. I could imagine such a work beginning with Derrida’s joke.

The need to think can be a need to joke!

Pär Segerdahl

The Ethics Blog - Thinking about thinking

Human and animal: where is the frontline?

January 7, 2013

Yesterday I read Lars Hertzberg’s thoughtful blog, Language is things we do. His latest post drew my attention to a militant humanist, Raymond Tallis (who resembles another militant humanist, Roger Scruton).

Tallis published Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity. He summarizes his book in this presentation on YouTube.

Tallis gesticulates violently. As if he were a Knight of the Human Kingdom, he defends humanity against an invasion of foreign neuroscientific and biological terms. Such bio-barbarian discourses reduce us to the same level of organic life as that of the brutes, living far away from civilization, in the rainforest and on the savannah.

Tallis promises to restore our former glory. Courageously, he states what every sane person must admit: WE are not like THEM.

Tallis is right that there is an intellectual invasion of biological discourses, led by generals like Richard Dawkins and Daniel Dennett. There is a need to defend oneself. – But how? Who would I be defending? Who am I, as a human? And where do I find the front line?

The notions of human life that Tallis defends are the ordinary ones belonging to everyday language. I have the impression, though, that Tallis fails to see the material practices involved in language use. Instead, he abstracts and reifies these notions as if they denoted a sublime and self-contained sphere: a uniquely human subjectivity; one that hopefully will be explained in the future, when the proper civilized terms of human intentionality are discovered. – We just have not found them yet.

Only a future genius of human subjectivity can reveal the truth about consciousness. Peace in the Human Kingdom will be restored, after the wars of modernity and bio-barbarism.

Here are two examples of how Tallis reifies the human world as a nature-transcendent sphere:

  • “We have stepped out of our organic body.”
  • “The human world transcends the organism Homo sapiens as it was delivered by Darwinian evolution hundreds of thousands of years ago.”

Once upon a time we were just animals. Then we discovered how to make a human world out of mere animal lives. – Is this a fairy tale?

Let us leave this fantasy and return to the forms of language use that Tallis abstracts and reifies. A striking fact immediately appears: Tallis is happy to use bio-barbarian discourse to describe animal lives, as if such terms literally applied to animals. He uncritically accepts that animal eating can be reduced to “exhibiting feeding behavior,” while humans are said to “dine together.”

The fact, then, is that Tallis does not see any need to pay closer attention to the lives of animals, or to defend animals against the bio-barbarism that he fights as a Knight of the Human Kingdom.

This may make you think that Tallis at least succeeds in restoring human glory; that he fails only on the animal front (being, after all, a humanist). But he also fails to pay attention to what is human. Since he abstracts and reifies the notions of human life, his dualistic vision combines bio-barbarian jargon about animals with phantasmagoric reifications of what is human.

The front line is in language. It arises in a failure to speak attentively.

When talking about animals is taken as seriously as talking about humans, we foster forms of sensitivity to hum-animal relations that are crushed in Raymond Tallis’ militant combination of bio-barbarian discourses for animals with fantasy-like elevations of a “uniquely human world.”

The human/animal dichotomy does not reflect how the human world transcends the animal organism. It reflects how humanism fails to speak responsibly.

Pär Segerdahl


Who, or what, becomes human?

July 31, 2012

Our long childhood and dependence on parental care seem to leave no doubt about it: we are not born as humans, we become human.

I want to highlight a particularly tempting metaphor for this process of “becoming human” – the metaphor of:

  • “Order out of chaos.”

According to this metaphor, human infancy is abundantly rich in possibilities; so abundant, in fact, that it is a formless chaos – a “blooming, buzzing confusion,” as William James characterized the infant’s experience of being alive.

To acquire recognizable human form, the child’s inner chaos must be tamed through the disciplining efforts of parents and society at large (the metaphor suggests). The child’s formlessly rich inner life must be narrowed down, hardened, made boring… until, finally, it becomes another obedient member of society.

Society does not acknowledge a real human subject until the norms of “being human” are confidently repeated: as if the child would easily slip back into its more original state of blooming, buzzing confusion the moment the reiteration of the social norms of humanity ceased.

The “order out of chaos” metaphor makes life and growth look like death and atrophy. To become human means aborting limitless possibilities and gradually turning into that tragic effect of social forces that we know as “the mature adult.”

Perhaps the intriguing topic of the “deconstruction of the subject” is nothing but rigorous faithfulness to the logic of this tempting metaphor? If becoming human is anything like what the metaphor presents it as, then “no one” becomes human, strictly speaking, for before the disciplined human is formed, there is nameless chaos and no recognizable human subject.

But how can the proto-human chaos – I mean, the child – be so responsive to its non-chaotic parents that it reduces its inner chaos and becomes… human? Isn’t that responsiveness already a form of life, a way of being human?

Dare we entertain the hypothesis that the newborn already is active, and that her metamorphoses throughout life require her own creative participation?

I believe we need another understanding of human becoming than that of “order out of chaos.” – Or is human life really a form of colonization of the child?

Pär Segerdahl


Neither innate nor learned

July 11, 2012

A child begins to speak; to say that it is hungry, or does not want to sleep. Where was the child’s language hiding before it began to speak? Did the child invent it?

Certainly not, experts on language development would insist. A child cannot create language. Language exists before the child starts to speak. All that is happening during language development is that language is being transported to the child.

The big question is: transported from where? There seem to be only two alternatives:

  1. Language is innate. It is prepared in our neural structures. When the child hears its parents speak, these structures are stimulated and soon start supporting the child’s own speech.
  2. Language is learned. It exists in society. Children have social learning skills; through these skills, language is transported from the social environment to the young pupil, soon supporting the child’s own speech.

These are the alternatives, then. Language is either inside or outside the newborn. Language development is either a process of “externalization” or a process of “internalization” of language. There can be no third alternative.

I have written about the ape Kanzi, who was raised by a human mother. I’ve written about him both on The Ethics Blog and in the book, Kanzi’s Primal Language. This bonobo and his half-sister Panbanisha developed language in a manner that does not clearly correspond to any of these two alternatives.

Since it is hardly credible that human language is innate in apes, ape language researchers typically try to teach apes language. These attempts fail.

Kanzi’s human mother, Sue Savage-Rumbaugh, avoided teaching Kanzi. Instead, she simply spoke to him, as parents do, in a shared Pan/Homo culture. As a result of this humanlike cultural rearing, he developed language as nativists believe only human children do: spontaneously, without the parent having to play the social role of a teacher.

The humble purpose of this blog post is to introduce the idea that we have to think more carefully about human changeability than we have done so far. We tend to think that human changes either lie dormant in our nature or are taught to us by society.

Kanzi entices us to think differently.

Spontaneous language development in a nonhuman suggests that being reared in culture is more than simply a matter of internalizing social norms. Being reared in culture means participating in the culture: a more creative and masterful role than that of a mere pupil.

I believe we are caught in an adult/child dichotomy. The creative role of the child becomes invisible because the dichotomy categorically portrays her as a novice, as a pupil, as a learner… as a vacuous not-yet-adult-human.

Perhaps, if we manage to liberate ourselves from this dichotomy, we can see the possibility that language – together with much else in human life – is neither innate nor learned.

Pär Segerdahl
