A research blog from the Centre for Research Ethics & Bioethics (CRB)

Author: Pär Segerdahl

Who, or what, becomes human?

Our long childhood and dependence on parental care seem to leave no doubt about it: we are not born as humans, we become human.

I want to highlight a particularly tempting metaphor for this process of “becoming human” – the metaphor of:

  • “Order out of chaos.”

According to this metaphor, human infancy is abundantly rich in possibilities; so abundant, in fact, that it is a formless chaos – a “blooming, buzzing confusion,” as William James characterized the infant’s experience of being alive.

To acquire recognizable human form, the child’s inner chaos must be tamed through the disciplining efforts of parents and society at large (the metaphor suggests). The child’s formlessly rich inner life must be narrowed down, hardened, made boring… until, finally, it becomes another obedient member of society.

Society does not acknowledge a real human subject until the norms of “being human” are confidently repeated: as if the child would easily slip back into its more original state of blooming, buzzing confusion the moment the reiteration of the social norms of humanity terminates.

The “order out of chaos” metaphor makes life and growth look like death and atrophy. To become human means aborting limitless possibilities and gradually turning into that tragic effect of social forces that we know as “the mature adult.”

Perhaps the intriguing topic of the “deconstruction of the subject” is nothing but rigorous faithfulness to the logic of this tempting metaphor? If becoming human is anything like what the metaphor presents it as, then “no one” becomes human, strictly speaking, for before the disciplined human is formed, there is nameless chaos and no recognizable human subject.

But how can the proto-human chaos – I mean, the child – be so responsive to its non-chaotic parents that it reduces its inner chaos and becomes… human? Isn’t that responsiveness already a form of life, a way of being human?

Dare we entertain the hypothesis that the newborn already is active, and that her metamorphoses throughout life require her own creative participation?

I believe we need another understanding of human becoming than that of “order out of chaos.” – Or is human life really a form of colonization of the child?

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

What do donors need to know about future research?

I’m reading a Scientific American Guest Blog, on the ethics of future-use DNA sampling. Donating DNA to research is described as a more lasting donation than donating organs or embryos: DNA is information and information can last longer.

That donating DNA is such a lasting donation seems to imply that the future uses to which the DNA can be put are more open. Who knows what information future researchers might be able to obtain from DNA donated today?

The author of the guest blog, Ricki Lewis, asks how consent can be obtained for DNA sampling intended for future genetic research.

She rejects the view that researchers must know in advance where the research might lead and inform donors about it, and that if research takes unforeseen directions years or decades after the donation, researchers must contact donors again for renewed consent.

This view is rejected because knowing where research might lead “is not how science works.” And renewed consent would be “confusing, disturbing, and likely expensive.” – I agree.

Ricki Lewis’s own solution is the following:

  • “…informed consent documents should state that the sample might be used in the future to get information unknown today. Participants or patients can agree, or not sign.”

Both solutions seem to operate on a level that strikes me as less relevant to DNA donors.

People who donate DNA to science probably want to contribute to research that can improve prevention, diagnosis and treatment of various diseases. That is the level at which they are concerned about the future use of their DNA: the level of the practical significance of the research.

The exact scientific path that future research takes is less relevant to donors, I believe, as long as the research has the kind of practical significance that motivates their donation. And to ask for consent to do science as science is done – without knowing in advance where it might lead – could be confusing.

I also wonder: could a consent form that emphasizes the open and unpredictable nature of scientific research be misused on the practical level that probably concerns donors more?

Pär Segerdahl

Approaching future issues - the Ethics Blog

Neither innate nor learned

A child begins to speak; to say that it is hungry, or does not want to sleep. Where was the child’s language hiding before it began to speak? Did the child invent it?

Certainly not, experts on language development would insist. A child cannot create language. Language exists before the child starts to speak. All that is happening during language development is that language is being transported to the child.

The big question is: transported from where? There seem to be only two alternatives:

  1. Language is innate. It is prepared in our neural structures. When the child hears its parents speak, these structures are stimulated and soon start supporting the child’s own speech.
  2. Language is learned. It exists in society. Children have social learning skills; through these skills, language is transported from the social environment to the young pupil, soon supporting the child’s own speech.

These are the alternatives, then. Language is either inside or outside the newborn. Language development is either a process of “externalization” or a process of “internalization” of language. There can be no third alternative.

I have written about the ape Kanzi, who was raised by a human mother. I’ve written about him both on The Ethics Blog and in the book, Kanzi’s Primal Language. This bonobo and his half-sister Panbanisha developed language in a manner that does not clearly correspond to either of these two alternatives.

Since it is hardly credible that human language is innate in apes, ape language researchers typically try to teach apes language. These attempts fail.

Kanzi’s human mother, Sue Savage-Rumbaugh, avoided teaching Kanzi. Instead, she simply spoke to him, as parents do, in a shared Pan/Homo culture. As a result of this humanlike cultural rearing, he developed language as nativists believe only human children do: spontaneously, without the parent having to play the social role of a teacher.

The humble purpose of this blog post is to introduce the idea that we have to think more carefully about human changeability than we have done so far. We tend to think that human changes are either lying dormant in our nature or are being taught to us by society.

Kanzi entices us to think differently.

Spontaneous language development in a nonhuman suggests that being reared in culture is more than simply a matter of internalizing social norms. Being reared in culture means participating in the culture: a more creative and masterful role than that of a mere pupil.

I believe we are caught in an adult/child dichotomy. The creative role of the child becomes invisible because the dichotomy categorically portrays her as a novice, as a pupil, as a learner… as a vacuous not-yet-adult-human.

Perhaps, if we manage to liberate ourselves from this dichotomy, we can see the possibility that language – together with much else in human life – is neither innate nor learned.

Pär Segerdahl

Understanding enculturated apes - the Ethics Blog

“The Route” is taking shape

Our plans for the interactive part of the conference program for HandsOn: Biobanks, in Uppsala 20-21 September 2012, are taking shape. This part of the program is called “the Route.”

During coffee and lunch breaks, participants can walk through an interactive exhibition illustrating the process of informed consent, data and sample sharing, and new legislation.

Within the Route, participants can also meet law scholars, ethicists, biobank researchers and journalists. They can listen to and participate in conversations on a broad range of issues, such as the role of trust in biobank research, handling of incidental findings, patents, and regulatory processes.

Finally, the LifeGene debate will be discussed with representatives from LifeGene, EpiHealth, the Swedish Data Inspection Board, and the Central Ethical Review Board.

Curious? Do you want to partake in the Route?

Registration is open until September 11.

Pär Segerdahl

Absolute limits of a modern world?

A certain form of ethical thinking would like to draw absolute limits to human activity. The limits are often said to be natural: nature is replacing God as ultimate moral authority.

Nature is what we believe we still can believe in, when we no longer believe in God.

God thus moves into the human embryo. As its nature, as its potential to develop into a complete human being, he continues to lay down new holy commandments.

The irony is that this attempt to formulate nature’s commandments relies on the same forms of human activity that one wants to delimit. Without human embryo research, no one would know of the existence of the embryo: no one could speculate about its “moral status” and derive moral commandments from it.

This dependence on modern research activities threatens the attempt to discover absolute moral authority in nature. Modern research has disassociated itself from the old speculative ambition to stabilize scientific knowledge as a system. Our present notion of “the embryo” will be outdated tomorrow.

Anyone attempting to speculate about the nature of the embryo – inevitably relying on the existence of embryo research – will have to acknowledge the possibility that these speculations are already obsolete.

The changeability of the modern world thus haunts and destabilizes the tendency to find absolute moral authority in nature.

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

Handling mistaken trust when doctors recruit patients as research participants

Patients seem more willing to participate in biobank research than the general public. A possible explanation is the doctor-patient relationship. Patients’ trust in health care professionals might help doctors to recruit them as research participants, perhaps making the task too easy.

That trust in doctors can induce a willingness to participate in research seems threatening to the notion of well-informed autonomous decision making. Can sentiments of trust be allowed to play such a prominent role in these processes?

Rather than dismissing trust as a naïve and irrational sentiment, a new article distinguishes between adequate and mistaken trust, and argues that being trusted implies a duty to compensate for mistaken trust.

The article in Bioethics is written by Linus Johnsson at CRB, together with Gert Helgesson, Mats G. Hansson and Stefan Eriksson.

The article discusses three forms of mistaken trust:

  1. Misplaced trust: Trusted doctors may lack relevant knowledge of biobank research (for example, about the protection of privacy).
  2. Irrational trust: Patients may be mistaken about why they trust the doctor (the doctor may actually be a form of father or mother figure for the patient).
  3. Inappropriate trust: Patients may inappropriately expect doctors always to play the role of therapists and fail to see that doctors sometimes play the role of research representatives who ask patients to contribute to the common good.

The idea in the paper, if I understand it, is that instead of dismissing trust because it might easily be mistaken in these ways, we need to acknowledge that being trusted implies a duty to handle the potentiality of mistaken trust.

Trust is not a one-sided sentiment: it creates responsibilities in the person who is trusted. If doctors take these responsibilities seriously, the relationship of trust immediately begins to look… well, more trustworthy and rational.

How can mistaken forms of trust be compensated for?

Misplaced trust in doctors can be compensated for by developing the relevant expertise (or by dispelling the illusion that one has it). Irrational trust can be compensated for by supporting the patient’s reasoning and moral agency. Inappropriate trust can be compensated for by nurturing a culture with normative expectations that doctors play more than one role; a culture where patients can expect to be asked by the doctor if they want to contribute to the common good.

If patients’ trust is seen in conjunction with these corresponding moral responsibilities of doctors, the relationship of trust can be understood as supporting the patients’ own decision making rather than undermining it.

That, at least, is how I understood this subtle philosophical treatment of trust and its role when patients are recruited by doctors as participants in biobank research.

Pär Segerdahl

We recommend readings - the Ethics Blog

Interview with Kathinka Evers

One of my colleagues here at CRB, Kathinka Evers, recently returned from Barcelona, where she participated in the lecture series, The Origins of the Human Mind:

PS: Why did you participate in this series?

KE: I was invited by the Centre for Contemporary Culture to present the rise of neuroethics and my views on informed materialism.

PS: Why were you invited to talk on these issues?

KE: My last book was recently translated into Spanish (Quando la materia se despierta), and it has attracted interest amongst philosophers and neuroscientists in the Spanish speaking world. In that book, I extend a materialist theory of mind, called “informed materialism,” to neuroethical perspectives, discussing, for example, free will, self-conceptions and personal responsibility.

PS: In a previous blog post I commented upon Roger Scruton’s critical attitude to neuroscientific analyses of subjects that traditionally belong to the social and human sciences. What’s your opinion on his criticism?

KE: Contemporary neuroscience can enrich numerous areas of social science. But the reverse is also true. The brain is largely the result of socio-cultural influences. Understanding the brain also involves understanding its embodiment in a social context. The social and neurobiological perspectives dynamically interact in our development of a deeper understanding of the human mind, of consciousness, and of human identity.

PS: Do you mean that the criticism presupposes a one-sided view of the development of neuroscience?

KE: I suspect that the criticism is not well-informed, scientifically, since it fails to take this neuro-cultural symbiosis into account. But it is not uncommon for philosophers to take a rather defensive position against neuroscientific attempts to enter philosophical domains.

PS: Was this tension noticeable at the meeting in Barcelona?

KE: Not really. Rather, the debate focused on how interdisciplinary collaborations have at last achieved what the theoretical isolationism of the twentieth century – when philosophy of mind was purely a priori and empirical brain science refused to study consciousness – failed to achieve: the human brain is finally beginning to understand itself and its own mind.

Kathinka Evers has developed a course in neuroethics and is currently drafting a new book (in English) on brain and mind.

Pär Segerdahl

We transgress disciplinary borders - the Ethics Blog

I want to contribute to research, not subscribe to genetic information

What do researchers owe participants in biobank research?

One answer is that researchers should share relevant incidental findings about participants with these helpful individuals. Returning such information could support a sense of partnership and acknowledge participants’ extremely valuable contribution to research.

I’m doubtful about this answer, however. I’m inclined to think that return of information might estrange participants from the research to which they want to contribute.

Certainly, if researchers discover a tumor but don’t identify and contact the participant, that would be problematic. But incidental findings in biobank research typically concern difficult-to-interpret genetic risk factors. Should these elusive figures be communicated to participants?

Samples may moreover be reused many times in different biobank projects. A relevant incidental finding about me may not be made until a decade after I gave the sample. By then I may have forgotten that I gave it.

Do I want to be seen as a biobank partner that long after I gave the sample? Do I want my contribution to research to be acknowledged years afterwards in the form of percentages concerning increased disease risks? Wasn’t the attention and the health information that I received when I gave the sample, when I actually MADE my contribution, sufficient?

Personally, I’m willing to contribute to research by giving blood samples, answering questions, and undergoing health examinations. But if that means also getting a lifelong subscription to genetic information about me, I’m beginning to hesitate.

That’s not what I wanted, when I wanted to contribute to research.

Realizing that my blood sample resulted in a lifelong subscription to genetic information would estrange me from what I thought I was doing. Can’t one simply contribute to research?

But other participants might want the information. Should biobank research then offer them subscription services?

Pär Segerdahl

We like challenging questions - the Ethics Blog

Do I have a self?

Viewing neuroscience as a box opener is tempting. The box conceals the human mind; opening the box reveals it.

According to this image, neuroscience uncovers reality. It lays bare the truth about our taken for granted notions of mind: about our concepts of ‘self,’ ‘will,’ ‘belief,’ ‘intention’… Neuroscience reveals the underlying facts about us humans.

How exciting…, and how terrifying! What will they find in the box? And what will they not find? Will they find my ‘self’ there – the entity that is me and that writes these words?

What if they don’t find my ‘self’ in the box! What if my ‘self’ turns out to be an illusion! Can they engineer one for me instead? My life would be so desolate without ‘me.’

But neuroscientists are clever. They control what’s in the box. They surely will be able to enhance my brain and create the ‘self’ that didn’t exist in the first place.

Ideas like these are discussed in a mind-boggling interview.

What strikes me about the neurophilosophical discussion is that it does NOT question the notion of the self. The notion is discussed as if it were self-evident to all of us, as some sort of ‘entity.’ The notion is supposed to be present in ordinary (culturally shaped) self-understanding. What is lacking is the evidence for the notion of ‘the self.’

You’ve guessed where the evidence is hiding: it’s in the box!

Neuroscientists opening the box threaten to disclose that the brain is naked. It might not be garmented in a ‘self’ or in a ‘free will.’ That these ‘entities’ exist in the box was perhaps just an illicit reification of modes of speech present in everyday discourse.

But what is ‘reification’?

Is it not precisely the image of ‘the box’ concealing the realities of mind?

If the tempting ‘box’ image supplies the model of reification – the very form of reification – isn’t the notion that neuroscience, by opening the box, is exposing reifications in ordinary discourse a whirling dance with the same reifying tendency that it is supposed to expose?

The ‘box’ mode of thinking is a simplified use of psychological nouns and verbs as if they referred to ‘entities’ and ‘processes’ in a hidden realm. It is difficult to resist such simplified linguistic imagery.

I’m convinced that neuroscience is making important discoveries that will challenge our self-understanding. But I question the ‘box’ image of these developments as an oversimplification of the very modes of speech it makes it seem we can transcend.

Pär Segerdahl

Minding our language - the Ethics Blog

Can neuroscience modernize human self-understanding?

Tearing down old buildings and erecting new ones on the basis of modern science and technology – we are constantly doing it in our cities. But can similar ambitions to get rid of the old, to modernize, be realized even more thoroughly, with regard to us and the human condition?

Can we tear down “traditional” human self-understanding – the language we use when we reflect on life in literature, in philosophy, and in the humanities – and replace it by new neuroscientific terms?

Earlier this spring, the philosopher Roger Scruton published an essay in the Spectator where he eloquently attacks claims that neuroscience can and should replace the humanities by a set of brave new “neuro”-disciplines, like neuroethics, neuroaesthetics, and neuromusicology.

Not only will these purported new “sciences” fail to create the understanding that traditional ethics, aesthetics, and musicology helped us towards (for example, of Bach’s music). They will even fail to achieve the scientific explanations that would justify the brave new “neuro”-prefix.

In order for there to be explanations at all, there must first of all be questions. What characterizes the purported “neuro”-sciences, however, is their lack of questions, Scruton remarks.

“Neuro-explanation” typically is no more than translation into neuro-jargon. The aim is neither understanding nor explanation, but the ideological one of replacing the traditional by the new, at any cost.

The result of these extreme modernization ambitions running amok in human self-understanding, Scruton claims, and I agree with him, is nonsense: neurononsense.

Yet, something worries me in Scruton’s essay. He almost seems to purify human self-understanding, or the human condition, as if it were a higher sphere that should not be affected by changing times, at least not if they are modern.

I agree that neuroscience cannot explain the human condition. I agree that it cannot replace human self-understanding. But it can change the human condition and challenge our self-understanding. It already does.

Science and technology cannot be abstracted from the human condition. We are continually becoming “modernized” by, for example, neuroscientific developments. These changing conditions are real, and not merely nonsense or jargon. They occur everywhere, not merely among intellectuals or academics. And they reach all the way to our language.

Neuroscience certainly cannot replace the humanities. But it can challenge the humanities to reflect on changed human conditions.

When attempts in the human sciences to understand modern human conditions focus on neuroscience, the prefix “neuro-” could denote a more responsible form of intellectual work than the one Scruton rightly criticizes. It could denote work that feels the challenge of neuroscientific developments and takes it seriously.

Here at CRB, Kathinka Evers works to develop such a responsible form of neuroethics: one that does not translate ethics into neuro-jargon, but sees neuroscientific findings about the brain as a philosophical challenge to understand and clarify, very often in opposition to the temptation of jargon.

Pär Segerdahl

Approaching future issues - the Ethics Blog
