A blog from the Centre for Research Ethics & Bioethics (CRB)


Genetic exceptionalism and unforgivingness

What fuels the tendency to view genetic information as exceptionally private and sensitive? Is information about an individual’s genetic disposition for eye color more sensitive than the fact that he has blue eyes?

In Rethinking Informed Consent in Bioethics, Neil C. Manson and Onora O’Neill make heroic efforts against an avalanche of arguments for genetic exceptionalism. For each argument meant to reveal how uniquely private, how exceptionally sensitive, and how extraordinarily risky genetic information is, Manson and O’Neill find elucidating examples, analogies and comparisons that cool down tendencies to exaggerate genetic information as incomparably dangerous.

What fuels the exceptionalism that Manson and O’Neill fight? They suggest that it has to do with metaphors that tempt us to reify information; temptations that, for various reasons, are intensified when we think about DNA. Once again, their analysis is clarifying.

Another form of genetic exceptionalism strikes me, however; one that has less to do with information. I’m thinking of GMO exceptionalism. For thousands of years, humans improved plants and animals through breeding them. This traditional way of modifying organisms is not without environmental risks. When analogous risks appear with GMO, however, they tend to change meaning and become seen as extraordinary risks, revealing the ineradicable riskiness of genetic manipulation.

Why are we prepared to embrace traditionally modified organisms, TMO, when basically the same risks with GMO make us want to exterminate every genetically manipulated bastard?

Unforgivingness. I believe that this all-too familiar emotional response drives genetic exceptionalism, and many other forms of exceptionalism.

Consider the response of becoming unforgiving. Yesterday we laughed with our friend. Today we learn that he spread rumors about us. His familiar smile immediately acquires a different meaning. Yesterday it was shared joy. Today it is an ugly mask hiding an intrinsically untrustworthy individual who must be put in quarantine forever. Every trait of character turns into a defect of character. The whole person becomes an objection; an exception among humans.

Manson and O’Neill are right when they analyze a tendency to reify information in genetic exceptionalism. But I want to suggest that what fuels this tendency, what makes us more than willing to yield to the temptation, is an emotional state of mind that also produces many other forms of exceptionalism.

We need to acknowledge the emotional dimension of philosophical and ethical thinking. We don’t think well when we are unforgiving towards our subject matter. We think dogmatically and unjustly.

In their efforts to think well about genetic information, Manson and O’Neill can be understood as doing forgiveness work.

They calm us down and patiently show us that our friend, although he sometimes does wrong, is not that intrinsically bad character we want to see him as, when we are in our unfortunate unforgiving state of mind.

We are helped towards a state of mind where we can think more freely and justly about the risks and benefits of genetics.

Pär Segerdahl

We want to be just - the Ethics Blog

Ethics before the event

It is easy to be wise after the event. This easily accessible form of wisdom is also a painful accusation: you should have been wise before the event.

If you are extremely sensitive to the pain of these attacks, you might want to become someone who is always “wise before the event.” If you let your life be governed by such an ideal, you’ll become an ethical perfectionist.

Ethical perfectionism may seem like the most demanding form of ethical attitude. If it derives from oversensitivity to the pain of being wise after the event, however (a form of wisdom that is ridiculously easy), I’m more doubtful about the value of this attitude.

The ethical perfectionist runs the risk of avoiding life altogether, until even the slightest chance of moral complexity has been eliminated. “Postpone life; I’ve discovered another possible ethical problem!”

My reason for bringing up this subject is that research ethics seems to be in continual danger of succumbing to problematic forms of ethical perfectionism. Its dependence on past research scandals, and the demand to avoid such scandals in the future, makes it especially vulnerable to this strange ideal.

Don’t for a moment believe that I recommend living without reflection. But ethical problems must be confronted while we live and develop our activities: “as we go along.” We cannot postpone life until all ethical complexity has been eliminated.

The risk is that we fancy ethical problems without reality and postpone urgent research initiatives on the basis of derailed demands, while we fail to face the real ethical challenges.

Pär Segerdahl

We think about bioethics : www.ethicsblog.crb.uu.se

What is philosophy?

Someone asked me what philosophy is. I answered by trying to pinpoint the most frequently used word when one philosophizes.

What does a philosopher most often say? I believe he or she most often says, “But…”:

  • “But is that really true?”
  • “But shouldn’t then…?”
  • “But can’t one imagine that…?”
  • “But how can anyone know such a thing?”
  • Etc.

Always some unexpected obstacle! Just at the moment when your reasoning seems entirely spotless, an annoying “but…?” knocks you to the ground and you have to start all over again.

Confronted with our spontaneous reasoning, a philosopher’s head soon fills with objections. Perplexing questions lead into unknown territory. Maps must be drawn the need of which we never anticipated. A persistently repeated “but…?” reveals challenges for which we lack preparedness.

But the goal is not that of interminably objecting. Objecting and being perplexed are not intrinsic values.

Rather the contrary. The accumulation of objections is a precondition for there being a goal in philosophizing: that of putting an END to the annoying objections.

Philosophy is a fight with one’s own objections; the goal is to silence them.

But if that is so, what point can philosophy have? An activity that first raises annoying objections, and then tries to silence them: what’s that good for!?

Try to reason about what “consent to future research” means. Then you’ll probably notice that you soon start repeating “but…?” with regard to your own attempts to reason well. Your objections will annoy you and spur you to think even more clearly. You will draw maps the need of which you had not anticipated.

Even if we would prefer never to go astray, we do go astray. It belongs to being human. THEN we see the point of persistently asking “but…?”; THEN we see the purpose of crisscrossing confusing aspects of life until we survey them, haunted by objections from an unyielding form of sincerity.

When we finally manage to silence our irritating objections, philosophy has made itself as superfluous as a map would be when we cross our own street…

…until we go astray again.

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

Who, or what, becomes human?

Our long childhood and dependence on parental care seem to leave no doubt about it: we are not born as humans, we become human.

I want to highlight a particularly tempting metaphor for this process of “becoming human” – the metaphor of:

  • “Order out of chaos.”

According to this metaphor, human infancy is abundantly rich in possibilities; so abundant, in fact, that it is a formless chaos – a “blooming, buzzing confusion,” as William James characterized the infant’s experience of being alive.

To acquire recognizable human form, the child’s inner chaos must be tamed through the disciplining efforts of parents and society at large (the metaphor suggests). The child’s formlessly rich inner life must be narrowed down, hardened, made boring… until, finally, it becomes another obedient member of society.

Society does not acknowledge a real human subject until the norms of “being human” are confidently repeated: as if the child would easily slip back into its more original state of blooming, buzzing confusion the moment the reiteration of the social norms of humanity terminates.

The “order out of chaos” metaphor makes life and growth look like death and atrophy. To become human means aborting limitless possibilities and gradually turning into that tragic effect of social forces that we know as “the mature adult.”

Perhaps the intriguing topic of the “deconstruction of the subject” is nothing but rigorous faithfulness to the logic of this tempting metaphor? If becoming human is anything like what the metaphor presents it as, then “no one” becomes human, strictly speaking, for before the disciplined human is formed, there is nameless chaos and no recognizable human subject.

But how can the proto-human chaos – I mean, the child – be so responsive to its non-chaotic parents that it reduces its inner chaos and becomes… human? Isn’t that responsiveness already a form of life, a way of being human?

Dare we entertain the hypothesis that the newborn already is active, and that her metamorphoses throughout life require her own creative participation?

I believe we need another understanding of human becoming than that of “order out of chaos.” – Or is human life really a form of colonization of the child?

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

Neither innate nor learned

A child begins to speak; to say that it is hungry, or does not want to sleep. Where was the child’s language hiding before it began to speak? Did the child invent it?

Certainly not, experts on language development would insist. A child cannot create language. Language exists before the child starts to speak. All that is happening during language development is that language is being transported to the child.

The big question is: transported from where? There seem to be only two alternatives:

  1. Language is innate. It is prepared in our neural structures. When the child hears its parents speak, these structures are stimulated and soon start supporting the child’s own speech.
  2. Language is learned. It exists in society. Children have social learning skills; through these skills, language is transported from the social environment to the young pupil, soon supporting the child’s own speech.

These are the alternatives, then. Language is either inside or outside the newborn. Language development is either a process of “externalization” or a process of “internalization” of language. There can be no third alternative.

I have written about the ape Kanzi, who was raised by a human mother. I’ve written about him both on The Ethics Blog and in the book, Kanzi’s Primal Language. This bonobo and his half-sister Panbanisha developed language in a manner that does not clearly correspond to any of these two alternatives.

Since it is hardly credible that human language is innate in apes, ape language researchers typically try to teach apes language. These attempts fail.

Kanzi’s human mother, Sue Savage-Rumbaugh, avoided teaching Kanzi. Instead, she simply spoke to him, as parents do, in a shared Pan/Homo culture. As a result of this humanlike cultural rearing, he developed language as nativists believe only human children do: spontaneously, without the parent having to play the social role of a teacher.

The humble purpose of this blog post is to introduce the idea that we have to think more carefully about human changeability than we have done so far. We tend to think that human changes are either lying dormant in our nature or are being taught to us by society.

Kanzi entices us to think differently.

Spontaneous language development in a nonhuman suggests that being reared in culture is more than simply a matter of internalizing social norms. Being reared in culture means participating in the culture: a more creative and masterful role than that of a mere pupil.

I believe we are caught in an adult/child dichotomy. The creative role of the child becomes invisible because the dichotomy categorically portrays her as a novice, as a pupil, as a learner… as a vacuous not-yet-adult-human.

Perhaps, if we manage to liberate ourselves from this dichotomy, we can see the possibility that language – together with much else in human life – is neither innate nor learned.

Pär Segerdahl

Understanding enculturated apes - the ethics blog

Absolute limits of a modern world?

A certain form of ethical thinking would like to draw absolute limits to human activity. The limits are often said to be natural: nature is replacing God as ultimate moral authority.

Nature is what we believe we still can believe in, when we no longer believe in God.

God thus moves into the human embryo. As its nature, as its potential to develop into a complete human being, he continues to lay down new holy commandments.

The irony is that this attempt to formulate nature’s commandments relies on the same forms of human activity that one wants to delimit. Without human embryo research, no one would know of the existence of the embryo: no one could speculate about its “moral status” and derive moral commandments from it.

This dependence on modern research activities threatens the attempt to discover absolute moral authority in nature. Modern research has disassociated itself from the old speculative ambition to stabilize scientific knowledge as a system. Our present notion of “the embryo” will be outdated tomorrow.

Anyone attempting to speculate about the nature of the embryo – inevitably relying on the existence of embryo research – will have to acknowledge the possibility that these speculations already are obsolete.

The changeability of the modern world thus haunts and destabilizes the tendency to find absolute moral authority in nature.

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

I want to contribute to research, not subscribe to genetic information

What do researchers owe participants in biobank research?

One answer is that researchers should share relevant incidental findings about participants with these helpful individuals. Returning such information could support a sense of partnership and acknowledge participants’ extremely valuable contribution to research.

I’m doubtful about this answer, however. I’m inclined to think that return of information might estrange participants from the research to which they want to contribute.

Certainly, if researchers discover a tumor but don’t identify and contact the participant, that would be problematic. But incidental findings in biobank research typically concern difficult-to-interpret genetic risk factors. Should these elusive figures be communicated to participants?

Samples may moreover be reused many times in different biobank projects. A relevant incidental finding about me may not be made until a decade after I gave the sample. By then I may have forgotten that I gave it.

Do I want to be seen as a biobank partner that long after I gave the sample? Do I want my contribution to research to be acknowledged years afterwards in the form of percentages concerning increased disease risks? Weren’t the attention and the health information that I received when I gave the sample sufficient: when I actually MADE my contribution?

Personally, I’m willing to contribute to research by giving blood samples, answering questions, and undergoing health examinations. But if that means also getting a lifelong subscription to genetic information about me, I’m beginning to hesitate.

That’s not what I wanted, when I wanted to contribute to research.

Realizing that my blood sample rendered a lifelong subscription to genetic information would estrange me from what I thought I was doing. Can’t one simply contribute to research?

But other participants might want the information. Should biobank research then offer them subscription services?

Pär Segerdahl

We like challenging questions - the ethics blog

Can neuroscience modernize human self-understanding?

Tearing down old buildings and erecting new ones on the basis of modern science and technology – we are constantly doing it in our cities. But can similar ambitions to get rid of the old, to modernize, be realized even more thoroughly, with regard to us and the human condition?

Can we tear down “traditional” human self-understanding – the language we use when we reflect on life in literature, in philosophy, and in the humanities – and replace it by new neuroscientific terms?

Earlier this spring, the philosopher Roger Scruton published an essay in The Spectator where he eloquently attacks claims that neuroscience can and should replace the humanities by a set of brave new “neuro”-disciplines, like neuroethics, neuroaesthetics, and neuromusicology.

Not only will these purported new “sciences” fail to create the understanding that traditional ethics, aesthetics, and musicology helped us towards (for example, of Bach’s music). They will even fail to achieve the scientific explanations that would justify the brave new “neuro”-prefix.

In order for there to be explanations at all, there must first of all be questions. What characterizes the purported “neuro”-sciences, however, is their lack of questions, Scruton remarks.

“Neuro-explanation” typically is no more than translation into neuro-jargon. The aim is neither understanding nor explanation, but the ideological one of replacing the traditional by the new, at any cost.

The result of these extreme modernization ambitions running amok in human self-understanding, Scruton claims, and I agree with him, is nonsense: neurononsense.

Yet, something worries me in Scruton’s essay. He almost seems to purify human self-understanding, or the human condition, as if it were a higher sphere that should not be affected by changing times, at least not if they are modern.

I agree that neuroscience cannot explain the human condition. I agree that it cannot replace human self-understanding. But it can change the human condition and challenge our self-understanding. It already does.

Science and technology cannot be abstracted from the human condition. We are continually becoming “modernized” by, for example, neuroscientific developments. These changing conditions are real, and not merely nonsense or jargon. They occur everywhere, not merely among intellectuals or academics. And they reach all the way to our language.

Neuroscience certainly cannot replace the humanities. But it can challenge the humanities to reflect on changed human conditions.

When attempts in the human sciences to understand modern human conditions focus on neuroscience, the prefix “neuro-” could denote a more responsible form of intellectual work than the one Scruton rightly criticizes. It could denote work that feels the challenge of neuroscientific developments and takes it seriously.

Here at CRB, Kathinka Evers works to develop such a responsible form of neuroethics: one that does not translate ethics into neuro-jargon, but sees neuroscientific findings about the brain as a philosophical challenge to understand and clarify, very often in opposition to the temptation of jargon.

Pär Segerdahl

Approaching future issues - the Ethics Blog

Research with my data, but not about me

It is perplexing how the websites of large internet companies continuously adapt to me. It looks as if the entire business activity of Amazon revolved around the musical artists I listened to yesterday.

These companies evidently collect data about what I search out on their websites and automatically adapt to my computer, making the presentation of products as attractive as possible to me.

It is rather annoying to get one’s own internet history in the face like that.

The example illustrates a common property of personal data. When data about me are collected, the data sooner or later return to me: in the form of an adapted website; in the form of a demand to pay tax arrears; or in the form of more expensive insurance premiums.

No one would bother to collect my data if they did not intend to return to me on the basis of the data.

Me, me, me: my data are about me. Sooner or later they come back to me.

There is, however, one brilliant exception to my data’s stubborn tendency to return to me: research. When researchers collect my blood sample or ask questions about my health, they are not interested in my person. My data will not return to me in any form.

Researchers are interested in general patterns that can be discerned in data from thousands of people. If researchers should return to participants, it is to collect further data that (for example) can make the patterns of ageing appear.

Patterns, patterns, patterns: research is about patterns. It is not about any one of us who supplied the data.

I’m therefore inclined to see research registers as categorically distinct from the tax authorities’ data about my incomes. Researchers launch my data up into a depersonalized scientific space. Up there, my data hover weightlessly and my person cannot attract them back to me. They do research with my data. But it is not about me.

I don’t primarily have in mind the fact that researchers code my data so that the connection to me is obscured. I’m thinking of the elementary fact that they collect my data without any intention of returning to me on the basis of the data.

When the integrity of research participants is debated, it is important to keep this unique status of research registers in mind. The purpose of collecting scientific data about me is not at all about me. The purpose “scientific research” disentangles me from my own data.

Biobank research here encounters a difficulty.

Suppose that researchers discover in my blood sample a genetic disposition for a disease that can be prevented if measures are taken in advance. Should they then take down my data from their depersonalized orbit in scientific space, and inform me about the disposition?

It may seem obvious that they should inform me. But it would simultaneously be a departure from how science typically treats personal data without intention of returning to participants on the basis of the data.

How should biobank researchers handle discoveries about individual participants that may save their future health? This important and difficult question will be investigated in the dissertation work of our most recent doctoral student at CRB, Jennifer Viberg.

I’m certain that the Ethics Blog will return many times to Jennifer’s work on incidental findings in biobank research.

Pär Segerdahl

We like challenging questions - the ethics blog

Political ambitions threaten the intellectual integrity of bioethics

Is there a need to enhance the way bioethicists discuss enhancement?

Ashkan Atry defended his PhD thesis on doping in 2013.

Contemporary ethical debates on human enhancement sometimes resemble bitter political debates in a city council. Implicit or explicit political agendas are expressed as normative claims and are passed off as “moral” arguments because they serve “the right cause.”

Consider, for instance, James Watson who said that “we’ve got to go ahead and not worry whether we’re going to offend some fundamentalist from Tulsa, Oklahoma.”

Another example is James Hughes, who almost ridicules moral worries about enhancement by reducing them to some sort of semi-religious “irrational” technophobia.

Liberal proponents of enhancement stress the value of individual autonomy and the freedom to choose one’s lifestyle. In this perspective, any attempt to prohibit enhancement is considered to encroach upon political liberty, and hence to be unjust.

Opponents of enhancement, on the other hand, stressing values such as fairness and social justice, argue that without implementing regulations and proper measures, human enhancement will widen the already existing social divide and create a further gap between those who have the means to enhance themselves and those who don’t.

Thus, what drives both parties in the ethical debate on enhancement are more general political conceptions of what social justice is or ought to be.

Human enhancement admittedly raises many important political questions. Concerns about social justice will certainly continue to play a major part in debates on enhancement. Moreover, the political and the ethical spheres admittedly may, to some extent, overlap.

However, here I wish to raise the question whether political concerns fully exhaust what one may call genuine ethical reflection upon the phenomenon of human enhancement, and to what extent political agendas are to be allowed to determine the direction of ethical debates.

What is worrying is a situation where moral philosophical debates on enhancement reach some kind of deadlock position where bioethicists, acting as mouthpieces for rigid political perspectives, simply block their ears and shout at each other as loud as they can.

Arguably, what we may understand as genuine philosophical reflection also includes hearing the other and, more importantly, critically questioning rigid perspectives which limit the ethical horizon.

Indeed, the phenomenon of human enhancement provides a platform for doing so. Human enhancement will not only transform our lives but also necessitate a continuous re-formulation of key philosophical conceptions such as autonomy, freedom, and human nature.

In this regard, the dimension of unpredictability involved in new scientific and technological innovations challenges intellectual habits and requires development of new ways of doing ethics that would enable us to cope with these rapid transformations and perhaps even to foresee upcoming issues.

Reflecting on enhancement beyond the horizon of political ideologies would be a good starting point in this direction.

Ashkan Atry

We like critical thinking : www.ethicsblog.crb.uu.se
