A blog from the Centre for Research Ethics & Bioethics (CRB)


Norm fever

 

Pär Segerdahl

How does one become a Platonist, a person who believes in a world of pure ideas? This blog post tries to give an answer.

If I were to use one word to sum up the character of everything that agitates people, it would be: normativity.

As soon as we are engaged by someone’s hairstyle, by a political program, or by how some researchers treated their research participants, we perform some form of normative activity.

Think of all the things we say daily, or hear others say:

  • – It looks better if you comb it like this
  • – What a beautiful coat
  • – Do you still buy and listen to CDs?
  • – That’s not a proper way of treating people
  • – To deny women abortion violates human rights

All these normative attitudes about the tiniest and the greatest matters! Then add to this normative murmuring the more ambitious attempts to speak authoritatively about these engaging issues: attempts by hair stylists, by orators, by politicians, by ethicists, by the Pope, by sect leaders, and by activist organizations to make themselves heard above the murmuring.

A person who was troubled precisely by the latter attempts to speak more authoritatively about the issues that engage people was Socrates. He asked: Are these wise guys truly wise or just cheeky types who learned to speak with an authoritative voice?

Socrates wandered around in Athens, approaching the cockerels and examining their claims to know what is right and proper, genuine and true. These examinations often ended in acknowledgement of lack of knowledge: neither the cockerel nor Socrates himself actually knew.

Socrates’ examinations look like a series of failures. No one knows what he claims to know. None of us even knows what knowledge is!

For Socrates, however, failure is success. Each failed examination converted another mortal and helped his soul discover a more ideal orientation towards pure normativity: the eternal standards of all that is. No mortal has normative authority; only the norms themselves have it. You must search for them, rather than follow orators or sect leaders who just want to make themselves heard. You must orient yourself towards normativity as such, and strive towards perfection.

Socrates was feverishly attracted to this dream of pure normativity. He called his dream “love of wisdom”: philosophy. But for the dream to be more than a feverish dream, the dream must be real and reality must be a dream. Another aspect of Socrates’ art of conversation was, therefore, a series of myths, parables and stories, which suggested a more real world beyond this one: a realm of eternal, pure norms, the ultimate standards of all things.

One such story is about a slave boy who, although he was illiterate, could be made to “see” a truth in geometry. How was this possible? Of course, because the slave boy’s immortal soul beheld the norms of geometry before he was born among us mortals! Reminiscence of more original normative authority, truer than any mortal’s loud-voiced pretentiousness, made it possible for the slave to “see.”

Something similar occurs, Socrates implied, each time we see, for example, a beautiful building or a brave soldier. Something more primordially real than the building or the soldier – pure norms of beauty, courage, buildings, soldiers – shines through and enables us to see what we naively take for granted as reality. Primordial reality – a realm of pure norms – illuminates all things and enables us to see the beautiful building or the brave soldier (if they resemble their standards).

If normativity sums up the character of everything that engages us, it is perhaps not surprising to find that it easily makes us dream feverishly about a realm of ultimate normative authorities, called “pure ideas.”

Pär Segerdahl

We like challenging questions - the ethics blog

The Ethics Blog is now available as a book!

During the autumn, Josepine Fernow and I selected texts from the Ethics Blog and compiled them into a book. Last week we had the book release!

When blog posts end up on paper, in a book, they can be read like aphorisms: slower than when surfing the net.

I hope that the PDF version of the book will also support slow reading.

We also compiled a Swedish book – here are links to both books:

You are welcome to download and read them – Merry Christmas!

Pär Segerdahl

(Note: If you read the PDF books in your web browser, fonts and formatting are sometimes affected. If this happens, please download the files to your hard drive.)

We think about bioethics : www.ethicsblog.crb.uu.se

Conversations with seemingly unconscious patients

Research and technology change us: they change the way we live, speak and think. One area of research that will change us in the future is brain research. Here are some remarkable discoveries about seemingly unconscious patients; discoveries that we still don’t know how to make intelligible or relate to.

A young woman survived a car accident but suffered such serious injuries that she was judged to be in a vegetative state, without consciousness. When sentences were spoken to her and her neural responses were measured with fMRI, however, it was discovered that her brain responded in the same way as the brains of conscious control subjects. Was she conscious, although she appeared to be in a coma?

To get more clarity, the research team asked the woman to perform two different mental tasks. The first task was to imagine that she was playing tennis; the second, that she was visiting her house. Once again, the measured brain activation was equivalent to that of the conscious control subjects.

She is not the only case. Similar responses have been measured in other patients who, according to international guidelines, were unconscious. Some have learned to respond appropriately to yes/no questions, such as, “Is your mother’s name Yolande?” They respond by mentally performing different tasks – say, imagining that they squeeze their right hand for “yes” and move all their toes for “no.” Their neural responses are then measured.

There is already technology that connects brain and computer. People learn to use these “neuro-prosthetics” without using their muscles. This raises the question of whether, in the future, one may be able to communicate with some patients who today would be diagnosed as unconscious.

Should one then begin to ask these patients for informed consent to different treatments?

Here at the CRB, researchers are working on such neuroethical issues within a large European research effort: the Human Brain Project. Within this project, Kathinka Evers leads the work on the ethical and societal implications of brain research, and Michele Farisco writes his (second) thesis in the project, supervised by Kathinka.

Michele Farisco’s thesis deals with disorders of consciousness. I just read an exciting book chapter that Michele authored with Kathinka and Steven Laureys (one of the neuroscientists in the field).

They present developments in the field and discuss the possibility of informed consent from some seemingly unconscious patients. They point out that informed consent has meaning only if there is a relationship between doctor/researcher and patient, which requires communication. This condition may be met if the technology evolves and people learn to use it.

But it is still unclear, they argue, whether all requirements for informed consent are satisfied. In order to give informed consent, patients must understand what they agree to. This is usually checked by asking patients to describe in their own words what the doctor/researcher communicated. This cannot be done through yes/no communication via neuroimaging. Furthermore, the patient must understand that the information applies to him or her at a certain time, and it is unclear whether these patients, who are detached from the course of everyday life and have suffered serious brain injury, have that understanding. Finally, the patient must be emotionally able to evaluate different alternatives. This condition, too, is unclear.

It may seem early to discuss ethical issues related to discoveries that we don’t even know how to make intelligible. I think on the contrary that it can pave the way for emerging intelligibility. A personal reflection explains what I mean.

It is tempting to think that neuroscience must first determine whether the patients above are unconscious or not, by answering “the big question” how consciousness arises and becomes disturbed or inhibited in the brain. Only then can we understand these remarkable discoveries, and only then can practical applications and ethical implications be developed.

My guess is that practical technological applications, and human responses to their use, are rather the venues for the intelligibility that is required for further scientific development. A brain does not give consent, but perhaps a seemingly unconscious patient with a neuro-prosthesis does. How future technology-supported communication with such patients takes shape – how it works in practice and changes what we can meaningfully do, say and think – will guide future research. It is on this science-and-technology supported playing field that we might be able to ask and determine what we thought neuroscience had to determine beforehand, and on its own, by answering a “big question.”

After all, isn’t it on this playing field that we now begin to ask if some seemingly unconscious patients are conscious?

Ethics does not always run behind research, developing its “implications.” Perhaps neuro-ethics and neuroscience walk hand in hand. Perhaps neuroscience needs neuro-ethics.

Pär Segerdahl

In dialogue with patients

Philosophers and their predecessors

Philosophy is often seen as a tradition. Each significant philosopher studied his significant predecessors, found them faulty in various respects, and set out to correct them. Aristotle corrected Plato, Descartes corrected the scholastics, and Heidegger corrected the whole history of thought since the pre-Socratics.

Philosophy appears as a long backward movement into the future, driven by close reading of predecessors. Such an image is understandable in a time when philosophy is being eaten up by the study of it. We are like archaeologists of thought, trying to reconstruct philosophy through the traces it left behind in our bookshelves. We thus imagine that philosophers were above all readers of philosophical texts: super-scholars with amazing skills of close reading, enabling them to identify the weak points of their predecessors’ work.

The paradox of this view of philosophy is that the textual residues we study don’t look like scholarly texts. Perhaps that is because philosophers weren’t moving backwards into the future, meticulously studying earlier texts, but were above all sensitive to the times in which they lived and tried to face the future well. That is how they “read” their predecessors.

Pär Segerdahl

Approaching future issues - the Ethics Blog

Philosophical scholarship defuses new ways of thinking

What is called “philosophy” is pursued today mostly by scholars who study philosophical authors and texts, and who learn to produce certain types of comments on philosophical ideas and concepts. Such study is interesting and important, and can be compared with literary scholarship.

A problem that I highlighted in my latest post, however, is a tendency to conflate the scholarly study of philosophy with… philosophy. Today, I want to exemplify three consequences of such conflation.

A first consequence is a taboo against thinking for oneself, as the canonized philosophers of the past – who legitimize the study of philosophy – once did. Only “great” philosophers, whose names can be found as entries in philosophical encyclopedias, can be excused for having philosophized for themselves, and without proper citation methods.

A related consequence is a sense of scandalous arrogance when philosophy is carried out as it was once upon a time. Since only great and already canonized philosophers are allowed to think for themselves, people who tenaciously pursue thinking will appear to be pretentious bastards who believe they already have a name in the history of philosophy and, worst of all, claim to be studied!

A third and more serious consequence is that philosophical scholarship, if it is conflated with philosophy, defuses new ways of thinking. New ways of thinking are primarily meant to be adopted, or to provoke people to think better. Learned commentaries on new and original ways of thinking are interesting and important. However, if the scholarly comments are developed as if they brought out the real philosophical content of the proposed thoughts, the new thinking will be reduced to just another occasion to develop the study of philosophy… as if one did the thoughts a favor by bringing them safely home to “the history of philosophy.”

You don’t have to be great, canonized or dead to think. That is fortunate, since thinking is needed right now, in the midst of life. It just appears essentially homeless, or at home wherever it is.

Pär Segerdahl

We transgress disciplinary borders - the Ethics Blog

Doing philosophy and studying philosophy

Literary scholars don’t claim that they became novelists or poets because they studied such authors and such literature. They know what they became: they became scholars who learned to produce certain kinds of commentaries on literary works. The distinction between the works they produce and the works they study is salient and most often impossible to overlook.

Things are not that obvious in what is called philosophy. Typically, people who study philosophical authors, texts, ideas and concepts and who receive a doctor’s degree in philosophy will call themselves philosophers.

They could also, and in most cases more appropriately, be called philosophical scholars who learned to produce certain types of commentaries on philosophical authors, texts, ideas and concepts.

Has philosophy been eaten up by the study of it? There seems to be a belief that philosophy exists in the scholarly format of commentaries on philosophical authors, texts, ideas and concepts, and that philosophy thrives and develops through the development of such comments.

A problem with this learned “façade conception” of philosophy is that the great canonized thinkers, who legitimize the study of philosophy, never produced that kind of scholarly literature when they philosophized.

An even greater problem is that if you try to philosophize and think for yourself today, as they did, the work you produce will be deemed “unphilosophical” or “lacking philosophically interesting thoughts,” because it isn’t written in the scholarly format of a commentary on canonized authors, texts, ideas and concepts.

Thank God literature isn’t that easily eaten up by the study of it. No one would call a novel “unliterary” because it wasn’t produced according to the canons of literary scholarship.

Pär Segerdahl

The Ethics Blog - Thinking about thinking

Intellectualizing morality

There is a prevalent idea that moral considerations presuppose ethical principles. But how does this idea arise? It makes our ways of talking about difficult issues resemble consultations between states at the negotiating table, invoking various solemn declarations:

  1. “Under the principle of happy consequences, you should lie here; otherwise, many will be hurt.”
  2. “According to the principle of always telling the truth, it is right to tell; even if many will be hurt.”

This is not how we talk, but maybe:

  1. “I don’t like to lie, but I have to, otherwise many will be hurt.”
  2. “It’s terrible that many will suffer, but the truth must be told.”

As we actually talk, without invoking principles, we ourselves take responsibility for how we decide to act. Lying, or telling the truth, is a burden even when we see it as the right thing to do. But if moral considerations presuppose ethical principles of moral rightness, there is no responsibility to carry. We refer to the principles!

The principles give us the right to lie, or to speak the truth, and we can live on with a self-righteous smile. But how does the idea of moral principles arise?

My answer: Through the need to intellectually control how we debate and reach conclusions about important societal issues in the public sphere.

Just as Indian grammarians made rules for the correct pronunciation of holy words, ethicists make principles of correct moral reasoning. According to the first principle, the first person reasons correctly and the second incorrectly. According to the second principle, it’s the other way round.

But no one would even dream of formulating these principles, if we didn’t already talk as we do about important matters. The principles are second-rate goods, reconstructions, scaffolding on life, which subsequently can have a certain social and intellectual control function.

Moral principles may thus play a significant role in the public sphere, like grammatical rules codifying how to write and speak correctly. We agree on the principles that should govern public negotiations; the kind of concerns that should be considered in good arguments.

The problem is that the principles are ingeniously expounded as the essence and foundation of morality more generally, in treatises that are revered as intellectual bibles.

The truth must be told: it’s the other way round. The principles are auxiliary constructions that codify how we already bear the words and the responsibility. Don’t let the principles’ function in the public sphere distort this fact.

Pär Segerdahl

We challenge habits of thought : the Ethics Blog

From tree of knowledge to knowledge landscapes

Science was long revered as a free and self-critical pursuit of knowledge. There were hopes of useful applications, of course, but as fruits that fall from the tree of knowledge once in a while.

In a thought-provoking article in the Croatian Medical Journal, Anna Lydia Svalastog describes how the traditional reverence for science, and the devout hope of fruits from above, in practice disappeared with World War II.

Researchers who saw science as their calling instead found themselves called up for service in multidisciplinary projects, solving scientific problems for politically defined aims. Most famous is the Manhattan Project, intended to develop an atomic bomb and thereby alter relative military strengths.

This way of organizing research has since then become the rule, in a post-war condition in which research initiatives aim towards altering relative economic strengths between nations. Rather than revering science, we value research in project format. We value research not only in economic terms, I want to add, but also in terms of welfare, health and environment.

From the late 1970s, political and economic interest in research shifted from physics to the life sciences and biotechnology. Svalastog mentions examples such as genetically modified organisms (GMO), energy wood and biological solutions to pollution. It is difficult to say where research ends and applications begin, when interest in applications governs the organization of research from the outset.

The main question in the article is how to understand and handle the new condition. How can we understand the life sciences if society no longer piously anticipates applications as fruits from above but calculates with them from the beginning?

Svalastog uses a new concept for these calculated fruits: bio-objects. They are what we talk about when we talk about biotechnology: energy wood, GMO, cultivated stem cells, vaccines, genetic tests and therapies, and so on.

The point is that science doesn’t define these objects on its own, as if they still belonged to science. Bio-objects are what they become in the intersection of science, politics and society. After all, vaccines don’t exist and aren’t talked about exclusively in laboratories; a parent can take their child to the hospital for a vaccination that was politically decided to be tax-financed.

Instead of a tree of knowledge stretching its fruit-bearing branches above society, we thus have flatter knowledge landscapes in which a variety of actors contribute to what is described in the article as bio-objectification. The parent who takes the child to the hospital is such an actor, as is the nurse who gives the vaccine, the politicians who debate the vaccination program, the journalists who write about it… and the research team that develop the vaccine.

Why do we need a concept of bio-objectification, which doesn’t reverently let the life sciences define these objects in their own terms? I believe, to understand and handle our post-war condition.

Svalastog mentions as an example the controversies about GMO. Resistance to GMO is often described as scientifically ignorant, as if people lived in the shadow of the tree of knowledge and the solution had to consist in dropping more science information from the tree. Yet no links with levels of knowledge have been established, Svalastog writes; the links are rather with worldviews, ethics and religion.

What we need to handle our condition, Svalastog maintains, is thus the kind of research that was neglected in the post-war way of organizing research. We need humanistic research about knowledge landscapes, rather than instinctive reactions from a bygone era when the tree of knowledge was still revered.

I presume that this humanistic research too will be performed in project format, where humanistic scholars are called up for research service, studying the contexts within which bio-objects are understood, handled and valued.

Undeniably, however, some interesting thoughts about our condition here hover more freely above the knowledge landscapes.

Pär Segerdahl

Part of international collaborations - the Ethics Blog

Perplexed by autonomy

During the seminar this week we discussed an elusive concept. The concept is supposed to be about ordinary people, but it is a concept that ordinary people hardly use about themselves.

We talked about autonomy, which is a central notion in ethical discussions about how patients and research participants should be treated. They should be respected as persons who make their own decisions on the basis of information about the options.

The significance of this is evident if we consider cases where patients are given risky treatments without being informed about the risks and given the opportunity to refuse treatment. Or cases where vulnerable persons are forced to function as research subjects in various experiments.

“Respect people’s autonomy!” is comprehensible as a slogan against such tendencies.

What makes the concept more elusive, however, is that it is increasingly used speculatively, as the name of a valuable quality in the human being, perhaps even the highest and most distinctive one. Instead of functioning as a comprehensible slogan in a real context, the notion becomes utopian, demanding that individuals constantly be informed about options and make decisions.

Autonomy becomes the superior imperative in all areas of human life.

Such a totalized imperative displaces the meaning of these areas of life, for example, the meaning of health care. Health care no longer seems to be primarily about treating people’s diseases (while respecting their autonomy), but about developing diagnoses and treatments that give individual patients more information and options to choose between.

The concept of autonomy becomes a utopian construct that does not face the real-life challenges that made the slogan comprehensible, because it aims towards an ideal solution without need of the slogan. Every human practice is turned into an arena that first of all supports autonomy.

The speculative concept is somewhat self-contradictory, however, since it is imposed paternalistically as the essence of the human, while the humans concerned hardly use it to understand themselves. Well, then we’ll have to turn them into such individuals!

No, I confess I’m quite perplexed by the utopian-intellectual refinement of otherwise comprehensible slogans like autonomy, justice and freedom. These efforts appear to be the noblest efforts of humankind, and yet they run amok with our words and displace the meaning of every human practice.

Pär Segerdahl

We like real-life ethics : www.ethicsblog.crb.uu.se

The claim of thoughtfulness

Philosophy has an aura of pretentiousness. Philosophers seem to make such ambitious claims about the essence of everything. About morality, about mind, about language… usually without doing any empirical research!

From where do they derive their claims? Are they sitting in armchairs just awaiting “truths” from out of nowhere? Is philosophy a form of “easy science” where one goes straight to the results without doing the research work needed to substantiate them?

But there are certain peculiarities in the claims, and in the style of address, which disappear in this image of philosophy as “easy science.”

Researchers can write didactically, informing the reader about the results of their research. Science writers thus typically adopt a “von oben” (from above) attitude that is perfectly legitimate, since research sheds light on states of things that are unknown to the reader.

If philosophers adopt the didactic style of a science writer, the result is comical: “My thought processes during the past ten years demonstrate that morality basically is…,” and then follows information about the essence of morality!

The image of philosophers as pretentious “armchair researchers” expresses this comedy.

Philosophers certainly make claims, but these are claims that can be questioned by a reader who thinks further than the author. Philosophical writers expect readers to make objections that may be as powerful as the writer’s own. This “detail” is overlooked in the image of the pretentious armchair philosopher.

Philosophical writers expose their entire thought processes, so that the reader can think with – and against – the author. Philosophical writers address readers as peers in thinking. Together, we think for ourselves.

Perhaps the claim of scientific expertise has become so dominant that we no longer hear the claim of thoughtfulness.

Pär Segerdahl

The Ethics Blog - Thinking about thinking
