A blog from the Centre for Research Ethics & Bioethics (CRB)

Category: Musings

All you need is law? The ethics of legal scholarship (By Moa Kindström Dahlin)

Working as a lawyer in a multidisciplinary centre for research ethics and bioethics, as I do, often raises questions about the relationship between law and ethics. What kind of ethical competence do academic lawyers need, and what kind of ethical challenges do we face? I will try to address some aspects of these challenges.

First, I must confess: I am a believer, a believer in law.

That does not mean that I automatically like all regulations; it is just that I cannot see a better way to run the world than through a common system of legal norms. Believing in law means that I accept living in a different universe. I know that non-lawyers cannot always see my universe, but I see it clearly, and I believe in it. You will have to trust me – and all other lawyers – that through training and education we come to see this parallel universe and believe in it.

I do not always like what I see, but I do accept that it exists.

I think that understanding a lawyer’s understanding of what law is, is a necessary precondition for going deeper into the understanding of what I here refer to as the ethics of legal scholarship. So, what is law? This question has a thousand answers, stemming from different philosophical theories, but I choose to put it like this:

Law is an idea as well as a practical reality and a practice.

As a reality, law is the sum of all regulation, local (e.g. Swedish), regional (e.g. European) and international: the statutes, the preparatory works, court decisions, the academic legal literature, the general legal principles and other legal sources in which we find the answers to questions such as “Is it legal to do this or that?” or “Might I be responsible for this specific act in some way?”

The practice of law has to do with the application of general legal knowledge (whatever that means) to a specific case, and this application always involves interpretation. This means that law is contextual. The result of its application differs depending on situation, time and place.

Law as an idea is the illusion that there are legal answers out there somewhere, ready to be discovered, described and applied. Lawyers live in a universe where this illusion is accepted, although every lawyer knows that this is oversimplified. There is rarely an obvious answer to a posed question, and there are often several different interpretations that can be made.

The legal universe is a universe of planets and orbits: different legal sources and jurisdictions, different legal traditions and ideas on how to interpret legal sources. There are numerous legal theories, perspectives and ideologies: legal positivism, critical legal studies, law and economics and therapeutic jurisprudence to name a few. The way we, the lawyers, choose to look at the law – the lens of our telescope if you like – affects how we perceive and decipher what we see.

Law is sometimes described as codified ethics. The legal system of a state often provides structures and systems for new technologies and medical progress. Therefore, law plays an important role when analyzing a state’s political system or the organization of its welfare system.

Law, in short, is a significant piece of a puzzle in the world as we know it.

This means that the idea of law as something concrete, something we can discover and describe, creates our perception of reality. Yet, we must be aware that the law itself is intangible, and that answers to legal questions may differ depending on who (which lawyer) is making the analysis and which lens is being used.

Sometimes the answer is clear and precise, but many times the answer is vague and blurry. When the law seems unclear, it is up to us, the lawyers, to heal it.

We cannot accept “legal gaps”.

The very idea that law is a system that provides all the answers means that we must try to find all the answers within the system. If we cannot find them, we have to create them. Therefore, proposing and creating legal answers is one of the tasks for legal scholars. With this task comes great power. If a lawyer states that something is a description of what law is, such a description may be used as an argument for a political development in that direction.

Therefore, descriptions of what law is and what is legal within a field – especially if the regulation in the field is new or under revision – must always be nuanced and clearly motivated. If a statement about what law is emanates from certain starting points, this should be clarified in order to make the reasoning transparent.

This is what I would like to call the ethics of legal scholarship.

It is worth repeating: Research within legal scholarship always requires thoughtfulness. We, the scholars, have to be careful and ethically aware all the time. Our answers and statements as to legal answers are always normative, never just descriptive. Every time an academic lawyer answers a question, the answer or statement might itself become a legal source and be referred to as a part of the law.

Law is constantly reconstructing itself and is, to some extent, self-sufficient. But if law is law, does that mean that all you need is law?

Moa Kindström Dahlin

Thinking about law - the Ethics Blog

 

Where is consciousness?

 

Would it be possible to use brain imaging techniques to detect consciousness and then “read” directly in people’s brains what they want or do not want? Could one, for example, ask a severely brain injured patient for consent to some treatment, and then obtain an answer through a brain scan?

Together with the philosopher Kathinka Evers and the neuroscientist Steven Laureys, I recently investigated ethical and clinical issues arising from this prospective “cerebral communication.”

Our brains are so astonishingly complex! The challenge is how to handle this complexity. To do that we need to develop our conceptual apparatus and create what we would like to call a “fundamental” neuroethics. Sound research needs solid theory, and in line with this I would like to comment upon the conceptual underpinnings of this ongoing endeavor of developing a “fundamental” neuroethics.

The assumption that visualizing activity in a certain brain area can mean reading the conscious intention of the scanned subject presupposes that consciousness can be identified with particular brain areas. While both science and philosophy widely accept that consciousness is a feature of the brain, recent developments in neuroscience problematize relating consciousness to specific areas of the brain.

Tricky logical puzzles arise here. The so-called “mereological fallacy” is the error of attributing properties of the whole (the living human person) to its parts (the brain). In our case, a special kind of mereological fallacy risks being committed: attributing features of the whole (the brain) to its parts (the areas visualized as more active in the scan). Consciousness is a feature of the whole brain: the mere fact that a particular area is more active than others does not imply conscious activity.

Reverse inference is another logical pitfall: the fact that a study reveals that a particular cerebral area, say A, is more active during a specific task, say T, does not imply that activity in A always indicates T, nor that T always presupposes activity in A.

In short, we should avoid the conceptual temptation to view consciousness according to the so-called “homunculus theory”: as an entity located in a particular cerebral area. This is unlikely: consciousness does not reside in specific brain regions, but is rather equivalent to the activity of the brain as a whole.

But where is consciousness? To put it roughly, it is nowhere and everywhere in the brain. Consciousness is a feature of the brain and the brain is more than the sum of its parts: it is an open system, where external factors can influence its structure and function, which in turn affects our consciousness. Brain and consciousness are continually changing in deep relationships with the external environment.

We address these issues in more detail in a forthcoming book that I and Kathinka Evers are editing, involving leading researchers both in neuroscience and in philosophy.

Michele Farisco

We want solid foundations - the Ethics Blog

 

Risks are not just about numbers

On a daily basis, we are informed about risks. The media tell us that obesity increases the risk of cardiovascular disease and that we can reduce the risk of Alzheimer’s disease by eating the right kind of food. We are confronted with the potential dangers of nanoparticles and mobile phone radiation. Not to mention the never-ending discussion about nuclear power. Some news items are more serious than others, but we cannot avoid risk information as such.

In addition to the media, government agencies inform the public about risks. The Swedish National Food Agency encourages people to eat fish because of its potential to reduce the risk of cardiovascular disease. But we should also reduce the intake of wild-caught salmon and herring due to the health risks associated with mercury.

Contemporary society has been described as a risk society, simply put a society preoccupied with risks. We invest a great amount of our common resources in risk management and communication. Sometimes, it appears as though risks are communicated in a hasty way. As soon as a risk is “found,” it is assumed that the responsibility of the government and possibly of the media is to inform the public. It is not acknowledged that what is considered to be a risk is not always straightforward and value neutral.

Whereas experts define risk as probability multiplied by negative outcome and weigh risks against benefits, several studies have shown that lay people conceive of risk in a much more complex and nuanced way. According to the expert notion, a risk is acceptable if the benefits outweigh the risks. However, individual lay people include other factors: for example, whether risks and benefits are distributed fairly, and whether the risk is taken voluntarily or one person exposes another to it. Studies in risk perception have also been acknowledged by ethicists and philosophers, who point out that factors like voluntariness and fairness do not merely de facto influence people’s notion of the acceptability of risk; we should also care about these values. They are normatively important.
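The expert notion described above is essentially an expected-value calculation. As a minimal sketch (the numbers below are invented for illustration, not taken from any study), it might look like this in Python:

```python
# The expert notion of risk: probability multiplied by the severity of
# the negative outcome, i.e. an expected value. All figures are invented.

def expected_risk(probability: float, severity: float) -> float:
    """Risk as probability times the severity of the negative outcome."""
    return probability * severity

# Hypothetical comparison: a frequent moderate harm versus a rare severe one.
frequent_moderate = expected_risk(probability=0.01, severity=10.0)   # ~0.1
rare_severe = expected_risk(probability=0.0001, severity=500.0)      # ~0.05

# On the expert view only these products matter, so the rare severe harm
# counts as the smaller risk -- even though lay people may weigh it
# differently if it is involuntary or unfairly distributed.
print(frequent_moderate > rare_severe)
```

The point of the sketch is precisely what it leaves out: voluntariness and fair distribution never enter the calculation, which is why the lay perspective cannot be reduced to it.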

These insights about risk as ethically relevant and value-laden should influence how risks are managed and communicated in society. One example is how government agencies view risks and benefits in the case of infant feeding. Breastfeeding is seen as the best option in terms of risks and benefits. Mothers are expected to breastfeed their babies if they want to do what is best for their baby. Scientific and value-laden statements are mixed in the information provided to new parents. Women, adoptive parents and male gay couples who cannot breastfeed are negatively affected by this message. Women who cannot breastfeed oftentimes feel guilty and think that they are harming their babies for life by not breastfeeding. This should be taken into account when communicating with parents-to-be and new parents. The relationship between government agencies and ordinary people is inevitably unequal and the former should take responsibility for the effects of risk communication.

Another example is the H1N1 virus and the Pandemrix vaccination program in Sweden in 2009. The government informed the public that the vaccine was completely safe and that everybody should get vaccinated for solidarity reasons. After some time, it turned out that a group of teenagers had their lives more or less destroyed when they developed narcolepsy, probably as a result of the vaccination. This deserves a thorough ethical discussion.

There are currently signs that some people now hesitate to have their children take part in the regular vaccination program, which includes protection against, for example, measles. The regular vaccines are much better tested and substantially safer than Pandemrix. Opposition to vaccines is generally based on misconceptions and deficient studies. However, instead of mocking “ignorant” people and assuming that the perceptions and attitudes of anxious parents can be changed simply by providing more numbers, the anxiety and the lack of trust should be taken seriously. A respectful dialogue is needed.

This does not mean that the opponents of vaccination have information as accurate as that of the proponents, who have science on their side. However, risks are not just about numbers!


Jessica Nihlén Fahlquist

We care about communication - the Ethics Blog

 

Teaching the child the concept of what it learns

It is natural to think that a child who learns to speak learns precisely that: simply to speak. And a child who learns addition learns precisely that: simply to add.

But is speaking “simply speaking” and is adding “simply adding”?

Imagine a very young child who is beginning to say what its parents recognize as the word “mummy.” The parents probably respond, enthusiastically:

  • “Oh, you said mummy!”

By repeating “mummy,” the parents naturally assume they are helping the child to say mummy again. Their focus is entirely on “mummy”: on the child’s saying of “mummy” and on their repetitions of “mummy.” By encouraging the child to say “mummy” again (and more clearly), they are teaching the child to speak.

No doubt their encouraging repetitions do support the child. However, the parents didn’t merely repeat “mummy.” They also said:

  • “Oh, you said mummy!”

From the very first words a child utters, parents respond not only by repeating what the child says, but also by speaking about speaking:

  • “Say daddy!”
  • “Do you want to speak to mummy?”
  • “You said you wanted cookies”
  • “Which cookie did you mean?”
  • “What’s your name?”
  • “What you said isn’t true”
  • “Don’t use that word!”

Parents’ natural attitude is that they teach the child simply to speak. But, more spontaneously, without intending or noticing it, they initiate the child into the notions of speaking. One might call this neglected dimension of teaching: the reflexive dimension. When we teach the child X, we simultaneously initiate it into the reflexive notions of X: into the concept of what it learns.

This should apply also to learning addition, and I assume to just about anything we learn. There is an easily neglected initiation into a reflexive dimension of what is learned.

I suppose one reason why the reflexive dimension is neglected is that it is what enables talk about what the child learns. Reflexivity draws our attention away from itself, and thus from the fact that the child does not simply learn what it learns, but also the concept of what it learns.

If you want to read more about reflexive practices – how they are acquired, how they practically contribute to making language what it is (said to be), and how they tend to be intellectually sublimated as theories of language – I recommend the writings of Talbot J. Taylor.

One of Taylor’s articles demonstrates especially clearly the early onset of reflexive language use in children.

Taylor’s work on reflexivity challenges me to reconsider the nature of philosophy. For philosophy seems to be concerned with the kind of notions we fail to notice we initiate children into, when we say, “You said mummy!”

Philosophy is “about” what we don’t notice we learn as children.

Pär Segerdahl

Minding our language - the Ethics Blog

Experts on assignment in the real world

Experts on assignment in the real world cease, in part, to be experts. Just consider computer experts who create a computer system for the tax authorities, or for a bank, or for a hospital.

In order for these systems to work on location, the computer experts need to be open to what they don’t know much about: the unique activities at the tax authorities, or at the bank, or at the hospital.

Computer experts who aren’t open to their non-expertise on the site where they are on assignment perform worse as experts and will deliver inferior systems.

Experts can therefore not in practice be only experts. If one exaggerates one’s role as an expert, one fails on assignment in the real world.

This should apply also to other forms of expertise. My guess is that legal experts almost always find themselves in this precarious situation of being experts in a reality that constantly forces them to open themselves to their non-expertise. In fact, law appears to be an occupation that to an unusually high degree develops this openness systematically. I admire how legal experts constantly learn about the multifarious realities they act in.

Jurists should be role models for computer experts and economics experts, because they methodically manage their inevitable non-expertise.

This post indicates the spirit in which I (as legal non-expert) took the liberty to question the Swedish Data Inspection Board’s shutting down of LifeGene and more recent rejection of a proposed law on research databases.

Can one be an expert “purely” on data protection? I think not. My impression is that the Data Inspection Board, on assignment in the world of research, didn’t open itself to its non-expertise in this reality. They acted (it seems to me) as if data protection issues could be handled as a separate field of expertise, without carefully considering the unique conditions of contemporary research and the kinds of aims that research initiatives can have.

Perhaps the temptation resides in the Board’s role as a public body: as an authority with a seemingly “pure” mission.

Pär Segerdahl

We like broad perspectives : www.ethicsblog.crb.uu.se

Neuroethics: new wine in old bottles?

Neuroscience increasingly raises philosophical, ethical, legal and social problems concerning old issues that are now approached in a new way: consciousness, freedom, responsibility and the self are today investigated in a new light by so-called neuroethics.

Neuroethics was conceived as a field deserving its own name at the beginning of the 21st century. Yet philosophy is much older, and its interest in “neuroethical” issues can be traced back to its very origins.

What is “neuroethics”? Is it a new way of doing ethics, or a new way of thinking about ethics? Is it a sub-field of bioethics? Or does it stand as a discipline in its own right? Is it only a practical discipline, or also a conceptual one?

I would like to suggest that neuroethics – besides the classical division between “ethics of neuroscience” and “neuroscience of ethics” – above all needs to be developed as a conceptual assessment of what neuroscience is telling us about our nature. The progress in neuroscientific investigation has been impressive in recent years, and in the light of huge investments in this field (e.g., the European Human Brain Project and the American BRAIN Initiative) we can bet that striking new discoveries will be made in the coming decades.

For millennia, philosophers were interested in exploring what was generally referred to as human nature, and particularly the mind as one of its essential dimensions. Two avenues have been traditionally developed within the general conception of mind: a non-materialistic and idealistic approach (the mind is made of a special stuff non-reducible to the brain); and a materialistic approach (the mind is no more than a product or a property of the brain).

Both interpretations assume a dualistic theoretical framework: the human being is constituted by two completely different dimensions, which have completely different properties and either no interrelations or, at most, a relationship mediated solely by an external element. Such a dualistic approach to human identity is increasingly criticized by contemporary neuroscience, which is revealing the plastic and dynamic nature of the human brain and consequently of the human mind.

This example illustrates in my view that neuroethics above all is a philosophical discipline with a peculiar interdisciplinary status: it can be a privileged field where philosophy and science collaborate in order to conceptually cross the wall which has been built between them.

Michele Farisco

We transgress disciplinary borders - the Ethics Blog

Is it ethical that uninformed members of the public decide just how bad your disability is? (By Terry Flynn)

Last time I raised the possibility of changing child health policy because teenagers are more likely than adults to view mental health impairments as the worst type of disability. Today, however, I consider adults only, in order to address a more fundamental issue.

Imagine you had an uncommon, but not rare, incurable disease that caused you to suffer from both “moderate” pain and “moderate” depression and neither had responded to existing treatments. If policy makers decided there were only enough funds to try to help one of these symptoms, who decides which should get priority?

In most of Europe, perhaps surprisingly, it would not be you the patient, nor even the wider patient group suffering from this condition. It is the general population. Why? The most often quoted reason will be familiar to those who know the history of the USA: “no taxation without representation”. Tax-payers supposedly fund most health care and their views should decide where this money is most needed. If they consider pain to be worse than depression, then health services should prioritise treatment for pain.

Thus, many European countries have conducted nationally representative surveys to quantify their general public’s views on various health states. Unfortunately, Swedish population values were only published last year, almost two decades after the first European country published theirs. Although late, these Swedish population values raise a disturbing issue.

What if the general population is wrong?

Why might this be? Many people surveyed are, and always have been, basically healthy. How do they know whether depression is better or worse than pain? In fact, these people tend to say pain would be worse, whilst patients who have experienced both say the opposite.

The Swedish general population study was large and relatively well equipped to investigate how people in ill health value disability. And, indeed, they do value it differently than the average healthy Swedish person.

So is it ethical to disenfranchise patients in order that all citizens, informed or not, have a say?

Why not use the views of patients instead?

Well, actually, the stated policy in Sweden is that health values ideally should come from the individuals affected by the health intervention (the patients). So Sweden now has the information required to follow its own health policy aims. Perhaps it is time politicians were asked whether it is ethical to prioritise pain over mental health just because various general populations thought so.

As a final thought, I return to the question of what funds healthcare. You may be surprised to learn that the “general taxation” answer is wrong here too. But that strays beyond health care and ethics into the dark heart of economics, which I will therefore discuss elsewhere next week!

Terry Flynn

We like challenging questions - the ethics blog

Being humans when we are animals

Most people know that humans are animals, a primate species. Still, it is difficult to apply that knowledge directly to oneself: “I’m an animal”; “My parents are apes.”

– Can you say it without feeling embarrassed and slightly dizzy?

In a recent paper I explore this difficulty of “bringing home” an easily cited scientific fact.

Why does the scientific “fact” crumble when we apply it directly to ourselves?

I approach this difficulty philosophically. We cannot run ahead of ourselves, but I believe that’s what we attempt if we approach the difficulty theoretically. Say, by theorizing the contrast between humans and animals as an absolute presupposition of human language that science cannot displace.

Such a theory would be as easy to cite as the “fact” and wouldn’t touch our difficulty, the dizziness we feel.

Instead, I explore a personal experience. When I visited a laboratory for ape language research, an ape named Panbanisha told me to be QUIET and later called me a MONSTER. Being reprimanded by an ape made me dizzy about my humanness and about her animality.

How did the dizziness arise? After spending some time with the apes, the vertigo disappeared. How did it disappear?

That’s investigated in the paper by asking further questions, and by recollecting aspects of the meeting with Panbanisha to which those questions drew my attention. The paper offers a philosophical alternative to theory.

Trust your uncertainty and follow your questions!

Pär Segerdahl

Understanding enculturated apes - the ethics blog

Openness as a norm

Why should scientists save their code keys for as long as 20 years after they conducted their study, the Swedish Data Inspection Board apparently wonders. In its opinion on a proposed new Swedish law on research databases, the board states that this seems too long a period of time.

Yet, researchers judge that code keys need to be saved to connect old samples to new registry data. The discovery of a link between HPV infection and cervical cancer, for example, could not have been made with newly collected samples but presupposed access to identifiable samples collected in the 1960s. The cancer doesn’t develop until decades after infection.

New generations of researchers are beginning to perceive it as an ethical duty to make data usable for other scientists, today and in the future. Platforms for long-term data sharing are being built up not only in biobank research, but also in physics, in neuroscience, in linguistics, in archeology…

It started in physics, but has now reached the humanities and the social sciences where it is experienced as a paradigm shift.

A recent US report suggests that sharing data should become the norm.

Research is obviously changing shape. New opportunities to manage data mean that research is moving up an IT-gear. The change also means a norm shift. Data are no longer expected to be tied to specific projects and research groups. Data are expected to be openly available for a long time – Open Access.

The norm shift raises, of course, issues of privacy. But when we discuss those issues, public bodies can hardly judge for researchers what, in the current vibrant situation, is reasonable and unreasonable, important and unimportant.

Perhaps it is profoundly logical, in today’s circumstances, to give data a longer and more open life than in the previous way of organizing research. Perhaps such long-term transparency really means moving up a gear.

We need to be humbly open to that possibility and not repeat an old norm that research itself is leaving behind.

Pär Segerdahl

Approaching future issues - the Ethics Blog

The need of a bird’s-eye view

In the previous blog post I wrote about the tendency in today’s research to build common research platforms where data are stored and made open: available for future research, meta-analysis and critical scrutiny of published research.

The tendency is supported at EU level, by bodies responsible for research. Simultaneously, it is obstructed at EU level, by other bodies working with data protection.

The same hopeless conflict can be seen in Sweden, where the Swedish Data Inspection Board time and again stops such efforts or criticizes suggestions for how to regulate them. This month the Data Inspection Board criticized a proposed law on research databases.

It may seem as if the board just dryly listed a number of points where the proposal is inconsistent with other laws or allows unreasonable infringement of privacy. At the same time, the Data Inspection Board seems alien to the new way of organizing research. Why on earth should researchers want to save so much data so damn long?

How can we handle these conflicts between public bodies, each with its own limited mission and thus its own limited field of vision?

Pär Segerdahl

We want to be just - the Ethics Blog
