Teaching the child the concept of what it learns

April 21, 2015

Pär Segerdahl

It is natural to think that a child who learns to speak learns precisely that: simply to speak. And a child who learns addition learns precisely that: simply to add.

But is speaking “simply speaking” and is adding “simply adding”?

Imagine a very young child who is beginning to say what its parents recognize as the word “mummy.” The parents probably respond, enthusiastically:

  • “Oh, you said mummy!”

By repeating “mummy,” the parents naturally assume they support the child to say mummy again. Their focus is entirely on “mummy”: on the child’s saying of “mummy” and on their repetitions of “mummy.” By encouraging the child to say “mummy” again (and more clearly), they are teaching the child to speak.

No doubt their encouraging repetitions do support the child. However, the parents didn’t merely repeat “mummy.” They also said:

  • “Oh, you said mummy!”

From the very first words a child utters, parents respond not only by repeating what the child says, but also by speaking about speaking:

  • “Say daddy!”
  • “Do you want to speak to mummy?”
  • “You said you wanted cookies”
  • “Which cookie did you mean?”
  • “What’s your name?”
  • “What you said isn’t true”
  • “Don’t use that word!”

Parents’ natural attitude is that they teach the child simply to speak. But, more spontaneously, without intending or noticing it, they initiate the child into the notions of speaking. One might call this neglected dimension of teaching: the reflexive dimension. When we teach the child X, we simultaneously initiate it into the reflexive notions of X: into the concept of what it learns.

This should apply also to learning addition, and I assume to just about anything we learn. There is an easily neglected initiation into a reflexive dimension of what is learned.

I suppose one reason why the reflexive dimension is neglected is that it is what enables talk about what the child learns. Reflexivity draws our attention away from itself, and thus from the fact that the child not only learns what it learns, but also the concept of what it learns.

If you want to read more about reflexive practices – how they are acquired, how they practically contribute to making language what it is (said to be), how they tend to be intellectually sublimated as theories of language – I recommend the writings of Talbot J. Taylor.

One article by Taylor that especially clearly demonstrates the early onset of reflexive language use in children is:

Taylor’s work on reflexivity challenges me to reconsider the nature of philosophy. For philosophy seems to be concerned with the kind of notions we fail to notice we initiate children into, when we say, “You said mummy!”

Philosophy is “about” what we don’t notice we learn as children.

Pär Segerdahl

Minding our language - the Ethics Blog


Experts on assignment in the real world

April 14, 2015

Pär Segerdahl

Experts on assignment in the real world cease in part to be experts. Just consider computer experts who create a computer system for the tax authorities, or for a bank, or for a hospital.

In order for these systems to work on location, the computer experts need to be open to what they don’t know much about: the unique activities at the tax authorities, or at the bank, or at the hospital.

Computer experts who aren’t open to their non-expertise on the site where they are on assignment perform worse as experts and will deliver inferior systems.

In practice, therefore, experts cannot be only experts. If one exaggerates one’s role as an expert, one fails on assignment in the real world.

This should apply also to other forms of expertise. My guess is that legal experts almost always find themselves in this precarious situation of being experts in a reality that constantly forces them to open themselves to their non-expertise. In fact, law appears to be an occupation that to an unusually high degree develops this openness systematically. I admire how legal experts constantly learn about the multifarious realities they act in.

Jurists should be a role model for computer experts and economic experts: because they methodically manage their inevitable non-expertise.

This post indicates the spirit in which I (as legal non-expert) took the liberty to question the Swedish Data Inspection Board’s shutting down of LifeGene and more recent rejection of a proposed law on research databases.

Can one be an expert “purely” on data protection? I think not. My impression is that the Data Inspection Board, on assignment in the world of research, didn’t open itself to its non-expertise in this reality. They acted (it seems to me) as if data protection issues could be handled as a separate field of expertise, without carefully considering the unique conditions of contemporary research and the kinds of aims that research initiatives can have.

Perhaps the temptation resides in the Board’s role as a public body: as an authority with a seemingly “pure” mission.

Pär Segerdahl

We like broad perspectives : www.ethicsblog.crb.uu.se


Neuroethics: new wine in old bottles?

April 7, 2015

Michele Farisco

Neuroscience increasingly raises philosophical, ethical, legal and social problems concerning old issues that are now approached in a new way: consciousness, freedom, responsibility and the self are today investigated in a new light by so-called neuroethics.

Neuroethics was conceived as a field deserving its own name at the beginning of the 21st century. Yet philosophy is much older, and its interest in “neuroethical” issues can be traced back to its very origins.

What is “neuroethics”? Is it a new way of doing ethics, or a new way of thinking about ethics? Is it a sub-field of bioethics, or does it stand as a discipline in its own right? Is it only a practical discipline, or also a conceptual one?

I would like to suggest that neuroethics – beyond the classical division between “ethics of neuroscience” and “neuroscience of ethics” – above all needs to be developed as a conceptual assessment of what neuroscience is telling us about our nature. Progress in neuroscientific investigation has been impressive in recent years, and in the light of huge investments in this field (e.g., the European Human Brain Project and the American BRAIN Initiative) we can bet that new striking discoveries will be made in the coming decades.

For millennia, philosophers were interested in exploring what was generally referred to as human nature, and particularly the mind as one of its essential dimensions. Two avenues have been traditionally developed within the general conception of mind: a non-materialistic and idealistic approach (the mind is made of a special stuff non-reducible to the brain); and a materialistic approach (the mind is no more than a product or a property of the brain).

Both interpretations assume a dualistic theoretical framework: the human being is constituted by two completely different dimensions, which have completely different properties and no interrelations between them, or, at most, a relationship mediated solely by an external element. Such a dualistic approach to human identity is increasingly criticized by contemporary neuroscience, which is showing the plastic and dynamic nature of the human brain and consequently of the human mind.

This example illustrates in my view that neuroethics above all is a philosophical discipline with a peculiar interdisciplinary status: it can be a privileged field where philosophy and science collaborate in order to conceptually cross the wall which has been built between them.

Michele Farisco

We transgress disciplinary borders - the Ethics Blog


Is it ethical that uninformed members of the public decide just how bad your disability is?

March 31, 2015

Terry Flynn

Last time I raised the possibility of changing child health policy because teenagers are more likely than adults to view mental health impairments as being the worst type of disability. However, today I consider adults only in order to address a more fundamental issue.

Imagine you had an uncommon, but not rare, incurable disease that caused you to suffer from both “moderate” pain and “moderate” depression and neither had responded to existing treatments. If policy makers decided there were only enough funds to try to help one of these symptoms, who decides which should get priority?

In most of Europe, perhaps surprisingly, it would not be you the patient, nor even the wider patient group suffering from this condition. It is the general population. Why? The most often quoted reason will be familiar to those who know the history of the USA: “no taxation without representation”. Tax-payers supposedly fund most health care and their views should decide where this money is most needed. If they consider pain to be worse than depression, then health services should prioritise treatment for pain.

Thus, many European countries have conducted nationally representative surveys to quantify their general public’s views on various health states. Unfortunately, Swedish population values were only published last year, almost two decades after the first European country published theirs. Although late, these Swedish population values raise a disturbing issue.

What if the general population is wrong?

Why might this be? Many people surveyed are, and always have been, basically healthy. How do they know whether depression is better or worse than pain? In fact, these people tend to say pain would be worse, whilst patients who have experienced both say the opposite.

The Swedish general population study was large and relatively well equipped to investigate how people in ill health value disability. And, indeed, they do value it differently than the average healthy Swedish person.

So is it ethical to disenfranchise patients in order that all citizens, informed or not, have a say?

Why not use the views of patients instead?

Well, actually, the stated policy in Sweden is that health values ideally should come from the individuals affected by the health intervention (patients). So Sweden now has the information required to follow its own health policy aims. Perhaps it’s time politicians were asked whether it is ethical to prioritise pain over mental health, just because various general populations thought so.

As a final thought, I return to the issue of what funds healthcare. You may be surprised to learn that the “general taxation” answer is wrong here too. But that strays beyond health care and ethics and into the dark heart of economics, which I will therefore discuss elsewhere next week!

Terry Flynn

We like challenging questions - the ethics blog


Being humans when we are animals

March 25, 2015

Pär Segerdahl

Most people know that humans are animals, a primate species. Still, it is difficult to apply that knowledge directly to oneself: “I’m an animal”; “My parents are apes.”

– Can you say it without feeling embarrassed and slightly dizzy?

In a recent paper I explore this difficulty of “bringing home” an easily cited scientific fact:

Why does the scientific “fact” crumble when we apply it directly to ourselves?

I approach this difficulty philosophically. We cannot run ahead of ourselves, but I believe that is what we attempt if we approach the difficulty theoretically – say, by theorizing the contrast between humans and animals as an absolute presupposition of human language that science cannot displace.

Such a theory would be as easy to cite as the “fact” and wouldn’t touch our difficulty, the dizziness we feel.

Instead, I explore a personal experience. When I visited a laboratory for ape language research, an ape named Panbanisha told me to be QUIET and later called me a MONSTER. Being reprimanded by an ape made me dizzy about my humanness and about her animality.

How did the dizziness arise? After spending some time with the apes, the vertigo disappeared. How did it disappear?

That’s investigated in the paper by asking further questions, and by recollecting aspects of the meeting with Panbanisha to which those questions drew my attention. The paper offers a philosophical alternative to theory.

Trust your uncertainty and follow your questions!

Pär Segerdahl

Understanding enculturated apes - the ethics blog


Moody teenagers? Giving them a greater say in health policy might solve this

March 18, 2015

Terry Flynn

We have all heard of moody teenagers. Maybe we have them, or can remember being one. Recent research with my Australian colleagues suggests they may genuinely have more difficulty living with poor mental health than adults do.

Specifically, compared to the general public aged 18+, they are more likely to view mental health related impairments as being worse than physical disabilities.

This is not just an academic curiosity – if true, it means society is probably under-investing in child mental health. To explain why, we must first understand how most European countries decide on health funding priorities.

In general, disabilities with the greatest capacity to benefit from treatment are prioritised. To find out whether pain, depression, or some other physical impairment to health is worst – and therefore has the greatest potential benefit from treatment – nations conduct large population-based surveys. These require adults to make choices between lots of possible impaired health states in order to find out just how bad these are, relative to each other.

Of course, people often disagree on what is worst, and by how much, so decisions must be made as to whose values matter most. European nations generally agree that it is unethical to allow the rich to dictate what disabilities are most deserving of resources. Instead of “one € one vote”, it is “one person one vote”: taking a simple average of every adult’s values does this naturally.

Whilst this sounds fair and democratic in terms of process, it could be leading to uncomfortable outcomes for our moody teenager. Why? Well, if poor mental health is genuinely worse for teenagers than adults believe it to be then mental health interventions might not get funded: for example, if adults think pain is much worse, pain medications will be prioritised instead. This is because only adults are being asked for their health values, not teenagers.

So perhaps adults just don’t remember what it’s like to be young and we should use the teenagers’ values for health interventions that affect them?

Maybe not. There is a saying “age brings wisdom” and perhaps adults’ greater experience of illness means their values for mental health impairments are the correct ones. Maybe younger people have simply not experienced enough in life to know what aspects of illness are really worst. After all, immaturity is one reason why younger teenagers are not allowed to vote.

The ethical issues surrounding at what age teenagers can have sex, vote and make independent decisions in public life all become relevant here. However, “one person one vote” has one more disturbing implication that is relevant for people of all ages. By taking an average of everyone’s views, national health state value surveys include lots of healthy people who have no idea what it is like to live with severe illness. Does this matter? Well, it turns out that to the depressed patient in desperate need of a new anti-depressant it probably does.

Patients and the general public tend to disagree on which is worst – extreme pain or extreme depression. The general public gets the final say and my next blog entry will discuss how and why we might use the health values of patients themselves in priority setting instead.

Terry Flynn

We want to be just - the Ethics Blog


Openness as a norm

March 11, 2015

Pär Segerdahl

Why should scientists save their code keys for as long as 20 years after they conducted their study, the Swedish Data Inspection Board apparently wonders. In its opinion on a proposed new Swedish law on research databases, it states that this seems too long a period of time.

Yet, researchers judge that code keys need to be saved to connect old samples to new registry data. The discovery of a link between HPV infection and cervical cancer, for example, could not have been made with newly collected samples but presupposed access to identifiable samples collected in the 1960s. The cancer doesn’t develop until decades after infection.

New generations of researchers are beginning to perceive it as an ethical duty to make data usable for other scientists, today and in the future. Platforms for long-term data sharing are being built up not only in biobank research, but also in physics, in neuroscience, in linguistics, in archeology…

It started in physics, but has now reached the humanities and the social sciences where it is experienced as a paradigm shift.

A recent US report suggests that sharing data should become the norm:

Research is obviously changing shape. New opportunities to manage data mean that research is moving up an IT-gear. The change also means a norm shift. Data are no longer expected to be tied to specific projects and research groups. Data are expected to be openly available for a long time – Open Access.

The norm shift raises, of course, issues of privacy. But when we discuss those issues, public bodies can hardly judge for researchers what, in the current vibrant situation, is reasonable and unreasonable, important and unimportant.

Perhaps it is profoundly logical, in today’s circumstances, to give data a longer and more open life than in the previous way of organizing research. Perhaps such long-term transparency really means moving up a gear.

We need to be humbly open to that possibility and not repeat an old norm that research itself is leaving behind.

Pär Segerdahl

Approaching future issues - the Ethics Blog

