A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: neuroscience (Page 4 of 5)

The brain develops in interaction with culture

The brain develops dramatically during childhood. These neural changes occur in the child’s interaction with its environment. The brain becomes a brain that functions in the culture in which it develops. If a child is mistreated, if it is deprived of important forms of interaction, such as language and care, the brain is deprived of its opportunities to develop. This can result in permanent damage.

The fact that the brain develops in interaction with culture, and becomes a brain that functions in culture, raises the question of whether we can change the brain by changing the culture it interacts with during childhood. Can we, on the basis of neuroscientific knowledge, plan neural development culturally? Can we shape our own humanity?

In an article in EMBO reports, Kathinka Evers and Jean-Pierre Changeux discuss this neuro-cultural outlook, where brain and culture are seen as co-existing in continual interplay. They emphasize that our societies shape our brains, while our brains shape our societies. Then they discuss the possibilities this opens up for ethics.

The question in the article is whether knowledge about the dynamic interplay between co-existing brains-and-cultures can be used “proactively” to create environments that shape children’s brains and make them, for example, less violent: environments in which they become humans with ethical norms and response patterns that better meet today’s challenges.

Similar projects have been implemented in school systems, but here the idea is to plan them on the basis of knowledge about the dynamic brain, and also on the basis of societal decision-making about which ethics should be supported and which values are essential for life on this planet.

Personally, I am attracted by “co-existence thinking” as such, which I believe applies to many phenomena. For it is not only the brain that develops in interaction with culture. So do plant and animal life, as well as the climate – which in turn will shape human life.

Maybe it is such thinking we need: an ethics of co-existence. Co-existence thinking gives us responsibilities: through awareness of a mistreated nature; through awareness of our dependence on this nature. But such thinking also transcends what we otherwise could have imagined, by introducing the idea of possibilities emerging from the interplay.

Do not believe preachers of necessity. It could have been different. It can become different.

Pär Segerdahl

Evers, K. & Changeux, J-P. 2016. “Proactive epigenesis and ethical innovation: A neuronal hypothesis for the genesis of ethical rules.” EMBO reports 17: 1361-1364.

This post in Swedish


Direct brain communication: a new book

Images of the brain, created with advanced technology, are known to most of us. But progress in neuroscience is fast. Less familiar are new technical opportunities to communicate directly with the brain … or however you put it!

Even the unconscious brain is alive. It has been possible to depict responses in the “unconscious” brain to what occurs in its environment. In some cases it has been possible to establish communication, where the “unconscious” patient answers yes/no questions by thinking of one thing if the answer is “yes” and of another thing if the answer is “no.” This activates different parts of the brain. Since researchers/doctors can detect which part of the brain is activated, the patient can answer questions and communicate with the outside world. (Here is an earlier post on this.)
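
To make the decoding step concrete, here is a minimal sketch of how such a yes/no answer might be read off from activation in two regions of interest. It is my own illustration, not the researchers’ actual analysis pipeline; the region names, the threshold and the activation values are hypothetical.

```python
# Hypothetical sketch of yes/no decoding from fMRI data, assuming the data
# have already been reduced to a mean activation value per region of
# interest (ROI). The patient imagines playing tennis (motor imagery)
# for "yes" and walking through their house (spatial imagery) for "no".

def decode_answer(motor_roi: float, spatial_roi: float,
                  min_difference: float = 0.5) -> str:
    """Return "yes", "no" or "inconclusive" from two ROI activations."""
    difference = motor_roi - spatial_roi
    if difference > min_difference:
        return "yes"          # motor imagery dominates
    if difference < -min_difference:
        return "no"           # spatial imagery dominates
    return "inconclusive"     # signal too ambiguous to interpret

# Example with invented activation values (arbitrary units):
print(decode_answer(motor_roi=2.1, spatial_roi=0.8))  # -> "yes"
```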

Other examples of this development are new interfaces between brain and computer, where people learn to control a computer, not through their muscles, but via electrodes connected to the brain. People who cannot communicate verbally can thus get computer support. They can also learn to control prostheses. The brain is obviously exceptionally plastic and interactive!

A new anthology, with Michele Farisco and Kathinka Evers from CRB as editors, systematically assesses the philosophical, scientific, ethical and legal issues that this development implies: Neurotechnology and Direct Brain Communication (Routledge, 2016).

The book addresses scientific and clinical implications of the possibility of communicating with patients who may not be quite as unconscious as we thought. Perhaps we should rather talk about altered states of consciousness. Infant care is also discussed, as well as ethical and legal issues about authority, informed consent and privacy.

The book is written for researchers and graduate students in cognitive science, neurology, psychiatry, clinical psychology, medicine, medical ethics, medical technology, neuroethics, neurophilosophy and philosophy of mind. It may also interest healthcare professionals and a broader public fascinated by the mind.

Michele Farisco and Kathinka Evers both work in the European flagship project, the Human Brain Project.

(You will find more information about the book and the editors here.)

Pär Segerdahl

This post in Swedish


Searching for consciousness needs conceptual clarification

We can hardly think of ourselves as living persons without referring to consciousness. In fact, we normally define ourselves through two features of our life: we are awake (the level of our consciousness is more than zero), and we are aware of something (our consciousness is not empty).

Since it is quite intuitive that our brains are necessary for us to be conscious, it is tempting to think that looking at what goes on in the brain is enough to understand consciousness. But empirical investigation alone is not enough.

Neuroscientific methods for investigating consciousness and its disorders have developed massively in recent decades. The scientific and clinical advancements that have resulted are impressive. But while the ethical and clinical impacts of these advancements are often debated and studied, there is little conceptual analysis.

I think of one example in particular, namely the neuroscience of disorders of consciousness. These are states where a person’s consciousness is more or less severely damaged. Most commonly, we think of patients in a vegetative state, who exhibit levels of consciousness without any content. But it could also be a minimally conscious state, with fluctuating levels and contents of consciousness.

How can we explain these complex conditions? Empirical science is usually assumed to be authoritative and to help us assess very important issues, such as consciousness. But such scientific knowledge is basically inferential: it is grounded in the comparative assessment of residual consciousness in brain-damaged patients.

Because of its inferential nature, this neuroscience takes the form of inductive reasoning: it infers the presence of consciousness from data extracted by neurotechnology, comparing data from brain-damaged patients with data from healthy individuals. Yet this induction is valid only on the basis of a prior definition of consciousness, a definition made within an implicit or explicit theoretical framework. A conceptual assessment of consciousness, carried out within a well-developed conceptual framework, is therefore crucial: it will affect the inference of consciousness from empirical data.

When it comes to disorders of consciousness, there is still no adequate conceptual analysis of the complexity of consciousness: its levels, modes and degrees. Neuroscience often takes for granted a functionalist account of consciousness, in which consciousness is assumed to be equivalent to cognition, or at least to be based in cognition. Yet findings from comatose patients suggest that this is not the case. Instead, consciousness seems to be grounded in the phenomenal functions of the brain, as they are related to the brain’s resting state activity.

For empirical neuroscience to be able to contribute to an understanding of consciousness, neuroscientists need input from philosophy. Take, for example, communication with speechless patients through neurotechnology (Conversations with seemingly unconscious patients), or the prospective simulation of the brain (The challenge to simulate the brain): here scientists can give philosophers empirical data that need to be considered in order to develop a well-founded conceptual framework within which consciousness can be defined.

The alleged autonomy of empirical science as a source of objective knowledge is problematic. This is why philosophy needs to collaborate with scientists in order to conceptually refine their research methods. On the other hand, dialogue with science is essential for philosophy to be meaningful.

We need a conceptual strategy for clarifying the theoretical framework of neuroscientific inferences. This is what we are trying to do in our CRB neuroethics group as part of the Human Brain Project (Neuroethics and Neurophilosophy).

Michele Farisco

This post in Swedish


Resignation syndrome in refugee children – a new hypothesis

There has been much discussion about the so-called “apathetic children” in families seeking asylum in Sweden. You read that right: in Sweden, not in other countries. By all accounts, these children are genuinely ill. They do not simulate their total lack of willpower, their inability to eat, speak and move. They are in a life-threatening condition and show no reactions even to painful stimuli. But why do we have so many cases in Sweden and not in other countries?

Several hundred cases have been reported, which in 2014 led the Swedish National Board of Health and Welfare to introduce a new diagnosis: resignation syndrome. The “Swedish” syndrome appears to be a mystery, almost like a puzzle to crack. There are asylum-seeking families all around the world: why does this syndrome occur to such an extent in a single country?

If you want to think more about this puzzling question, I recommend a new article in Frontiers in Behavioral Neuroscience, with Karl Sallin (PhD student at CRB) as first author. The article is long and technical, but for those interested, it is well worth the effort. It documents what is known about the syndrome and suggests a new hypothesis.

A common explanation of the syndrome is that it is a reaction to stress and depression. The explanation sounds intuitively reasonable, considering these children’s experiences. But if it were true, the syndrome should also occur in other countries. The mystery remains.

Another explanation is that the mother attempts to manage her trauma, her depression and her needs by projecting her problems onto the child. The child, who experiences the mother as its only safety, adapts unconsciously and exhibits the symptoms the mother treats it as having. This explanation may also seem reasonable, especially considering another peculiarity of the syndrome: it does not affect unaccompanied refugee children, only children who arrive with their families. The problem, again, is that traumatized refugee families exist all around the world. So why is the syndrome common only in Sweden?

Now to Sallin’s hypothesis in the article. The hypothesis has two parts: one about the disease or diagnosis itself; and one about the cause of the disease, which may also explain the peculiar distribution.

After a review of symptoms and treatment response, Sallin suggests that we are not dealing with a new disease. The newly introduced diagnosis, “resignation syndrome,” is therefore inappropriate. We are dealing with a known diagnosis: catatonia, which is characterized by the same loss of motor skills. Moreover, the children seem to retain awareness, even though their immobility makes them seem unconscious. When they recover, they can often recall events that occurred while they were ill. They just cannot activate any motor skills. The catatonia hypothesis can be tested, Sallin suggests, by trying treatments with known responses in catatonic patients, and by performing PET scans of the brain.

The question then is: Why does catatonia arise only in refugee children in Sweden? That question brings us to the second part of the hypothesis, which has some similarities with the theory that the mother psychologically causes the child to exhibit symptoms: to really have them, not only simulate them!

Here we might make a comparison with placebo and nocebo effects. If it is believed that a pill will have a certain impact on health – positive or negative – the effect can be produced even if the pill contains only a medically inactive substance. Probably, electromagnetic hypersensitivity is a phenomenon of this kind, having psychological causes: a nocebo effect.

The article enumerates cases where it can be suspected that catatonia-like conditions are caused psychologically: unexpected, unexplained sudden death after cancer diagnosis; death epidemics in situations of war and captivity characterized by hopelessness; acute or prolonged death after the utterance of magic death spells (known from several cultures).

The hypothesis is that life-threatening catatonia in refugee children is caused psychologically, in a certain cultural environment. Alternatively, one could say that catatonia is caused in the meeting between certain cultures and Swedish conditions, since it is more common in children from certain parts of the world. We are dealing with a culture-bound psychogenesis.

Sallin compares this with the outbreak of “hysteria” during the latter part of the 1800s, in connection with Jean-Martin Charcot’s famous demonstrations of hysterical patients, when colorful symptom descriptions circulated in the press. Charcot first suggested that hysteria had organic causes. But when he later began to talk about psychological factors behind the symptoms, the number of cases of hysteria dropped.

(Perhaps I should point out that Sallin emphasizes that psychological causes are not to be understood in terms of a mind/body dualism.)

It remains to be examined exactly how the meeting with Swedish conditions contributes to psychologically caused catatonia in children in certain refugee families. But if I understand Sallin correctly, he thinks that the spread of symptom descriptions through mass media, and the ongoing practice of treating “children with resignation syndrome,” might be essential in this context.

If this is true, it creates an ethical problem mentioned in the article. There is no alternative to offering these children treatment: they cannot survive without tube feeding. But offering treatment also causes new cases.

Yes, these children must, of course, be offered care. But maybe Sallin, just by proposing psychological causes of the symptoms, has already contributed to reducing the number of cases in the future. Assuming that his hypothesis of a culture-bound psychogenesis is true, of course.

What a fascinating interplay between belief and truth!

Pär Segerdahl

Sallin, K., Lagercrantz, H., Evers, K., Engström, I., Hjern, A. & Petrovic, P. 2016. “Resignation syndrome: Catatonia? Culture-bound?” Frontiers in Behavioral Neuroscience, 29 January 2016.

This post in Swedish


The challenge to simulate the brain

Is it possible to create a computer simulation of the human brain? Perhaps, perhaps not. But right now, a group of scientists is trying. It is not only finding enough computing power that makes it difficult: there are some very real philosophical challenges too.

Computer simulation of the brain is one of the most ambitious goals of the European Human Brain Project. As a philosopher, I am part of a group that looks at the philosophical and ethical issues, such as: What is the impact of neuroscience on social practice, particularly on clinical practice? What are the conceptual underpinnings of neuroscientific investigation and its impact on traditional ideas, like the human subject, free will, and moral agency? If you follow the Ethics Blog, you might have heard of our work before (“Conversations with seemingly unconscious patients”; “Where is consciousness?”).

One of the questions we ask ourselves is: What is a simulation in general, and what is a brain simulation in particular? Roughly, the idea is to create an object that resembles the functional (and, if possible, also the structural) characteristics of the brain, in order to improve our understanding of it and our ability to predict its future development. Simulating the brain could be defined as an attempt to develop a mathematical model of the cerebral functional architecture and to load it onto a computer in order to artificially reproduce its functioning. But why should we reproduce brain functioning?

I can see three reasons: describing, explaining and predicting cerebral activities. The implications are huge. In clinical practice with neurological and psychiatric patients, simulating the damaged brain could help us understand it better and predict its future developments, and also refine current diagnostic and prognostic criteria.

Great promises, but also great challenges, lie ahead of us! Let me now turn to the challenges that I believe can be envisaged from a philosophical and conceptual perspective.

A model is in some respects simplified and arbitrary: the selection of parameters to include depends on the goals of the model to be built. This is particularly challenging when the object being simulated is characterized by a high degree of complexity.

The main method used for building models of the brain is “reverse engineering.” This method includes two main steps: dissecting a functional system at the physical level into component parts or subsystems, and then reconstructing the system virtually. Yet the brain hardly seems decomposable into independent modules with linear interactions. The brain rather appears to be a nonlinear, complex, integrated system, and the relationships between its components are nonlinear. That means that these relationships cannot be described as direct proportionality: their relative change is not governed by a constant multiplier. To complicate things further, the brain is not completely definable by algorithmic methods, which means that it can show unpredicted behavior. And to make it even more complex: the relationships between the brain’s subcomponents affect the behavior of those subcomponents.
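
To illustrate what this talk of linearity amounts to, here is a minimal sketch of my own (not taken from the Human Brain Project): a linear relationship scales by a constant multiplier, while even a very simple interaction between two components breaks that pattern.

```python
# Toy illustration of linear versus nonlinear relationships.
# The functions and numbers are invented for illustration only.

def linear_response(x: float) -> float:
    return 3.0 * x          # output is directly proportional to the input

def interacting_response(x: float, y: float) -> float:
    return x * y            # a simple nonlinear interaction between two components

# Doubling the input of the linear system exactly doubles the output:
assert linear_response(2.0) == 2 * linear_response(1.0)

# Doubling both inputs of the interacting system quadruples the output,
# so no constant multiplier describes the relationship:
print(interacting_response(1.0, 1.0))   # 1.0
print(interacting_response(2.0, 2.0))   # 4.0, not 2.0
```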

The brain is a holistic system, and despite being deterministic it is still not totally predictable. Simulating it is hardly conceivable. But even if it were possible, I am afraid that a new “artificial” brain would have limited practical utility: for instance, a prospective general simulation of the brain risks losing the specific characteristics of the particular brain under treatment.

Furthermore, it is impossible to simulate “the brain” simply because such an entity doesn’t exist. We have billions of different brains in the world. They are not completely similar, even if they are comparable. Abstracting from such diversity is the major limitation of brain simulation. Perhaps it would be possible to overcome this limitation by using a “general” brain simulation as a template to simulate “particular” brains. But maybe this would be even harder to conceive and realize.

Brain simulation is indeed one of the most promising contemporary scientific enterprises, but it needs specific conceptual investigation in order to clarify the philosophy that inspires it and to avoid misinterpretations and disproportionate expectations, not only, but also, among lay people.

If you want to know more, I recommend having a look at a report of our publications so far.

Michele Farisco


Our publications on neuroethics and philosophy of the brain

At CRB, an international, multidisciplinary research group works with ethical and philosophical questions that are associated with the neuroscientific exploration of the human mind and brain.

As part of the European Human Brain Project, they approach not only ethical questions that arise, or may arise, with the development and practical application of neuroscience. They also more fundamentally explore philosophical questions about, for example, the concepts of consciousness, human identity, and the self.

In order to give an overview of their extensive work, we recently compiled a report of their articles, books and book chapters. It is available online:

The report also contains abstracts of all the publications. Have a look at the compilation; I’m sure you will find it fascinating!

I might add that we recently updated similar reports on our work in biobank ethics and in nursing ethics:

Here too you’ll find abstracts of our interesting publications in these fields.

Pär Segerdahl


Where is consciousness?


Would it be possible to use brain imaging techniques to detect consciousness and then “read” directly in people’s brains what they want or do not want? Could one, for example, ask a severely brain-injured patient for consent to some treatment, and then obtain an answer through a brain scan?

Together with the philosopher Kathinka Evers and the neuroscientist Steven Laureys, I recently investigated ethical and clinical issues arising from this prospective “cerebral communication.”

Our brains are so astonishingly complex! The challenge is how to handle this complexity. To do that we need to develop our conceptual apparatus and create what we would like to call a “fundamental” neuroethics. Sound research needs solid theory, and in line with this I would like to comment upon the conceptual underpinnings of this ongoing endeavor of developing a “fundamental” neuroethics.

The assumption that visualizing activity in a certain brain area can mean reading the conscious intention of the scanned subject presupposes that consciousness can be identified with particular brain areas. While both science and philosophy widely accept that consciousness is a feature of the brain, recent developments in neuroscience problematize relating consciousness to specific areas of the brain.

Tricky logical puzzles arise here. The so-called “mereological fallacy” is the error of attributing properties of the whole (the living human person) to its parts (the brain). In our case, a special kind of mereological fallacy risks being committed: attributing features of the whole (the brain) to its parts (those visualized as more active in the scan). Consciousness is a feature of the whole brain: the mere fact that a particular area is more active than others does not imply conscious activity.

Reverse inference is another tempting logical pitfall: the fact that a study reveals that a particular cerebral area, say A, is more active during a specific task, say T, does not imply that activity in A always results in T, nor that T always presupposes A.
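
One common way to make this pitfall precise is in probabilistic terms. The following sketch is my own illustration, with invented numbers: even if area A is usually active during task T, activation of A can by itself be weak evidence that T is going on, because A may also be active during many other tasks.

```python
# Invented numbers, chosen only to illustrate the logic of reverse inference:
# a high P(A active | T) does not by itself give a high P(T | A active).

p_task = 0.05                  # prior probability that task T is being performed
p_active_given_task = 0.9      # area A is usually active during T
p_active_given_other = 0.3     # but A is also often active during other tasks

# Bayes' theorem: P(T | A) = P(A | T) * P(T) / P(A)
p_active = p_active_given_task * p_task + p_active_given_other * (1 - p_task)
p_task_given_active = p_active_given_task * p_task / p_active

print(round(p_task_given_active, 2))   # about 0.14: weak evidence for T
```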

In short, we should avoid the conceptual temptation to view consciousness according to the so-called “homunculus theory”: as an entity placed in a particular cerebral area. This is unlikely: consciousness does not reside in specific brain regions, but is rather equivalent to the activity of the brain as a whole.

But where is consciousness? To put it roughly, it is nowhere and everywhere in the brain. Consciousness is a feature of the brain and the brain is more than the sum of its parts: it is an open system, where external factors can influence its structure and function, which in turn affects our consciousness. Brain and consciousness are continually changing in deep relationships with the external environment.

We address these issues in more detail in a forthcoming book that Kathinka Evers and I are editing, involving leading researchers both in neuroscience and in philosophy:

Michele Farisco



Neuroethics: new wine in old bottles?

Neuroscience is increasingly raising philosophical, ethical, legal and social problems concerning old issues that are now approached in a new way: consciousness, freedom, responsibility and the self are today investigated in a new light by so-called neuroethics.

Neuroethics was conceived as a field deserving its own name at the beginning of the 21st century. Yet philosophy is much older, and its interest in “neuroethical” issues can be traced back to its very origins.

What is “neuroethics”? Is it a new way of doing ethics or a new way of thinking about ethics? Is it a sub-field of bioethics? Or does it stand as a discipline in its own right? Is it only a practical discipline, or also a conceptual one?

I would like to suggest that neuroethics – besides the classical division between “ethics of neuroscience” and “neuroscience of ethics” – above all needs to be developed as a conceptual assessment of what neuroscience is telling us about our nature. The progress of neuroscientific investigation has been impressive in recent years, and in the light of the huge investments in this field (e.g., the European Human Brain Project and the American BRAIN Initiative) we can bet that new striking discoveries will be made in the coming decades.

For millennia, philosophers have been interested in exploring what is generally referred to as human nature, and particularly the mind as one of its essential dimensions. Two avenues have traditionally been developed within the general conception of mind: a non-materialistic, idealistic approach (the mind is made of a special stuff that is not reducible to the brain); and a materialistic approach (the mind is no more than a product or property of the brain).

Both interpretations assume a dualistic theoretical framework: the human being is constituted by two completely different dimensions, which have completely different properties, with no interrelations between them or, at most, a relationship mediated solely by an external element. Such a dualistic approach to human identity is increasingly criticized by contemporary neuroscience, which is showing the plastic and dynamic nature of the human brain and consequently of the human mind.

This example illustrates, in my view, that neuroethics is above all a philosophical discipline with a peculiar interdisciplinary status: it can be a privileged field where philosophy and science collaborate in order to conceptually cross the wall that has been built between them.

Michele Farisco


How can the brain be computer simulated?

A computer-simulated human brain – that undoubtedly sounds like science fiction. But the EU flagship project, the Human Brain Project, actually has computer simulation of the brain as an objective.

What will be accomplished during the ten years that the project is financed will presumably be simulations of more limited brain functions (often in the mouse brain). But the proud objective of simulating the human brain has now been formulated in a serious research project.

But what does “computer simulation of the brain” mean?

In an article in the journal Neuron, Kathinka Evers and Yadin Dudai discuss the meaning of simulating the brain. Kathinka Evers from CRB leads the philosophical research in the EU project, and Yadin Dudai is a neuroscientist from the Weizmann Institute of Science who also works in the project.

The article combines philosophical and scientific vantage points to clarify the type of simulation that is relevant in neuroscience and what goals it may have. Several of the questions in the article are also relevant to the simulation of more limited brain functions. For example, the question of whether the ability to make a computer simulation of a brain function means that you understand it.

The most thought-provoking questions, however, concern the big (but distant) goal of simulating a whole human brain. Is it possible in principle, given that the brain is embedded in the body and is in constant interaction with it? Is it possible, given that the brain interacts not only with the body but also with a social environment?

Does simulating the brain require that one also simulates the brain’s interaction with the body and the social context in which it operates? Kathinka Evers thinks so. The attempt to simulate the brain is too limited if one does not start out from the fact that the brain is in constant interaction with an environment that constantly changes it.

The brain must be understood (and simulated) as an “experienced brain.”

Suppose that one day one manages to simulate an experienced human brain in intensive interaction with a bodily and social environment. Has one then simulated a brain so well that one has created consciousness?

The questions in the article are many and breathtaking – read it!

Pär Segerdahl


Conversations with seemingly unconscious patients

Research and technology change us: they change the way we live, speak and think. One area of research that will change us in the future is brain research. Here are some remarkable discoveries about some seemingly unconscious patients; discoveries that we still don’t know how to make intelligible or relate to.

A young woman survived a car accident but sustained such serious injuries that she was judged to be in a vegetative state, without consciousness. When sentences were spoken to her and her neural responses were measured through fMRI, however, it was discovered that her brain responded in the same way as the brains of conscious control subjects. Was she conscious although she appeared to be in a coma?

To get more clarity, the research team asked the woman to perform two different mental tasks. The first task was to imagine that she was playing tennis; the other, to imagine that she was visiting her house. Once again, the measured brain activation was equivalent to that of the conscious control subjects.

She is not the only case. Similar responses have been measured in other patients who, according to international guidelines, were unconscious. Some have learned to respond appropriately to yes/no questions, such as, “Is your mother’s name Yolande?” They respond by mentally performing different tasks – say, imagining squeezing their right hand for “yes” and moving all their toes for “no.” Their neural responses are then measured.

There is already technology that connects brain and computer. People learn to use these “neuro-prosthetics” without using their muscles. This raises the question of whether one may, in the future, be able to communicate with some patients who today would be diagnosed as unconscious.

Should one then begin to ask these patients about informed consent for different treatments?

Here at CRB, researchers are working with such neuroethical issues within a big European research effort: the Human Brain Project. Within this project, Kathinka Evers leads the work on ethical and societal implications of brain research, and Michele Farisco is writing his (second) thesis in the project, supervised by Kathinka.

Michele Farisco’s thesis deals with disorders of consciousness. I just read an exciting book chapter that Michele authored with Kathinka and Steven Laureys (one of the neuroscientists in the field):

They present developments in the field and discuss the possibility of informed consent from some seemingly unconscious patients. They point out that informed consent has meaning only if there is a relationship between doctor/researcher and patient, which requires communication. This condition may be met if the technology evolves and people learn to use it.

But it is still unclear, they argue, whether all requirements for informed consent are satisfied. In order to give informed consent, patients must understand what they agree to. This is usually checked by asking patients to describe in their own words what the doctor/researcher communicated, which cannot be done through yes/no communication via neuroimaging. Furthermore, the patient must understand that the information applies to him or her at a certain time, and it is unclear whether these patients, who are detached from the course of everyday life and have suffered serious brain injury, have that understanding. Finally, the patient must be emotionally able to evaluate different alternatives. Whether this condition is met is also unclear.

It may seem early to discuss ethical issues related to discoveries that we don’t even know how to make intelligible. I think on the contrary that it can pave the way for emerging intelligibility. A personal reflection explains what I mean.

It is tempting to think that neuroscience must first determine whether the patients above are unconscious or not, by answering “the big question” how consciousness arises and becomes disturbed or inhibited in the brain. Only then can we understand these remarkable discoveries, and only then can practical applications and ethical implications be developed.

My guess is that practical technological applications, and human responses to their use, are rather the venues for the intelligibility that is required for further scientific development. A brain does not give consent, but perhaps a seemingly unconscious patient with a neuro-prosthesis does. How future technology-supported communication with such patients takes shape – how it works in practice and changes what we can meaningfully do, say and think – will guide future research. It is on this science-and-technology supported playing field that we might be able to ask and determine what we thought neuroscience had to determine beforehand, and on its own, by answering a “big question.”

After all, isn’t it on this playing field that we now begin to ask if some seemingly unconscious patients are conscious?

Ethics does not always run behind research, developing its “implications.” Perhaps neuro-ethics and neuroscience walk hand in hand. Perhaps neuroscience needs neuro-ethics.

Pär Segerdahl

