A research blog from the Centre for Research Ethics & Bioethics (CRB)

Category: In the research debate

New dissertation on patient preferences in medical approvals

During the spring, several doctoral students at CRB successfully defended their dissertations. Karin Schölin Bywall defended her dissertation on May 12, 2021. The dissertation, like the two previous ones, reflects a trend in bioethics from theoretical investigations to empirical studies of people’s perceptions of bioethical issues.

An innovative approach in Karin Schölin Bywall’s dissertation is that she identifies a specific area of application where the preference studies that are increasingly used in bioethics can be particularly beneficial. It is about patients’ influence on the process of medical approval. Patients already have such an influence, but their views are obtained somewhat informally, from a small number of invited patients. Karin Schölin Bywall explores the possibility of strengthening patients’ influence scientifically. Preference studies can give decision-makers an empirically more well-founded understanding of what patients actually prefer when they weigh efficacy against side effects and other drug properties.

If you want to know more about the possibility of using preference studies to scientifically strengthen patients’ influence in medical approvals, read Karin Schölin Bywall’s dissertation: Getting a Say: Bringing patients’ views on benefit-risk into medical approvals.

If you want a concise summary of the dissertation, read Anna Holm’s news item on our website: Bringing patients’ views into medical approvals.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Schölin Bywall, K. (2021) Getting a Say: Bringing patients’ views on benefit-risk into medical approvals. [Dissertation]. Uppsala University.

This post in Swedish

We want solid foundations

Can AI be conscious? Let us think about the question

Artificial Intelligence (AI) has achieved remarkable results in recent decades, especially thanks to the refinement of an old and long-neglected technique called Deep Learning (DL), a class of machine learning algorithms. Some achievements of DL had a significant impact on public opinion thanks to extensive media coverage, like the cases of the program AlphaGo, which defeated the Go world champion Lee Sedol, and its successor AlphaGo Zero.

This triumph of AlphaGo was a kind of profane consecration of AI’s operational superiority in an increasing number of tasks. This manifest superiority of AI gave rise to mixed feelings in human observers: the pride of being its creator; the admiration of what it was able to do; the fear of what it might eventually learn to do.

AI research has generated a linguistic and conceptual process of re-thinking traditionally human features, stretching their meaning or even reinventing their semantics in order to attribute these traits also to machines. Think of how learning, experience, training, prediction, to name just a few, are attributed to AI. Even if they have a specific technical meaning among AI specialists, lay people tend to interpret them within an anthropomorphic view of AI.

One human feature in particular is considered the Holy Grail when AI is interpreted according to an anthropomorphic pattern: consciousness. The question is: can AI be conscious? It seems to me that we can answer this question only after considering a number of preliminary issues.

First we should clarify what we mean by consciousness. In philosophy and in cognitive science, there is a useful distinction, originally introduced by Ned Block, between access consciousness and phenomenal consciousness. The first refers to the interaction between different mental states, particularly the availability of one state’s content for use in reasoning and rationally guiding speech and action. In other words, access consciousness refers to the possibility of using what I am conscious of. Phenomenal consciousness refers to the subjective feeling of a particular experience, “what it is like to be” in a particular state, to use the words of Thomas Nagel. So, in what sense of the word “consciousness” are we asking if AI can be conscious?

To illustrate how the sense in which we choose to talk about consciousness makes a difference in the assessment of the possibility of conscious AI, let us take a look at an interesting article written by Stanislas Dehaene, Hakwan Lau and Sid Kouider. They frame the question of AI consciousness within the Global Neuronal Workspace Theory, one of the leading contemporary theories of consciousness. As the authors write, according to this theory, conscious access corresponds to the selection, amplification, and global broadcasting of particular information, selected for its salience or relevance to current goals, to many distant areas. More specifically, Dehaene and colleagues explore the question of conscious AI along two lines within an overall computational framework:

  1. Global availability of information (the ability to select, access, and report information)
  2. Metacognition (the capacity for self-monitoring and confidence estimation).

Their conclusion is that AI might implement the first meaning of consciousness, while it currently lacks the necessary architecture for the second one.

As mentioned, the premise of their analysis is a computational view of consciousness. In other words, they choose to reduce consciousness to specific types of information-processing computations. We can legitimately ask whether such a choice covers the richness of consciousness, particularly whether a computational view can account for the experiential dimension of consciousness.

This shows how the main obstacle in assessing the question whether AI can be conscious is a lack of agreement about a theory of consciousness in the first place. For this reason, rather than asking whether AI can be conscious, maybe it is better to ask what might indicate that AI is conscious. This brings us back to the indicators of consciousness that I wrote about in a blog post some months ago.

Another important preliminary issue to consider, if we want to seriously address the possibility of conscious AI, is whether we can use the same term, “consciousness,” to refer to a different kind of entity: a machine instead of a living being. Should we expand our definition to include machines, or should we rather create a new term to denote it? I personally think that the term “consciousness” is too charged, from several different perspectives, including ethical, social, and legal perspectives, to be extended to machines. Using the term to qualify AI risks extending it so far that it eventually becomes meaningless.

If we create AI that manifests abilities that are similar to those that we see as expressions of consciousness in humans, I believe we need a new language to denote and think about it. Otherwise, important preliminary philosophical questions risk being dismissed or lost sight of behind a conceptual veil of possibly superficial linguistic analogies.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

We want solid foundations

When established treatments do not help

What should the healthcare team do when established treatments do not help the patient? Should one be allowed to test a so-called non-validated treatment on the patient, where efficacy and side effects have not yet been determined scientifically?

Gert Helgesson comments on this problem in Theoretical Medicine and Bioethics. His comment concerns suggestions from authors who in the same journal propose a specific restrictive policy. They argue that if you want to test a non-validated treatment, you should from the beginning plan this as a research project where the treatment is tested on several subjects. Only in this way do you get data that can form the basis for scientific conclusions about the treatment. Above all, the test will undergo ethical review, where the risks to the patient and the reasons for trying the treatment are carefully assessed.

Of course, it is important to be restrictive. At the same time, there are disadvantages with the specific proposal above. If the patient has a rare disease, for example, it can be difficult to gather enough patients to draw scientific conclusions from. Here it may be more reasonable to allow case reports and open storage of data, rather than requiring ethically approved clinical trials. Another problem is that clinical trials take place under conditions that differ from those of patient care. If the purpose is to treat an individual patient because established treatments do not work, then it becomes strange if the patient is included in a randomized study where the patient may end up in the control group which receives the standard treatment. A third problem is when the need for treatment is urgent and there is no time to approach an ethical review board and await their response. Moreover, is it reasonable that research ethical review boards make treatment decisions about individual patients?

Gert Helgesson is well aware of the complexity of the problem and the importance of being careful. Patients must not be used as if they were guinea pigs for clinicians who want to make quick, prestigious discoveries without undergoing proper research ethical review. At the same time, one can do a lot of good for patients by identifying new effective treatments when established treatments do not work. But who should make the decision to test a non-validated treatment if it is unreasonable to leave the decision to a research ethical board?

Gert Helgesson suggests that such decisions on non-validated treatments can reasonably be made by the head of the clinic, and that a procedure for such decisions at the clinic level should exist. For example, an advisory hospital board can be appointed, which supports discussions and decisions at the clinic level about new treatments. The fact that a treatment is non-validated does not mean that there are no empirical and theoretical reasons to believe that it might work. Making a careful assessment of these reasons is an important task in these discussions and decisions.

I hope I have done justice to Gert Helgesson’s balanced discussion of a complex question: What is a reasonable framework for new non-validated treatments? In some last-resort cases where the need for care is urgent, for example, or the disease is rare, decisions about non-validated treatments should be clinical rather than research ethical, concludes Gert Helgesson. The patient must, of course, consent and a careful assessment must be made of the available knowledge about the treatment.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Helgesson, G. What is a reasonable framework for new non-validated treatments?. Theor Med Bioeth 41, 239–245 (2020). https://doi.org/10.1007/s11017-020-09537-6

This post in Swedish

We recommend readings

An unusually big question

Sometimes the intellectual claims on science are so big that they risk obscuring the actual research. This seems to happen not least when the claims are associated with some great prestigious question, such as the origin of life or the nature of consciousness. By emphasizing the big question, one often wants to show that modern science is better suited than older human traditions to answer the riddles of life. Better than philosophy, for example.

I think of this when I read a short article about such a riddle: “What is consciousness? Scientists are beginning to unravel a mystery that has long vexed philosophers.” The article by Christof Koch gives the impression that it is only a matter of time before science determines not only where in the brain consciousness arises (one already seems to have a suspect), but also the specific neural mechanisms that give rise to – everything you have ever experienced. At least if one is to believe one of the fundamental theories about the matter.

Reading about the discoveries behind the identification of where in the brain consciousness arises is as exciting as reading a whodunit. It is obvious that important research is being done here on the effects that loss or stimulation of different parts of the brain can have on people’s experiences, mental abilities and personalities. The description of a new technology and mathematical algorithm for determining whether patients are conscious or not is also exciting and indicates that research is making fascinating progress, which can have important uses in healthcare. But when mathematical symbolism is used to suggest a possible fundamental explanation for everything you have ever experienced, the article becomes as difficult to understand as the most obscure philosophical text from times gone by.

Since even representatives of science sometimes make philosophical claims, namely, when they want to answer prestigious riddles, it is perhaps wiser to be open to philosophy than to compete with it. Philosophy is not just about speculating about big questions. Philosophy is also about humbly clarifying the questions, which otherwise tend to grow beyond all reasonable limits. Such openness to philosophy flourishes in the Human Brain Project, where some of my philosophical colleagues at CRB collaborate with neuroscientists to conceptually clarify questions about consciousness and the brain.

Something I myself wondered about when reading the scientifically exciting but at the same time philosophically ambitious article, is the idea that consciousness is everything we experience: “It is the tune stuck in your head, the sweetness of chocolate mousse, the throbbing pain of a toothache, the fierce love for your child and the bitter knowledge that eventually all feelings will end.” What does it mean to take such an all-encompassing claim seriously? What is not consciousness? If everything we can experience is consciousness, from the taste of chocolate mousse to the sight of the stars in the sky and our human bodies with their various organs, where is the objective reality to which science wants to relate consciousness? Is it in consciousness?

If consciousness is our inevitable vantage point, if everything we experience as real is consciousness, it becomes unclear how we can treat consciousness as an objective phenomenon in the world along with the body and other objects. Of course, I am not talking here about actual scientific research about the brain and consciousness, but about the limitless intellectual claim that scientists sooner or later will discover the neural mechanisms that give rise to everything we can ever experience.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Christof Koch, What Is Consciousness? Scientists are beginning to unravel a mystery that has long vexed philosophers, Nature 557, S8-S12 (2018) https://doi.org/10.1038/d41586-018-05097-x

This post in Swedish

We transcend disciplinary borders

Patient integrity at the end of life

When we talk about patient integrity, we often talk about the patients’ medical records and the handling of their personal data. But patient integrity is not just about how information about patients is handled, but also about how the patients themselves are treated. For example, can they talk about their problems without everyone in the waiting room hearing them?

This more tangible aspect of patient integrity is perhaps especially challenging in an intensive care unit. Here, patients can be more or less sedated and connected to life-sustaining equipment. The patients are extremely vulnerable, in some cases dying. It can be difficult to see the human being for all the medical devices. Protecting the integrity of these patients is a challenge, not least for the nurses, who have close contact with them around the clock (and with the relatives). How do nurses perceive and manage the integrity of patients who end their lives in an intensive care unit?

This important question is examined in an article in the journal Annals of Intensive Care, written by Lena Palmryd, Åsa Rejnö and Tove Godskesen. They conducted an interview study with nurses in four intensive care units in Sweden. Many of the nurses had difficulty defining integrity and explaining what the concept means in the care of dying patients. This is not surprising. Not even the philosopher Socrates would have succeeded in defining integrity. However, the nurses used other words that emphasised respect for the patient and patient-centred attitudes, such as listening to and being sensitive to the patient. They also tried to describe good care.

When I read the article, I was struck by how ethically central concepts, such as integrity and autonomy, often obscure reality and paralyse us. Just when we need to see clearly and act wisely. When the authors of the article analyse the interviews with the nurses, they use five categories instead, which in my opinion speak more clearly than the overall concept of integrity does:

  1. Seeing the unique individual
  2. Being sensitive to the patient’s vulnerability
  3. Observing the patient’s physical and mental sphere
  4. Taking into account the patient’s religion and culture
  5. Being respectful during patient encounters

How transparent to reality these words are! They let us see what it is about. Of course, it is not wrong to talk about integrity and it is no coincidence that these categories emerged in the analysis of the conversations with the nurses about integrity. However, sometimes it is perhaps better to refrain from ethically central concepts, because such concepts often hide more than they reveal.

The presentation of the interviews under these five headings, with well-chosen quotes from the conversations, is even more clarifying. This shows the value of qualitative research. In interview studies, reality is revealed through people’s own words. Strangely enough, such words can help us to see reality more clearly than the technical concepts that the specialists in the field consider to be the core of the matter. Under heading (2), for example, a nurse tells of a patient who suffered from hallucinations, and who became anxious when people showed up that the patient did not recognize. One evening, the doctors came in with 15 people from the staff, to give a staff report at the patient’s bedside: “So I also drove them all out; it’s forbidden, 15 people can’t stand there, for the sake of the patient.” These words are as clarifying as the action itself is.

I do not think that the nurse who drove out the crowd for the sake of the patient thought that she was doing it “to protect the patient’s integrity.” Ethically weighty concepts can divert our attention, as if they were of greater importance than the actual human being. Talking about patient integrity can, oddly enough, make us blind to the patient.

Perhaps that is why many of Socrates’ conversations about concepts end in silence instead of in definitions. Should we define silence as an ethical concept? Should we arrange training where we have the opportunity to talk more about silence? The instinct to control reality by making concepts of it diverts attention from reality.

Read the qualitative study of patients’ integrity at the end of life, which draws attention to what it is really about.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Palmryd, L., Rejnö, Å. & Godskesen, T.E. Integrity at end of life in the intensive care unit: a qualitative study of nurses’ views. Ann. Intensive Care 11, 23 (2021). https://doi.org/10.1186/s13613-021-00802-y

This post in Swedish

We like real-life ethics

Two new dissertations!

Two of our doctoral students at CRB recently successfully defended their dissertations. Both dissertations reflect a trend in bioethics from purely theoretical studies to also include empirical studies of people’s perceptions of bioethical issues.

Åsa Grauman’s dissertation explores the public’s view of risk information about cardiovascular disease. The risk of cardiovascular disease depends on many factors; both lifestyle and heredity influence the risk. Many find it difficult to understand such risk information and many underestimate their risk, while others worry unnecessarily. For risk information to make sense to people, it must be designed so that recipients can benefit from it in practice. That requires knowing more about their perspective on risk, how health information affects them, and what they think is important and unimportant when it comes to risk information about cardiovascular disease. One of Åsa Grauman’s conclusions from her studies of these issues is that people often estimate their risk on the basis of self-assessed health and family history. As this can lead to the risk being underestimated, she argues for the importance of health examinations that can nuance individuals’ risk assessments and draw their attention to risk factors that they themselves can influence.

If you want more conclusions, and to see the studies behind them, read Åsa Grauman’s dissertation: The publics’ perspective on cardiovascular risk information: Implications for practice.

Mirko Ancillotti’s dissertation explores the Swedish public’s view of antibiotic resistance and our responsibility to reduce its prevalence. The rise of antibiotic-resistant bacteria is one of the major global threats to public health. The increase is related to our often careless overuse of antibiotics in society. The problem needs to be addressed both nationally and internationally, both collectively and individually. Mirko Ancillotti focuses on our individual responsibility for antibiotic resistance. He examines how such a responsibility can be supported through more effective health communication and improved institutional conditions that can help people to use antibiotics more judiciously. Such support requires knowledge of the public’s beliefs, values and preferences regarding antibiotics, which may affect their willingness and ability to take responsibility for their own use of antibiotics. One of the studies in the dissertation indicates that people are prepared to make significant sacrifices to reduce their contribution to antibiotic resistance.

If you want to know more about the Swedish public’s view of antibiotic resistance and the possibility of supporting judicious behaviour, read Mirko Ancillotti’s dissertation: Antibiotic Resistance: A Multimethod Investigation of Individual Responsibility and Behaviour.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Åsa Grauman. 2021. The publics’ perspective on cardiovascular risk information: Implications for practice. Uppsala: Acta Universitatis Upsaliensis.

Mirko Ancillotti. 2021. Antibiotic Resistance: A Multimethod Investigation of Individual Responsibility and Behaviour. Uppsala: Acta Universitatis Upsaliensis.

This post in Swedish

Ethics needs empirical input

Human rights and legal issues related to artificial intelligence

How do we take responsibility for a technology that is used almost everywhere? As we develop more and more uses of artificial intelligence (AI), it becomes ever harder to get an overview of how this technology can affect people and human rights.

Although AI legislation is already being developed in several areas, Rowena Rodrigues argues that we need a panoramic overview of the widespread challenges. What does the situation look like? Where can human rights be threatened? How are the threats handled? Where do we need to make greater efforts? In an article in the Journal of Responsible Technology, she provides such an overview, which is then discussed on the basis of the concept of vulnerability.

The article identifies ten problem areas. One problem is that AI makes decisions based on algorithms where the decision process is not completely transparent. Why did I not get the job, the loan or the benefit? Hard to know when computer programs deliver the decisions as if they were oracles! Other problems concern security and liability, for example when automatic decision-making is used in cars, medical diagnosis, weapons or when governments monitor citizens. Other problem areas may involve risks of discrimination or invasion of privacy when AI collects and uses large amounts of data to make decisions that affect individuals and groups. In the article you can read about more problem areas.

For each of the ten challenges, Rowena Rodrigues identifies solutions that are currently in place, as well as the challenges that remain to be addressed. Human rights are then discussed. Rowena Rodrigues argues that international human rights treaties, although they do not mention AI, are relevant to most of the issues she has identified. She emphasises the importance of safeguarding human rights from a vulnerability perspective. Through such a perspective, we see more clearly where and how AI can challenge human rights. We see more clearly how we can reduce negative effects, develop resilience in vulnerable communities, and tackle the root causes of the various forms of vulnerability.

Rowena Rodrigues is linked to the SIENNA project, which ends this month. Read her article on the challenges of a technology that is used almost everywhere: Legal and human rights issues of AI: Gaps, challenges and vulnerabilities.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Rowena Rodrigues. 2020. Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology 4. https://doi.org/10.1016/j.jrt.2020.100005

This post in Swedish

We recommend readings

Learning from international attempts to legislate psychosurgery

So-called psychosurgery, in which psychiatric disorders are treated by neurosurgery, for example, by cutting connections in the brain, may have a somewhat tarnished reputation after the insensitive use of lobotomy in the 20th century to treat anxiety and depression. Nevertheless, neurosurgery for psychiatric disorders can help some patients, and the field is developing rapidly. It probably needs an updated regulation, but what are the challenges?

The issue is examined from an international perspective in an article in Frontiers in Human Neuroscience. Neurosurgery for psychiatric disorders does not have to involve destroying brain tissue or cutting connections. In so-called deep brain stimulation, for example, electrical pulses are sent to certain areas of the brain. The method has been shown to relieve movement disorders in patients with Parkinson’s disease. This unexpected possibility illustrates one of the challenges. How do we delimit which treatments the regulation should cover in an area with rapid scientific and technical development?

The article charts legislation on neurosurgery for psychiatric disorders from around the world. The purpose is to find strengths and weaknesses in the various legislations. The survey hopes to justify reasonable ways of dealing with the challenges in the future, while achieving greater international harmonisation. The challenges are, as I said, several, but regarding the challenge of delimiting the treatments to be covered in the regulation, the legislation in Scotland is mentioned as an example. It does not provide an exhaustive list of treatments that are to be covered by the regulation, but states that treatments other than those listed may also be covered.

If you are interested in law and want a more detailed picture of the questions that need to be answered for a good regulation of the field, read the article: International Legal Approaches to Neurosurgery for Psychiatric Disorders.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Chandler JA, Cabrera LY, Doshi P, Fecteau S, Fins JJ, Guinjoan S, Hamani C, Herrera-Ferrá K, Honey CM, Illes J, Kopell BH, Lipsman N, McDonald PJ, Mayberg HS, Nadler R, Nuttin B, Oliveira-Maia AJ, Rangel C, Ribeiro R, Salles A and Wu H (2021) International Legal Approaches to Neurosurgery for Psychiatric Disorders. Front. Hum. Neurosci. 14:588458. doi: 10.3389/fnhum.2020.588458

This post in Swedish

Thinking about law

Should social media platforms censor misinformation about COVID-19?

When the coronavirus began to spread outside China a year ago, the Director General of the World Health Organization said that we are not only fighting an epidemic, but also an infodemic. The term refers to the rapid spread of often false or questionable information.

While governments fight the pandemic through lockdowns, social media platforms such as Facebook, Twitter and YouTube fight the infodemic through other kinds of lockdowns and framings of information considered to be misinformation. Content can be provided with warning signs and links to what are considered more reliable sources of information. Content can also be removed and in some cases accounts can be suspended.

In an article in EMBO Reports, Emilia Niemiec asks if there are wiser ways to handle the spread of medical misinformation than by letting commercial actors censor the content on their social media platforms. In addition to the fact that censorship seems to contradict the idea of ​​these platforms as places where everyone can freely express their opinion, it is unclear how to determine what information is false and harmful. For example, should researchers be allowed to use YouTube to discuss possible negative consequences of the lockdowns? Or should such content be removed as harmful to the fight against the pandemic?

If commercial social media platforms remove content on their own initiative, why do they choose to do so? Do they do it because the content is scientifically controversial? Or because it is controversial in terms of public opinion? Moreover, in the midst of a pandemic with a new virus, the state of knowledge is not always as clear as one might wish. In such a situation it is natural that even scientific experts disagree on certain important issues. Can social media companies then make reasonable decisions about what we currently know scientifically? We would then have a new “authority” that makes important decisions about what should be considered scientifically proven or well-grounded.

Emilia Niemiec suggests that a wiser way to deal with the spread of medical misinformation is to increase people’s knowledge of how social media works, as well as how research and research communication work. She gives several examples of what we may need to learn about social media platforms and about research to be better equipped against medical misinformation. Education as a vaccine, in other words, which immunises us against the misinformation. This immunisation should preferably take place as early as possible, she writes.

I would like to recommend Emilia Niemiec’s article as a thoughtful discussion of issues that easily provoke quick and strong opinions. Perhaps this is where the root of the problem lies. The pandemic scares us, which makes us mentally tense. Without that fear, it is difficult to understand the rapid spread of unjustifiably strong opinions about facts. Our fear in an uncertain situation makes us demand knowledge, precisely because it does not exist. Anything that does not point in the direction that our fear demands immediately arouses our anger. Fear and anger become an internal mechanism that, at lightning speed, generates hardened opinions about what is true and false, precisely because of the uncertainty of the issues and of the whole situation.

So I am dreaming of one further vaccine. Maybe we need to immunise ourselves also against the fear and the anger that uncertainty causes in our rapidly belief-forming intellects. Can we immunise ourselves against something as human as fear and anger in uncertain situations? In any case, the thoughtfulness of the article gives me hope that we can.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Niemiec, E. (2020). COVID-19 and misinformation: Is censorship of social media a remedy to the spread of medical misinformation? EMBO Reports, 21(11), e51420.

This post in Swedish

We recommend readings

How do we take responsibility for dual-use research?

We are more often than we think governed by old patterns of thought. As a philosopher, I find it fascinating to see how mental patterns capture us, how we get imprisoned in them, and how we can get out of them. With that in mind, I recently read a book chapter on something that is usually called dual-use research. Here, too, there are patterns of thought that can capture us.

In the chapter, Inga Ulnicane discusses how responsibility for neuroscientific dual-use research of concern was developed in the Human Brain Project (HBP). I read the chapter as a philosophical drama. The European rules that govern HBP are themselves governed by mental patterns about what dual-use research is. In order to take real responsibility for the project, those within HBP therefore had to think themselves free from the patterns that governed the governance of the project. Responsibility became a philosophical challenge: to raise awareness of the real dual-use issues that may be associated with neuroscientific research.

Traditionally, “dual use” refers to civilian versus military uses. By regulating that research in HBP should focus exclusively on civil applications, it can be said that the regulation of the project was itself regulated by this pattern of thought. There are, of course, major military interests in neuroscientific research, not least because the research borders on information technology, robotics and artificial intelligence. Results can be used to improve soldiers’ abilities in combat. They can be used for more effective intelligence gathering, more powerful image analysis, faster threat detection, more accurate robotic weapons, and to satisfy many other military desires.

The problem is that there are more problematic desires than military ones. Research results can also be used to manipulate people’s thoughts and feelings for non-military purposes. They can be used to monitor populations and control their behaviour. It is impossible to say once and for all what problematic desires neuroscientific research can arouse, military and non-military. A single good idea can give rise to several bad ideas in many other areas.

HBP therefore prefers to talk about beneficial and harmful uses, rather than civilian and military ones. This more open understanding of “the dual” means that one cannot identify problematic areas of use once and for all. Instead, continuous discussion is required among researchers, other actors and the general public to increase awareness of various possible problematic uses of neuroscientific research. We need to help each other see real problems, which can occur in completely different places than we expect. Since the problems moreover move across borders, global cooperation is needed between brain projects around the world.

Within HBP, it was found that an additional thought pattern governed the regulation of the project and made it more difficult to take real responsibility. The definition of dual-use in the documents was taken from the EU export control regulation, which is not entirely relevant for research. Here, too, greater awareness is required, so that we do not get caught up in thought patterns about what it is that could possibly have dual uses.

My personal conclusion is that human challenges are not only caused by a lack of knowledge. They are also caused by how we are tempted to think, by how we unconsciously repeat seemingly obvious patterns of thought. Our tendency to become imprisoned in mental patterns makes us unaware of our real problems and opportunities. Therefore, we should take the human philosophical drama more seriously. We need to see the importance of philosophising ourselves free from our self-incurred captivity in enticing ways of thinking. This is what happened in the Human Brain Project, I suggest, when its members felt challenged by the question of what it really means to take responsibility for dual-use research of concern.

Read Inga Ulnicane’s enlightening chapter, The governance of dual-use research in the EU. The case of neuroscience, which also mentions other patterns that can govern our thinking about governance of dual-use research.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Ulnicane, I. (2020). The governance of dual-use research in the EU: The case of neuroscience. In A. Calcara, R. Csernatoni, & C. Lavallée (Eds.), Emerging security technologies and EU governance: Actors, practices and processes (pp. 177-191). London: Routledge / Taylor & Francis Group.

This post in Swedish

Thinking about thinking
