A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: neuroethics

Securing the future already from the beginning

Imagine if there were a reliable method for predicting and managing future risks, such as everything that could go wrong with new technology. Then we could responsibly steer clear of all future dangers; we could secure the future already now.

Of course, it is just a dream. If we had a “reliable method” for excluding future risks from the beginning, time would soon rush past that method, which would then prove unreliable in a new era. Because we trusted the method, our way of managing future risks would itself soon become a future risk!

It is therefore impossible to secure the future from the beginning. Does this mean that we must give up all attempts to take responsibility for the future, because every method will fail to foresee something unpredictably new and therefore cause misfortune? Is it perhaps better not to try to take any responsibility at all, so as not to risk causing accidents through our imperfect safety measures? Strangely enough, it is just as impossible to be irresponsible for the future as it is to be responsible. You would need to make a meticulous effort not to happen to cook a healthy breakfast or avoid a car collision. Soon you would wish you had a “safe method” that could foresee all the future dangers that you must avoid if you want to live completely irresponsibly. Your irresponsibility for the future would become an insurmountable responsibility.

Sorry if I push the notions of time and responsibility beyond their breaking point, but I actually think that many of us have a natural inclination to do so, because the future frightens us. A current example is the tendency to think that someone in charge should have foreseen the pandemic and implemented powerful countermeasures from the beginning, so that we never had a pandemic. I do not want to deny that there are cases where we can reason like that – “someone in charge should have…” – but now I want to emphasize the temptation to instinctively reason in such a way as soon as something undesirable occurs. As if the future could be secured already from the beginning and unwanted events would invariably be scandals.

Now we are in a new situation. Due to the pandemic, it has become irresponsible not to prepare (better than before) for risks of pandemics. This is what our responsibility for the future looks like. It changes over time. Our responsibility rests in the present moment, in our situation today. Our responsibility for the future has its home right here. It may sound irresponsible to speak in such a way. Should we sit back and wait for the unwanted to occur, only then to acquire the responsibility to avoid it in the future? The problem is that this objection once again pushes concepts beyond their breaking point. It plays around with the idea that the future can be foreseen and secured already now, a thought pattern that in itself can be a risk. A society where each public institution must secure the future within its area of responsibility risks kicking people out of the secured order: “Our administration demands that we ensure that…, therefore we need a certificate and a personal declaration from you, where you…” Many would end up outside the secured order, which hardly secures any order. And because the trouble-makers are defined by contrived criteria, which may be implemented in automated administration systems, these systems will not only risk making systematic mistakes when they meet real people. They will also invite people to cheat the systems.

So how do we take responsibility for the future in a way that is responsible in practice? Let us first calm down. We have pointed out that it is impossible not to take responsibility! Just breathing means taking responsibility for the future, as does cooking breakfast or steering the car. Taking responsibility is so natural that no one needs to take responsibility for it. But how do we take responsibility for something as dynamic as research and innovation? They are already in the future, it seems, or at least at the forefront. How can we place the responsibility for a brave new world in the present moment, which seems to be in the past already from the beginning? Does not responsibility have to be just as future-oriented, just as much at the forefront, since research and innovation are constantly moving towards the future, where they make the future different from the already past present moment?

Once again, the concepts are pushed beyond their breaking point. Anyone who reads this post carefully can, however, note a hopeful contradiction. I have pointed out that it is impossible to secure the future already now, from the beginning. Simultaneously, I point out that it is in the present moment that our responsibility for the future lies. It is only here that we take responsibility for the future, in practice. How can I be so illogical?

The answer is that the first remark is directed at our intellectual tendency to push the notions of time and responsibility beyond their limits, when we fear the future and wish that we could control it right now. The second remark reminds us of how calmly the concepts of time and responsibility work in practice, when we take responsibility for the future. The first remark thus draws a line for the intellect, which hysterically wants to control the future totally and already from the beginning. The second remark opens up the practice of taking responsibility in each moment.

When we take responsibility for the future, we learn from history as it appears in current memory, as I have already indicated. The experiences from the pandemic make it possible at present to take responsibility for the future in a different way than before. The not always positive experiences of artificial intelligence make it possible at present to take better responsibility for future robotics. The strange thing, then, is that taking responsibility presupposes that things sometimes go wrong and that we are interested in the failures. Otherwise we would have nothing to learn from in preparing responsibly for the future. It is really obvious. Responsibility is possible only in a world that is not fully secured from the beginning, a world where the undesirable happens. Life is contradictory. We can never purify security according to the one-sided demands of the intellect, for security presupposes the uncertain and the undesirable.

Against this philosophical background, I would like to recommend an article in the Journal of Responsible Innovation, which discusses responsible research and innovation in a major European research project, the Human Brain Project (HBP): From responsible research and innovation to responsibility by design. The article describes how the project has tried to be foresighted and to take responsibility for its dynamic research and innovation. Not least, it reflects on the question of how to remain responsible even after the project ends, within the European research infrastructure that is planned to be the project’s product: EBRAINS.

The authors are well aware that specific regulated approaches easily become a source of problems when they encounter the new and unforeseen. Responsibility for the future cannot be regulated. It cannot be reduced to contrived criteria and regulations. One of the most important conclusions is that responsibility needs to be an integral part of research and innovation from the beginning, rather than an external framework. Responsibility for the future requires flexibility, openness, anticipation, engagement and reflection. But what is all that?

Personally, I want to say that it is partly about accepting the basic ambiguity of life. If we never have the courage to soar in uncertainty, but always demand security and nothing but security, we will definitely undermine security. By being sincerely interested in the uncertain and the undesirable, responsibility can become an integral part of research and innovation.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Bernd Carsten Stahl, Simisola Akintoye, Lise Bitsch, Berit Bringedal, Damian Eke, Michele Farisco, Karin Grasenick, Manuel Guerrero, William Knight, Tonii Leach, Sven Nyholm, George Ogoh, Achim Rosemann, Arleen Salles, Julia Trattnig & Inga Ulnicane. From responsible research and innovation to responsibility by design. Journal of Responsible Innovation (2021). DOI: 10.1080/23299460.2021.1955613

This post in Swedish

Approaching future issues

Can subjectivity be explained objectively?

The notion of a conscious universe, animated by unobservable experiences, is today presented almost as a scientific hypothesis. How is that possible? Do cosmologists’ hypotheses that the universe is filled with dark matter and dark energy contribute to making the idea of a universe filled with “dark consciousness” almost credible?

I ask the question because I myself am amazed at how the notion that elementary particles have elementary experiences has suddenly become academically credible. The idea that consciousness permeates reality is usually called panpsychism, and several philosophers in history are considered to have represented it. The alleged scientific status of panpsychism is justified today by emphasizing two classic philosophical failures to explain consciousness. Materialism has not succeeded in explaining how consciousness can arise from non-conscious physical matter. Dualism has failed to explain how consciousness, if it is separate from matter, can interact with physical reality.

Against this discouraging background, panpsychism is presented as an attractive, even elegant solution to the problem of consciousness. The hypothesis is that consciousness is hidden in the universe as a fundamental non-observable property of matter. Proponents of this elegant solution suggest that this “dark consciousness,” which permeates the universe, is extremely modest. Consciousness is present in every elementary particle in the form of unimaginably simple elementary experiences. These insignificant experiences are united and strengthened in the brain’s nervous system, giving rise to what we are familiar with as our powerful human consciousness, with its stormy feelings and thoughts.

However, this justification of panpsychism as an elegant solution to a big scientific problem presupposes that there really is a big scientific problem to “explain consciousness.” Is not the starting point a bit peculiar, that even subjectivity must be explained as an objective phenomenon? Even dualism tends to objectify consciousness, since it presents consciousness as a parallel universe to our physical universe.

The alternative explanations are thus all equally objectifying. Either subjectivity is reduced to purely material processes, or subjectivity is explained as a mental parallel universe, or subjectivity is hypostasized as “dark consciousness” that pervades the universe: as elementary experiential qualities of matter. Can we not let subjectivity be subjectivity and objectivity be objectivity?

Once upon a time there was a philosopher named Immanuel Kant. He saw how our constantly objectifying subjectivity turns into an intellectual trap, when it tries to understand itself without limiting its own objectifying approach to all questions. We then resemble cats that hopelessly chase their own tails: either by spinning to the right or by spinning to the left. Both directions are equally frustrating. Is there an elegant solution to the spinning cat’s problem? Now, I do not want to claim that Kant definitely exposed the “hard problem” of consciousness as an intellectual trap, but he pointed out the importance of self-critically examining our projective, objectifying way of functioning. If we lived as expansively as we explain everything objectively, we would soon exhaust the entire planet… is not that exactly what we do?

During a philosophy lecture, I tried to show the students how we can be trapped by apparent problems, by pseudo-problems that of course are not scientific problems, since they make us resemble cats chasing their own tails without realizing the unrealizability of the task. One student did not like what she perceived as an arbitrary limitation of the enormous achievements of science, so she objected: “But if it is the task of science to explain all big problems, then it must attempt to explain these riddles as well.” The objection is similar to the motivation of panpsychism, where it is assumed that it is the task of science to explain everything objectively, even subjectivity, no matter how hopelessly the questions spin in our heads.

The spinning cat’s problem has a simple solution: stop chasing the tail. Humans, on the other hand, need to clearly see the hopelessness of their spinning in order to stop it. Therefore, humans need to philosophize in order to live well on this planet.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

If you want to read more about panpsychism, here are two links:

Does consciousness pervade the universe?

The idea that everything from spoons to stones is conscious is gaining academic credibility

This post in Swedish

We challenge habits of thought

Consciousness and complexity: theoretical challenges for a practically useful idea

Contemporary research on consciousness is ambiguous, like the two-faced god Janus. On the one hand, it has achieved impressive practical results. We can today detect conscious activity in the brain for a number of purposes, including better therapeutic approaches to people affected by disorders of consciousness such as coma, the vegetative state and the minimally conscious state. On the other hand, the field is marked by deep controversy about methodology and basic definitions. As a result, we still lack an overarching theory of consciousness, that is to say, a theoretical account that scholars agree upon.

Developing a common theoretical framework is recognized as increasingly crucial to understanding consciousness and assessing related issues, such as emerging ethical issues. The challenge is to find common ground among the various experimental and theoretical approaches. A strong candidate that is achieving increasing consensus is the notion of complexity. The basic idea is that consciousness can be explained as a particular kind of neural information processing. The idea of associating consciousness with complexity was originally suggested by Giulio Tononi and Gerald Edelman in a 1998 paper titled Consciousness and Complexity. Since then, several papers have explored its potential as the key to a common understanding of consciousness.

Despite the increasing popularity of the notion, there are some theoretical challenges that need to be faced, particularly concerning the supposed explanatory role of complexity. These challenges are not only philosophically relevant. They might also affect the scientific reliability of complexity and the legitimacy of invoking this concept in the interpretation of emerging data and in the elaboration of scientific explanations. In addition, the theoretical challenges have a direct ethical impact, because an unreliable conceptual assumption may lead to misplaced ethical choices. For example, we might wrongly assume that a patient with low complexity is not conscious, or vice versa, and end up making medical decisions that are inappropriate to the actual clinical condition.

The claimed explanatory power of complexity is challenged in two main ways: semantically and logically. Let us take a quick look at both.

Semantic challenges arise from the fact that complexity is such a general and open-ended concept. It lacks a shared definition among different people and different disciplines. This open-ended generality and lack of definition can be a barrier to a common scientific use of the term, which may impact its explanatory value in relation to consciousness. In the landmark paper by Tononi and Edelman, complexity is defined as the sum of integration (conscious experience is unified) and differentiation (we can experience a large number of different states). It is important to recognise that this technical definition of complexity refers only to the state of consciousness, not to its contents. This means that complexity-related measures can give us relevant information about the level of consciousness, yet they remain silent about the corresponding contents and their phenomenology. This is an ethically salient point, since the dimensions of consciousness that appear most relevant to making ethical decisions are those related to subjective positive and negative experiences. For instance, while it is generally considered ethically neutral how we treat a machine, it is considered ethically wrong to cause negative experiences to other humans or to animals.
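For readers who want the formal flavour behind this talk of integration and differentiation, the neural complexity measure on which the 1998 paper builds (introduced by Tononi, Sporns and Edelman in 1994) can be sketched as follows, in my own notation: it is the mutual information between subsets of a neural system and the rest of the system, averaged over all subsets of a given size and summed over sizes,

    C_N(X) = \sum_{k=1}^{n/2} \left\langle MI\left(X_j^k ;\, X \setminus X_j^k\right) \right\rangle_j, \qquad MI(A;B) = H(A) + H(B) - H(A,B),

where X is a system of n neural elements, X_j^k is its j-th subset of size k, and H denotes entropy. The measure is high only when the parts of the system both share information (integration) and can take many distinguishable states (differentiation); a fully independent system and a fully synchronized one both score low.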

Logical challenges concern the justification for invoking complexity in explanations of consciousness. This justification usually takes one of two alternative forms: either bottom-up (from data to theory) or top-down (from phenomenology to physical structure). Both raise specific issues.

Bottom-up: Starting from empirical data indicating that particular brain structures or functions correlate with particular conscious states, relevant theoretical conclusions are inferred. More specifically, since the brains of subjects who are manifestly conscious exhibit complex patterns (integrated and differentiated patterns), we are supposedly justified in inferring that complexity indexes consciousness. This conclusion is a sound inference to the best explanation, but the fact that a conscious state correlates with a complex brain pattern in healthy subjects does not justify generalising it to all possible conditions (for example, disorders of consciousness), and it does not logically imply that complexity is a necessary and/or sufficient condition for consciousness.

Top-down: Starting from certain characteristics of personal experience, we are supposedly justified in inferring corresponding characteristics of the underlying physical brain structure. More specifically, if some conscious experience is complex in the technical sense of being both integrated and differentiated, we are supposedly justified in inferring that the correlated brain structures must be complex in the same technical sense. This conclusion does not seem logically justified unless we start from the assumption that consciousness and the corresponding physical brain structures must be similarly structured. Otherwise it is logically possible that conscious experience is complex while the corresponding brain structure is not, and vice versa. In other words, it does not appear justified to infer that since our conscious experience is integrated and differentiated, the corresponding brain structure must be integrated and differentiated. This is a possibility, but not a necessity.

The theoretical challenges above do not negate the practical utility of complexity as a relevant measure in specific clinical contexts, for example, for quantifying residual consciousness in patients with disorders of consciousness. What is at stake is the explanatory status of the notion. Even if we question complexity as a key factor in explaining consciousness, we can still acknowledge that complexity is practically relevant and useful, for example, in the clinic. In other words, while complexity as an explanatory category raises serious conceptual challenges that remain to be faced, at the practical level complexity represents one of the most promising tools we have to date for improving the detection of consciousness and for implementing effective therapeutic strategies.

I assume that Giulio Tononi and Gerald Edelman were hoping that their theory about the connection between consciousness and complexity would finally erase the embarrassing ambiguity of consciousness research, but the deep theoretical challenges suggest that we have to live with the resemblance to the two-faced god Janus for a while longer.

Written by…

Michele Farisco, Postdoctoral Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Tononi, G. and G. M. Edelman. 1998. Consciousness and complexity. Science 282(5395): 1846-1851.

We like critical thinking

To change the changing human

Neuroscience contributes to human self-understanding, but it also raises concerns that it might change humanness, for example, through new neurotechnology that affects the brain so deeply that humans are no longer truly human, or no longer experience themselves as human. Patients treated with deep brain stimulation, for example, can state that they feel like robots.

What ethical and legal measures could such a development justify?

Arleen Salles, neuroethicist in the Human Brain Project, argues that the question is premature, since we have not clarified our concept of humanness. The matter is complicated by the fact that there are several concepts of human nature to be concerned about. If we believe that our humanness consists of certain unique abilities that distinguish humans from animals (such as morality), then we tend to dehumanize beings who we believe lack these abilities as “animal-like.” If we believe that our humanness consists of certain abilities that distinguish humans from inanimate objects (such as emotions), then we tend to dehumanize beings who we believe lack these abilities as “mechanical.” It is probably in the latter sense that the patients above state that they do not feel human but rather like robots.

After a review of basic features of central philosophical concepts of human nature, Arleen Salles’ reflections take a surprising turn. She presents a concept of humanness that is based on the very neuroscientific research that some worry could change our humanness! What is truly surprising is that this concept of humanness to some extent questions the question itself. The concept emphasizes the profound changeability of the human.

What does it mean to worry that neuroscience can change human nature, if human nature is largely characterized by its ability to change?

If you follow the Ethics Blog and remember a post about Kathinka Evers’ idea of a neuroscientifically motivated responsibility for human nature, you are already familiar with the dynamic concept of human nature that Arleen Salles presents. In simple terms, it can be said to be a matter of complementing human genetic evolution with an “epigenetic” selective stabilization of synapses, which every human being undergoes during upbringing. These connections between brain cells are not inherited genetically but are selected in the living brain while it interacts with its environments. Language can be assumed to belong to the human abilities that largely develop epigenetically. I have proposed a similar understanding of language in collaboration with two ape language researchers.

Do not assume that this dynamic concept of human nature presupposes that humanness is unstable. As if the slightest gust of wind could disrupt human evolution and change human nature. On the contrary, the language we develop during upbringing probably contributes to stabilizing the many human traits that develop simultaneously. Language probably supports the transmission to new generations of the human forms of life where language has its uses.

Arleen Salles’ reflections are important contributions to the neuroethical discussion about human nature, the brain and neuroscience. In order to take ethical responsibility, we need to clarify our concepts, she emphasizes. We need to consider that humanness develops in three interconnected dimensions. It is about our genetics together with the selective stabilization of synapses in living brains in continuous interaction with social-cultural-linguistic environments. All at the same time!

Arleen Salles’ reflections are published as a chapter in a new anthology, Developments in Neuroethics and Bioethics (Elsevier). I am not sure if the publication will be open access, but hopefully you can find Arleen Salles’ contribution via this link: Humanness: some neuroethical reflections.

The chapter is recommended as an innovative contribution to the understanding of human nature and the question of whether neuroscience can change humanness. The question takes a surprising turn, which suggests that all of us together have an ongoing responsibility for our changing humanness.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Arleen Salles (2021). Humanness: some neuroethical reflections. Developments in Neuroethics and Bioethics. https://doi.org/10.1016/bs.dnb.2021.03.002

This post in Swedish

We think about bioethics

Can you be cloned?

Why can we feel metaphysical nausea at the thought of cloned humans? I guess it has to do with how we, without giving ourselves sufficient time to reflect, are captivated by a simple image of individuality and cloning. The image then controls our thinking. We may imagine that cloning consists in multiplying our unique individuality in the form of indistinguishable copies. We then feel dizzy at the unthinkable thought that our individual selves would be multiplied as copies, all of which in some strange way are me, or cannot be distinguished from me.

In a contribution to a philosophical online magazine, Kathinka Evers diagnoses this metaphysical nausea about cloning. If you have the slightest tendency to worry that you may be multiplied as “identical copies” that cannot be distinguished from you, then give yourself the seven minutes it takes to read the text and free yourself from the ailment:

“I cannot be cloned: the identity of clones and what it tells us about the self.”

Of course, Kathinka Evers does not deny that cloning is possible or associated with risks of various kinds. She questions the premature image of cloning by giving us time to reflect on individual identity, without being captivated by the simple image.

We are disturbed by the thought that modern research in some strange way could do what should be unthinkable. When it becomes clear that what we are worried about is unthinkable, the dizziness disappears. In her enlightening diagnosis of our metaphysical nausea, Kathinka Evers combines philosophical reflection with illuminating facts about, among other things, genetics and personality development.

Give yourself the seven minutes it takes to get rid of metaphysical nausea about cloning!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Thinking about thinking

Can AI be conscious? Let us think about the question

Artificial Intelligence (AI) has achieved remarkable results in recent decades, especially thanks to the refinement of an old and long-neglected technology called Deep Learning (DL), a class of machine learning algorithms. Some achievements of DL had a significant impact on public opinion thanks to extensive media coverage, like the case of the program AlphaGo, which defeated the Go world champion Lee Sedol, and its successor AlphaGo Zero, which then surpassed it by learning entirely through self-play.

This triumph of AlphaGo was a kind of profane consecration of AI’s operational superiority in an increasing number of tasks. This manifest superiority gave rise to mixed feelings in human observers: pride at being its creator; admiration for what it was able to do; fear of what it might eventually learn to do.

AI research has generated a linguistic and conceptual process of re-thinking traditionally human features, stretching their meaning or even reinventing their semantics in order to attribute these traits also to machines. Think of how learning, experience, training, prediction, to name just a few, are attributed to AI. Even if they have a specific technical meaning among AI specialists, lay people tend to interpret them within an anthropomorphic view of AI.

One human feature in particular is considered the Holy Grail when AI is interpreted according to an anthropomorphic pattern: consciousness. The question is: can AI be conscious? It seems to me that we can answer this question only after considering a number of preliminary issues.

First we should clarify what we mean by consciousness. In philosophy and in cognitive science, there is a useful distinction, originally introduced by Ned Block, between access consciousness and phenomenal consciousness. The first refers to the interaction between different mental states, particularly the availability of one state’s content for use in reasoning and rationally guiding speech and action. In other words, access consciousness refers to the possibility of using what I am conscious of. Phenomenal consciousness refers to the subjective feeling of a particular experience, “what it is like to be” in a particular state, to use the words of Thomas Nagel. So, in what sense of the word “consciousness” are we asking if AI can be conscious?

To illustrate how the sense in which we choose to talk about consciousness makes a difference in assessing the possibility of conscious AI, let us take a look at an interesting article written by Stanislas Dehaene, Hakwan Lau and Sid Kouider. They frame the question of AI consciousness within the Global Neuronal Workspace Theory, one of the leading contemporary theories of consciousness. As the authors write, according to this theory, conscious access corresponds to the selection, amplification, and global broadcasting, to many distant brain areas, of particular information, selected for its salience or relevance to current goals. More specifically, Dehaene and colleagues explore the question of conscious AI along two lines within an overall computational framework:

  1. Global availability of information (the ability to select, access, and report information)
  2. Metacognition (the capacity for self-monitoring and confidence estimation).

Their conclusion is that AI might implement the first meaning of consciousness, while it currently lacks the necessary architecture for the second one.
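To make the two lines concrete, here is a deliberately toy sketch in Python. All the names and numbers are my own illustration, not Dehaene and colleagues’ model: “global availability” is played by a workspace that selects the most salient content and makes it available for report, and “metacognition” by a crude confidence estimate, the margin by which the winning content beats the runner-up.

    import random

    def modules_report():
        # Each "module" proposes some content together with a salience score.
        return {
            "vision": ("red square ahead", random.random()),
            "audition": ("beep on the left", random.random()),
            "memory": ("this room seen before", random.random()),
        }

    def broadcast(reports):
        # Global availability: select the most salient content. In a fuller
        # model, the selected content would now be shared with every module.
        winner, (content, salience) = max(reports.items(), key=lambda kv: kv[1][1])
        return winner, content, salience

    def confidence(reports, winning_salience):
        # Metacognition as self-monitoring: how far does the winner beat
        # the runner-up? A small margin means low confidence.
        saliences = sorted(s for _, s in reports.values())
        return winning_salience - saliences[-2]

    reports = modules_report()
    module, content, salience = broadcast(reports)
    print("broadcast from", module + ":", content)
    print("confidence:", round(confidence(reports, salience), 2))

Nothing in such a sketch settles the question of consciousness, of course; if anything, it illustrates how thin a purely computational reading of the two terms can be, which is precisely the worry raised below.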

As mentioned, the premise of their analysis is a computational view of consciousness. In other words, they choose to reduce consciousness to specific types of information-processing computations. We can legitimately ask whether such a choice covers the richness of consciousness, particularly whether a computational view can account for the experiential dimension of consciousness.

This shows how the main obstacle in assessing the question whether AI can be conscious is a lack of agreement about a theory of consciousness in the first place. For this reason, rather than asking whether AI can be conscious, maybe it is better to ask what might indicate that AI is conscious. This brings us back to the indicators of consciousness that I wrote about in a blog post some months ago.

Another important preliminary issue to consider, if we want to seriously address the possibility of conscious AI, is whether we can use the same term, “consciousness,” to refer to a different kind of entity: a machine instead of a living being. Should we expand our definition to include machines, or should we rather create a new term for their case? I personally think that the term “consciousness” is too charged, from several perspectives, including ethical, social, and legal ones, to be extended to machines. Using the term to qualify AI risks stretching it so far that it eventually becomes meaningless.

If we create AI that manifests abilities that are similar to those that we see as expressions of consciousness in humans, I believe we need a new language to denote and think about it. Otherwise, important preliminary philosophical questions risk being dismissed or lost sight of behind a conceptual veil of possibly superficial linguistic analogies.

Written by…

Michele Farisco, Postdoctoral Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

We want solid foundations

An unusually big question

Sometimes the intellectual claims on science are so big that they risk obscuring the actual research. This seems to happen not least when the claims are associated with some great prestigious question, such as the origin of life or the nature of consciousness. By emphasizing the big question, one often wants to show that modern science is better suited than older human traditions to answer the riddles of life. Better than philosophy, for example.

I think of this when I read a short article about such a riddle: “What is consciousness? Scientists are beginning to unravel a mystery that has long vexed philosophers.” The article by Christof Koch gives the impression that it is only a matter of time before science determines not only where in the brain consciousness arises (one already seems to have a suspect), but also the specific neural mechanisms that give rise to – everything you have ever experienced. At least if one is to believe one of the fundamental theories about the matter.

Reading about the discoveries behind the identification of where in the brain consciousness arises is as exciting as reading a whodunit. It is obvious that important research is being done here on the effects that loss or stimulation of different parts of the brain can have on people’s experiences, mental abilities and personalities. The description of a new technology and mathematical algorithm for determining whether patients are conscious or not is also exciting and indicates that research is making fascinating progress, which can have important uses in healthcare. But when mathematical symbolism is used to suggest a possible fundamental explanation for everything you have ever experienced, the article becomes as difficult to understand as the most obscure philosophical text from times gone by.

Since even representatives of science sometimes make philosophical claims, namely, when they want to answer prestigious riddles, it is perhaps wiser to be open to philosophy than to compete with it. Philosophy is not just about speculating about big questions. Philosophy is also about humbly clarifying the questions, which otherwise tend to grow beyond all reasonable limits. Such openness to philosophy flourishes in the Human Brain Project, where some of my philosophical colleagues at CRB collaborate with neuroscientists to conceptually clarify questions about consciousness and the brain.

Something I myself wondered about when reading the scientifically exciting but at the same time philosophically ambitious article is the idea that consciousness is everything we experience: “It is the tune stuck in your head, the sweetness of chocolate mousse, the throbbing pain of a toothache, the fierce love for your child and the bitter knowledge that eventually all feelings will end.” What does it mean to take such an all-encompassing claim seriously? What is not consciousness? If everything we can experience is consciousness, from the taste of chocolate mousse to the sight of the stars in the sky and our human bodies with their various organs, where is the objective reality to which science wants to relate consciousness? Is it in consciousness?

If consciousness is our inevitable vantage point, if everything we experience as real is consciousness, it becomes unclear how we can treat consciousness as an objective phenomenon in the world along with the body and other objects. Of course, I am not talking here about actual scientific research about the brain and consciousness, but about the limitless intellectual claim that scientists sooner or later will discover the neural mechanisms that give rise to everything we can ever experience.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Christof Koch, What Is Consciousness? Scientists are beginning to unravel a mystery that has long vexed philosophers, Nature 557, S8-S12 (2018) https://doi.org/10.1038/d41586-018-05097-x

This post in Swedish

We transgress disciplinary borders

Human rights and legal issues related to artificial intelligence

How do we take responsibility for a technology that is used almost everywhere? As we develop more and more uses of artificial intelligence (AI), it becomes increasingly challenging to get an overview of how this technology can affect people and human rights.

Although AI legislation is already being developed in several areas, Rowena Rodrigues argues that we need a panoramic overview of the widespread challenges. What does the situation look like? Where can human rights be threatened? How are the threats handled? Where do we need to make greater efforts? In an article in the Journal of Responsible Technology, she offers such an overview, which she then discusses on the basis of the concept of vulnerability.

The article identifies ten problem areas. One problem is that AI makes decisions based on algorithms whose decision process is not completely transparent. Why did I not get the job, the loan or the benefit? Hard to know when computer programs deliver the decisions as if they were oracles! Other problems concern security and liability, for example when automatic decision-making is used in cars, in medical diagnosis or in weapons, or when governments monitor citizens. Other problem areas may involve risks of discrimination or invasion of privacy when AI collects and uses large amounts of data to make decisions that affect individuals and groups. In the article you can read about more problem areas.

For each of the ten challenges, Rowena Rodrigues identifies solutions that are currently in place, as well as the gaps that remain to be addressed. She then turns to human rights. Rowena Rodrigues argues that international human rights treaties, although they do not mention AI, are relevant to most of the issues she has identified. She emphasises the importance of safeguarding human rights from a vulnerability perspective. Through such a perspective, we see more clearly where and how AI can challenge human rights. We see more clearly how we can reduce negative effects, develop resilience in vulnerable communities, and tackle the root causes of the various forms of vulnerability.

Rowena Rodrigues is linked to the SIENNA project, which ends this month. Read her article on the challenges of a technology that is used almost everywhere: Legal and human rights issues of AI: Gaps, challenges and vulnerabilities.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Rowena Rodrigues. 2020. Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology 4. https://doi.org/10.1016/j.jrt.2020.100005

This post in Swedish

We recommend readings

Learning from international attempts to legislate psychosurgery

So-called psychosurgery, in which psychiatric disorders are treated by neurosurgery, for example by cutting connections in the brain, may have a somewhat tarnished reputation after the insensitive use of lobotomy in the 20th century to treat anxiety and depression. Nevertheless, neurosurgery for psychiatric disorders can help some patients, and the field is developing rapidly. It probably needs updated regulation, but what are the challenges?

The issue is examined from an international perspective in an article in Frontiers in Human Neuroscience. Neurosurgery for psychiatric disorders does not have to involve destroying brain tissue or cutting connections. In so-called deep brain stimulation, for example, electrical pulses are sent to certain areas of the brain. The method has been shown to relieve movement disorders in patients with Parkinson’s disease. This unexpected possibility illustrates one of the challenges. How do we delimit which treatments the regulation should cover in an area with rapid scientific and technical development?

The article charts legislation on neurosurgery for psychiatric disorders from around the world. The purpose is to find strengths and weaknesses in the various laws. The hope is that the survey will justify reasonable ways of dealing with the challenges in the future, while achieving greater international harmonisation. The challenges are, as I said, several, but regarding the challenge of delimiting which treatments the regulation should cover, the legislation in Scotland is mentioned as an example. It does not provide an exhaustive list of treatments to be covered by the regulation, but states that treatments other than those listed may also be covered.

If you are interested in law and want a more detailed picture of the questions that need to be answered for a good regulation of the field, read the article: International Legal Approaches to Neurosurgery for Psychiatric Disorders.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Chandler JA, Cabrera LY, Doshi P, Fecteau S, Fins JJ, Guinjoan S, Hamani C, Herrera-Ferrá K, Honey CM, Illes J, Kopell BH, Lipsman N, McDonald PJ, Mayberg HS, Nadler R, Nuttin B, Oliveira-Maia AJ, Rangel C, Ribeiro R, Salles A and Wu H (2021) International Legal Approaches to Neurosurgery for Psychiatric Disorders. Front. Hum. Neurosci. 14:588458. doi: 10.3389/fnhum.2020.588458

This post in Swedish

Thinking about law

How do we take responsibility for dual-use research?

We are more often than we think governed by old patterns of thought. As a philosopher, I find it fascinating to see how mental patterns capture us, how we get imprisoned in them, and how we can get out of them. With that in mind, I recently read a book chapter on something that is usually called dual-use research. Here, too, there are patterns of thought that can capture us.

In the chapter, Inga Ulnicane discusses how responsibility for neuroscientific dual-use research of concern was developed in the Human Brain Project (HBP). I read the chapter as a philosophical drama. The European rules that govern HBP are themselves governed by mental patterns about what dual-use research is. In order to take real responsibility for the project, it was therefore necessary within HBP to think oneself free from the patterns that governed the governance of the project. Responsibility became a philosophical challenge: to raise awareness of the real dual-use issues that may be associated with neuroscientific research.

Traditionally, “dual use” refers to civilian versus military uses. When it was stipulated that research in HBP should focus exclusively on civilian applications, the regulation of the project was, one could say, itself regulated by this pattern of thought. There are, of course, major military interests in neuroscientific research, not least because the research borders on information technology, robotics and artificial intelligence. Results can be used to improve soldiers’ abilities in combat. They can be used for more effective intelligence gathering, more powerful image analysis, faster threat detection, more accurate robotic weapons, and to satisfy many other military desires.

The problem is that there are more problematic desires than the military ones. Research results can also be used to manipulate people’s thoughts and feelings for non-military purposes. They can be used to monitor populations and control their behaviour. It is impossible to say once and for all what problematic desires neuroscientific research can arouse, military and non-military. A single good idea can spawn several bad ideas in many other areas.

Therefore, HBP prefers to talk about beneficial and harmful uses, rather than civilian and military ones. This more open understanding of “the dual” means that problematic areas of use cannot be identified once and for all. Instead, continuous discussion is required among researchers, other actors and the general public to increase awareness of the various possible problematic uses of neuroscientific research. We need to help each other see real problems, which can occur in completely different places than we expect. And since the problems move across borders, global cooperation is needed between brain projects around the world.

Within HBP, it was found that an additional thought pattern governed the regulation of the project and made it more difficult to take real responsibility: the definition of dual use in the documents was taken from the EU export control regulation, which is not entirely relevant for research. Here, too, greater awareness is required, so that we do not get caught up in thought patterns about what it is that could possibly have dual uses.

My personal conclusion is that human challenges are not only caused by a lack of knowledge. They are also caused by how we are tempted to think, by how we unconsciously repeat seemingly obvious patterns of thought. Our tendency to become imprisoned in mental patterns makes us unaware of our real problems and opportunities. Therefore, we should take the human philosophical drama more seriously. We need to see the importance of philosophising ourselves free from our self-incurred captivity in enticing ways of thinking. This is what one did in the Human Brain Project, I suggest, when one felt challenged by the question of what it really means to take responsibility for dual-use research of concern.

Read Inga Ulnicane’s enlightening chapter, The governance of dual-use research in the EU: The case of neuroscience, which also mentions other patterns that can govern our thinking about the governance of dual-use research.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Ulnicane, I. (2020). The governance of dual-use research in the EU: The case of neuroscience. In A. Calcara, R. Csernatoni, & C. Lavallée (Editors), Emerging security technologies and EU governance: Actors, practices and processes. London: Routledge / Taylor & Francis Group, pages 177-191.

This post in Swedish

Thinking about thinking
