A blog from the Centre for Research Ethics & Bioethics (CRB)


Digital twins, virtual brains and the dangers of language

A new computer simulation technology is being introduced, for example in the manufacturing industry. The simulation is called a digital twin, a name that challenges me to bring to life for the reader what something that sounds so imaginative can be in reality.

The most realistic explanation I can find actually comes from Harry Potter’s world. Do you remember the map of Hogwarts, which not only shows all the rooms and corridors, but also, in real time, the footsteps of those who sneak around the school? A similar map can easily be created in a computer environment by connecting the map in the computer to sensors in the floor of the building that the map depicts. Immediately you have an interactive digital map of the building that is automatically updated and shows people’s movements in it. Imagine further that the computer simulation can make calculations that predict crowds exceeding the authorities’ recommendations, and that it automatically sends out warning messages via a speaker system. As far as I understand, such an interactive digital map can be called a digital twin for an intelligent house.
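To make the idea concrete, here is a minimal sketch in Python of such an interactive map. Everything in it is a hypothetical illustration (the class name, the sensor format, the crowd limit), not a real digital twin platform:

```python
# Minimal sketch of the "intelligent house" digital twin described above:
# floor sensors report positions, the twin keeps an up-to-date occupancy
# map and warns when a room exceeds a recommended crowd limit.

class DigitalTwinMap:
    def __init__(self, crowd_limit):
        self.crowd_limit = crowd_limit
        self.occupancy = {}  # room -> set of person ids currently in it

    def sensor_update(self, person_id, room):
        """A floor sensor reports that person_id now stands in room."""
        # Remove the person from whichever room they were in before.
        for people in self.occupancy.values():
            people.discard(person_id)
        self.occupancy.setdefault(room, set()).add(person_id)

    def warnings(self):
        """Rooms whose occupancy exceeds the recommended limit."""
        return [room for room, people in self.occupancy.items()
                if len(people) > self.crowd_limit]


twin = DigitalTwinMap(crowd_limit=2)
for person, room in [("anna", "hall"), ("bo", "hall"),
                     ("cleo", "hall"), ("dan", "lab")]:
    twin.sensor_update(person, room)

print(twin.warnings())  # the hall now holds three people -> ['hall']
```

In a real digital twin, the sensor updates would of course stream in continuously and the warning would trigger the speaker system automatically, but the principle is just this loop of sensing and updating.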

Of course, this is a revolutionary technology. The architect’s drawing in a computer program gets extended life in both the production and maintenance of the building. The digital simulation is connected to sensors that update the simulation with current data on relevant factors in the construction process and thereafter in the finished building. The building gets a digital twin that during the entire life cycle of the building automatically contacts maintenance technicians when the sensors show that the washing machines are starting to wear out or that the air is not circulating properly.

The scope of use for digital twins is huge. The point of them, as I understand it, is not that they are “exact virtual copies of reality,” whatever that might mean. The point is that the computer simulation is linked to the simulated object in a practically relevant way. Sensors automatically update the simulation with relevant data, while the simulation automatically updates the simulated object in relevant ways. At the same time, users, manufacturers, maintenance technicians and other actors are kept updated and can easily monitor the object’s current status, opportunities and risks, wherever in the world they happen to be.

The European flagship project Human Brain Project plans to develop digital twins of human brains by building virtual brains in a computer environment. In a new article, the philosophers Kathinka Evers and Arleen Salles, who are both working in the project, examine the enormous challenges involved in developing digital twins of living human brains. Is it even conceivable?

The authors compare types of objects that can have digital twins. It can be artefacts such as buildings and cars, or natural inanimate phenomena such as the bedrock at a mine. But it could also be living things such as the heart or the brain. The comparisons in the article show that the brain stands out in several ways, all of which make it unclear whether it is reasonable to talk about digital twins of human brains. Would it be more appropriate to talk about digital cousins?

The brain is astronomically complex and despite new knowledge about it, it is highly opaque to our search for knowledge. How can we talk about a digital twin of something that is as complex as a galaxy and as unknown as a black hole? In addition, the brain is fundamentally dynamically interactive. It is connected not only with the body but also with culture, society and the world around it, with which it develops in uninterrupted interaction. The brain almost merges with its environment. Does that imply that a digital twin would have to be a twin of the brain-body-culture-society-world, that is, a digital twin of everything?

No, of course not. The aim of the project is to find specific medical applications of the new computer simulation technology. By developing digital twins of certain aspects of certain parts of patients’ brains, it is hoped that one can improve and individualize, for example, surgical procedures for diseases such as epilepsy. Just as the map from Harry Potter’s world shows people’s steps in real time, the digital twin of the brain could follow the spread of certain nerve impulses in certain parts of the patient’s brain. This can open up new opportunities to monitor, diagnose, predict and treat diseases such as epilepsy.

Should we avoid the term digital twin when talking about the brain? Yes, it would probably be wiser to talk about digital siblings or digital cousins, argue Kathinka Evers and Arleen Salles. Although experts in the field understand its technical use, the term “digital twin” is linguistically risky when we talk about human brains. It easily leads the mind astray. We imagine that the digital twin must be an exact copy of a human’s whole brain. This risks creating unrealistic expectations and unfounded fears about the development. History shows that language also contains other dangers. Words come with normative expectations that can have ethical and social consequences that may not have been intended. Talking about a digital twin of a mining drill is probably no major linguistic danger. But when it comes to the brains of individual people, the talk of digital twins can become a new linguistic arena where we reinforce prejudices and spread fears.

After reading some popular scientific explanations of digital twins, I would like to add that caution may be needed also in connection with industrial applications. After all, the digital twin of a mining drill is not an “exact virtual copy of the real drill” in some absolute sense, right down to the movements of individual atoms. The digital twin is a copy in the practical sense that the application makes relevant. Sometimes it is enough to copy where people put their feet down, as in Harry Potter’s world, whose magic unexpectedly helps us understand the concept of a digital twin more realistically than many verbal explanations do. Explaining words with the help of other words is not always clarifying, if all the words steer thought in the same direction. The words “copy” and “replica” lead our thinking just as right and just as wrong as the word “twin” does.

If you want to better understand the challenges of creating digital twins of human brains and the importance of conceptual clarity concerning the development, read the philosophically elucidatory article: Epistemic Challenges of Digital Twins & Virtual Brains: Perspectives from Fundamental Neuroethics.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Evers, Kathinka & Salles, Arleen. (2021). Epistemic Challenges of Digital Twins & Virtual Brains: Perspectives from Fundamental Neuroethics. SCIO: Revista de Filosofía, 27-53. DOI: 10.46583/scio_2021.21.846

This post in Swedish

Minding our language

Inspired

What does it mean to be inspired by someone? Think of these inspired music albums where artists lovingly pay tribute to a great musician by making their own interpretations of the songs. These interpretations often express deep gratitude for the inspiration received from the musician. We can feel similar gratitude to inspiring people in many different areas.

Why are we inspired by inspiring people? Here is a tempting picture. The person who inspires us has something that we lack. To be inspired is to want what the inspiring person has: “I also want to be able to…”; “I want to be as good as…” and so on. That is why we imitate those who inspire us. That is why we train hard. By imitating, by practicing, the inspiring person’s abilities can be transferred to us who lack them.

This could be called a pneumatic picture of inspiration. The inspiring one is, so to speak, an air tank with overpressure. The rest of us are tanks with negative pressure. The pressure difference causes the inspiration. By imitating the inspiring person, the pressure difference is evened out. The pressure migrates from the inspiring to the inspired. We inhale the air that flows from the tank with overpressure.

This picture is certainly partly correct, but it is hardly the whole truth about inspiration. I am not a musician. There is a big difference in pressure between me and any musician. Why does this pressure difference not cause inspiration? Why do I not start imitating musicians, training hard so that some of the musicians’ overpressure is transferred to me?

The pneumatic picture is not the whole truth, other pictures of inspiration are possible. Here is one. Maybe inspiration is not aroused by difference, not by the fact that we lack what the inspiring person has. Perhaps inspiration is aroused by similarity, by the fact that we sense a deep affinity with the one who inspires us. When we are inspired, we recognize ourselves in the one who inspires us. We discover something we did not know about ourselves. Seeds that we did not know existed in us begin to sprout, when the inspiring person makes us aware that we have the same feeling, the same passion, the same creativity… At that moment, the inspiration is aroused in us.

In this alternative picture of inspiration, there is no transfer of abilities from the inspiring one to the inspired ones. Rather, the abilities grow spontaneously in the inspired ones themselves, when they sense their affinity with the inspiring one. In the inspiring person, this growth has already taken place. Creativity has had time to develop and take shape, so that the rest of us can recognize ourselves in it. This alternative image of inspiration also provides an alternative image of human history in different areas. We are familiar with historical representations of how predecessors inspired their successors, as if the abilities of the predecessors were transferred horizontally in time. In the alternative picture, history is not just horizontal. Above all, it has a vertical depth dimension in each of us. Growing takes place vertically in each new generation, much like seeds sprout in the earth and grow towards the sky. History is, in this alternative image, a series of vertical growths, where it is difficult to distinguish the living creativity in the depth dimension from the imitation on the surface.

Why am I writing a post about inspiration? Apart from the fact that it is inspiring to think about something as vital as inspiration, I want to show how unnoticed we make pictures of facts. We do not see that it is actually just pictures, which could be replaced by completely different pictures. I learned this from the philosopher Ludwig Wittgenstein, who inspired me to examine philosophical questions myself: questions which surprisingly often arise because we are captured in our images of things. Our captivity in certain images prevents us from seeing other possibilities and obvious facts.

In addition, I want to show that it really makes a difference if we are caught in our pictures of things or open to the possibility of completely different pictures. It has been a long time since I wrote about ape language research on this blog, but the attempt to teach apes human language is an example of what a huge difference it can make, if we free ourselves from a picture that prevents us from seeing the possibility of other pictures.

Attempts to teach apes human language were based on the first picture, which highlights the difference between the one who inspires and the one who is inspired. It was thought that because apes lack the language skills that we humans have, there is only one way to teach apes human language: we need to transfer the language skills horizontally to the apes, by training them. This “only” way failed so clearly, and the failure was so well documented, that only a few researchers were subsequently open to the results of a markedly more successful, and at least as well-documented, experiment, which was based on the alternative picture of inspiration.

In the alternative experiment, the researchers saw an opportunity that the first picture made it difficult to see. If apes and humans live together daily in a closely united group, so that they have opportunities to sense affinities with each other, then language seeds that we did not know existed in apes could be inspired to sprout and grow spontaneously in the apes themselves. Vertically within the apes, rather than through horizontal transmission, as when humans train animals. In fact, this alternative experiment was so successful that it resulted in a series of spontaneous language growths in apes. As time went on, new-born apes were inspired not only by the humans in the group, but also by the older apes whose linguistic creativity had taken shape.

If you want to read more about this unexpected possibility of inspiration between species, which suggests unexpected affinities, as when humans are inspired by each other, you will find a book reference below. I wrote the book a long time ago with William M. Fields and Sue Savage-Rumbaugh. Both have inspired me – for which I am deeply grateful – for example, in this blog post with its alternative picture of inspiration. That I mention the book again is because I hope that the time is ripe for philosophers, psychologists, anthropologists, educationalists, linguists, neuroscientists and many others to be inspired by the unexpected possibility of human-inspired linguistic creativity in our non-human relatives.

To finally connect the threads of music and ape language research, I can tell you that two great musicians, Paul McCartney and Peter Gabriel, have visited the language-inspired apes. Both of them played music with the apes and Peter Gabriel and Panbanisha even created a song together. Can we live without inspiration?

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Segerdahl, P., Fields, W. & Savage-Rumbaugh, S. 2005. Kanzi’s Primal Language. The Cultural Initiation of Primates into Language. Palgrave Macmillan

Segerdahl, P. 2017. Can an Ape Become Your Co-Author? Reflections on Becoming as a Presupposition of Teaching. In: A Companion to Wittgenstein on Education. Pedagogical Investigations. Peters, M. A. and Stickney, J. (Eds.). Singapore: Springer, pp. 539-553

This post in Swedish

We write about apes

Brain-inspired AI: human narcissism again?

This is an age when Artificial Intelligence (AI) is exploding and invading almost every aspect of our lives. From entertainment to work, from economics to medicine, from education to marketing, we deal with a number of disparate AI systems that make our lives much easier than they were a few years ago, but that also raise new ethical issues or emphasize old, still open questions.

A basic fact about AI is that it is progressing at an impressive pace, while still being limited with regard to various specific contexts and goals. We often read, also in non-specialized journals, that AI systems are not robust (meaning that they are not good at dealing with datasets that differ too much from the ones they were trained on, which also leaves them exposed to adversarial attacks), that they are not fully transparent, and that they are limited in their capacity to generalize, for instance. This suggests that the reliability of AI systems, in other words the possibility of using them for achieving different goals, is limited, and that we should not blindly trust them.

A strategy increasingly chosen by AI researchers in order to improve the systems they develop is taking inspiration from biology, and specifically from the human brain. Actually, this is not really new: already the first wave of AI took inspiration from the brain, which was (and still is) the most familiar intelligent system in the world. This trend towards brain-inspired AI is gaining much more momentum today, for two main reasons among others: big data and the very powerful technology to handle big data. And yet, brain-inspired AI raises a number of questions of an even deeper nature, which urge us to stop and think.

Indeed, when compared to the human brain, present AI reveals several differences and limitations with regard to different contexts and goals. For instance, present machine learning cannot generalize the abilities it achieves on the basis of specific data in order to use them in different settings and for different goals. Also, AI systems are fragile: a slight change in the characteristics of the processed data can have catastrophic consequences. These limitations arguably depend both on how AI is conceived (technically speaking: on its underlying architecture) and on how it works (on its underlying technology). I would like to introduce some reflections on the choice to use the human brain as a model for improving AI, including the apparent limitations of this choice.
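The fragility mentioned above can be illustrated with a deliberately naive toy model, a hypothetical threshold classifier rather than any real AI system: fitted on one data distribution, it misclassifies data whose distribution has shifted only slightly.

```python
# Toy illustration of distribution shift: a threshold "classifier"
# fitted on training data fails when the data drift modestly.
# Purely illustrative; all values are made up.

def fit_threshold(class_a, class_b):
    """Place a decision threshold midway between the two class means."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(class_a) + mean(class_b)) / 2

train_a = [1.0, 1.2, 0.9]   # e.g. "normal" readings
train_b = [3.0, 3.1, 2.9]   # e.g. "anomalous" readings
threshold = fit_threshold(train_a, train_b)  # roughly 2.0

classify = lambda x: "a" if x < threshold else "b"

print(classify(1.1))        # in-distribution: correctly "a"
# A modest shift of +1.2 pushes class-a data over the threshold:
print(classify(1.1 + 1.2))  # now misclassified as "b"
```

Real machine learning systems are vastly more sophisticated, but the underlying vulnerability to data that deviate from the training distribution is of the same kind.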

Very roughly, AI researchers are looking at the human brain to infer operational principles and then translate them into AI systems and eventually make these systems better in a number of tasks. But is a brain-inspired strategy the best we can choose? What justifies it? In fact, there are already AI systems that work in ways that do not conform to the human brain. We cannot exclude a priori that AI will eventually develop more successfully along lines that do not fully conform to, or that even deviate from, the way the human brain works.

Also, we should not forget that there is no such thing as the brain: there is a huge diversity both among different people and within the brain itself. The development of our brains reflects a complex interplay between our genetic make-up and our life experiences. Moreover, the brain is a multilevel organ with different structural and functional levels.

Thus, claiming a brain-inspired AI without clarifying which specific brain model is used as a reference (for instance, neuronal action potentials rather than the connectome’s network structure) is possibly misleading, if not nonsensical.

There is also a more fundamental philosophical point worth considering. Postulating that the human brain is paradigmatic for AI risks implicitly endorsing a form of anthropocentrism and anthropomorphism, both of which are evidence of our intellectual self-centeredness and of our limited ability to think beyond what we think we are.

While pragmatic reasons might justify the choice to take the brain as a model for AI (after all, for many aspects, the brain is the most efficient intelligent system that we know in nature), I think we should avoid the risk of translating this legitimate technical effort into a further narcissistic, self-referential anthropological model. Our history is already full of such models, and they have not been ethically or politically harmless.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Approaching future issues

Conceptual analysis when we get stuck in thoughts

When philosophers are asked what method we use when we philosophize, we are happy to answer: our most important method is conceptual analysis. We apply conceptual analysis to answer philosophical questions such as “What is knowledge?”, “What is justice?”, “What is truth?” What we do is that we propose general definitions of the concepts, which we then fine-tune by using concrete examples to test that the definitions really capture all individual cases of the concepts and only these.

The problem is that both those who ask for the method of philosophy and those who answer “conceptual analysis” seem to assume that philosophy is not challenged by deeply disturbing problems, but defines concepts almost routinely. The general questions above are hardly even questions, other than purely grammatically. Who lies awake wondering “What is knowledge, what is justice, what is truth, what is goodness, what is…?”

In order to get insomnia from the questions, in order for the questions to become living philosophical problems, in order for us to be disturbed by them, we need more than only generally formulated questions.

Moreover, if there were such a thing as a method of answering philosophical questions, then the questions should already have been answered. I mean, if, since the days of Socrates, we had a method that answers philosophical “What is?”-questions by defining concepts, then there cannot be many questions left to answer. At most, we can refine the definitions, or apply the method to concepts that did not exist 2600 years ago. Basically, philosophy should not have many questions left to be challenged by. Since ancient times, we have had a well-proven method!

To understand why philosophers continue to wonder, we need to understand why questions that superficially sound so uninteresting that we fall asleep can sometimes be so deeply perplexing that we lie awake thinking. Let me give you an example that gives a glimpse of the depths of philosophy, a glimpse of that disturbing “extra” that keeps philosophers awake at night.

The example is a “Swedish” disease, which has attracted attention around the world as something very strange. I am thinking of what was first called apathy in refugee children, but which later got the name resignation syndrome. The disease affects certain groups of children seeking asylum in Sweden. Children from the former Yugoslavia and from Central Asian countries of the former Soviet Union have been overrepresented. The children lose physical and mental functions and in the end can neither move nor communicate. They become bedridden, do not respond to pain and must be fed by tube. More than 1000 children have been affected by the disease in Sweden since the 1990s.

Confronted with this disease in refugee children, it may seem natural to think that the condition is reasonably caused by traumatic experiences in the home country and during the flight, as well as by the stress of living under deportation threat. It is not unreasonable to think so. Trauma and stress probably contribute to the disease. There is only one problem. If this were the cause, then resignation syndrome should occur in refugee children in other parts of the world as well. Unfortunately, refugee children with traumatic experiences and stressful deportation threats are not only found in Sweden. So why are (certain groups of) refugee children affected by the syndrome in Sweden in particular?

What is resignation syndrome? Here we have a question that on the surface does not sound more challenging than any other generally formulated “What is?”-question. But the question is today a challenging philosophical problem, at least for Karl Sallin, who is writing his dissertation on the syndrome here at CRB, within the framework of the Human Brain Project. What is that “extra” element that makes the question philosophically challenging for Karl Sallin?

It may seem natural to think that the challenging aspect of the question is simply that we do not yet know the answer. We do not know all the facts. It is not unreasonable to think so. Lack of knowledge naturally contributes to the question. Again, there is only one problem. We already consider ourselves to know the answer! We think that this extreme form of despair in refugee children must, of course, be caused by traumatic experiences and by the stress that the threat of deportation entails. In the end, they can no longer bear it, but give up! If this reasonable answer were correct, then resignation syndrome should not exist only in Sweden. The philosophical question thus arises because the only reasonable answer conflicts with obvious facts.

That is why the question is philosophically challenging. Not because we do not know the answer. But because we consider ourselves to know what the answer must be! The answer seems so reasonable that we should hardly need to do more research on the matter before we take action by alleviating the children’s stressful situation, which we think is the only possible cause of the syndrome. And that is what happened…

For some years now, the guidelines for Swedish health care staff have emphasized the family’s role in recovery, as well as the importance of working for a residence permit. The guidelines are governed by the seemingly reasonable idea that children’s recovery depends on relieving the stress that causes the syndrome. Once again, there is only one problem. The guidelines never had a positive effect on the syndrome, despite attempts to create peace and stability in the family and work for a residence permit. The syndrome continued to be a “Swedish” disease. Why is the condition so stubbornly linked to Sweden?

Do you see the philosophical problem? It is not just about lack of knowledge. It is about the fact that we already think we have knowledge. The thought that the cause must be stress is so obvious, that we hardly notice that we are thinking it. It seems immediately real. In short, we have got stuck in our own thoughts, which we repeat again and again, even though we repeatedly clash with obvious facts. Like a mosquito trying to get out of a window, but just crashing, crashing, crashing.

When Karl Sallin treats the issue of resignation syndrome as a philosophical issue, he does something extremely unusual, for which there are no routine methods. He directs his attention not only outwards towards the disease, but also inwards towards ourselves. More empirical research alone does not solve the problem. As little as continuing to collide with the glass pane solves the mosquito’s problem. We need to stop and examine ourselves.

This post has now become so long that I have to stop before I can describe Karl Sallin’s dissolution of the mystery. Maybe it is good that we are not rushing forward. Riddles need time, which our impatient intellect rarely gives them. The point about the method of philosophy has hopefully become clear. The reason why philosophers analyse concepts is that we humans sometimes get caught up in our own concepts of reality. In this case, we get stuck in our concept of resignation syndrome as a stress disorder. Perhaps I can still mention that Karl Sallin’s conceptual analysis of our thought pattern about the syndrome dissolves the feeling of being faced with an incomprehensible mystery. The syndrome is no longer in conflict with obvious facts. He also shows that our thought patterns may have contributed to the disease becoming so prominent in Sweden. Our publicly stated belief that the disease must be caused by stress, and our attempts to cure the disease by relieving stress, created a cultural context where this “Swedish” disease became possible. The cultural context affected the mind and the brain, which affected the biology of the body. In any case, that is what Karl Sallin suggests: resignation syndrome is a culture-bound disease. This unexpected possibility frees us from the thought we were stuck in as the only alternative.

So why did Socrates ask questions in Athens 2600 years ago? Because he discovered a method that could answer philosophical questions? My guess is that he did it for the same reason that Karl Sallin does it today. Because we humans have a tendency to imagine that we already know the answers. When we clearly see that we do not know what we thought we knew, we are freed from repeatedly colliding with a reality that should be obvious.

In philosophy, it is often the answer that is the question.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Sallin, K., Evers, K., Jarbin, H., Joelsson, L., Petrovic, P. (2021) Separation and not Residency Permit Restores Function in Resignation Syndrome: A Retrospective Cohort Study. Eur Child Adolesc Psychiatry. DOI: 10.1007/s00787-021-01833-3

Sallin, K., Lagercrantz, H., Evers, K., Engström, I., Hjern, A., Petrovic, P. (2016) Resignation Syndrome: Catatonia? Culture-Bound? Frontiers in Behavioral Neuroscience, 10:7. DOI: 10.3389/fnbeh.2016.00007

This post in Swedish

We challenge habits of thought

Philosophical research communication

How do you communicate about research with people who are not researchers? Scientific results usually presuppose a complicated intellectual framework, which researchers have acquired through long education and experience. How, then, can we talk about their research with people who do not share that framework?

At CRB, we take research communication seriously, so this question follows us daily. A common way to solve the problem is to replace researchers’ complex intellectual frameworks with simple images, which people in general are more familiar with. An example could be comparing a body cell with a small factory. We thus compare the unknown with the familiar, so that the reader gets a certain understanding: “Aha, the cell functions as a kind of factory.”

Giving research results a more comprehensible context by using images that replace the researchers’ intellectual framework often works well. We sometimes use that method ourselves here at CRB. But we also use another way of embedding the research, so that it touches people. We use philosophical reflection. We ask questions that you do not need to be an expert to wonder about. The questions lead to thoughts that you do not need to be a specialist to follow. Finally, the research results are incorporated into the reasoning. We then point out that a new article sheds light on the issues we have thought about together. In this way, the research gets an understandable context, namely, in the form of thoughts that anyone can have.

We could call this philosophical research communication. There is a significant difference between these two ways of making research understandable. When simple images are used, they only aim to make people (feel that they) understand what they are not familiar with. The images are interchangeable. If you find a better image, you immediately use it instead. The images are not essential in themselves. That we compare the body cell with a factory does not express any deep interest in factories. But the philosophical questions and reflections that we at CRB embed the research in, are essential in themselves. They are sincere questions and thoughts. They cannot be replaced by other questions and reasoning, for the sole purpose of effectively conveying research results. In philosophical research communication, we give research an essential context, which is not just an interchangeable pedagogical aid. The embedding is as important as what is embedded.

Philosophical research communication is particularly important to us at CRB, as we are a centre for ethics research. Our research is driven by philosophical questions and reflections, for example, within the Human Brain Project, which examines puzzling phenomena such as consciousness and artificial intelligence. Even when we perform empirical studies, the point of those studies is to shed light on ethical problems and thoughts. In our research communication, we focus on this interplay between the philosophically thought-provoking and the empirical results.

Another difference between these ways of communicating research has to do with equality. Since the simple images that are used to explain research are not essential in themselves, such research communication is, after all, somewhat unequal. The comparison, which seemed to make us equal, is not what the communication is really about. The reader’s acquaintance with factories does not help the reader to have their own views on research. Philosophical research communication is different. Because the embedding philosophical questions and thoughts are essential and meant seriously, we meet on the same level. We can wonder together about the same honest questions. When research is communicated philosophically, communicators as well as researchers and non-researchers are equal.

Philosophical research communication can thereby deepen the meaning of the research, sometimes even for the researchers themselves!

As philosophical research communication unites us around common questions and thoughts, it is important in an increasingly fragmented and specialized society. It helps us to think together, which is easier than you might believe, if we dare to open up to our own questions. Here, of course, I assume that the communication is sincere, that it comes from independently thinking people, and that it does not rest on intellectually constructed thought patterns that one must be a philosophy expert to understand.

In that case, philosophical research communicators would need to bring philosophy itself to life, by sincerely asking the most alive questions.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

We care about communication

Neuroimaging the brain without revealing the person

Three contemporary trends create great challenges for researchers. First, science is expected to become increasingly open, among other things by making collected data available to new users and new purposes. At the same time, data protection laws are being strengthened to protect privacy. Finally, artificial intelligence finds new ways to reveal the individuals behind data, where this was previously impossible.

Neuroimaging is an example of how open science, stronger data protection legislation and more powerful AI challenge the research community. Perhaps you do not think that the person whose brain is imaged in an MRI scanner could be identified from the image? But the image actually also depicts the shape of the skull and face, including any scars, so the person could be recognized. In order to share neuroimaging data without revealing the person, it has hitherto been assumed sufficient to remove the shape of the skull and face from the images, or to blur their contours. The problem is the third trend: more powerful AI.

AI can learn to identify people where human eyes fail. Brain images in which the shape of the skull and face has been made unrecognizable often turn out to contain enough information for self-learning face recognition programs to identify people in the defaced images. AI can thus re-identify what had been de-identified. In addition, the anatomy of the brain itself is individual. Just as our fingers have unique fingerprints, our brains have unique “brainprints.” This makes it possible to link neuroimaging data to a person, provided that identified neuroimaging data from that person already exist, for example in another database, or because the person has shared their brain images on social media so that “brainprint” and person are connected.
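The linkage risk is easy to illustrate schematically. In the following toy sketch everything is invented: the “brainprints” are random stand-in feature vectors, the names are hypothetical, and the matching is a plain nearest-neighbour search by cosine similarity, which captures the general shape of such a re-identification attack rather than any specific published method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "brainprints": one feature vector per identified person,
# e.g. extracted from previously published neuroimaging data.
known_prints = {f"person_{i}": rng.normal(size=64) for i in range(100)}

# A feature vector extracted from a "de-identified" scan: in this toy,
# person_42's print plus noise (defacing rarely removes all signal).
query = known_prints["person_42"] + rng.normal(scale=0.1, size=64)

def reidentify(query_vec, known):
    """Nearest neighbour by cosine similarity: the linkage attack."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(known, key=lambda name: cos(query_vec, known[name]))

print(reidentify(query, known_prints))  # matches person_42 in this toy
```

The point of the toy: as long as a distinctive signal survives in the shared data, de-identification can be reversed by anyone holding an identified reference set.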

Making the persons completely unidentifiable would change the images so drastically that they would lose their value for research. The three contemporary trends – open science, stronger data protection legislation and more powerful AI – thus seem to be on a collision course. Is it at all possible to share scientifically useful neuroimaging data in a responsible way, when AI seems to be able to reveal the people whose brains have been imaged?

Well, not everything unwanted that can happen has to happen. If the world were as insidiously constructed as in a conspiracy theory, no safety measures in the world could save us from the imminent end of the world. On the contrary, such totalized safety measures would definitely undermine safety, which I recently blogged about.

So what should researchers do in practice when building international research infrastructures to share neuroimaging data (according to the first trend above)? A new article in Neuroimage: Reports presents a constructive proposal. The authors emphasize, among other things, increased and continuously updated awareness among researchers about realistic data protection risks. Researchers doing neuroimaging need to be trained to think in terms of data protection and to see this as a natural part of their research.

Above all, the article proposes several concrete measures to technically and organizationally build research infrastructures where data protection is included from the beginning, by design and by default. Because completely anonymized neuroimaging data is an impossibility (such data would lose its scientific value), pseudonymization and encryption are emphasized instead. Furthermore, technical systems of access control are proposed, as well as clear data use agreements that limit what the user may do with the data. Moreover, of course, informed consent from participants in the studies is part of the proposed measures.
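The difference between anonymization and the pseudonymization the authors emphasize can be sketched in a few lines of code. This is a minimal illustration with hypothetical identifiers, not the infrastructure the article describes: a direct identifier is replaced by a keyed hash, and the key, the additional information needed for re-identification, is stored separately under access control.

```python
import hmac
import hashlib

# Hypothetical key: in a real infrastructure it would be stored
# separately from the data, under strict organizational safeguards.
SECRET_KEY = b"kept-separately-under-access-control"

def pseudonymize(subject_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike plain hashing, the pseudonym cannot be recomputed, and the
    subject cannot be re-identified, without the separately held key.
    """
    return hmac.new(SECRET_KEY, subject_id.encode(), hashlib.sha256).hexdigest()

# Shared records carry only the pseudonym; the key holder can still
# re-link them, for example if a participant withdraws consent.
record = {"subject": pseudonymize("patient-0042"), "scan": "T1w_defaced.nii.gz"}
```

The same subject always receives the same pseudonym, so records can be linked for research purposes, while the images themselves remain scientifically intact, which is precisely what full anonymization would destroy.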

Taken together, these safety measures, built-in from the beginning, would make it possible to construct research infrastructures that satisfy stronger data protection rules, even in a world where artificial intelligence can in principle see what human eyes cannot see. The three contemporary trends may not be on a collision course, after all. If data protection is built in from the beginning, by design and by default, researchers can share data without being forced to destroy the scientific value of the images, and people may continue to want to participate in research.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Damian Eke, Ida E.J. Aasebø, Simisola Akintoye, William Knight, Alexandros Karakasidis, Ezequiel Mikulan, Paschal Ochang, George Ogoh, Robert Oostenveld, Andrea Pigorini, Bernd Carsten Stahl, Tonya White, Lyuba Zehl. “Pseudonymisation of neuroimages and data protection: Increasing access to data while retaining scientific utility,” Neuroimage: Reports, 2021, Volume 1, Issue 4

This post in Swedish

Approaching future issues

Securing the future already from the beginning

Imagine if there was a reliable method for predicting and managing future risks, such as anything that could go wrong with new technology. Then we could responsibly steer clear of all future dangers, we could secure the future already now.

Of course, it is just a dream. If we had a “reliable method” for excluding future risks from the beginning, time would soon rush past that method, which would then prove unreliable in a new era. Because we trusted the method, the method of managing future risks would soon become a future risk in itself!

It is therefore impossible to secure the future from the beginning. Does this mean that we must give up all attempts to take responsibility for the future, because every method will fail to foresee something unpredictably new and therefore cause misfortune? Is it perhaps better not to try to take any responsibility at all, so as not to risk causing accidents through our imperfect safety measures? Strangely enough, it is just as impossible to be irresponsible for the future as it is to be responsible. You would need to make a meticulous effort so that you do not happen to cook a healthy breakfast or avoid a car collision. Soon you will wish you had a “safe method” that could foresee all the future dangers that you must avoid if you want to live completely irresponsibly. Your irresponsibility for the future would become an insurmountable responsibility.

Sorry if I push the notions of time and responsibility beyond their breaking point, but I actually think that many of us have a natural inclination to do so, because the future frightens us. A current example is the tendency to think that someone in charge should have foreseen the pandemic and implemented powerful countermeasures from the beginning, so that we never had a pandemic. I do not want to deny that there are cases where we can reason like that – “someone in charge should have…” – but now I want to emphasize the temptation to instinctively reason in such a way as soon as something undesirable occurs. As if the future could be secured already from the beginning and unwanted events would invariably be scandals.

Now we are in a new situation. Due to the pandemic, it has become irresponsible not to prepare (better than before) for risks of pandemics. This is what our responsibility for the future looks like. It changes over time. Our responsibility rests in the present moment, in our situation today. Our responsibility for the future has its home right here. It may sound irresponsible to speak in such a way. Should we sit back and wait for the unwanted to occur, only to then get the responsibility to avoid it in the future? The problem is that this objection once again pushes concepts beyond their breaking point. It plays around with the idea that the future can be foreseen and secured already now, a thought pattern that in itself can be a risk. A society where each public institution must secure the future within its area of responsibility risks kicking people out of the secured order: “Our administration demands that we ensure that…, therefore we need a certificate and a personal declaration from you, where you…” Many would end up outside the secured order, which hardly secures any order. And because the troublemakers are defined by contrived criteria, which may be implemented in automated administration systems, these systems will not only risk making systematic mistakes when they meet real people. They will also invite cheating with the systems.

So how do we take responsibility for the future in a way that is responsible in practice? Let us first calm down. We have pointed out that it is impossible not to take responsibility! Just breathing means taking responsibility for the future, or cooking breakfast, or steering the car. Taking responsibility is so natural that no one needs to take responsibility for it. But how do we take responsibility for something as dynamic as research and innovation? They are already in the future, it seems, or at least at the forefront. How can we place the responsibility for a brave new world in the present moment, which seems to be in the past already from the beginning? Does not responsibility have to be just as future oriented, just as much at the forefront, since research and innovation are constantly moving towards the future, where they make the future different from the already past present moment?

Once again, the concepts are pushed beyond their breaking point. Anyone who reads this post carefully can, however, note a hopeful contradiction. I have pointed out that it is impossible to secure the future already now, from the beginning. Simultaneously, I point out that it is in the present moment that our responsibility for the future lies. It is only here that we take responsibility for the future, in practice. How can I be so illogical?

The answer is that the first remark is directed at our intellectual tendency to push the notions of time and responsibility beyond their limits, when we fear the future and wish that we could control it right now. The second remark reminds us of how calmly the concepts of time and responsibility work in practice, when we take responsibility for the future. The first remark thus draws a line for the intellect, which hysterically wants to control the future totally and already from the beginning. The second remark opens up the practice of taking responsibility in each moment.

When we take responsibility for the future, we learn from history as it appears in current memory, as I have already indicated. The experiences from the pandemic make it possible at present to take responsibility for the future in a different way than before. The not always positive experiences of artificial intelligence make it possible at present to take better responsibility for future robotics. The strange thing, then, is that taking responsibility presupposes that things go wrong sometimes and that we are interested in the failures. Otherwise we would have nothing to learn from in order to prepare responsibly for the future. It is really obvious. Responsibility is possible only in a world that is not fully secured from the beginning, a world where the undesirable happens. Life is contradictory. We can never purify security according to the one-sided demands of the intellect, for security presupposes the uncertain and the undesirable.

Against this philosophical background, I would like to recommend an article in the Journal of Responsible Innovation, which discusses responsible research and innovation in a major European research project, the Human Brain Project (HBP): From responsible research and innovation to responsibility by design. The article describes how one has tried to be foresighted and take responsibility for the dynamic research and innovation within the project. The article reflects not least on the question of how to continue to be responsible even when the project ends, within the European research infrastructure that is planned to be the project’s product: EBRAINS.

The authors are well aware that specific regulated approaches easily become a source of problems when they encounter the new and unforeseen. Responsibility for the future cannot be regulated. It cannot be reduced to contrived criteria and regulations. One of the most important conclusions is that responsibility needs to be an integral part of research and innovation from the beginning, rather than an external framework. Responsibility for the future requires flexibility, openness, anticipation, engagement and reflection. But what is all that?

Personally, I want to say that it is partly about accepting the basic ambiguity of life. If we never have the courage to soar in uncertainty, but always demand security and nothing but security, we will definitely undermine security. By being sincerely interested in the uncertain and the undesirable, responsibility can become an integral part of research and innovation.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Bernd Carsten Stahl, Simisola Akintoye, Lise Bitsch, Berit Bringedal, Damian Eke, Michele Farisco, Karin Grasenick, Manuel Guerrero, William Knight, Tonii Leach, Sven Nyholm, George Ogoh, Achim Rosemann, Arleen Salles, Julia Trattnig & Inga Ulnicane. From responsible research and innovation to responsibility by design. Journal of Responsible Innovation. (2021) DOI: 10.1080/23299460.2021.1955613

This post in Swedish

Approaching future issues

Can subjectivity be explained objectively?

The notion of a conscious universe, animated by unobservable experiences, is today presented almost as a scientific hypothesis. How is that possible? Do cosmologists’ hypotheses that the universe is filled with dark matter and dark energy contribute to making the idea of a universe filled with “dark consciousness” almost credible?

I ask the question because I myself am amazed at how the notion that elementary particles have elementary experiences suddenly has become academically credible. The idea that consciousness permeates reality is usually called panpsychism, and several philosophers in history are considered to have held it. The alleged scientific status of panpsychism is justified today by emphasizing two classic philosophical failures to explain consciousness. Materialism has not succeeded in explaining how consciousness can arise from non-conscious physical matter. Dualism has failed to explain how consciousness, if it is separate from matter, can interact with physical reality.

Against this discouraging background, panpsychism is presented as an attractive, even elegant solution to the problem of consciousness. The hypothesis is that consciousness is hidden in the universe as a fundamental non-observable property of matter. Proponents of this elegant solution suggest that this “dark consciousness,” which permeates the universe, is extremely modest. Consciousness is present in every elementary particle in the form of unimaginably simple elementary experiences. These insignificant experiences are united and strengthened in the brain’s nervous system, giving rise to what we are familiar with as our powerful human consciousness, with its stormy feelings and thoughts.

However, this justification of panpsychism as an elegant solution to a big scientific problem presupposes that there really is a big scientific problem to “explain consciousness.” Is not the starting point a bit peculiar, that even subjectivity must be explained as an objective phenomenon? Even dualism tends to objectify consciousness, since it presents consciousness as a parallel universe to our physical universe.

The alternative explanations are thus all equally objectifying. Either subjectivity is reduced to purely material processes, or subjectivity is explained as a mental parallel universe, or subjectivity is hypostasized as “dark consciousness” that pervades the universe: as elementary experiential qualities of matter. Can we not let subjectivity be subjectivity and objectivity be objectivity?

Once upon a time there was a philosopher named Immanuel Kant. He saw how our constantly objectifying subjectivity turns into an intellectual trap, when it tries to understand itself without limiting its own objectifying approach to all questions. We then resemble cats that hopelessly chase their own tails: either by spinning to the right or by spinning to the left. Both directions are equally frustrating. Is there an elegant solution to the spinning cat’s problem? Now, I do not want to claim that Kant definitely exposed the “hard problem” of consciousness as an intellectual trap, but he pointed out the importance of self-critically examining our projective, objectifying way of functioning. If we lived as expansively as we explain everything objectively, we would soon exhaust the entire planet… is not that exactly what we do?

During a philosophy lecture, I tried to show the students how we can be trapped by apparent problems, by pseudo-problems that of course are not scientific problems, since they make us resemble cats chasing their own tails without realizing the unrealizability of the task. One student did not like what she perceived as an arbitrary limitation of the enormous achievements of science, so she objected: “But if it is the task of science to explain all big problems, then it must attempt to explain these riddles as well.” The objection is similar to the motivation of panpsychism, where it is assumed that it is the task of science to explain everything objectively, even subjectivity, no matter how hopelessly the questions spin in our heads.

The spinning cat’s problem has a simple solution: stop chasing the tail. Humans, on the other hand, need to clearly see the hopelessness of their spinning in order to stop it. Therefore, humans need to philosophize in order to live well on this planet.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

If you want to read more about panpsychism, here are two links:

Does consciousness pervade the universe?

The idea that everything from spoons to stones is conscious is gaining academic credibility

This post in Swedish

We challenge habits of thought

Consciousness and complexity: theoretical challenges for a practically useful idea

Contemporary research on consciousness is ambiguous, like the double-faced god Janus. On the one hand, it has achieved impressive practical results. We can today detect conscious activity in the brain for a number of purposes, including better therapeutic approaches to people affected by disorders of consciousness such as coma, vegetative state and minimally conscious state. On the other hand, the field is marked by a deep controversy about methodology and basic definitions. As a result, we still lack an overarching theory of consciousness, that is to say, a theoretical account that scholars agree upon.

Developing a common theoretical framework is recognized as increasingly crucial to understanding consciousness and assessing related issues, such as emerging ethical issues. The challenge is to find a common ground among the various experimental and theoretical approaches. A strong candidate that is achieving increasing consensus is the notion of complexity. The basic idea is that consciousness can be explained as a particular kind of neural information processing. The idea of associating consciousness with complexity was originally suggested by Giulio Tononi and Gerald Edelman in a 1998 paper titled Consciousness and Complexity. Since then, several papers have explored its potential as a key to a common understanding of consciousness.

Despite the increasing popularity of the notion, there are some theoretical challenges that need to be faced, particularly concerning the supposed explanatory role of complexity. These challenges are not only philosophically relevant. They might also affect the scientific reliability of complexity and the legitimacy of invoking this concept in the interpretation of emerging data and in the elaboration of scientific explanations. In addition, the theoretical challenges have a direct ethical impact, because an unreliable conceptual assumption may lead to misplaced ethical choices. For example, we might wrongly assume that a patient with low complexity is not conscious, or vice versa, and consequently make medical decisions that are inappropriate to the actual clinical condition.

The claimed explanatory power of complexity is challenged in two main ways: semantically and logically. Let us take a quick look at both.

Semantic challenges arise from the fact that complexity is such a general and open-ended concept. It lacks a shared definition among different people and different disciplines. This open-ended generality and lack of definition can be a barrier to a common scientific use of the term, which may impact its explanatory value in relation to consciousness. In the landmark paper by Tononi and Edelman, complexity is defined as the sum of integration (conscious experience is unified) and differentiation (we can experience a large number of different states). It is important to recognise that this technical definition of complexity refers only to the state of consciousness, not to its contents. This means that complexity-related measures can give us relevant information about the level of consciousness, yet they remain silent about the corresponding contents and their phenomenology. This is an ethically salient point, since the dimensions of consciousness that appear most relevant to making ethical decisions are those related to subjective positive and negative experiences. For instance, while how we treat a machine is generally considered ethically neutral, it is considered ethically wrong to cause negative experiences to other humans or to animals.
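Tononi and Edelman’s definition is information-theoretic, but a practically influential descendant of the idea, the perturbational complexity index used with patients with disorders of consciousness, estimates complexity via the Lempel-Ziv compressibility of brain responses. As a purely illustrative toy (binary strings rather than EEG data, and capturing only the differentiation side of the notion), the phrase counting at the heart of Lempel-Ziv complexity can be sketched as follows:

```python
def lz_complexity(s: str) -> int:
    """Count the phrases in a Lempel-Ziv (LZ76-style) parsing of s.

    Each phrase is the shortest extension of the current position
    that has not yet occurred in the preceding part of the string.
    """
    phrases, i, n = 0, 0, len(s)
    while i < n:
        j = i + 1
        # grow the candidate phrase while it already occurs earlier
        while j <= n and s[i:j] in s[:j - 1]:
            j += 1
        phrases += 1
        i = j
    return phrases

# A repetitive signal compresses into few phrases; a more
# differentiated one into many.
print(lz_complexity("0101010101010101"))  # → 3
print(lz_complexity("0001101001000101"))  # → 6
```

Note the ethically salient limitation: such a number says something about how differentiated a signal is, but nothing about what, if anything, it feels like.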

Logical challenges arise about the justification for referring to complexity in explaining consciousness. This justification usually takes one of two alternative forms. The justification is either bottom-up (from data to theory) or top-down (from phenomenology to physical structure). Both raise specific issues.

Bottom-up: Starting from empirical data indicating that particular brain structures or functions correlate to particular conscious states, relevant theoretical conclusions are inferred. More specifically, since the brains of subjects that are manifestly conscious exhibit complex patterns (integrated and differentiated patterns), we are supposed to be justified to infer that complexity indexes consciousness. This conclusion is a sound inference to the best explanation, but the fact that a conscious state correlates with a complex brain pattern in healthy subjects does not justify its generalisation to all possible conditions (for example, disorders of consciousness), and it does not logically imply that complexity is a necessary and/or sufficient condition for consciousness.

Top-down: Starting from certain characteristics of personal experience, we are supposed to be justified to infer corresponding characteristics of the underlying physical brain structure. More specifically, if some conscious experience is complex in the technical sense of being both integrated and differentiated, we are supposed to be justified to infer that the correlated brain structures must be complex in the same technical sense. This conclusion does not seem logically justified unless we start from the assumption that consciousness and corresponding physical brain structures must be similarly structured. Otherwise it is logically possible that conscious experience is complex while the corresponding brain structure is not, and vice versa. In other words, it does not appear justified to infer that since our conscious experience is integrated and differentiated, the corresponding brain structure must be integrated and differentiated. This is a possibility, but not a necessity.

The abovementioned theoretical challenges do not deny the practical utility of complexity as a relevant measure in specific clinical contexts, for example, to quantify residual consciousness in patients with disorders of consciousness. What is at stake is the explanatory status of the notion. Even if we question complexity as a key factor in explaining consciousness, we can still acknowledge that complexity is practically relevant and useful, for example, in the clinic. In other words, while complexity as an explanatory category raises serious conceptual challenges that remain to be faced, complexity represents at the practical level one of the most promising tools that we have to date for improving the detection of consciousness and for implementing effective therapeutic strategies.

I assume that Giulio Tononi and Gerald Edelman were hoping that their theory about the connection between consciousness and complexity finally would erase the embarrassing ambiguity of consciousness research, but the deep theoretical challenges suggest that we have to live with the resemblance to the double-faced god Janus for a while longer.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Tononi, G. and G. M. Edelman. 1998. Consciousness and complexity. Science 282(5395): 1846-1851.

We like critical thinking

To change the changing human

Neuroscience contributes to human self-understanding, but it also raises concerns that it might change humanness, for example, through new neurotechnology that affects the brain so deeply that humans are no longer truly human, or no longer experience themselves as human. Patients who are treated with deep brain stimulation, for example, can state that they feel like robots.

What ethical and legal measures could such a development justify?

Arleen Salles, neuroethicist in the Human Brain Project, argues that the question is premature, since we have not clarified our concept of humanness. The matter is complicated by the fact that there are several concepts of humanness to keep track of. If we believe that our humanness consists in certain unique abilities that distinguish humans from animals (such as morality), then we tend to dehumanize beings who we believe lack these abilities as “animal like.” If we believe that our humanness consists in certain abilities that distinguish humans from inanimate objects (such as emotions), then we tend to dehumanize beings who we believe lack these abilities as “mechanical.” It is probably in the latter sense that the patients above state that they do not feel human but rather like robots.

After a review of basic features of central philosophical concepts of human nature, Arleen Salles’ reflections take a surprising turn. She presents a concept of humanness that is based on the neuroscientific research that one worries could change our humanness! What is truly surprising is that this concept of humanness to some extent questions the question itself. The concept emphasizes the profound changeability of the human.

What does it mean to worry that neuroscience can change human nature, if human nature is largely characterized by its ability to change?

If you follow the Ethics Blog and remember a post about Kathinka Evers’ idea of a neuroscientifically motivated responsibility for human nature, you are already familiar with the dynamic concept of human nature that Arleen Salles presents. In simple terms, it can be said to be a matter of complementing human genetic evolution with an “epigenetic” selective stabilization of synapses, which every human being undergoes during upbringing. These connections between brain cells are not inherited genetically but are selected in the living brain while it interacts with its environments. Language can be assumed to belong to the human abilities that largely develop epigenetically. I have proposed a similar understanding of language in collaboration with two ape language researchers.

Do not assume that this dynamic concept of human nature presupposes that humanness is unstable. As if the slightest gust of wind could disrupt human evolution and change human nature. On the contrary, the language we develop during upbringing probably contributes to stabilizing the many human traits that develop simultaneously. Language probably supports the transmission to new generations of the human forms of life where language has its uses.

Arleen Salles’ reflections are important contributions to the neuroethical discussion about human nature, the brain and neuroscience. In order to take ethical responsibility, we need to clarify our concepts, she emphasizes. We need to consider that humanness develops in three interconnected dimensions. It is about our genetics together with the selective stabilization of synapses in living brains in continuous interaction with social-cultural-linguistic environments. All at the same time!

Arleen Salles’ reflections are published as a chapter in a new anthology, Developments in Neuroethics and Bioethics (Elsevier). I am not sure if the publication will be open access, but hopefully you can find Arleen Salles’ contribution via this link: Humanness: some neuroethical reflections.

The chapter is recommended as an innovative contribution to the understanding of human nature and the question of whether neuroscience can change humanness. The question takes a surprising turn, which suggests we all together have an ongoing responsibility for our changing humanness.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Arleen Salles (2021). Humanness: some neuroethical reflections. Developments in Neuroethics and Bioethics. https://doi.org/10.1016/bs.dnb.2021.03.002

This post in Swedish

We think about bioethics
