A blog from the Centre for Research Ethics & Bioethics (CRB)


Neuroethics: don’t let the name fool you

Names easily give the impression that the named is something separate and autonomous: something to which you can attach a label. If you want to launch something and get attention – “here is something completely new to reckon with” – it is therefore a good idea to immediately create a new name that spreads the image of something very special.

Despite this, names usually lag behind what they designate. The named has already taken shape, without anyone noticing it as anything special. In the freedom from a distinctive designation, roots have had time to spread and branches to stretch far. Since everything that is given freedom to grow is not separate and autonomous, but rooted, interwoven and in exchange with its surroundings, humans eventually notice it as something interesting and therefore give it a special name. New names can thus give a misleading image of the named as newer and more separate and autonomous than it actually is. When the name arrives, almost everything is already prepared in the surroundings.

In an open peer commentary in the journal AJOB Neuroscience, Kathinka Evers, Manuel Guerrero and Michele Farisco develop a similar line of reasoning about neuroethics. They comment on an article published in the same issue that presents neuroethics as a new field only 15 years old. The authors of the article are concerned by the still unfinished and isolated nature of the field and therefore launch a vision of a “translational neuroethics,” which should resemble that tree that has had time to grow together with its surroundings. In the vision, the new version of neuroethics is thus described as integrated, inclusive and impactful.

In their commentary, Kathinka Evers and co-authors emphasize that it is only the label “neuroethics” that has existed for 15 years. The kind of questions that neuroethics works with were already dealt with in the 20th century in applied ethics and bioethics, and some of the conceptual problems have been discussed in philosophy since antiquity. Furthermore, ethics committees dealt with neuroethical issues long before the label existed. Viewed in this way, neuroethics is not a new and separate field, but rather a long-integrated and cooperating sub-discipline of neuroscience, philosophy and bioethics – depending on which surroundings we choose to emphasize.

Secondly, the commentators point out, the three characteristics of a “translational neuroethics” – integration, inclusiveness and impact – are a prerequisite for something to be considered a scientific field. An isolated field that does not include knowledge and perspectives from surrounding sciences and areas of interest, and that lacks practical impact, is hardly what we see today as a research field. The three characteristics are therefore not entirely successful as a vision of a future development of neuroethics. If the field is to deserve its name at all, the characteristics must already permeate neuroethics. Do they do that?

Yes, say the commentators, if I understand them correctly. But in order to see this we must not be deceived by the distinctive designation, which gives the image of something new, separate and autonomous. We must see that work on neuroethical issues has been going on for a long time in several different philosophical and scientific contexts. Already when the field got its distinctive name, it was integrated, inclusive and impactful, not least within the academically established discipline of bioethics. Some problematic tendencies toward isolation have indeed existed, but they were related to the distinctive label, as it was sometimes used by isolated groups to present their activities as something new and special to be reckoned with.

The open commentary is summarized by the remark that we should avoid the temptation to see neuroethics as a completely new, autonomous and separate discipline: the temptation that the name contributes to. Such an image makes us myopic, the commentators write, which paradoxically can make it more difficult to support the three objectives of the vision. It is both truer and more fruitful to consider neuroethics and bioethics as distinct but not separate fields. If this is true, we do not need to launch an even newer version of neuroethics under an even newer label.

Read the open commentary here: Neuroethics & bioethics: distinct but not separate. If you want to read the article that is commented on, you will find the reference at the bottom of this post.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

K. Evers, M. Guerrero & M. Farisco (2023) Neuroethics & Bioethics: Distinct but Not Separate, AJOB Neuroscience, 14:4, 414-416, DOI: 10.1080/21507740.2023.2257162

Anna Wexler & Laura Specker Sullivan (2023) Translational Neuroethics: A Vision for a More Integrated, Inclusive, and Impactful Field, AJOB Neuroscience, 14:4, 388-399, DOI: 10.1080/21507740.2021.2001078

This post in Swedish

Minding our language

Encourage children to take responsibility for others?

It happens that academics write visionary texts that highlight great human challenges. I blogged about such a philosophically visionary article a few years ago; an article in which Kathinka Evers discussed the interaction between society and the brain. In the article, she developed the idea that we have a “proactive” responsibility to adapt our societies to what we know about the brain’s strengths and weaknesses. Above all, she emphasized that the knowledge we have today about the changeability of the brain gives us a proactive responsibility for our own human nature, as this nature is shaped and reshaped in interaction with the societies we build.

Today I want to recommend a visionary philosophical article by Jessica Nihlén Fahlquist, an article that I think has points of contact with Kathinka Evers’ paper. Here, too, the article highlights our responsibility for major human challenges, such as climate and, above all, public health. Here, too, human changeability is emphasized, not least during childhood. Here, too, it is argued that we have a responsibility to be proactive (although the term is not used). But where Kathinka Evers starts from neuroscience, Jessica Nihlén Fahlquist starts from virtue ethics and from social sciences that see children as social actors.

Jessica Nihlén Fahlquist points out that we live in more complex societies and face greater global challenges than ever before in human history. But humans are also complex and can, under favorable circumstances, develop great capacities for taking responsibility. Virtue ethics has this focus on the human being and on personal character traits that can be cultivated and developed to varying degrees. Virtue ethics is sometimes criticized for not being sufficiently action-guiding. But it is hard to imagine that we can deal with major human challenges through action-guiding rules and regulations alone. Rules are never as complex as human beings. Action-guiding rules assume that the challenges are already under some sort of control and thus are no longer as uncertain. Faced with complex challenges marked by great uncertainty, we may have to learn to trust the human being. Do we dare to trust ourselves, when it is often we who created the problems?

Jessica Nihlén Fahlquist reasons in a way that brings to mind Kathinka Evers’ idea of a proactive responsibility for our societies and our human nature. Nihlén Fahlquist suggests, if I understand her correctly, that we already have a responsibility to create environments that support the development of human character traits that in the future can help us meet the challenges. We already have a responsibility to support greater abilities to take responsibility in the future, one could say.

Nihlén Fahlquist focuses on public health challenges, and her reasoning is based on the pandemic and the issue of vaccinating children. Parents have a right and a duty to protect their children from risks. But parents can reasonably also be considered obliged not to be overprotective, and to consider the child’s development of agency and values. The virus that spread during the pandemic did not cause severe symptoms in children. Vaccination therefore does not significantly protect the child’s own health, but would be done with others in mind. Studies show that children may be capable of reasoning in terms of such responsibility for others. Children who participate in medical research can, for example, answer that they participate partly to help others. Do we dare to encourage capable children to take responsibility for public health by letting them reason about their own vaccination? Should we even support children in cultivating such responsibility as a virtue?

Nihlén Fahlquist does not claim that children themselves have a responsibility to get vaccinated out of solidarity with others. But if some children prove able to reason in such a morally complex way about their own vaccination, one could say that these children’s sense of responsibility is something unexpected and admirable, something that we cannot demand from a child. By encouraging and supporting the unexpected and admirable in children, it can eventually become an expected responsibility in adults, suggests Jessica Nihlén Fahlquist. Virtue ethics makes it meaningful to think in terms of such possibilities, where humans can change and their virtues can grow. Do we dare to believe in such possibilities in ourselves? If you do not expect the unexpected, you will not discover it, said a visionary Greek philosopher named Heraclitus.

Jessica Nihlén Fahlquist’s article is multifaceted and innovative. In this post, I have only emphasized one of her lines of thought, which I hope has made you curious about an urgent academic text: Taking risks to protect others – pediatric vaccination and moral responsibility.

In summary, Jessica Nihlén Fahlquist argues that vaccination should be regarded as an opportunity for children to develop their sense of responsibility and that parents, schools, healthcare professionals and public health authorities should include children in debates about ethical public health issues.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Jessica Nihlén Fahlquist, Taking Risks to Protect Others – Pediatric Vaccination and Moral Responsibility, Public Health Ethics, 2023, phad005, https://doi.org/10.1093/phe/phad005

This post in Swedish

Approaching future issues

When ordinary words get scientific uses

A few weeks ago, Josepine Fernow wrote an urgent blog post about science and language. She linked to a research debate about conceptual challenges for neuroscience, challenges that arise when ordinary words get specialized uses in science as technically defined terms.

In the case under debate, the word “sentience” had been imported into the scientific study of the brain. A research group reported that they were able to determine that in vitro neurons from humans and mice have learning abilities and that they exhibit “sentience” in a simulated game world. Of course, it caused quite a stir that some neurons grown in a laboratory could exhibit sentience! But the research team did not mean what attracted attention. They meant something very technical that only a specialist in the field can understand. The surprising thing about the finding was therefore the choice of words.

When the startling choice of words was questioned by other researchers, the research team defended themselves by saying that they defined the term “sentience” strictly scientifically, so that everyone should have understood what they meant, at least the colleagues in the field. Well, not all people are specialists in the relevant field. Thus the discovery – whatever it was that was discovered – raised a stir among people as if it were a discovery of sentience in neurons grown in a laboratory.

The research group’s attitude towards their own technical language is similar to an attitude I encountered long ago in a famous theorist of language, Noam Chomsky. This is what Chomsky said about the scientific study of the nature of language: “every serious approach to the study of language departs from the common-sense usage, replacing it by some technical concept.” Chomsky is of course right that linguistics defines its own technical concepts of language. But one can sense a certain hubris in the statement, because it sounds as if only a linguistic theorist could understand “language” in a way that is worthy of serious attention. This is untenable, because it raises the question of what a technical concept of language is. In what sense is a technical concept a concept of language? Is it a technical concept of language in the common sense? Or is it a technical concept of language in the same inaccessible sense? In the latter case, the serious study of language seems to degenerate into a navel-gazing that does not access language.

For a technical concept of language to be a concept of language, our ordinary notions must be taken into account. Otherwise, the technical concept ceases to be a concept of language.

This is perhaps something to consider in neuroscience as well. Namely to the extent that one wants to shed light on phenomena such as consciousness and sentience. Of course, neuroscience will define its own technical concepts of these phenomena, as in the debated case. But if the technical concepts are to function as concepts of consciousness and sentience, then one cannot neglect our ordinary uses of words.

Science is very serious and important. But if the special significance of science goes to our heads, then our attitude risks undermining the great importance of science for humanity. Here you can read the views of three neuroethicists on these important linguistic issues: Conceptual conundrums for neuroscience.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Minding our language

Taking care of the legacy: curating responsible research and innovation practice

Responsible research and innovation, or RRI as it is often called in EU-project language, is both scholarship and practice. Over the last decade, the Human Brain Project has used structured and strategic approaches to embed responsible research and innovation practices across the project. The efforts to curate the legacy of this work include the development of an online Ethics & Society toolkit. But how does that work? And what does a toolkit need in order to ensure it has a role to play?

A recent paper by Lise Bitsch and Bernd Stahl in Frontiers in Research Metrics and Analytics explores whether this kind of toolkit can help embed the legacy of RRI activities in a large research project. According to them, a toolkit has the potential to play an important role in preserving RRI legacy. But they also point out that this potential can only be realised if organisational structures and funding are in place to make sure that the legacy is retained. Like all resources, a toolkit needs to be maintained, shared, used and curated if it is to play a role in the long term.

Even though this particular toolkit is designed to integrate insights and practices of responsible research and innovation in the Human Brain Project, there are lessons to be learned for other efforts to ensure the acceptability, desirability and sustainability of research and innovation processes and outcomes. The Human Brain Project is a ten-year European Flagship project that has gone through several phases. Bernd Stahl is the ethics director of the Human Brain Project, and Lise Bitsch has led the project’s responsible research and innovation work stream for the past three years. There is a lot to be learned, and for projects that are considering developing similar tools, the authors describe the process of designing and developing the toolkit.

But there are parts of the RRI legacy that cannot fit in a toolkit. The impact of the ethical, social and reflective work in the Human Brain Project is visible in governance structures, in how the project manages and handles data, and in its publications and communications. The authors are part of those structures.

In addition to the Ethics & Society toolkit, the work has been published in journals, shared on the Ethics Dialogues blog (where a first version of this post was published) and the HBP Society Twitter handle, offering more opportunities to engage and discuss in the EBRAINS community Ethics & Society space. The capacity building efforts carried out for the project and EBRAINS research infrastructure have been developed into an online ethics & society training resource, and the work with gender and diversity has resulted in a toolkit for equality, diversity and inclusion in project themes and teams.

Read the paper by Bernd Carsten Stahl and Lise Bitsch: Building a responsible innovation toolkit as project legacy.

(A first version of this post was originally published on the Ethics Dialogues blog, March 13, 2023)

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects

Bernd Carsten Stahl and Lise Bitsch: Building a responsible innovation toolkit as project legacy, Frontiers in Research Metrics and Analytics, 13 March 2023, Sec. Research Policy and Strategic Management, Volume 8 – 2023, https://doi.org/10.3389/frma.2023.1112106

Part of international collaborations

Science, science communication and language

All communication requires a shared language, and fruitful discussions rely on conceptual clarity and common terms. Different definitions and divergent nomenclatures are a challenge for science: across different disciplines, between professions and when engaging with different publics. The audience for science communications is diverse. Research questions and results need to be shared within the field, between fields, with policy makers and with publics. To be effective, the language, style and channel should be adapted to the audiences’ needs, values and expectations.

This is not just true of public-facing communications. A recent discussion in Neuron addresses the semantics of “sentience” in scientific communication, starting from an article by Brett J. Kagan et al. on how in vitro neurons learn and exhibit sentience when embodied in a simulated game world. The article was published in December 2022 and received a lot of attention: both positive media coverage and a mix of positive and negative reactions from the scientific community. In a response, Fuat Balci et al. express concerns about the article’s key claim: that the authors demonstrated that cortical neurons can (in vitro) self-organise and display intelligent and sentient behaviour in a simulated game world. Balci et al. are (among other things) critical of the use of terms and concepts that they claim misrepresent the findings. They also claim that Kagan et al. oversell the translational and societal relevance of their findings, in essence creating hype around their own research. This raises a discussion about the importance of scientific communication: media tend to relay information from abstracts and statements about the significance of the research, and the scientists themselves amplify these statements in interviews. Balci et al. claim that overselling results affects how we evaluate scientific credibility and reliability.

Why does this happen? Balci et al. point to a 2021 paper by Jevin D. West and Carl T. Bergstrom on misinformation in and about science, suggesting that hype, hyperbole (using exaggeration as a figure of speech or rhetorical device) and publication bias might have to do with demands tied to various productivity metrics. According to West and Bergstrom, exaggeration in popular scientific writing does not just misinform the public: it also misleads researchers, in turn leading to citation misdirection and citation bias. A related problem is predatory publishing, which has the potential to mislead those of us without the means to detect untrustworthy publishers. And to top it off, echo chambers and filter bubbles select and deselect information, amplifying the messages they think you want to hear.

The discussion in Neuron has continued with a response by Brett J. Kagan et al., in a letter about scientific communication and the semantics of sentience. They start by stating that the use of language to describe specific phenomena is a contentious aspect of scientific discourse, and that whether scientific communication is effective depends on the context in which the language is used. In this case, they argue, the term “sentience” has a technical meaning in line with recent literature in theoretical biology and the free energy principle, where biotic self-organisation is defined as either active inference or sentient behaviour.

They make an interesting point that takes us back to the beginning of this post, namely the challenges of multidisciplinary work. Advancing research in cross-disciplinary collaboration is often challenging in the beginning because of difficulties integrating across fields. But if the different nomenclatures and approaches are recognized as an opportunity to improve and innovate, there can be benefits.

Recently, another letter by Karen S. Rommelfanger, Khara M. Ramos and Arleen Salles added a layer of reflection on the conceptual conundrums for neuroscience. In their own field of neuroethics, calls for clear language and concepts in scientific practice and communication is nothing new. They have all argued that conceptual clarity can improve science, enhance our understanding and lead to a more nuanced and productive discussion about the ethical issues. In the letter, the authors raise an important point about science and society. If we really believe that scientific terminology can retain its technically defined meaning when we transfer words to contexts permeated by a variety of cultural assumptions and colloquial uses of those same terms, we run the risk of trivialising the social and ethical impact that the choice of scientific terminology can have. They ask whether it is responsible of scientists to consider peers as their only (relevant) audience, or if conceptual clarity in science might often require public engagement and a multidisciplinary conversation.

One could also suggest that the choice of terms like “sentience” and “intelligence” as a technical characterisation of how cortical neurons function in a simulated in vitro game world is questionable also from the point of view of scientific development. If we agree that neuroscience can shed light on sentience and intelligence, we also have to admit that, as of yet, we do not know exactly how it will illuminate these capacities. And perhaps that means it is too early to bind very specific technical meanings to terms that have both colloquial and cultural meaning, and which neuroscience can illuminate in as yet unknown ways?

You may wonder why an ethics blog writer dares to express views on scientific terminology. The point I am trying to make is that we all use language, but we also produce language. Every day. Together. In almost everything we do. This means that words like sentience and intelligence belong to us all. We have a shared responsibility for how we use them. The decision to give these common words technical meaning has consequences for how people will understand neuroscience when the words find their way back out of the technical context. But there can also be consequences for science when the words find their way in, as in the case under discussion. Because the boundaries between science and society might not be as clearly distinguishable as one might think.

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects

This post in Swedish

We care about communication

A new project will explore the prospect of artificial awareness

The neuroethics group at CRB has just started its work as part of a new European research project about artificial awareness. The project is called “Counterfactual Assessment and Valuation for Awareness Architecture” (CAVAA), and is funded for a duration of four years. The consortium is composed of 10 institutions, coordinated by Radboud University in the Netherlands.

The goal of CAVAA is “to realize a theory of awareness instantiated as an integrated computational architecture…, to explain awareness in biological systems and engineer it in technological ones.” Different specific objectives derive from this general goal. First, CAVAA has a robust theoretical component: it relies on a strong theoretical framework. Conceptual reflection on awareness, including its definition and the identification of features that allow its attribution to either biological organisms or artificial systems, is an explicit task of the project. Second, CAVAA is interested in exploring the connection between awareness in biological organisms and its possible replication in artificial systems. The project thus gives much attention to the connection between neuroscience and AI. Third, against this background, CAVAA aims at replicating awareness in artificial settings. Importantly, the project also has a clear ethical responsibility, more specifically to anticipate the potential societal and ethical impact of aware artificial systems.

There are several reasons why a scientific project with a strong engineering and computer science component also has philosophers on board. We are asked to contribute to developing a strong and consistent theoretical account of awareness, including the conceptual conceivability and the technical feasibility of its artificial replication. This is not straightforward, not only because there are many content-related challenges, but also because there are logical traps to avoid. For instance, we should avoid the temptation to validate an empirical statement on the basis of our own theory: this would possibly be tautological or circular.

In addition to this theoretical contribution, we will also collaborate in identifying indicators of awareness and benchmarks for validating the cognitive architecture that will be developed. Finally, we will collaborate in the ethical analysis concerning potential future scenarios related to artificial awareness, such as the possibility of developing artificial moral agents or the need to extend moral rights also to artificial systems.

In the end, there are several potential contributions that philosophy can provide to the scientific attempt to replicate biological awareness in artificial systems. Part of this possible collaboration is the fundamental and provoking question: why should we try to develop artificial awareness at all? What is the expected benefit, should we succeed? This is definitely an open question, with possible arguments for and against attempting such a grand accomplishment.

There is also another question of equal importance, which may justify the effort to identify the necessary and sufficient conditions for artificial systems to become aware, and how to recognize them as such. What if we will inadvertently create (or worse: have already created) forms of artificial awareness, but do not recognize this and treat them as if they were unaware? Such scenarios also confront us with serious ethical issues. So, regardless of our background beliefs about artificial awareness, it is worth investing in thinking about it.

Stay tuned to hear more from CAVAA!

Written by…

Michele Farisco, Postdoctoral Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Part of international collaborations

Patient views on treatment of Parkinson’s disease with embryonic stem cells

Stem cells taken from human embryos very early after fertilization can be grown as embryonic stem cell lines. These embryonic stem cells are called pluripotent, as they can differentiate into virtually all of the body’s cell types (without being able to develop into an individual). The medical interest in embryonic stem cells is related to the possibility of using them to regenerate damaged tissue. One disease one hopes to be able to develop stem cell treatment for is Parkinson’s disease.

In Sweden, it is permitted to use leftover donated embryos from IVF treatment for research purposes, but not to produce medical products. The path towards possible future treatments is lined with legal and ethical uncertainties. In addition, the moral status of the embryo has been debated for a very long time, without any consensus being reached.

In this situation, studies of people’s perceptions of the use of human embryonic stem cells for the development of medical treatments become urgent. Recently, the first study of the perceptions of patients, the group that may become recipients, was published. It is an interview study with seventeen patients in Sweden who have Parkinson’s disease, authored by Jennifer Drevin along with six co-authors.

The interviewees were generally positive about using human embryonic stem cells to treat Parkinson’s disease. They did not regard the embryo as a life with human rights, but at the same time they saw the embryo as something special. They considered that the embryo has great value for the couple who want to become parents, and emphasized the importance of the woman’s or the couple’s free and informed consent to donation. As patients, they expressed interest in a treatment that did not limit everyday life through, for example, complicated daily medication. They were interested in better cognitive and communicative abilities and wanted to be more independent: not having to ask family members for support in everyday tasks. The effectiveness of the treatment was considered important, and there was concern that stem cell treatment might not be effective enough, or might have side effects.

Furthermore, concerns were expressed that donors could be exploited, for example poor and vulnerable groups, and that financial compensation could have negative effects. Allowing donation only of leftover embryos from IVF treatment was considered reassuring, as the main purpose would not be to make money. Finally, there was concern that the pharmaceutical industry would not always prioritize the patient over profit and that expensive stem cell treatments could lead to societal and global injustices. Suspicions that companies will not use embryos ethically were expressed, and some felt that it was more problematic to make a profit on products from embryos than on other medical products. Transparency around the process of developing and using medical stem cell products was considered important.

If you want to see more results, read the study here: Patients’ views on using human embryonic stem cells to treat Parkinson’s disease: an interview study.

It can be difficult to draw general conclusions from the study and the summary above reproduces some of the statements in the interviews. We should, among other things, keep in mind that the interviews were conducted with a small number of patients who themselves have the disease and that the study was conducted in Sweden. The authors emphasize that the study can help clinicians and researchers develop treatments in ways that take into account patients’ needs and concerns. A better understanding of people’s attitudes can also contribute to the public debate and support the development of policy and legislation.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Drevin, J., Nyholm, D., Widner, H. et al. Patients’ views on using human embryonic stem cells to treat Parkinson’s disease: an interview study. BMC Med Ethics 23, 102 (2022). https://doi.org/10.1186/s12910-022-00840-6

This post in Swedish

In dialogue with patients

A charming idea about consciousness

Some ideas can have such a charm that you only need to hear them once to immediately feel that they are probably true: “there must be some grain of truth in it.” Conspiracy theories and urban myths probably spread in part because of how they manage to charm susceptible human minds by ringing true. It is said that even some states of illness are spread because the idea of the illness has such a strong impact on many of us. In some cases, we only need to hear about the diagnosis to start showing the symptoms, and perhaps we even receive the treatment. But even the idea of diseases spread by ideas has charm, so we should be on our guard.

The temptation to fall for the charm of certain ideas naturally also exists in academia. At the same time, philosophy and science are characterized by self-critical examination of ideas that may sound so attractive that we do not notice the lack of examination. As long as the ideas are limited hypotheses that can in principle be tested, it is relatively easy to correct one’s hasty belief in them. But sometimes these charming ideas consist of grand hypotheses about elusive phenomena that no one knows how to test. People can be so convinced by such ideas that they predict that future science just needs to fill in the details. A dangerous rhetoric to get caught up in, which also has its charm.

Last year I wrote a blog post about a theory at the border between science and philosophy that I would like to characterize as both grand and charming. This is not to say that the theory must be false, just that in our time it may sound immediately convincing. The theory is an attempt to explain an elusive “phenomenon” that perplexes science, namely the nature of consciousness. Many feel that if we could explain consciousness on purely scientific grounds, it would be an enormously significant achievement.

The theory claims that consciousness is a certain mathematically defined form of information processing. Associating consciousness with information is timely, we are immediately inclined to listen. What type of information processing would consciousness be? The theory states that consciousness is integrated information. Integration here refers not only to information being stored as in computers, but to all this diversified information being interconnected and forming an organized whole, where all parts are effectively available globally. If I understand the matter correctly, you can say that the integrated information of a system is the amount of generated information that exceeds the information generated by the parts. The more information a system manages to integrate, the more consciousness the system has.

What, then, is so charming about the idea that consciousness is integrated information? Well, the idea might seem to fit with how we experience our conscious lives. At this moment you are experiencing multitudes of different sensory impressions, filled with details of various kinds. Visual impressions are mixed with impressions from the other senses. At the same time, however, these sensory impressions are integrated into a unified experience from a single viewpoint, your own. The mathematical theory of information processing where diversification is combined with integration of information may therefore sound attractive as a theory of consciousness. We may be inclined to think: Perhaps it is because the brain processes information in this integrative way that our conscious lives are characterized by a personal viewpoint and all impressions are organized as an ego-centred subjective whole. Consciousness is integrated information!

It becomes even more enticing when it turns out that the theory, called Integrated Information Theory (IIT), contains a calculable measure (Phi) of the amount of integrated information. If the theory is correct, then one would be able to quantify consciousness and give different systems different Phi for the amount of consciousness. Here the idea becomes charming in yet another way. Because if you want to explain consciousness scientifically, it sounds like a virtue if the theory enables the quantification of how much consciousness a system generates. The desire to explain consciousness scientifically can make us extra receptive to the idea, which is a bit deceptive.
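To make the “whole exceeds the parts” idea concrete, here is a toy calculation in Python of what information theorists call total correlation: the sum of the parts’ entropies minus the entropy of the whole. Note that this is only a simplified stand-in chosen for illustration; the actual Phi of IIT is defined over a system’s cause-effect structure and involves a search over partitions, which this sketch does not attempt.

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy (in bits) of an empirical distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def total_correlation(states):
    """Sum of the parts' entropies minus the joint entropy.

    A crude proxy for 'integration': how much the units depend
    on one another. This is NOT the real Phi of IIT, which is
    defined over cause-effect structure and partitions.
    """
    joint = entropy(states)  # entropy of whole-system states
    parts = sum(entropy([s[i] for s in states])
                for i in range(len(states[0])))
    return parts - joint

# Two binary units that always agree: knowing one determines the other.
correlated = [(0, 0), (1, 1), (0, 0), (1, 1)]
# Two binary units varying independently: the whole adds nothing.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]

print(total_correlation(correlated))   # → 1.0 (parts are interdependent)
print(total_correlation(independent))  # → 0.0 (no integration)
```

The interdependent system scores higher than the independent one, which gives a flavour of how a quantity like Phi could in principle assign “more” or “less” integration to different systems.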

In an article in Behavioral and Brain Sciences, Björn Merker, Kenneth Williford and David Rudrauf examine the theory of consciousness as integrated information. The review is detailed and comprehensive. It is followed up by comments from other researchers, and ends with the authors’ response. What the three authors try to show in the article is that even if the brain does integrate information in the sense of the theory, the identification of consciousness with integrated information is mistaken. What the theory describes is efficient network organization, rather than consciousness. Phi is a measure of network efficiency, not of consciousness. What the authors examine in particular is that charming feature I just mentioned: the theory can seem to “fit” with how we experience our conscious lives from a unified ego-centric viewpoint. It is true that integrated information constitutes a “unity” in the sense that many things are joined in a functionally organized way. But that “unity” is hardly the same “unity” that characterizes consciousness, where the unity is your own point of view on your experiences. Effective networks can hardly be said to have a “viewpoint” from a subjective “ego-centre” just because they integrate information. The identification of features of our conscious lives with the basic concepts of the theory is thus hasty, tempting though it may be.

The authors do not deny that the brain integrates information in accordance with the theory. The theory mathematically describes an efficient way to process information in networks with limited energy resources, something that characterizes the brain, the authors point out. But if consciousness is identified with integrated information, then many other systems that process information in the same efficient way would also be conscious. Not only other biological systems besides the brain, but also artifacts such as some large-scale electrical power grids and social networks. Proponents of the theory seem to accept this, but we have no independent reason to suppose that systems other than the brain would have consciousness. Why then insist that other systems are also conscious? Well, perhaps because one is already attracted by the association between the basic concepts of the theory and the organization of our conscious experiences, as well as by the possibility of quantifying consciousness in different systems. The latter may sound like a scientific virtue. But if the identification is false from the beginning, then the virtue appears rather as a departure from science. The theory might flood the universe with consciousness. At least that is how I understand the gist of the article.

Anyone who feels the allure of the theory that consciousness is integrated information should read the careful examination of the idea: The integrated information theory of consciousness: A case of mistaken identity.

The last word has certainly not been said and even charming ideas can turn out to be true. The problem is that the charm easily becomes the evidence when we are under the influence of the idea. Therefore, I believe that the careful discussion of the theory of consciousness as integrated information is urgent. The article is an excellent example of the importance of self-critical examination in philosophy and science.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Merker, B., Williford, K., & Rudrauf, D. (2022). The integrated information theory of consciousness: A case of mistaken identity. Behavioral and Brain Sciences, 45, E41. doi:10.1017/S0140525X21000881

This post in Swedish

We like critical thinking

AI narratives from the Global North

The way we develop, adopt, regulate and accept artificial intelligence is embedded in our societies and cultures. Our narratives about intelligent machines take on a flavour of the art, literature and imaginations of the people who live today, and of those that came before us. But some of us are missing from the stories that are told about thinking machines. A recent paper about forgotten African AI narratives and the future of AI in Africa shines a light on some of the missing narratives.

In the paper, Damian Eke and George Ogoh point to the fact that how artificial intelligence is developed, adopted, regulated and accepted is hugely influenced by socio-cultural, ethical, political, media and historical narratives. But most of the stories we tell about intelligent machines are imagined and conceptualised in the Global North. The paper raises the question of whether this is a problem, and if so, in what way. When machine narratives put the emphasis on technology neutrality, that becomes a problem that goes beyond AI.

What happens when Global North narratives set the agenda for research and innovation also in the Global South, and what happens more specifically to the agenda for artificial intelligence? The impact is difficult to quantify. But when historical, philosophical, socio-cultural and political narratives from Africa are missing, we need to understand why and what it might imply. Damian Eke and George Ogoh provide a list of reasons for why this is important. One is concerns about the state of STEM education (science, technology, engineering and mathematics) in many African countries. Another reason is the well-documented issue of epistemic injustice: unfair discrimination against people because of prejudices about their knowledge. The dominance of Global North narratives could lead to devaluing the expertise of Africans in the tech community. This brings us to the point of the argument, which is that African socio-cultural, ethical and political contexts and narratives are absent from the global debate about responsible AI.

The paper makes the case for including African AI narratives not only into the research and development of artificial intelligence, but also into the ethics and governance of technology more broadly. Such inclusion would help counter epistemic injustice. If we fail to include narratives from the South into the AI discourse, the development can never be truly global. Moreover, excluding African AI narratives will limit our understanding of how different cultures in Africa conceptualise AI, and we miss an important perspective on how people across the world perceive the risks and benefits of machine learning and AI powered technology. Nor will we understand the many ways in which stories, art, literature and imaginations globally shape those perceptions.

If we want to develop an “AI for good”, it needs to be good for Africa and other parts of the Global South. According to Damian Eke and George Ogoh, it is possible to create a more meaningful and responsible narrative about AI. That requires that we identify and promote people-centred narratives, and that we anchor AI ethics for Africa in African ethical principles, such as ubuntu. But the key for African countries to participate in the AI landscape is a greater focus on STEM education and research. The authors end their paper with a call to improve the diversity of voices in the global discourse about AI. Culturally sensitive and inclusive AI applications would benefit us all, for epistemic injustice is not just a geographical problem. Our view of whose knowledge has value is shaped by a broad variety of prejudices.

Damian Eke and George Ogoh are both actively contributing to the Human Brain Project’s work on responsible research and innovation. The Human Brain Project is a European Flagship project providing in-depth understanding of the complex structure and function of the human brain, using interdisciplinary approaches.

Do you want to learn more? Read the article here: Forgotten African AI Narratives and the future of AI in Africa.

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects

Eke D, Ogoh G. Forgotten African AI Narratives and the future of AI in Africa. International Review of Information Ethics. 2022;31(08).

We want to be just

Does the brain make room for free will?

The question of whether we have free will has been debated throughout the ages and everywhere in the world. Can we influence our future or is it predetermined? If everything is predetermined and we lack free will, why should we act responsibly and by what right do we hold each other accountable?

There have been different ideas about what predetermines the future and excludes free will. People have talked about fate and about the gods. Today, we rather imagine that it is about necessary causal relationships in the universe. It seems that the strict determinism of the material world must preclude the free will that we humans perceive ourselves to have. If we really had free will, we think, then nature would have to give us a space of our own to decide in. A causal gap where nature does not determine everything according to its laws, but allows us to act according to our will. But this seems to contradict our scientific world view.

In an article in the journal Intellectica, Kathinka Evers at CRB examines the plausibility of this choice between two extreme positions: either strict determinism that excludes free will, or free will that excludes determinism.

Kathinka Evers approaches the problem from a neuroscientific perspective. This particular perspective has historically tended to support one of the positions: strict determinism that excludes free will. How can the brain make room for free will, if our decisions are the result of electrochemical processes and of evolutionarily developed programs? Is it not right there, in the brain, that our free will is thwarted by material processes that give us no space to act?

Some authors who have written about free will from a neuroscientific perspective have at times explained away freedom as the brain’s “user illusion”: a necessary illusion, a fictional construct. Some have argued that since social groups function best when we as individuals assume ourselves to be responsible actors, we must, after all, keep this old illusion alive. Free will is a fiction that works and is needed in society!

This attitude is unsound, says Kathinka Evers. We cannot build our societies on assumptions that contradict our best knowledge. It would be absurd to hold people responsible for actions that they in fact have no ability to influence. At the same time, she agrees that the notion of free will is socially important. But if we are to retain the notion, it must be consistent with our knowledge of the brain.

One of the main points of the article is that our knowledge of the brain could actually provide some room for free will. Some neuroscientific theories suggest that the brain could function beyond the opposition between indeterminism and strict determinism. This does not mean that there would be uncaused neural events. Rather, a determinism is proposed where the relationship between cause and effect is variable and contingent, not invariable and necessary, as we commonly assume. As far as I understand, the point is that the brain has been shown to function much more independently, actively and flexibly than the image of it as a kind of programmed machine suggests. Different incoming nerve signals can stabilize different neural patterns of connections in the brain that support the same behavioural ability. And the same incoming nerve signal can stabilize different patterns of connections that result in the same behavioural ability. Despite great variation in how individuals’ neural patterns of connections are stabilized, the same common abilities are supported. This model of the brain is thus deterministic, while being characterized by variability. It describes a kind of kaleidoscopically variable causality in the brain between incoming signals and resulting behaviours and abilities.

Kathinka Evers thus hypothetically suggests that this variability in the brain, if real, could provide empirical evidence that free will is compatible with determinism.

Read the philosophically exciting article here: Variable determinism in social applications: translating science to society

Although Kathinka Evers suggests that a certain amount of free will could be compatible with what we know about the brain, she emphasizes that neuroscience gives us increasingly detailed knowledge about how we are conditioned by inherited programs, for example, during adolescence, as well as by our conditions and experiences in childhood. We should, after all, be cautiously restrained in praising and blaming each other, she concludes the article, referring to the Stoic Epictetus, one of the philosophers who thought about free will and who rather emphasized freedom from the notion of a free will.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Evers Kathinka (2021/2). Variable Determinism in Social Applications: Translating Science to Society. In Monier Cyril & Khamassi Mehdi (Eds), Liberty and cognition, Intellectica, 75, pp.73-89.

This post in Swedish

We like challenging questions
