A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: neuroethics

Why does science ask the question of artificial consciousness?

The possibility of conscious AI is increasingly perceived as a legitimate and important scientific question. This interest has arisen after a long history of scientific doubts about the possibility of consciousness not only in other animals, but sometimes even in humans. The very concept of consciousness was for a period considered scientifically suspect. But now the question of conscious AI is being raised within science.

For anyone interested in how such a mind-boggling question can be answered philosophically and scientifically, I would like to recommend an interesting AI-philosophical exchange of views in the French journal Intellectica. The exchange (which is in English) revolves around an article by two philosophers, Jonathan Birch and Kristin Andrews, who for several years have discussed consciousness not only among mammals, but also among birds, fish, cephalopods, crustaceans, reptiles, amphibians and insects. The two philosophers carefully distinguish between psychological questions about what might make us emotionally attracted to believe that an AI system is conscious, and logical questions about what philosophically and scientifically can count as evidence for conscious AI. It is to this logical perspective that they want to contribute. How can we determine whether an artificial system is truly conscious, rather than merely be seduced into believing it because the system convincingly mirrors the emotional behavior of subjectively experiencing humans? Their basic idea is that we should first study consciousness in a wide range of animal species beyond mammals. Partly because the human brain is too different from (today’s) artificial systems to serve as a suitable reference point, but above all because such a broad comparison can help us identify the essential features of consciousness: features that could be used as markers for consciousness in artificial systems. The two philosophers’ proposal is thus that by starting from different forms of animal consciousness, we can better understand how we should philosophically and scientifically seek evidence for or against conscious AI.

One of my colleagues at CRB, Kathinka Evers, also a philosopher, comments on the article. She appreciates Birch and Andrews’ discussion as philosophically clarifying and sees the proposal to approach the question of conscious AI by studying forms of consciousness in a wide range of animal species as well argued. However, she believes that a number of issues require more attention. Among other things, she asks whether the transition from carbon- to silicon-based substrates does not require more attention than Birch and Andrews give it.

Birch and Andrews propose a thought experiment in which a robot rat behaves exactly like a real rat. It passes the same cognitive and behavioral tests. They further assume that the rat brain is accurately replicated in the robot, neuron for neuron. In such a case, they argue, it would be inconsistent not to accept the same pain markers that apply to the rat for the robot as well. The cases are similar, they argue: the transition from carbon to silicon does not provide sufficient reason to doubt that the robot rat can feel pain when it exhibits the same features that mark pain in the real rat. But the cases are not similar, Kathinka Evers points out, because the real rat, unlike the robot, is alive. If life is essential for consciousness, then it is not inconsistent to doubt that the robot can feel pain even in this thought experiment. Someone could of course associate life with consciousness and argue that a robot rat that exhibits the essential features of consciousness must also be considered alive. But if the purpose is to identify what can logically serve as evidence for conscious AI, the problem remains, says Kathinka Evers, because we then need to clarify how the relationship between life and consciousness should be investigated and how the concepts should be defined.

Kathinka Evers thus suggests several questions of relevance to what can logically be considered evidence for conscious AI. But she also asks a more fundamental question, which can be sensed throughout her commentary. She asks why the question of artificial consciousness is even being raised in science today. As mentioned, one of Birch and Andrews’ aims was to avoid the answer being influenced by psychological tendencies to interpret an AI that convincingly reflects human emotions as if it were conscious. But Kathinka Evers asks, as I read her, whether this logical purpose may not come too late. Is not the question already a temptation? AI is trained on human-generated data to reflect human behavior, she points out. Are we perhaps seeking philosophical and scientific evidence regarding a question that seems significant simply because we have a psychological tendency to identify with our digital mirror images? For a question to be considered scientific and worth funding, some kind of initial empirical support is usually required, but there is no evidence whatsoever for the possibility of consciousness in non-living entities such as AI systems. The question of whether an AI can be conscious has no more empirical support than the question of whether volcanoes can experience their eruptions, Kathinka Evers points out. There is a great risk that we will scientifically try to answer a question that lacks scientific basis. No matter how carefully we seek the longed-for answer, the question itself seems imprudent.

I am reminded of the myth of Narcissus. After a long history of rejecting the love of others (the consciousness of others), he finally fell in love with his own (digital) reflection, tried hopelessly to hug it, and was then tormented by an eternal longing for the image. Are you there? Will the reflection respond? An AI will certainly generate a response that speaks to our human emotions.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Birch Jonathan, Andrews Kristin (2024/2). To Understand AI Sentience, First Understand it in Animals. In Gefen Alexandre & Huneman Philippe (Eds), Philosophies of AI: thinking and writing with LLMs, Intellectica, 81, pp. 213-226.

Evers Kathinka (2024/2). To understand sentience in AI first understand it in animals. Commentary to Jonathan Birch and Kristin Andrews. In Gefen Alexandre & Huneman Philippe (Eds), Philosophies of AI: thinking and writing with LLMs, Intellectica, 81, pp. 229-232.

This post in Swedish

We challenge habits of thought

Conceivability and feasibility of artificial consciousness

Can artificial consciousness be engineered? Is the endeavor even conceivable? In a number of previous posts, I have explored the possibility of developing AI consciousness from different perspectives, including ethical analysis, a comparative analysis of artificial and biological consciousness, and a reflection about the fundamental motivation behind the development of AI consciousness.

Together with Kathinka Evers from CRB, and with other colleagues from the CAVAA project, I recently published a new paper which aims to clarify the first preparatory steps that would need to be taken on the path to AI consciousness: Preliminaries to artificial consciousness: A multidimensional heuristic approach. These first requirements are above all logical and conceptual. We must understand and clarify the concepts that motivate the endeavor. In fact, the growing discussion about AI consciousness often lacks consistency and clarity, which risks creating confusion about what is logically possible, conceptually plausible, and technically feasible.

As a possible remedy to these risks, we propose an examination of the different meanings attributed to the term “consciousness,” as the concept has many meanings and is potentially ambiguous. For instance, we propose a basic distinction between the cognitive and the experiential dimensions of consciousness: awareness can be understood as the ability to process information, store it in memory, and possibly retrieve it if relevant to the execution of specific tasks, while phenomenal consciousness can be understood as subjective experience (“what it is like to be” in a particular state, such as being in pain).

This distinction between cognitive and experiential dimensions is just one illustration of how the multidimensional nature of consciousness is clarified in our model, and how the model can support a more balanced and realistic discussion of the replication of consciousness in AI systems. In our multidisciplinary article, we try to elaborate a model that serves both as a theoretical tool for clarifying key concepts and as an empirical guide for developing testable hypotheses. Developing concepts and models that can be tested empirically is crucial for bridging philosophy and science, eventually making philosophy more informed by empirical data and improving the conceptual architecture of science.

In the article we also illustrate how our multidimensional model of consciousness can be tested empirically. We focus on awareness as a case study. As we see it, awareness has two fundamental capacities: the capacity to select relevant information from the environment, and the capacity to intentionally use this information to achieve specific goals. Basically, in order to be considered aware, the information processing should be more sophisticated than simple input-output processing. For example, the processing needs to evaluate the relevance of information on the basis of subjective priors, such as needs and expectations. Furthermore, in order to be considered aware, information processing should be combined with a capacity to model or virtualize the world, in order to predict more distant future states. To truly be markers of awareness, these capacities for modeling and virtualization should be combined with an ability to intentionally use them for goal-directed behavior.

There are already some technical applications that exhibit capacities like these. For instance, researchers from the CAVAA project have developed a robot system which is able to adapt and correct its functioning and to learn “on the fly.” These capacities make the system able to dynamically and autonomously adapt its behavior to external circumstances to achieve its goals. This illustrates how awareness as a dimension of consciousness can already be engineered and reproduced.

Is this sufficient to conclude that AI consciousness is a fact? Yes and no. The full spectrum of consciousness has not yet been engineered and perhaps its complete reproduction is not conceivable or feasible. In fact, the phenomenal dimension of consciousness appears to be a stumbling block to “full” AI consciousness, among other things because subjective experience arises from the capacity of biological subjects to evaluate the world, that is, to assign specific values to it on the basis of subjective needs. These needs are not just cognitive needs, as in the case of awareness, but are emotionally charged and have a more comprehensive impact on the subjective state. Nevertheless, we cannot rule out this possibility a priori, and the fundamental question whether there can be a “ghost in the machine” remains open for further investigation.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

K. Evers, M. Farisco, R. Chatila, B.D. Earp, I.T. Freire, F. Hamker, E. Nemeth, P.F.M.J. Verschure, M. Khamassi, Preliminaries to artificial consciousness: A multidimensional heuristic approach, Physics of Life Reviews, Volume 52, 2025, Pages 180-193, ISSN 1571-0645, https://doi.org/10.1016/j.plrev.2025.01.002

We like challenging questions

Why should we try to build conscious AI?

In a recent post on this blog I summarized the main points of a pre-print where I analyzed the prospect of artificial consciousness from an evolutionary perspective. I took the brain and its architecture as a benchmark for addressing the technical feasibility and conceptual plausibility of engineering consciousness in artificial intelligence systems. The pre-print has been accepted and it is now available as a peer-reviewed article online.

In this post I want to focus on one particular point that I analyzed in the paper, and which I think is not always adequately accounted for in the debate about AI consciousness: what are the benefits of pursuing artificial consciousness in the first place, for science and for society at large? Why should we attempt to engineer subjective experience in AI systems? What can we realistically expect from such an endeavour?

There are several possible answers to these questions. At the epistemological level (with reference to what we can know) it is possible that developing artificial systems that replicate some features of our conscious experience could enable us to better understand biological consciousness, through similarities as well as through differences. At the technical level (with reference to what we can do) it is possible that the development of artificial consciousness would be a game-changer in AI, for instance giving AI the capacity for intentionality and theory of mind, and for anticipating the consequences not only of human decisions, but also of its own “actions.” At the societal and ethical level (with reference to our co-existence with others and to what is good and bad for us) especially the latter capabilities (intentionality, theory of mind, and anticipation) could arguably help AI to better inform humans about potential negative impacts of its functioning and use on society, and to help avoid them while favouring positive impacts. Of course, on the negative side, as shown by human history, both intentionality and theory of mind may be used by the AI for negative purposes, for instance for favouring the AI’s own interests or the interests of the limited groups that control it. Human intentionality has not always favoured out-group individuals or species, or indeed the planet as a whole. This point connects to one of the most debated issues in AI ethics, the so-called AI alignment problem: how can we be sure that AI systems conform to human values? How can we make AI aligned with our own interests? And whose values and interests should we take as reference? Cultural diversity is an important and challenging factor to take into account in these reflections.

I think there is also a question that precedes that of AI value alignment: can AI really have values? In other words, is the capacity for evaluation that possibly drives the elaboration of values in AI the same as in humans? And is AI capable of evaluating its own values, including its ethical values, a reflective process that drives the self-critical elaboration of values in humans, making us evaluative subjects? In fact, the capacity for evaluation (which may be defined as the sensitivity to reward signals and the ability to discriminate between good and bad things in the world on the basis of specific needs, motivations, and goals) is a defining feature of biological organisms, and more specifically of the brain. AI may be programmed to discriminate between what humans consider to be good and bad things in the world, and it is also conceivable that AI will be less dependent on humans in applying this distinction. However, this does not entail that it “evaluates” in the sense that it autonomously performs an evaluation and subjectively experiences its evaluation.

It is possible that an AI system may approximate the diversity of cognitive processes that the brain has access to, for instance the processing of various sensory modalities, while remaining unable to incorporate the values attributed to the processed information and to its representation, as the human brain can do. In other words, to date AI remains devoid of any experiential content, and for this reason, for the time being, AI is different from the human brain because of its inability to attribute experiential value to information. This is the fundamental reason why present AI systems lack subjective experience. If we want to refer to needs (which are a prerequisite for the capacity for evaluation), current AI appears limited to epistemic needs, without access to, for example, moral and aesthetic needs. Therefore, the values that AI has at least so far been able to develop or be sensitive to are limited to the epistemic level, while morality and aesthetics are beyond our present technological capabilities. I do not deny that overcoming this limitation may be a matter of further technological progress, but for the time being we should carefully consider this limitation in our reflections about whether it is wise to strive for conscious AI systems. If the form of consciousness that we can realistically aspire to engineer today is limited to the cognitive dimension, without any sensitivity to ethical deliberation and aesthetic appreciation, I am afraid that the risk of misusing or exploiting it for selfish purposes is quite high.

One could object that an AI system limited to epistemic values is not really conscious (at least not in a fully human sense). However, the fact remains that its capacity to interact with the world to pursue the goals it has been programmed to achieve would be greatly enhanced if it had this cognitive form of consciousness. This increases our responsibility to consider, even hypothetically, whether conscious AI, even if limited and much more rudimentary than human consciousness, may be for the better or for the worse.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Michele Farisco, Kathinka Evers, Jean-Pierre Changeux. Is artificial consciousness achievable? Lessons from the human brain. Neural Networks, Volume 180, 2024. https://doi.org/10.1016/j.neunet.2024.106714

We like challenging questions

Artificial consciousness and the need for epistemic humility

As I wrote in previous posts on this blog, the discussion about the possibility of engineering an artificial form of consciousness is growing along with the impressive advances of artificial intelligence (AI). Indeed, there are many questions arising from the prospect of an artificial consciousness, including its conceivability and its possible ethical implications. We deal with these kinds of questions as part of an EU multidisciplinary project, which aims to advance towards the development of artificial awareness.

Here I want to describe the kind of approach to the issue of artificial consciousness that I am inclined to consider the most promising. In a nutshell, the research strategy I propose for clarifying the empirical and theoretical issues of the feasibility and conceivability of artificial consciousness consists in starting from the form of consciousness we are familiar with (biological consciousness) and from its correlation with the organ that science has revealed to be crucial for it (the brain).

In a recent paper, available as a pre-print, I analysed the question of the possibility of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relationship to consciousness as a benchmark. In other words, to avoid vague and abstract speculations about artificial consciousness, I believe it is necessary to consider the correlation between brain and consciousness that resulted from biological evolution, and use this correlation as a reference model for the technical attempts to engineer consciousness.

In fact, there are several structural and functional features of the human brain that appear to be key for reaching human-like complex conscious experience, which current AI is still limited in emulating or accounting for. Among these are:

  • massive biochemical and neuronal diversity
  • long period of epigenetic development, that is, changes in the brain’s connections, and eventually in the number of neurons and their connections in the brain network, as a result of interaction with the external environment
  • embodied sensorimotor experience of the world
  • spontaneous brain activity, that is, an intrinsic ability to act which is independent of external stimulation
  • autopoiesis, that is, the capacity to constantly reproduce and maintain itself
  • emotion-based reward systems
  • clear distinction between conscious and non-conscious representations, and the consequent unitary and specific properties of conscious representations
  • semantic competence of the brain, expressed in the capacity for understanding
  • the principle of degeneracy, which means that the same neuronal networks may support different functions, leading to plasticity and creativity.

These are just some of the brain features that arguably play a key role for biological consciousness and that may inspire current research on artificial consciousness.

Note that I am not claiming that the way consciousness arises from the brain is in principle the only possible way for consciousness to exist: this would amount to a form of biological chauvinism or anthropocentric narcissism. In fact, current AI is limited in its ability to emulate human consciousness. The reasons for these limitations are both intrinsic, that is, dependent on the structure and architecture of AI, and extrinsic, that is, dependent on the current stage of scientific and technological knowledge. Nevertheless, these limitations do not logically exclude that AI may achieve alternative forms of consciousness that are qualitatively different from human consciousness, and that these artificial forms of consciousness may be either more or less sophisticated, depending on the perspectives from which they are assessed.

In other words, we cannot exclude in advance that artificial systems are capable of achieving alien forms of consciousness, so different from ours that it may not even be appropriate to continue to call them consciousness, unless we clearly specify what is common and what is different in artificial and human consciousness. The problem is that we are limited in our language as well as in our thinking and imagination. We cannot avoid relying on what is within our epistemic horizon, but we should also avoid the fallacy of hasty generalization. Therefore, we should combine the need to start from the evolutionary correlation between brain and consciousness as a benchmark for artificial consciousness, with the need to remain humble and acknowledge the possibility that artificial consciousness may be of its own kind, beyond our view.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Approaching future issues

A way out of the Babylonian confusion of tongues in the theorizing of consciousness?

There is today a wide range of competing theories, each in its own way trying to account for consciousness in neurobiological terms. Parallel to the “Babylonian confusion of tongues” and inability to collaborate that this entails in the theorizing of consciousness, progress has been made in the empirical study of the brain. Advanced methods for imaging and measuring the brain and its activities map structures and functions that are possibly relevant for consciousness. The problem is that these empirical data once again inspire a wide range of theories about the place of consciousness in the brain.

It has been pointed out that a fragmented intellectual state such as this, where competing schools of thought advocate their own theories based on their own starting points – with no common framework or paradigm within which the proposals can be compared and assessed – is typical of a pre-scientific stage of a possibly nascent science. Given that the divergent theories each claim scientific status, this is of course troubling. But maybe the theories are not as divergent as they seem?

It has been suggested that several of the theories, upon closer analysis, possibly share certain fundamental ideas about consciousness, which could form the basis of a future unified theory. Today I want to recommend an article that self-critically examines this hope for a way out of the Babylonian confusion. If the pursuit of a unified theory of consciousness is not to degenerate into a kind of “manufactured uniformity,” we must first establish that the theories being integrated are indeed comparable in relevant respects. But can we identify such common denominators among the competing theories, which could support the development of an overarching framework for scientific research? That is the question that Kathinka Evers, Michele Farisco and Cyriel Pennartz investigate for some of the most debated neuroscientifically oriented theories of consciousness.

What do the authors conclude? Something surprising! They come to the conclusion that it is actually quite possible to identify a number of common denominators, which show patterns of similarities and differences among the theories, but that this is still not the way to an overall theory of consciousness that supports hypotheses that can be tested experimentally. Why? Partly because the common denominators, such as “information,” are sometimes too general to function as core concepts in research specifically about consciousness. Partly because theories that have common denominators can, after all, be conceptually very different.

The authors therefore suggest, as I understand them, that a more practicable approach could be to develop a common methodological approach to testing hypotheses about relationships between consciousness and the brain. It is perhaps only in the empirical workshop, open to the unexpected, so to speak, that a scientific framework, or paradigm, can possibly begin to take shape. Not by deliberately formulating a unified theory based on the identification of common denominators among competing theories, which risks manufacturing a facade of uniformity.

The article is written in a philosophically open-minded spirit, without ties to specific theories. It can thereby stimulate the creative collaboration that has so far been inhibited by self-absorbed competition between schools of thought. Read the article here: Assessing the commensurability of theories of consciousness: On the usefulness of common denominators in differentiating, integrating and testing hypotheses.

I would like to conclude by mentioning an easily neglected aspect of how scientific paradigms work (according to Thomas Kuhn). A paradigm does not only generate possible explanations of phenomena. It also generates the problems that researchers try to solve within the paradigm. Quantum mechanics and evolutionary biology enabled new questions that made nature problematic in new explorable ways. A possible future paradigm for scientific consciousness research would, if this is correct, not answer the questions about consciousness that baffle us today (at least not without first reinterpreting them). Rather, it would create new, as yet unasked questions, which are explorable within the paradigm that generates them.

The authors of the article may therefore be right that the most fruitful thing at the moment is to ask probing questions that help us delineate what actually lends itself to investigation, rather than to start by manufacturing overall theoretical uniformity. The latter approach would possibly put the cart before the horse.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

K. Evers, M. Farisco, C.M.A. Pennartz, “Assessing the commensurability of theories of consciousness: On the usefulness of common denominators in differentiating, integrating and testing hypotheses,” Consciousness and Cognition, Volume 119, 2024.

This post in Swedish

Minding our language

A strategy for a balanced discussion of conscious AI

Science and technology advance so rapidly that it is hard to keep up with them. This is true not only for the general public, but also for the scientists themselves and for scholars from fields like ethics and regulation, who find it increasingly difficult to predict what will come next. Today AI is among the most advanced scientific endeavors, raising both significant expectations and more or less exaggerated worries. This is mainly due to the fact that AI is a concept so emotionally, socially, and politically charged as to make a balanced evaluation very difficult. It is even more so when capacities and features that are considered almost uniquely human, or at least shared with a limited number of other animals, are attributed to AI. This is the case with consciousness.

Recently, there has been a lively debate about the possibility of developing conscious AI. What are the reasons for this great interest? I think it has to do with the mentioned rapid advances in science and technology, as well as new intersections between different disciplines. Specifically, I think that three factors play an important role: the significant advancement in understanding the cerebral bases of conscious perception, the impressive achievements of AI technologies, and the increasing interaction between neuroscience and AI. The latter factor, in particular, resulted in so-called brain-inspired AI, a form of AI that is explicitly modeled on our brains.

This growing interest in conscious AI cannot ignore certain risks, which are of theoretical, practical, and ethical relevance. Theoretically, there is not a shared, overarching theory or definition of consciousness. Discussions about what consciousness is, what the criteria for a good scientific theory should be, and how to compare the various proposed theories of consciousness are still open and difficult to resolve.

Practically, the challenge is how to identify conscious systems. In other words, what indicators can reliably tell us whether a system, either biological or artificial, is conscious?

Finally, at the ethical level several issues arise. Here the discussion is very lively, with some calling for an international moratorium on all attempts to build artificial consciousness. This extreme position is motivated by the need to avoid any form of suffering, including possibly undetectable artificial forms of suffering. Others question the very reason for working towards conscious AI: why should we open another, likely riskier box, when society cannot really handle the impact of AI, as illustrated by Large Language Models? For instance, chatbots like ChatGPT show an impressive capacity to interact with humans through natural language, which creates a strong feeling that these AI systems have features like consciousness, intentionality, and agency, among others. This attribution of human qualities to AI eventually impacts the way we think about it, including how much weight and value we give to the answers that these chatbots provide.

The two arguments above illustrate possible ethical concerns that can be raised against the development of conscious artificial systems. Yet are the concerns justified? In a recent chapter, I propose a change in the underlying approach to the issue of artificial consciousness. This is to avoid the risk of vague and insufficiently multidimensional analyses. My point is that consciousness is not a unified, abstract entity, but rather like a prism, which includes different dimensions that could possibly have different levels. Based on a multidimensional view of consciousness, in a previous paper I contributed a list of indicators that are relevant also for identifying consciousness in artificial systems. In principle, it is possible that AI can manifest some dimensions of consciousness (for instance, those related to sophisticated cognitive tasks) while lacking others (for instance, those related to emotional or social tasks). In this way, the indicators provide not only a practical tool for identifying conscious systems, but also an ethical tool to make the discussion on possible conscious AI more balanced and realistic. The question whether some AI is conscious or not cannot be considered a yes/no question: there are several nuances that make the answer more complex.

Indeed, the indicators mentioned above are affected by a number of limitations, including the fact that they are developed for humans and animals, not specifically for AI. For this reason, research is still ongoing on how to adapt these indicators or possibly develop new indicators specific for AI. If you want to read more, you can find my chapter here: The ethical implications of indicators of consciousness in artificial systems.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Michele Farisco. The ethical implications of indicators of consciousness in artificial systems. Developments in Neuroethics and Bioethics. Available online 1 March 2024. https://doi.org/10.1016/bs.dnb.2024.02.009

We want solid foundations

Neuroethics: don’t let the name fool you

Names easily give the impression that the named is something separate and autonomous: something to which you can attach a label. If you want to launch something and get attention – “here is something completely new to reckon with” – it is therefore a good idea to immediately create a new name that spreads the image of something very special.

Despite this, names usually lag behind what they designate. The named has already taken shape, without anyone noticing it as anything special. In the freedom from a distinctive designation, roots have had time to spread and branches to stretch far. Since everything that is given freedom to grow is not separate and autonomous, but rooted, interwoven and in exchange with its surroundings, humans eventually notice it as something interesting and therefore give it a special name. New names can thus give a misleading image of the named as newer and more separate and autonomous than it actually is. When the name arrives, almost everything is already prepared in the surroundings.

In an open peer commentary in the journal AJOB Neuroscience, Kathinka Evers, Manuel Guerrero and Michele Farisco develop a similar line of reasoning about neuroethics. They comment on an article published in the same issue that presents neuroethics as a new field only 15 years old. The authors of the article are concerned by the still unfinished and isolated nature of the field and therefore launch a vision of a “translational neuroethics,” which should resemble that tree that has had time to grow together with its surroundings. In the vision, the new version of neuroethics is thus described as integrated, inclusive and impactful.

In their commentary, Kathinka Evers and co-authors emphasize that it is only the label “neuroethics” that has existed for 15 years. The kind of questions that neuroethics works with were already dealt with in the 20th century in applied ethics and bioethics, and some of the conceptual problems have been discussed in philosophy since antiquity. Furthermore, ethics committees have dealt with neuroethical issues long before the label existed. Viewed in this way, neuroethics is not a new and separate field, but rather a long-integrated and cooperating sub-discipline to neuroscience, philosophy and bioethics – depending on which surroundings we choose to emphasize.

Secondly, the commentators point out, the three characteristics of a “translational neuroethics” – integration, inclusiveness and impact – are a prerequisite for something to be considered a scientific field. An isolated field that does not include knowledge and perspectives from surrounding sciences and areas of interest, and that lacks practical impact, is hardly what we see today as a research field. The three characteristics are therefore not entirely successful as a vision of a future development of neuroethics. If the field is to deserve its name at all, the characteristics must already permeate neuroethics. Do they do that?

Yes, say the commentators if I understand them correctly. But in order to see this we must not be deceived by the distinctive designation, which gives the image of something new, separate and autonomous. We must see that work on neuroethical issues has been going on for a long time in several different philosophical and scientific contexts. Already when the field got its distinctive name, it was integrated, inclusive and impactful, not least within the academically established discipline of bioethics. Some problematic tendencies toward isolation have indeed existed, but they were related to the distinctive label, as it was sometimes used by isolated groups to present their activities as something new and special to be reckoned with.

The open commentary is summarized by the remark that we should avoid the temptation to see neuroethics as a completely new, autonomous and separate discipline: the temptation that the name contributes to. Such an image makes us myopic, the commentators write, which paradoxically can make it more difficult to support the three objectives of the vision. It is both truer and more fruitful to consider neuroethics and bioethics as distinct but not separate fields. If this is true, we do not need to launch an even newer version of neuroethics under an even newer label.

Read the open commentary here: Neuroethics & bioethics: distinct but not separate. If you want to read the article that is commented on, you will find the reference at the bottom of this post.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

K. Evers, M. Guerrero & M. Farisco (2023) Neuroethics & Bioethics: Distinct but Not Separate, AJOB Neuroscience, 14:4, 414-416, DOI: 10.1080/21507740.2023.2257162

Anna Wexler & Laura Specker Sullivan (2023) Translational Neuroethics: A Vision for a More Integrated, Inclusive, and Impactful Field, AJOB Neuroscience, 14:4, 388-399, DOI: 10.1080/21507740.2021.2001078

This post in Swedish

Minding our language

Encourage children to take responsibility for others?

It happens that academics write visionary texts that highlight great human challenges. I blogged about such a philosophically visionary article a few years ago; an article in which Kathinka Evers discussed the interaction between society and the brain. In the article, she developed the idea that we have a “proactive” responsibility to adapt our societies to what we know about the brain’s strengths and weaknesses. Above all, she emphasized that the knowledge we have today about the changeability of the brain gives us a proactive responsibility for our own human nature, as this nature is shaped and reshaped in interaction with the societies we build.

Today I want to recommend a visionary philosophical article by Jessica Nihlén Fahlquist, an article that I think has points of contact with Kathinka Evers’ paper. Here, too, the article highlights our responsibility for major human challenges, such as climate and, above all, public health. Here, too, human changeability is emphasized, not least during childhood. Here, too, it is argued that we have a responsibility to be proactive (although the term is not used). But where Kathinka Evers starts from neuroscience, Jessica Nihlén Fahlquist starts from virtue ethics and from social sciences that see children as social actors.

Jessica Nihlén Fahlquist points out that we live in more complex societies and face greater global challenges than ever before in human history. But humans are also complex and can under favorable circumstances develop great capacities for taking responsibility. Virtue ethics has this focus on the human being and on personal character traits that can be cultivated and developed to varying degrees. Virtue ethics is sometimes criticized for not being sufficiently action-guiding. But it is hard to imagine that we can deal with major human challenges through action-guiding rules and regulations alone. Rules are never as complex as human beings. Action-guiding rules assume that the challenges are already under some sort of control and thus no longer as uncertain. Faced with complex challenges with great uncertainties, we may have to learn to trust the human being. Do we dare to trust ourselves, when it is often we who created the problems?

Jessica Nihlén Fahlquist reasons in a way that brings to mind Kathinka Evers’ idea of a proactive responsibility for our societies and our human nature. Nihlén Fahlquist suggests, if I understand her correctly, that we already have a responsibility to create environments that support the development of human character traits that in the future can help us meet the challenges. We already have a responsibility to support greater abilities to take responsibility in the future, one could say.

Nihlén Fahlquist focuses on public health challenges and her reasoning is based on the pandemic and the issue of vaccination of children. Parents have a right and a duty to protect their children from risks. But reasonably, parents can also be considered obliged not to be overprotective, but also to consider the child’s development of agency and values. The virus that spread during the pandemic did not cause severe symptoms in children. Vaccination therefore does not significantly protect the child’s own health, but would be done with others in mind. Studies show that children may be capable of reasoning in terms of such responsibility for others. Children who participate in medical research can, for example, answer that they participate partly to help others. Do we dare to encourage capable children to take responsibility for public health by letting them reason about their own vaccination? Is it even the case that we should support children to cultivate such responsibility as a virtue?

Nihlén Fahlquist does not claim that children themselves have this responsibility to get vaccinated out of solidarity with others. But if some children prove to be able to reason in such a morally complex way about their own vaccination, one could say that these children’s sense of responsibility is something unexpected and admirable, something that we cannot demand from a child. If we encourage and support the unexpected and admirable in children, it can eventually become an expected responsibility in adults, suggests Jessica Nihlén Fahlquist. Virtue ethics makes it meaningful to think in terms of such possibilities, where humans can change and their virtues can grow. Do we dare to believe in such possibilities in ourselves? If you do not expect the unexpected you will not discover it, said a visionary Greek philosopher named Heraclitus.

Jessica Nihlén Fahlquist’s article is multifaceted and innovative. In this post, I have only emphasized one of her lines of thought, which I hope has made you curious about an urgent academic text: Taking risks to protect others – pediatric vaccination and moral responsibility.

In summary, Jessica Nihlén Fahlquist argues that vaccination should be regarded as an opportunity for children to develop their sense of responsibility and that parents, schools, healthcare professionals and public health authorities should include children in debates about ethical public health issues.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Jessica Nihlén Fahlquist, Taking Risks to Protect Others – Pediatric Vaccination and Moral Responsibility, Public Health Ethics, 2023, phad005, https://doi.org/10.1093/phe/phad005

This post in Swedish

Approaching future issues

When ordinary words get scientific uses

A few weeks ago, Josepine Fernow wrote an urgent blog post about science and language. She linked to a research debate about conceptual challenges for neuroscience, challenges that arise when ordinary words get specialized uses in science as technically defined terms.

In the case under debate, the word “sentience” had been imported into the scientific study of the brain. A research group reported that they were able to determine that in vitro neurons from humans and mice have learning abilities and that they exhibit “sentience” in a simulated game world. Of course, it caused quite a stir that some neurons grown in a laboratory could exhibit sentience! But the research team did not mean what attracted attention. They meant something very technical that only a specialist in the field can understand. The surprising thing about the finding was therefore the choice of words.

When the startling choice of words was questioned by other researchers, the research team defended themselves by saying that they defined the term “sentience” strictly scientifically, so that everyone should have understood what they meant, at least the colleagues in the field. Well, not all people are specialists in the relevant field. Thus the discovery – whatever it was that was discovered – raised a stir among people as if it were a discovery of sentience in neurons grown in a laboratory.

The research group’s attitude towards their own technical language is similar to an attitude I encountered long ago in a famous theorist of language, Noam Chomsky. This is what Chomsky said about the scientific study of the nature of language: “every serious approach to the study of language departs from the common-sense usage, replacing it by some technical concept.” Chomsky is of course right that linguistics defines its own technical concepts of language. But one can sense a certain hubris in the statement, because it sounds as if only a linguistic theorist could understand “language” in a way that is worthy of serious attention. This is untenable, because it raises the question of what a technical concept of language is. In what sense is a technical concept a concept of language? Is it a technical concept of language in the common sense? Or is it a technical concept of language in the same inaccessible sense? In the latter case, the serious study of language seems to degenerate into navel-gazing that does not access language.

For a technical concept of language to be a concept of language, our ordinary notions must be taken into account. Otherwise, the technical concept ceases to be a concept of language.

This is perhaps something to consider in neuroscience as well. Namely to the extent that one wants to shed light on phenomena such as consciousness and sentience. Of course, neuroscience will define its own technical concepts of these phenomena, as in the debated case. But if the technical concepts are to function as concepts of consciousness and sentience, then one cannot neglect our ordinary uses of words.

Science is very serious and important. But if the special significance of science goes to our heads, then our attitude risks undermining the great importance of science for humanity. Here you can read the views of three neuroethicists on these important linguistic issues: Conceptual conundrums for neuroscience.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Minding our language

Taking care of the legacy: curating responsible research and innovation practice

Responsible research and innovation, or RRI as it is often called in EU-project language, is both scholarship and practice. Over the last decade, the Human Brain Project has used structured and strategic approaches to embed responsible research and innovation practices across the project. The efforts to curate the legacy of this work include the development of an online Ethics & Society toolkit. But how does that work? And what does a toolkit need in order to ensure it has a role to play?

A recent paper by Lise Bitsch and Bernd Stahl in Frontiers in Research Metrics and Analytics explores whether this kind of toolkit can help embed the legacy of RRI activities in a large research project. According to them, a toolkit has the potential to play an important role in preserving RRI legacy. But they also point out that that potential can only be realised if we have organisational structures and funding in place to make sure that this legacy is retained. Because, like all resources, it needs to be maintained, shared, used, and curated in order to play a role in the long term.

Even though this particular toolkit is designed to integrate insights and practices of responsible research and innovation in the Human Brain Project, there are lessons to be learned for other efforts to ensure acceptability, desirability and sustainability of processes and outcomes of research and innovation activities. The Human Brain Project is a ten-year European Flagship project that has gone through several phases. Bernd Stahl is the ethics director of the Human Brain Project, and Lise Bitsch has led the project’s responsible research and innovation work stream for the past three years. And there is a lot to be learned. For projects that are considering developing similar tools, they describe the process of designing and developing the toolkit.

But there are parts of the RRI legacy that cannot fit in a toolkit. The impact of the ethical, social and reflective work in the Human Brain Project is visible in governance structures, in how the project manages and handles data, and in its publications and communications. The authors are part of those structures.

In addition to the Ethics & Society toolkit, the work has been published in journals, shared on the Ethics Dialogues blog (where a first version of this post was published) and the HBP Society Twitter handle, offering more opportunities to engage and discuss in the EBRAINS community Ethics & Society space. The capacity building efforts carried out for the project and EBRAINS research infrastructure have been developed into an online ethics & society training resource, and the work with gender and diversity has resulted in a toolkit for equality, diversity and inclusion in project themes and teams.

Read the paper by Bernd Carsten Stahl and Lise Bitsch: Building a responsible innovation toolkit as project legacy.

(A first version of this post was originally published on the Ethics Dialogues blog, March 13, 2023)

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects

Bernd Carsten Stahl and Lise Bitsch: Building a responsible innovation toolkit as project legacy, Frontiers in Research Metrics and Analytics, 13 March 2023, Sec. Research Policy and Strategic Management, Volume 8 – 2023, https://doi.org/10.3389/frma.2023.1112106

Part of international collaborations
