A blog from the Centre for Research Ethics & Bioethics (CRB)


A new project will explore the prospect of artificial awareness

The neuroethics group at CRB has just started its work as part of a new European research project on artificial awareness. The project is called “Counterfactual Assessment and Valuation for Awareness Architecture” (CAVAA) and is funded for four years. The consortium comprises 10 institutions, coordinated by Radboud University in the Netherlands.

The goal of CAVAA is “to realize a theory of awareness instantiated as an integrated computational architecture…, to explain awareness in biological systems and engineer it in technological ones.” Several specific objectives derive from this general goal. First, CAVAA has a robust theoretical component. Conceptual reflection on awareness, including its definition and the identification of features that allow its attribution to either biological organisms or artificial systems, is an explicit task of the project. Second, CAVAA is interested in exploring the connection between awareness in biological organisms and its possible replication in artificial systems. The project thus gives much attention to the connection between neuroscience and AI. Third, against this background, CAVAA aims to replicate awareness in artificial settings. Importantly, the project also has a clear ethical responsibility: anticipating the potential societal and ethical impact of aware artificial systems.

There are several reasons why a scientific project with a strong engineering and computer science component also has philosophers on board. We are asked to contribute to developing a strong and consistent theoretical account of awareness, including the conceptual conceivability and the technical feasibility of its artificial replication. This is not straightforward, not only because there are many content-related challenges, but also because there are logical traps to avoid. For instance, we should avoid the temptation to validate an empirical statement on the basis of our own theory: this would risk being tautological or circular.

In addition to this theoretical contribution, we will also collaborate in identifying indicators of awareness and benchmarks for validating the cognitive architecture that will be developed. Finally, we will collaborate in the ethical analysis concerning potential future scenarios related to artificial awareness, such as the possibility of developing artificial moral agents or the need to extend moral rights also to artificial systems.

In the end, there are several potential contributions that philosophy can make to the scientific attempt to replicate biological awareness in artificial systems. Part of this possible collaboration is the fundamental and provocative question: why should we try to develop artificial awareness at all? What is the expected benefit, should we succeed? This is definitely an open question, with possible arguments both for and against attempting such a grand accomplishment.

There is also another question of equal importance, which may justify the effort to identify the necessary and sufficient conditions for artificial systems to become aware, and how to recognize them as such. What if we inadvertently create (or worse: have already created) forms of artificial awareness, but fail to recognize this and treat them as if they were unaware? Such scenarios also confront us with serious ethical issues. So, regardless of our background beliefs about artificial awareness, it is worth investing in thinking about it.

Stay tuned to hear more from CAVAA!

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Part of international collaborations

Keys to more open debates

We are used to thinking that research is either theoretical or empirical, or a combination of theoretical and empirical approaches. I want to suggest that there are also studies that are neither theoretical nor empirical, even though it may seem unthinkable at first. This third possibility often occurs together with the other two, with which it is then interwoven without us particularly noticing it.

What is this third, seemingly unthinkable possibility? To think for yourself! Research rarely runs completely friction-free. At regular intervals, uncertainties appear around both theoretical and empirical starting points, which we have to clarify for ourselves. We then need to reflect on our starting points and perhaps even reconsider them. I am not referring primarily to how new scientific findings can justify re-examination of hypotheses, but to the continuous re-examinations that must be made in the research process that leads to these new findings. It happens so naturally in research work that you do not always notice that you, as a researcher, also think for yourself and reconsider your starting points during the course of the work. Of course, thinking for yourself does not necessarily mean that you think alone. It often happens in conversations with colleagues or at research seminars. But in these situations there are no given starting points to proceed from. The uncertainties concern the very starting points that you had taken for granted, and you are therefore thrown back on yourself, whether you think alone or with others.

This thinking, which paradoxically we do not always think we are doing, is rarely highlighted in the finished studies that are published as scientific articles. The final publication therefore does not give a completely true picture of what the research process looked like in its entirety, which is of course not an objection. On the contrary, it would be comical if autobiographical details were highlighted in scientific publications. There you cannot usually refer to informal conversations with colleagues in corridors or seminar rooms. Nevertheless, these conversations take place as soon as we encounter uncertainties. Conversations where we think for ourselves, even when it happens together. It would hardly be research otherwise.

Do you see how we ourselves get stuck in an unclear starting point when we have difficulty imagining the possibility of academic work that is neither theoretical nor empirical? We then start from a picture of scientific research that focuses on what already completed studies look like in article form. It can be said that we start from a “façade conception” of scientific work, which hides much of what happens in practice behind the façade. This can be hard to come to terms with for new PhD students, who may think that researchers simply pick their theoretical and empirical starting points and then elaborate on them. A PhD student may feel inadequate as a researcher, because the work does not match the image of research you get from reading finished articles, where everything seems to go smoothly. If it did, it would hardly be research. Yet, when seeking funding and ethics approval, researchers are forced to present their project plans as if everything had already gone smoothly. That is, as if the research had already been completed and published.

If what I am writing here gives you an idea of how easily we humans get stuck in unclear starting points, then this blog post has already served as a simple example of the third possibility. In this post, we think together, for ourselves, about an unclear starting point, the façade conception, which we did not think we were starting from. We open our eyes to an assumption which at first we did not see, because we looked at everything through it, as through a pair of spectacles on our nose. Such self-examination of our own starting points can sometimes be the main objective, namely in philosophical studies. There, the questions themselves are already expressions of unclear assumptions. We get entangled in our starting points. But because they sit on our noses, we also get entangled in the illusion that the questions are about something outside of us, something that can only be studied theoretically and empirically.

Today I therefore want to illustrate how differently we can work as researchers, by recommending two publications on the same problem: one empirical, the other neither empirical nor theoretical, but purely philosophical. The empirical article is authored by colleagues at CRB; the philosophical article by me. Both articles touch on ethical issues of embryo donation for stem cell research. Research that may in the future lead to treatments for, for example, Parkinson’s disease.

The empirical study is an interview study with individuals who have undergone infertility treatment at an IVF clinic. They were interviewed about how they viewed leftover frozen embryos from IVF treatment, donation of leftover embryos in general and for cell-based treatment of Parkinson’s disease in particular, and much more. Such empirical studies are important as a basis for ethical and legal discussions about embryonic stem cell research, and about the possibility of further developing the research into treatments for diseases that today lack effective treatments. Read the interview study here: Would you consider donating your left-over embryos to treat Parkinson’s disease? Interviews with individuals who underwent IVF in Sweden.

The philosophical study examines concerns about exploitation of embryo donors to stem cell research. These concerns must be discussed openly and conscientiously. But precisely because issues of exploitation are so important, the debate about them risks being polarized around opposing starting points, which are not seen and cannot be reconsidered. Debates often risk locking positions, rather than opening our minds. The philosophical study describes such tendencies to be misled by our own concepts when we debate medical research, the pharmaceutical industry and risks of exploitation in donation to research. It wants to clarify the conditions for a more thoughtful and open discussion. Read the philosophical study here: The Invisible Patient: Concerns about Donor Exploitation in Stem Cell Research.

It is easy to see the relevance of the empirical study, as it has results to refer to in the debate. Despite the empirical nature of the study, I dare to suggest that the researchers also “philosophized” about uncertainties that appeared during the course of the work; that they thought for themselves. Perhaps it is not quite as easy to see the relevance of the purely philosophical study, since it does not result in new findings or normative positions that can be referred to in the debate. It only helps us to see how certain mental starting points limit our understanding, if they are not noticed and re-examined. Of what use are such philosophical exercises?

Perhaps the use of philosophy is similar to the use of a key that fits in the lock, when we want to get out of a locked room. The only thing is that in philosophy we often need the “key” already to see that we are locked up. Philosophical keys are thus forged as needed, to help us see our attachments to unclear starting points that need to be reconsidered. You cannot refer to such keys. You must use them yourself, on yourself.

While I was writing this “key” post, diligent colleagues at CRB published another empirical study on the use of human embryonic stem cells for medical treatments. This time an online survey among a random selection of Swedish citizens (reference and link below). The authors emphasize that even empirical studies can unlock polarized debates. They do this by supplementing the views of engaged debaters, who can sometimes have great influence, with findings on the views of the public and affected groups: voices that are not always heard in the debate. Empirical studies thus also function as keys to more open and thoughtful discussions. In this case, the “keys” are findings that can be referred to in debates.

– Two types of keys, which can contribute in different ways to more open debates.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Bywall, K.S., Holte, J., Brodin, T. et al. Would you consider donating your left-over embryos to treat Parkinson’s disease? Interviews with individuals that underwent IVF in Sweden. BMC Med Ethics 23, 124 (2022). https://doi.org/10.1186/s12910-022-00864-y

Segerdahl, P. The Invisible Patient: Concerns about Donor Exploitation in Stem Cell Research. Health Care Analysis 30, 240–253 (2022). https://doi.org/10.1007/s10728-022-00448-2

Grauman, Å., Hansson, M., Nyholm, D. et al. Attitudes and values among the Swedish general public to using human embryonic stem cells for medical treatment. BMC Med Ethics 23, 138 (2022). https://doi.org/10.1186/s12910-022-00878-6

This post in Swedish

We recommend readings

Does public health need virtue ethics?

So-called virtue ethics may seem too inward-looking to be of any practical use in a complex world. It focuses on good character traits of a morally virtuous person, such as courage, sincerity, compassion, humility and responsibility. It emphasizes how we should be rather than how we should act. How can we find effective guidance in such “heroic” ethics when we seek the morally correct action in ethically difficult situations, or the correct regulation of various parts of the public sector? How can such ancient ethics provide binding reasons for what is morally correct? Humbly referring to one’s superior character traits is hardly the form of a binding argument, is it?

It is tempting to make fun of the apparently ineffective virtue ethics. But it has, in my view, two traits of greatest importance. The first is that it trusts the human being: in actual situations we can see what must be done, and what must be carefully considered. The second is that virtue ethics thus also supports our freedom. A virtuous person does not need to cling to standards of good behavior to avoid bad behavior, but will spontaneously behave well: with responsibility, humility, compassion, etc. So a counter-question could be: What good will it be for someone to gain a whole world of moral correctness, yet forfeit themselves and their own freedom? – This was a personal introduction to today’s post.

In an article in Public Health Ethics, Jessica Nihlén Fahlquist discusses public health as a domain of work where moral virtues may need to be developed and supported in the professionals. Unlike medical care, public health focuses on good and equal health in entire risk groups and populations. Due to this more universal perspective of collective health, there can be a risk that the interests, rights and values of individuals are sometimes overlooked. The work therefore needs to balance the general public health objectives against the values of individuals. This may require a well-developed sensitivity, which can be understood in terms of virtue ethics.

Furthermore, public health is often characterized by a greater distance between professionals and the public than in medical care, where the one-on-one meeting with the patient supports a caring attitude in the clinician towards the individual. Imagination and empathy may therefore be needed in public health to assess the needs of individuals and the effects of the work on individuals. Finally, there is power asymmetry between public health professionals and the people affected by the public health work. This requires responsibility on the part of those who use the resources and knowledge that public health authorities possess. This can also be understood in terms of virtue ethics.

Jessica Nihlén Fahlquist emphasizes three virtues that she argues are needed in public health: responsibility, compassion and humility. She concretises the virtues through three ideals to personally strive for in public health. The ideals are described in short italicized paragraphs, which provide three understandable profiles of how a responsible, compassionate and humble person should be in their work with public health – three clear role models.

The ethical problems are made concrete through two examples, breastfeeding and vaccination, which illustrate challenges and opportunities for virtue ethics in public health work. Read the article here: Public Health and the Virtues of Responsibility, Compassion and Humility.

Jessica Nihlén Fahlquist does not rule out the importance of other moral philosophical perspectives in public health. But the three virtue ethical ideals (and probably also other similar ideals) should complement the prevailing perspectives, she argues. Everything has its place, but finding the right place may require good character traits!

If you would also like to read a more recent and shorter discussion by Jessica Nihlén Fahlquist on these important issues, you will find a reference below.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Jessica Nihlén Fahlquist, Public Health and the Virtues of Responsibility, Compassion and Humility, Public Health Ethics, Volume 12, Issue 3, November 2019, Pages 213–224, https://doi.org/10.1093/phe/phz007

Jessica Nihlén Fahlquist, Individual Virtues and Structures of Virtue in Public Health, Public Health Ethics, Volume 15, Issue 1, April 2022, Pages 11–15, https://doi.org/10.1093/phe/phac004

This post in Swedish

We like challenging questions

A charming idea about consciousness

Some ideas can have such a charm that you only need to hear them once to immediately feel that they are probably true: “there must be some grain of truth in it.” Conspiracy theories and urban myths probably spread in part because of how they manage to charm susceptible human minds by ringing true. It is said that even some states of illness are spread because the idea of the illness has such a strong impact on many of us. In some cases, we only need to hear about the diagnosis to start showing the symptoms, and perhaps we even receive the treatment. But even the idea of diseases spread by ideas has charm, so we should be on our guard.

The temptation to fall for the charm of certain ideas naturally also exists in academia. At the same time, philosophy and science are characterized by self-critical examination of ideas that may sound so attractive that we do not notice the lack of examination. As long as the ideas are limited hypotheses that can in principle be tested, it is relatively easy to correct one’s hasty belief in them. But sometimes these charming ideas consist of grand hypotheses about elusive phenomena that no one knows how to test. People can be so convinced by such ideas that they predict that future science just needs to fill in the details. A dangerous rhetoric to get caught up in, which also has its charm.

Last year I wrote a blog post about a theory at the border between science and philosophy that I would like to characterize as both grand and charming. This is not to say that the theory must be false, just that in our time it may sound immediately convincing. The theory is an attempt to explain an elusive “phenomenon” that perplexes science, namely the nature of consciousness. Many feel that if we could explain consciousness on purely scientific grounds, it would be an enormously significant achievement.

The theory claims that consciousness is a certain mathematically defined form of information processing. Associating consciousness with information is timely, we are immediately inclined to listen. What type of information processing would consciousness be? The theory states that consciousness is integrated information. Integration here refers not only to information being stored as in computers, but to all this diversified information being interconnected and forming an organized whole, where all parts are effectively available globally. If I understand the matter correctly, you can say that the integrated information of a system is the amount of generated information that exceeds the information generated by the parts. The more information a system manages to integrate, the more consciousness the system has.
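To make this quantitative intuition a little more concrete, here is a deliberately simplified sketch. This is not IIT's actual Phi, which is defined over causal partitions of a system's mechanisms; it merely illustrates, under that simplifying assumption, the rough idea that "integration" can be measured as a surplus of information when parts are considered together rather than separately. The classical proxy used here is the multi-information over observed states: the entropy the parts appear to carry separately, minus the entropy of the joint state.

```python
import math
from collections import Counter

def entropy(states):
    """Shannon entropy (in bits) of the empirical distribution over states."""
    n = len(states)
    return -sum((c / n) * math.log2(c / n) for c in Counter(states).values())

# Toy system: two binary units observed over four time steps.
# Their states are perfectly correlated, so the parts are "interconnected".
whole = [(0, 0), (1, 1), (0, 0), (1, 1)]
part_a = [s[0] for s in whole]
part_b = [s[1] for s in whole]

# Multi-information: information the parts carry separately
# minus information in the joint state. A positive surplus
# indicates that the parts form an organized, correlated whole.
integration = entropy(part_a) + entropy(part_b) - entropy(whole)
print(integration)  # 1.0 bit: the two units share one full bit
```

If the two units were statistically independent, the surplus would be zero: no integration, however much information each part stores on its own. That, in caricature, is the contrast the theory draws between mere storage and integrated information.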

What, then, is so charming about the idea that consciousness is integrated information? Well, the idea might seem to fit with how we experience our conscious lives. At this moment you are experiencing multitudes of different sensory impressions, filled with details of various kinds. Visual impressions are mixed with impressions from the other senses. At the same time, however, these sensory impressions are integrated into a unified experience from a single viewpoint, your own. The mathematical theory of information processing where diversification is combined with integration of information may therefore sound attractive as a theory of consciousness. We may be inclined to think: Perhaps it is because the brain processes information in this integrative way that our conscious lives are characterized by a personal viewpoint and all impressions are organized as an ego-centred subjective whole. Consciousness is integrated information!

It becomes even more enticing when it turns out that the theory, called Integrated Information Theory (IIT), contains a calculable measure (Phi) of the amount of integrated information. If the theory is correct, then one would be able to quantify consciousness and give different systems different Phi for the amount of consciousness. Here the idea becomes charming in yet another way. Because if you want to explain consciousness scientifically, it sounds like a virtue if the theory enables the quantification of how much consciousness a system generates. The desire to explain consciousness scientifically can make us extra receptive to the idea, which is a bit deceptive.

In an article in Behavioral and Brain Sciences, Björn Merker, Kenneth Williford and David Rudrauf examine the theory of consciousness as integrated information. The review is detailed and comprehensive. It is followed up by comments from other researchers, and ends with the authors’ response. What the three authors try to show in the article is that even if the brain does integrate information in the sense of the theory, the identification of consciousness with integrated information is mistaken. What the theory describes is efficient network organization, rather than consciousness. Phi is a measure of network efficiency, not of consciousness. What the authors examine in particular is that charming feature I just mentioned: the theory can seem to “fit” with how we experience our conscious lives from a unified ego-centric viewpoint. It is true that integrated information constitutes a “unity” in the sense that many things are joined in a functionally organized way. But that “unity” is hardly the same “unity” that characterizes consciousness, where the unity is your own point of view on your experiences. Effective networks can hardly be said to have a “viewpoint” from a subjective “ego-centre” just because they integrate information. The identification of features of our conscious lives with the basic concepts of the theory is thus hasty, tempting though it may be.

The authors do not deny that the brain integrates information in accordance with the theory. The theory mathematically describes an efficient way to process information in networks with limited energy resources, something that characterizes the brain, the authors point out. But if consciousness is identified with integrated information, then many other systems that process information in the same efficient way would also be conscious. Not only other biological systems besides the brain, but also artifacts such as some large-scale electrical power grids and social networks. Proponents of the theory seem to accept this, but we have no independent reason to suppose that systems other than the brain would have consciousness. Why then insist that other systems are also conscious? Well, perhaps because one is already attracted by the association between the basic concepts of the theory and the organization of our conscious experiences, as well as by the possibility of quantifying consciousness in different systems. The latter may sound like a scientific virtue. But if the identification is false from the beginning, then the virtue appears rather as a departure from science. The theory might flood the universe with consciousness. At least that is how I understand the gist of the article.

Anyone who feels the allure of the theory that consciousness is integrated information should read the careful examination of the idea: The integrated information theory of consciousness: A case of mistaken identity.

The last word has certainly not been said and even charming ideas can turn out to be true. The problem is that the charm easily becomes the evidence when we are under the influence of the idea. Therefore, I believe that the careful discussion of the theory of consciousness as integrated information is urgent. The article is an excellent example of the importance of self-critical examination in philosophy and science.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Merker, B., Williford, K., & Rudrauf, D. (2022). The integrated information theory of consciousness: A case of mistaken identity. Behavioral and Brain Sciences, 45, E41. doi:10.1017/S0140525X21000881

This post in Swedish

We like critical thinking

Does the brain make room for free will?

The question of whether we have free will has been debated throughout the ages and everywhere in the world. Can we influence our future or is it predetermined? If everything is predetermined and we lack free will, why should we act responsibly and by what right do we hold each other accountable?

There have been different ideas about what predetermines the future and excludes free will. People have talked about fate and about the gods. Today, we rather imagine that it is about necessary causal relationships in the universe. It seems that the strict determinism of the material world must preclude the free will that we humans perceive ourselves to have. If we really had free will, we think, then nature would have to give us a space of our own to decide in. A causal gap where nature does not determine everything according to its laws, but allows us to act according to our will. But this seems to contradict our scientific world view.

In an article in the journal Intellectica, Kathinka Evers at CRB examines the plausibility of this choice between two extreme positions: either strict determinism that excludes free will, or free will that excludes determinism.

Kathinka Evers approaches the problem from a neuroscientific perspective. This particular perspective has historically tended to support one of the positions: strict determinism that excludes free will. How can the brain make room for free will, if our decisions are the result of electrochemical processes and of evolutionarily developed programs? Is it not right there, in the brain, that our free will is thwarted by material processes that give us no space to act?

Some authors who have written about free will from a neuroscientific perspective have at times explained freedom away as the brain’s “user illusion”: a necessary illusion, a fictional construct. Some have argued that since social groups function best when we as individuals assume ourselves to be responsible actors, we must, after all, keep this old illusion alive. Free will is a fiction that works and is needed in society!

This attitude is unsound, says Kathinka Evers. We cannot build our societies on assumptions that contradict our best knowledge. It would be absurd to hold people responsible for actions that they in fact have no ability to influence. At the same time, she agrees that the notion of free will is socially important. But if we are to retain the notion, it must be consistent with our knowledge of the brain.

One of the main points of the article is that our knowledge of the brain could actually provide some room for free will. The brain could function beyond the opposition between indeterminism and strict determinism, some neuroscientific theories suggest. This does not mean that there would be uncaused neural events. Rather, a determinism is proposed where the relationship between cause and effect is variable and contingent, not invariable and necessary, as we commonly assume. As far as I understand, the point is that the brain has been shown to function far more independently, actively and flexibly than the image of it as a kind of programmed machine suggests. Different incoming nerve signals can stabilize different neural patterns of connections in the brain, which support the same behavioural ability. And the same incoming nerve signal can stabilize different patterns of connections in the brain that result in the same behavioural ability. Despite great variation in how individuals’ neural patterns of connections are stabilized, the same common abilities are supported. This model of the brain is thus deterministic, while being characterized by variability. It describes a kind of kaleidoscopically variable causality in the brain between incoming signals and resulting behaviours and abilities.

Kathinka Evers thus hypothetically suggests that this variability in the brain, if real, could provide empirical evidence that free will is compatible with determinism.

Read the philosophically exciting article here: Variable determinism in social applications: translating science to society

Although Kathinka Evers suggests that a certain amount of free will could be compatible with what we know about the brain, she emphasizes that neuroscience gives us increasingly detailed knowledge about how we are conditioned by inherited programs, for example, during adolescence, as well as by our conditions and experiences in childhood. We should, after all, be cautiously restrained in praising and blaming each other, she concludes the article, referring to the Stoic Epictetus, one of the philosophers who thought about free will and who rather emphasized freedom from the notion of a free will.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Evers, K. (2021). Variable determinism in social applications: translating science to society. In C. Monier & M. Khamassi (Eds.), Liberty and Cognition. Intellectica, 75, 73–89.

This post in Swedish

We like challenging questions

Artificial intelligence: augmenting intelligence in humans or creating human intelligence in machines?

Sometimes you read articles at the intersection of philosophy and science that contain really exciting visionary thoughts, which are at the same time difficult to really understand and assess. The technical elaboration of the thoughts grows as you read, and in the end you do not know if you are capable of thinking independently about the ideas or if they are about new scientific findings and trends that you lack the expertise to judge.

Today I dare to recommend the reading of such an article. The post must, of course, be short. But the fundamental ideas in the article are so interesting that I hope some readers of this post will also become readers of the article and make a serious attempt to understand it.

What is the article about? It is about an alternative approach to the highest aims and claims in artificial intelligence. Instead of trying to create machines that can do what humans can do, machines with higher-level capacities such as consciousness and morality, the article focuses on the possibility of creating machines that augment the intelligence of already conscious, morally thinking humans. However, this idea is not entirely new. It has existed for over half a century in, for example, cybernetics. So what is new in the article?

Something I myself was struck by was the compassionate voice in the article, which is otherwise not prominent in the AI literature. The article focuses not on creating super-smart problem solvers, but on strengthening our connections with each other and with the world in which we live. The examples that are given in the article are about better moral considerations for people far away, better predictions of natural disasters in a complex climate, and about restoring social contacts in people suffering from depression or schizophrenia.

But perhaps the most original idea in the article is the suggestion that the development of these human self-augmenting machines would draw inspiration from how the brain already maintains contact with its environment. Here one should keep in mind that we are dealing with mathematical models of the brain and with innovative ways of thinking about how the brain interacts with the environment.

It is tempting to see the brain as an isolated organ. But the brain, via the senses and nerve paths, is in constant dynamic exchange with the body and the world. You would not experience the world if the world did not constantly make new imprints in your brain and you did not constantly act on those imprints. This intense interactivity, on multiple levels and time scales, serves to maintain a stable and comprehensible contact with the surrounding world. The way of thinking in the article reminds me of the concept of a “digital twin,” which I previously blogged about. But here it is the brain that appears to be a neural twin of the world. The brain resembles a continuously updated neural mirror image of the world, which it simultaneously and continuously changes.

Here, however, I find it difficult to properly understand and assess the thoughts in the article, especially regarding the mathematical model that is supposed to describe the “adaptive dynamics” of the brain. But as I understand it, the article suggests the possibility of recreating a similar dynamic in intelligent machines, which could enhance our ability to see complex patterns in our environment and be in contact with each other. A little poetically, one could perhaps say that it is about strengthening our neural twinship with the world. A kind of neural-digital twinship with the environment? A digitally augmented neural twinship with the world?

I dare not say more here about the visionary article. Maybe I have already taken too many poetic liberties? I hope that I have at least managed to make you interested in reading the article and assessing it for yourself: Augmenting Human Selves Through Artificial Agents – Lessons From the Brain.

Well, maybe one concluding remark. I mentioned the difficulty of sometimes understanding and assessing visionary ideas that are formulated at the intersection of philosophy and science. Is not that difficulty itself an example of how our contact with the world can sometimes weaken? However, I do not know if I would have been helped by digital intelligence augmentation that quickly took me through the philosophical difficulties that can arise during reading. Some questions seem to essentially require time: that you stop and think!

Giving yourself time to think is a natural way to deepen your contact with reality, known by philosophers for millennia.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Northoff G, Fraser M, Griffiths J, Pinotsis DA, Panangaden P, Moran R and Friston K (2022) Augmenting Human Selves Through Artificial Agents – Lessons From the Brain. Front. Comput. Neurosci. 16:892354. doi: 10.3389/fncom.2022.892354

This post in Swedish

We recommend readings

Self-confidence in the midst of uncertainty

Feeling confident is natural when we have the knowledge that the task requires. However, self-confidence can be harmful if we think that we know what we do not know. It can be really problematic if we make a habit of pretending that we know. Perhaps because we demand it of ourselves.

There is also another kind of self-confidence, which can seem unnatural. I am thinking of a rarely noticed form of self-confidence, which can awaken just when we are uncertain about how to think and act. But how can self-confidence arise precisely when we are uncertain? It sounds not only unnatural, but also illogical. And did we not just say that self-confidence can be harmful in such situations?

I am thinking of the self-confidence to be just as uncertain as we are, because our uncertainty is a fact that we are certain of: I do not know. It is easy to overlook the fact that even uncertainty is a reality that can be ascertained and investigated in ourselves. Sometimes it is important to take note of our uncertainty. That is sticking to the facts too!

What happens if we do not trust uncertainty when we are uncertain? I think we then tend to seek guidance from others, who seem to know what we do not know. It seems not only natural, but also logical. It is reasonable to do so, of course, if relevant knowledge really exists elsewhere. Asking others, who can be judged to know better, also requires a significant measure of self-confidence and good judgment, in the midst of uncertainty.

But suppose we instinctively seek guidance from others as soon as we are uncertain, because we do not dare to stick to uncertainty in such moments. What happens if we always run away from uncertainty, without stopping and paying attention to it, as if uncertainty were something impermissible? In such a judgmental attitude to uncertainty, knowledge and certainty can become a demand that we feel must be met, towards ourselves and towards each other, if only as a facade. We are then back where we started, in pretended knowledge, which now might become a collective high-risk game and not just an individual bad habit.

Collective knowledge games can of course work, if sufficiently many influential players have the knowledge that the tasks require and knowledge is disseminated in a well-organized manner. Maybe we think that it should be possible to build such a society, a secure knowledge society. The question I wonder about is how sustainable this is in the long run, if the emphasis on certainty is not balanced by an emphasis on uncertainty and questioning. Not for the sake of questioning, but because uncertainty is also a fact that needs attention.

In philosophy and ethics, it is often uncertainty that primarily drives the work. This may sound strange, but even uncertainty can be investigated. If we ask a tentative question about something we sincerely wonder about, clearer questions can soon arise that we continue to wonder about, and soon the investigation will begin. The investigation comes to life because we dare to trust ourselves, because we dare to give ourselves time to think, in the midst of uncertainty, which can become clarity if we do not run away from it. In the investigation, we can of course notice that we need more knowledge about specific issues, knowledge that we acquire from others or that we ourselves develop through empirical studies. But it is not only specific knowledge that informs the investigation. The work with the questions that express our uncertainty clarifies ourselves and makes our thinking clearer. Knowledge thereby gets a well-considered context, where it is needed, which in turn illuminates the knowledge itself.

A “pure” game of knowledge is hardly sustainable in the long run, if its demands are not open also to the other side of knowledge, to the uncertainty that can be difficult to separate from ourselves. Such openness requires that we trust not only the rules of the game, but also ourselves. But do we dare to trust ourselves when we are uncertain?

I think we dare, if we see uncertainty as a fact that can be investigated and clarified, instead of judging it as something dangerous that should not be allowed to be a fact. That is when it can become dangerous.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Thinking about thinking

How can we detect consciousness in brain-damaged patients?

Detecting consciousness in brain-damaged patients can be a huge challenge and the results are often uncertain or misinterpreted. In a previous post on this blog I described six indicators of consciousness that I introduced together with a neuroscientist and another philosopher. Those indicators were originally developed for animals and AI systems. Our question was: what capacities (deducible from behavior and performance, or from relevant cerebral underpinnings) make it reasonable to attribute consciousness to these non-human agents? In the same post, I mentioned that we were engaged in a multidisciplinary exploration of the clinical relevance of selected indicators, specifically for testing them on patients with Disorders of Consciousness (DoCs, for instance, Vegetative State/Unresponsive Wakefulness Syndrome, Minimally Conscious State, Cognitive-Motor Dissociation). While this multidisciplinary work is still in progress, we recently published an ethical reflection on the clinical relevance of the indicators of consciousness, taking DoCs as a case study.

To recapitulate, indicators of consciousness are conceived as particular capacities that can be deduced from the behavior or cognitive performance of a subject and that serve as a basis for a reasonable inference about the level of consciousness of the subject in question. Importantly, the neural correlates of the relevant behavior or cognitive performance may also make it possible to deduce the indicators of consciousness. This makes the indicators relevant to patients with DoCs, who are often unable to behave or to communicate overtly. Responses in the brain can be used to deduce the indicators of consciousness in these patients.

On the basis of this relevance, we illustrate how the different indicators of consciousness might be applied to patients with DoCs, with the final goal of contributing to improving the assessment of their residual conscious activity. In fact, an astonishingly high rate of misdiagnosis still affects this clinical population. It is estimated that up to 40% of patients with DoCs are wrongly diagnosed as being in Vegetative State/Unresponsive Wakefulness Syndrome, while they are actually in a Minimally Conscious State. The difference between these diagnoses is not trivial, since they have importantly different prognostic implications, which raises a huge ethical problem.

We also argue for the need to recognize and explore the specific quality of the consciousness possibly retained by patients with DoCs. Because of the devastating damage to their brains, it is likely that their residual consciousness is very different from that of healthy subjects, which is usually taken as the reference standard in diagnostic classification. To illustrate, while consciousness in healthy subjects is characterized by several distinct sensory modalities (for example, seeing, hearing and smelling), it is possible that in patients with DoCs, conscious contents (if any) are very limited in sensory modalities. These limitations may be evaluated based on the extent of the brain damage and on the patients’ residual behaviors (for instance, sniffing for smelling). Also, consciousness in healthy subjects is characterized by both dynamics and stability: it includes both dynamic changes and short-term stabilization of contents. Again, in the case of patients with DoCs, it is likely that their residual consciousness is very unstable and flickering, without any capacity for stabilization. If we approach patients with DoCs without acknowledging that consciousness is like a spectrum that accommodates different possible shapes and grades, we exclude a priori the possibility of recognizing the peculiarity of the consciousness possibly retained by these patients.

The indicators of consciousness we introduced offer potential help in identifying the specific conscious abilities of these patients. While in this paper we argue for the rationale behind the clinical use of these indicators, and for their relevance to patients with DoCs, we also acknowledge that they open up new lines of research with concrete application to patients with DoCs. As already mentioned, this more applied work is in progress and we are confident of being able to present relevant results in the weeks to come.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, M., Pennartz, C., Annen, J. et al. Indicators and criteria of consciousness: ethical implications for the care of behaviourally unresponsive patients. BMC Med Ethics 23, 30 (2022). https://doi.org/10.1186/s12910-022-00770-3

We have a clinical perspective

Fact resistance, human nature and contemplation

Sometimes we all resist facts. I saw a cyclist slip on the icy road. When I asked if it went well, she was on her feet in an instant and denied everything: “I did not fall!” It is human to deny facts. They can hurt and be disturbing.

What are we resisting? The usual answer is that fact-resistant individuals or groups resist facts about the world around us, such as statistics on violent crime, on vaccine side effects, on climate change or on the spread of disease. It then becomes natural to offer resistance to fact resistance by demanding more rigour in the field of knowledge. People should learn to turn more rigorously to the world they live in! The problem is that fact-resistant attitudes do just that. They are almost bewitched by the world and by the causes of what are perceived as outrageous problems in it. And now we too are bewitched by fact resistance and speculate about the causes of this outrageous problem.

Of course, we believe that our opposition is justified. But who does not think so? Legitimate resistance is met by legitimate resistance, and soon the conflict escalates around its double spiral of legitimacy. The possibility of resolving it is blocked by the conflict itself, because all parties are equally legitimate opponents of each other. Everyone hears their own inner voices warning them against acknowledging their mistakes, against acknowledging their uncertainty, against acknowledging their human resistance to reality, as when we fall off the bike and wish it had never happened. The opposing side would immediately seize the opportunity! Soon, our mistake is a scandal on social media. So we do as the person who slipped on the icy road: we deny everything without thinking. “I was not wrong, I had my own facts!” We ignore the fact that life thereby becomes a lie, because our inner voices warn us against acknowledging our uncertainty. We have the right to be recognized, our voices insist, at least as an alternative to the “established view.”

Conflicts give us no time for reflection. Yet, there is really nothing stopping us from sitting down, in the midst of conflict, and resolving it within ourselves. When we give ourselves time to think for ourselves, we are freer to acknowledge our uncertainty and examine our spirals of thought. Of course, this philosophical self-examination does not resolve the conflict between legitimate opponents which escalates around us as increasingly impenetrable and real. It only resolves the conflict within ourselves. But perhaps our thoughtful philosophical voice still gives a hint of how, just by allowing us to soar in uncertainty, we already see the emptiness of the conflict and are free from it?

If we more often dared to soar in uncertainty, if it became more permissible to say “I do not know,” if we listened more attentively to thoughtful voices instead of silencing them with loud knowledge claims, then perhaps fact resistance also decreases. Perhaps fact resistance is not least resistance to an inner fact. To a single inner fact. What fact? Our insecurity as human beings, which we do not permit ourselves. But if you allow yourself to slip on the icy road, then you do not have to deny that you did!

A more thoughtful way of being human should be possible. We shape the societies that shape us.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

We care about communication

How can neuroethics and AI ethics join their forces?

As I already wrote on this blog, there has been an explosion of AI in recent years. AI affects so many aspects of our lives that it is virtually impossible to avoid interacting with it. Since AI has such an impact, it must be examined from an ethical point of view, for the very basic reason that it can be developed and/or used for both good and evil.

In fact, AI ethics is becoming increasingly popular nowadays. As it is a fairly young discipline, even though it has roots in, for example, digital and computer ethics, its status and methodology remain open questions. To simplify the debate, the main trend is to conceive of AI ethics in terms of practical ethics, for example, with a focus on the impact of AI on traditional practices in education, work, healthcare and entertainment, among others. In addition to this practically oriented analysis, there is also attention to the impact of AI on the way we understand our society and ourselves as part of it.

In this debate about the identity of AI ethics, the need for a closer collaboration with neuroethics has been briefly pointed out, but so far no systematic reflection has been made on this need. In a new article, I propose, together with Kathinka Evers and Arleen Salles, an argument to justify the need for closer collaboration between neuroethics and AI ethics. In a nutshell, even though they both have specific identities and their topics do not completely overlap, we argue that neuroethics can complement AI ethics for both content-related and methodological reasons.

Some of the issues raised by AI are related to fundamental questions that neuroethics has explored since its inception. Think, for example, of topics such as intelligence: what does it mean to be intelligent? In what sense can a machine be qualified as an intelligent agent? Could this be a misleading use of words? And what ethical implications can this linguistic habit have, for example, on how we attribute responsibility to machines and to humans? Another issue that is increasingly gaining ground in AI ethics literature, as I wrote on this blog, is the conceivability and the possibility of artificial consciousness. Neuroethics has worked extensively on both intelligence and consciousness, combining applied and fundamental analyses, which can serve as a source of relevant information for AI ethics.

In addition to the above content-related reasons, neuroethics can also provide AI ethics with a methodological model. To illustrate, the kind of conceptual clarification performed in fundamental neuroethics can enrich the identification and assessment of the practical ethical issues raised by AI. More specifically, neuroethics can provide a three-step model of analysis to AI ethics: 1. Conceptual relevance: can specific notions, such as autonomy, be attributed to AI? 2. Ethical relevance: are these specific notions ethically salient (i.e., do they require ethical evaluation)? 3. Ethical value: what is the ethical significance, and what are the related normative implications, of these specific notions?

This three-step approach is a promising methodology for ethical reflection about AI which avoids the trap of anthropocentric self-projection, a risk that actually affects both the philosophical reflection on AI and its technical development.

In this way, neuroethics can contribute to avoiding both hypes and disproportionate worries about AI, which are among the biggest challenges facing AI ethics today.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, M., Evers, K. & Salles, A. On the Contribution of Neuroethics to the Ethics and Regulation of Artificial Intelligence. Neuroethics 15, 4 (2022). https://doi.org/10.1007/s12152-022-09484-0

We transcend disciplinary borders
