A blog from the Centre for Research Ethics & Bioethics (CRB)

Month: May 2021

New dissertation on patient preferences in medical approvals

During the spring, several doctoral students at CRB successfully defended their dissertations. Karin Schölin Bywall defended hers on May 12, 2021. Like the two previous ones, the dissertation reflects a trend in bioethics away from purely theoretical investigations toward empirical studies of people’s perceptions of bioethical issues.

An innovative approach in Karin Schölin Bywall’s dissertation is that she identifies a specific area of application where the preference studies that are increasingly used in bioethics can be particularly beneficial: patients’ influence on the process of medical approval. Patients already have such an influence, but their views are obtained somewhat informally, from a small number of invited patients. Karin Schölin Bywall explores the possibility of strengthening patients’ influence scientifically. Preference studies can give decision-makers an empirically better-founded understanding of what patients actually prefer when they weigh efficacy against side effects and other drug properties.
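
To give a feel for what such a study can quantify, here is a minimal sketch (not taken from the dissertation) of the kind of discrete choice model often used in preference research. It simulates respondents choosing between two hypothetical drug profiles and then recovers the weights they implicitly place on efficacy and side-effect risk; all attribute names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each simulated respondent chooses between two hypothetical drug
# profiles described by two attributes: efficacy gain and
# side-effect risk (both scaled 0-1). Purely illustrative numbers.
TRUE_W = np.array([3.0, -5.0])  # efficacy valued, side effects disliked

n = 2000
profile_a = rng.uniform(0, 1, size=(n, 2))
profile_b = rng.uniform(0, 1, size=(n, 2))
diff = profile_a - profile_b  # attribute differences drive the choice

# Random-utility (logit) choice rule: P(choose A) = sigmoid(diff @ w)
p_a = 1 / (1 + np.exp(-diff @ TRUE_W))
chose_a = (rng.uniform(size=n) < p_a).astype(float)

# Recover the preference weights with a simple binary logit,
# fitted by gradient ascent on the log-likelihood.
w = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-diff @ w))
    w += 0.5 * diff.T @ (chose_a - p) / n

print("estimated weights (efficacy, side-effect risk):", w.round(2))
# Trade-off: units of side-effect risk accepted per unit of efficacy
print("implied trade-off:", (w[0] / -w[1]).round(2))
```

The point of the sketch is only that trade-offs can be estimated from many patients’ actual choices rather than gathered anecdotally, which is what gives decision-makers the firmer empirical footing discussed above.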

If you want to know more about the possibility of using preference studies to scientifically strengthen patients’ influence in medical approvals, read Karin Schölin Bywall’s dissertation: Getting a Say: Bringing patients’ views on benefit-risk into medical approvals.

If you want a concise summary of the dissertation, read Anna Holm’s news item on our website: Bringing patients’ views into medical approvals.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Schölin Bywall, K. (2021). Getting a Say: Bringing patients’ views on benefit-risk into medical approvals. [Dissertation]. Uppsala University.

This post in Swedish

We want solid foundations

Can you be cloned?

Why can we feel metaphysical nausea at the thought of cloned humans? I guess it has to do with how we, without giving ourselves sufficient time to reflect, are captivated by a simple image of individuality and cloning. The image then controls our thinking. We may imagine that cloning consists in multiplying our unique individuality in the form of indistinguishable copies. We then feel dizzy at the unthinkable thought that our individual selves would be multiplied as copies, all of which in some strange way are me, or cannot be distinguished from me.

In a contribution to a philosophical online magazine, Kathinka Evers diagnoses this metaphysical nausea about cloning. If you have the slightest tendency to worry that you may be multiplied as “identical copies” that cannot be distinguished from you, then give yourself the seven minutes it takes to read the text and free yourself from the ailment:

“I cannot be cloned: the identity of clones and what it tells us about the self.”

Of course, Kathinka Evers does not deny that cloning is possible or associated with risks of various kinds. She questions the premature image of cloning by giving us time to reflect on individual identity, without being captivated by the simple image.

We are disturbed by the thought that modern research in some strange way could do what should be unthinkable. When it becomes clear that what we are worried about is unthinkable, the dizziness disappears. In her enlightening diagnosis of our metaphysical nausea, Kathinka Evers combines philosophical reflection with illuminating facts about, among other things, genetics and personality development.

Give yourself the seven minutes it takes to get rid of metaphysical nausea about cloning!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Thinking about thinking

Can AI be conscious? Let us think about the question

Artificial Intelligence (AI) has achieved remarkable results in recent decades, especially thanks to the refinement of an old and long-neglected technique called Deep Learning (DL), a class of machine learning algorithms. Some achievements of DL had a significant impact on public opinion thanks to extensive media coverage, like the case of the program AlphaGo, which defeated the Go world champion Lee Sedol, and its successor AlphaGo Zero, which learned the game from scratch through self-play and then surpassed AlphaGo itself.

This triumph of AlphaGo was a kind of profane consecration of AI’s operational superiority in an increasing number of tasks. This manifest superiority of AI gave rise to mixed feelings in human observers: the pride of being its creator; the admiration of what it was able to do; the fear of what it might eventually learn to do.

AI research has generated a linguistic and conceptual process of re-thinking traditionally human features, stretching their meaning or even reinventing their semantics in order to attribute these traits to machines as well. Think of how learning, experience, training, and prediction, to name just a few, are attributed to AI. Even if these terms have a specific technical meaning among AI specialists, lay people tend to interpret them within an anthropomorphic view of AI.

One human feature in particular is considered the Holy Grail when AI is interpreted according to an anthropomorphic pattern: consciousness. The question is: can AI be conscious? It seems to me that we can answer this question only after considering a number of preliminary issues.

First we should clarify what we mean by consciousness. In philosophy and in cognitive science, there is a useful distinction, originally introduced by Ned Block, between access consciousness and phenomenal consciousness. The first refers to the interaction between different mental states, particularly the availability of one state’s content for use in reasoning and rationally guiding speech and action. In other words, access consciousness refers to the possibility of using what I am conscious of. Phenomenal consciousness refers to the subjective feeling of a particular experience, “what it is like to be” in a particular state, to use the words of Thomas Nagel. So, in what sense of the word “consciousness” are we asking if AI can be conscious?

To illustrate how the sense in which we choose to talk about consciousness makes a difference for assessing the possibility of conscious AI, let us take a look at an interesting article written by Stanislas Dehaene, Hakwan Lau, and Sid Kouider. They frame the question of AI consciousness within the Global Neuronal Workspace Theory, one of the leading contemporary theories of consciousness. As the authors write, according to this theory, conscious access corresponds to the selection, amplification, and global broadcasting, to many distant brain areas, of particular information selected for its salience or relevance to current goals. More specifically, Dehaene and colleagues explore the question of conscious AI along two lines within an overall computational framework:

  1. Global availability of information (the ability to select, access, and report information)
  2. Metacognition (the capacity for self-monitoring and confidence estimation)

Their conclusion is that AI might implement the first meaning of consciousness, while it currently lacks the necessary architecture for the second one.
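
To make the two capacities less abstract, here is a deliberately toy sketch in code. It is not the authors’ model, and nothing in it is conscious; it only illustrates, under loose assumptions, what “global availability” (selecting and broadcasting the most salient content) and “metacognition” (estimating confidence in that content) might look like as computations. All module names and scores are invented.

```python
# Toy illustration, not a model of consciousness. Several "modules"
# produce content with a salience score; the most salient content is
# selected and broadcast globally (capacity 1), and a monitor then
# estimates confidence in the broadcast (capacity 2).

modules = {
    "vision": (0.8, "red light ahead"),   # (salience, content)
    "audition": (0.3, "background hum"),
    "memory": (0.5, "this road is usually clear"),
}

# Capacity 1: selection, amplification, and global broadcasting.
winner = max(modules, key=lambda name: modules[name][0])
salience, content = modules[winner]
workspace = {"source": winner, "content": content}
print("broadcast to all modules:", workspace)

# Capacity 2: self-monitoring. Confidence is crudely estimated from
# how clearly the winning signal beat its nearest competitor.
runner_up = max(s for name, (s, _) in modules.items() if name != winner)
confidence = salience - runner_up
print(f"confidence in broadcast: {confidence:.2f}")
```

Even this toy makes the contrast visible: the selection-and-broadcast step is trivially implementable, while the “confidence” produced here is just a number, not an experience, which is exactly where the questions raised below begin.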

As mentioned, the premise of their analysis is a computational view of consciousness. In other words, they choose to reduce consciousness to specific types of information-processing computations. We can legitimately ask whether such a choice covers the richness of consciousness, particularly whether a computational view can account for the experiential dimension of consciousness.

This shows how the main obstacle to assessing whether AI can be conscious is the lack of an agreed-upon theory of consciousness in the first place. For this reason, rather than asking whether AI can be conscious, maybe it is better to ask what might indicate that AI is conscious. This brings us back to the indicators of consciousness that I wrote about in a blog post some months ago.

Another important preliminary issue to consider, if we want to seriously address the possibility of conscious AI, is whether we can use the same term, “consciousness,” to refer to a different kind of entity: a machine instead of a living being. Should we expand our definition to include machines, or should we rather create a new term for what they may have? I personally think that the term “consciousness” is too charged, from ethical, social, and legal perspectives among others, to be extended to machines. Using the term to qualify AI risks stretching it so far that it eventually becomes meaningless.

If we create AI that manifests abilities that are similar to those that we see as expressions of consciousness in humans, I believe we need a new language to denote and think about it. Otherwise, important preliminary philosophical questions risk being dismissed or lost sight of behind a conceptual veil of possibly superficial linguistic analogies.

Written by…

Michele Farisco, Postdoctoral Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

We want solid foundations