A research blog from the Centre for Research Ethics & Bioethics (CRB)

Month: October 2025

Can counseling be unphilosophical?

A fascinating paper by Fredrik Andersen, Rani Lill Anjum, and Elena Rocca, “Philosophical bias is the one bias that science cannot avoid,” reminds us of something fundamental, but often forgotten, about the nature of scientific inquiry. Every scientist, whether they realize it or not, operates with fundamental assumptions about causality, determinism, reductionism, and the nature of reality itself. These “philosophical biases” are, they write, unavoidable foundations that shape how we see, interpret, and engage with the world.

The authors show us, for instance, how molecular biologists and ecologists approached GM crop safety with entirely different philosophical frameworks. Molecular biologists focused on structural equivalence between GM and conventional crops, operating from an entity-based ontology where understanding parts leads to understanding wholes. Ecologists emphasized unpredictable environmental effects, working from a process-based ontology where relationships and emergence matter more than individual components. Both approaches were scientifically rigorous. Both produced valuable insights. Yet neither could claim philosophical neutrality.

If science cannot escape philosophical presuppositions, what about counseling and psychotherapy? When a counselor sits with a client struggling with identity, purpose, or belonging, what is actually happening in that encounter? The moment guidance is offered, or even when certain questions are asked rather than others, something interesting occurs. But what exactly?

Consider five questions that might help us see what’s already present in counseling practice:

How do we understand what makes someone themselves? When a counselor helps a client explore their identity, are they working with a theory of personal continuity? When they encourage someone to “be true to yourself,” what assumptions about authenticity are at play? Even the counselor who focuses purely on behavioral techniques is making a statement about whether human flourishing can be addressed without engaging questions about what it means to exist as this particular person. Can we really separate therapeutic intervention from some implicit understanding of selfhood?

What are we assuming about the relationship between mind and body, symptom and meaning? A client arrives with anxiety. One practitioner might reach for cognitive restructuring techniques, another for somatic awareness practices, another for meaning-making conversations. Each choice reflects philosophical commitments about how mind and body relate, whether psychological and physical wellbeing can be separated, and what we’re actually addressing when we work with distress. But do these commitments disappear simply because they remain unspoken?

When we speak of human connection and belonging, what vision of relationship are we already inhabiting? Counselors regularly address questions of intimacy, community, and social bonds. In doing so, might they be operating with implicit theories about what constitutes genuine connection? When guiding someone toward “healthier relationships,” are we working with philosophical assumptions about autonomy and interdependence, about what humans fundamentally need from each other? Can therapeutic work with relationships remain neutral about what relationships fundamentally are?

What understanding of human possibility guides our sense of what can change? Every therapeutic approach carries assumptions about human agency and potential. When we help someone envision different futures, when we work with hope or despair, when we distinguish between realistic and unrealistic goals, aren’t we already operating with philosophical commitments about what enables or constrains human possibility? A therapist who insists that their work deals only with “what’s practicable” seems to be making a philosophical claim: that human existence can be adequately understood through purely pragmatic or practical categories.

How do questions of meaning, purpose, and value show up in therapeutic work, even when uninvited? A client asks not just “How can I feel less anxious?” but “Why do I feel my life lacks direction?” or “What makes any of this worthwhile?” These questions of meaning arise in therapeutic encounters even in approaches that don’t explicitly address them. When such questions surface, can a counselor respond without engaging philosophical dimensions? And if we attempt to redirect toward purely behavioral or emotional terrain, aren’t we implicitly suggesting that questions of meaning and purpose are separate from genuine wellbeing?

Just as scientists benefit from making their philosophical presuppositions explicit and debatable, might therapeutic practitioners benefit from acknowledging and refining the philosophical commitments that already shape their work?

Read the full paper: Philosophical bias is the one bias that science cannot avoid.

Written by…

Luis de Miranda, philosophical practitioner and associated researcher at the Centre for Research Ethics & Bioethics at Uppsala University.

Andersen, F., Anjum, R. L., & Rocca, E. (2019). Philosophical bias is the one bias that science cannot avoid. eLife, 8, e44929. https://doi.org/10.7554/eLife.44929

We like challenging questions

How to tell if AI has feelings when it is designed to reflect human traits?

Debates about the possibility that artificial systems can develop the capacity for subjective experiences are becoming increasingly common. The topic is fascinating, and the discussion is also attracting growing public interest. Yet the risk that reflection becomes ideological and imaginative rather than scientific and rational is quite high. Several factors make the idea of engineering subjective experience, such as developing sensitive robots, either very attractive or extremely frightening. How can we avoid getting stuck in either of these, in my opinion, equally unfortunate extremes? I believe we need a balanced and “pragmatic” analysis of both the logical conceivability and the technical feasibility of artificial consciousness. This is what we are trying to do at CRB within the framework of the CAVAA project.

In this post, I want to illustrate what I mean by a pragmatic analysis by summarizing an article I recently wrote together with Kathinka Evers. We review five strategies that researchers in the field have developed to address the issue of whether artificial systems may have the capacity for subjective experiences, either positive experiences such as pleasure or negative ones such as pain. This issue is challenging when it comes to humans and other animals, but it becomes even more difficult for systems whose nature, architecture, and functions are very different from our own. In fact, there is an additional challenge that may be particularly tricky when it comes to artificial systems: the gaming problem. The gaming problem has to do with the fact that artificial systems are trained on human-generated data to reflect human traits. Functional and behavioral markers of sentience are therefore unreliable and cannot be considered evidence of subjective experience.

We identify five strategies in the literature that may be used to face this challenge. A theory-based strategy starts from selected theories of consciousness to derive relevant indicators of sentience (structural or functional features that indicate conscious capacities), and then checks whether artificial systems exhibit them. A life-based strategy starts from the connection between consciousness and biological life; not to rule out that artificial systems can be conscious, but to argue that they must be alive in some sense in order to possibly be conscious. A brain-based strategy starts from the features of the human brain that we have identified as crucial for consciousness to then check whether artificial systems possess them or similar ones. A consciousness-based strategy searches for other forms of biological consciousness besides human consciousness, to identify what (if anything) is truly indispensable for consciousness and what is not. In this way, one aims to overcome the controversy between the many theories of consciousness and move towards identifying reliable evidence for artificial consciousness. An indicator-based strategy develops a list of indicators, features that we tend to agree characterize conscious experience, and which can be seen as indicative (probabilistic rather than definitive evidence) of the presence of consciousness in artificial systems.

In the article we describe the advantages and disadvantages of the five strategies above. For example, the theory-based strategy has the advantage of a broad base of empirically validated theories, but it is necessarily selective with respect to which theories individual proponents of the strategy draw upon. The life-based approach has the advantage of starting from the well-established fact that all known examples of conscious systems are biological, but it can be interpreted as ruling out, from the outset, the possibility of alternative forms of AI consciousness beyond the biological ones. The brain-based strategy has the advantage of being based on empirical evidence about the brain bases of consciousness. It avoids speculation about hypothetical alternative forms of consciousness, and it is pragmatic in the sense that it translates into specific approaches to testing machine consciousness. However, because the brain-based approach is limited to human-like forms of consciousness, it may lead to overlooking alternative forms of machine consciousness. The consciousness-based strategy has the advantage of avoiding anthropomorphic and anthropocentric temptations to assume that the human form of consciousness is the only possible one. One of the shortcomings of the consciousness-based approach is that it risks addressing a major challenge (identifying AI consciousness) by taking on a possibly even greater challenge (providing a comparative understanding of different forms of consciousness in nature). Finally, the indicator-based strategy has the advantage of relying on what we tend to agree characterizes conscious activity, and of remaining neutral in relation to specific theories of consciousness: it is compatible with different theoretical accounts. Yet it has the drawback that it is developed with reference to biological consciousness, so its relevance and applicability to AI consciousness may be limited.

How can we move forward towards a good strategy for addressing the gaming problem and reliably assess subjective experience in artificial systems? We suggest that the best approach is to combine the different strategies described above. We have two main reasons for this proposal. First, consciousness has different dimensions and levels: combining different strategies increases the chances of covering this complexity. Second, to address the gaming problem, it is crucial to look for as many indicators as possible, from structural to architectural, from functional to indicators related to social and environmental dimensions. 

The question of sentient AI is fascinating, but an important reason why the question engages more and more people is probably that the systems are trained to reflect human traits. It is so easy to imagine AI with feelings! Personally, I find it at least as fascinating to contribute to a scientific and rational approach to this question. If you are interested in reading more, you can find the preprint of our article here: Is it possible to identify phenomenal consciousness in artificial systems in the light of the gaming problem?

Written by…

Michele Farisco, Postdoctoral Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, M., & Evers, K. (2025). Is it possible to identify phenomenal consciousness in artificial systems in the light of the gaming problem? https://doi.org/10.5281/zenodo.17255877

We like challenging questions

Paediatric nurses’ experiences of not being able to provide the best possible care

Inadequate staffing, competing tasks and unexpected events can sometimes make it difficult to provide patients with the best possible care. This can be particularly stressful when caring for children with severe diseases. For a nurse, experiencing situations where you cannot give children with cancer the best possible care (which means more than just the best possible medical treatment) is an important cause of stress.

To provide a basis for better support for paediatric nurses, a research group interviewed 25 nurses at three Swedish paediatric oncology units. The aim of the interview study was to understand what the nurses experienced as particularly important in situations where they felt they had not been able to provide the best possible care, and how they handled the challenges.

The most important concern for the nurses was to uphold the children’s best interests. One thing that could make this difficult was lack of time, but also disagreements about the child’s best interests could interfere with how the nurses wanted to care for the children. The researchers analyze the paediatric nurses’ handling of challenging situations as a juggling of compassion and competing demands. How do you handle a situation where someone is crying and needs comfort, while a chemotherapy machine somewhere in the ward is beeping and no colleagues are available? What do you do when the most urgent thing is not perceived as the most important?

In the analysis of how the nurses juggled compassion and competing demands, the researchers identified five strategies. One strategy was to prioritize: for example, forgoing less urgent tasks, such as providing emotional support. Another strategy was to shift up a gear: multitasking, working faster, skipping lunch. A third strategy was to settle for good enough: when you cannot provide the best possible care, you strive to at least provide good enough care. A fourth strategy was acquiescing in situations with different perceptions of the patient’s best interests: for example, continuing to treat a patient because the physician has decided so, even though one believes that prolonged treatment is futile. Regarding this strategy, the nurses requested better dialogue with physicians about difficult patient cases, in order to understand the decisions and reduce the need to acquiesce. The fifth and final strategy was pulling together: supporting each other and working as a team with a common goal. Often, there was no need to ask for support; colleagues could spontaneously show solidarity by, for example, staying after their work shifts to help.

In their conclusion, the authors write that adequate staffing, collegial support and good interprofessional communication can help nurses deal with challenges in the care of children with cancer. Read the article here: Juggling Compassion and Competing Demands: A Grounded Theory Study of Pediatric Nurses’ Experiences.

While reading, it may be worth keeping in mind that the study focuses only on situations where it was felt that the best possible care could not be given. The authors point out that the interviews overflowed with descriptions of excellent care and good communication, as well as how rewarding and joyful the work of a paediatric nurse can be.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Ventovaara, P., Af Sandeberg, M., Blomgren, K., & Pergert, P. (2025). Juggling Compassion and Competing Demands: A Grounded Theory Study of Pediatric Nurses’ Experiences. Journal of Pediatric Hematology/Oncology Nursing, 42(3), 76–84. https://doi.org/10.1177/27527530251342164

This post in Swedish

Ethics needs empirical input