A research blog from the Centre for Research Ethics & Bioethics (CRB)

Year: 2025

When nurses become researchers: ethical challenges in doctoral supervision

Nurses who choose to pursue a doctorate and conduct research in the nursing and health sciences contribute greatly to the development of healthcare: their dissertation projects are often carried out in collaboration with healthcare providers. However, doctoral education in the field poses challenges for both doctoral students and their supervisors. One challenge is that many combine research with part-time work in healthcare. It is difficult to combine two such important and demanding professions, especially if both the doctoral student and the supervisor do so.

To get a clearer picture of these challenges and of possible strategies for dealing with them, a systematic review of English-language studies on challenges and strategies in nursing doctoral supervision was conducted. The review is authored by, among others, Tove Godskesen and Stefan Eriksson, and it can hopefully contribute to improved supervision of nurses who choose to become researchers.

One challenge described in the literature has to do with the transition from a professional life with clear tasks to research, which is conducted far more independently. Doctoral students may worry that supervision is unclear and that supervisors are difficult to reach; at the same time, supervisors may think that doctoral students have their own responsibility to seek support and feedback when necessary. Another challenge has already been indicated: supervisors working part-time in healthcare may have difficulty maintaining a consistent meeting schedule with their doctoral students to provide feedback. Difficulties were also reported when the proportion of doctoral students was high in relation to the number of potential supervisors. A further challenge has to do with the fact that doctoral students are not always prepared for academic tasks such as writing scientific texts and applying for grants. The doctoral students’ first study can therefore be particularly time-consuming to write and supervise.

Strategies for dealing with these challenges include clear agreements from the beginning about what the doctoral student and supervisor can expect from each other, perhaps in the form of written agreements and checklists. Training doctoral students for various academic tasks and roles was also mentioned, such as training in grant writing, academic publishing and research methodology. However, supervisors also need education and training to function well in their roles in relation to their doctoral students. Another strategy reported in the literature was mentoring to initiate doctoral students into an academic environment.

In their discussion, the authors suggest, among other things, that the principles of bioethics (autonomy, beneficence, non-maleficence, justice) can be used as a framework for dealing with ethical challenges when supervising doctoral students in the nursing and health sciences. Ethically well-thought-out supervision is a foundation for successful doctoral education in the field, they write in their conclusion. Read the article here: Ethical Challenges and Strategies in Nursing Doctoral Supervision: A Systematic Mixed-Method Review.

The research seminar does not seem to be mentioned in the literature, I personally note. Regularly participating in a research seminar is an important part of doctoral education and effectively initiates the doctoral student into an academic culture. The seminar enables, not least, feedback from other doctoral students and from senior researchers other than the supervisors. The fact that the group of doctoral students is large can actually be an advantage for the seminar. My experience is that the seminar becomes livelier with a larger proportion of doctoral students, who find it easier to make themselves heard.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Godskesen, T., M. Grandahl, A. N. Hagen, and S. Eriksson. 2025. “Ethical Challenges and Strategies in Nursing Doctoral Supervision: A Systematic Mixed-Method Review.” Journal of Advanced Nursing 1–18. https://doi.org/10.1111/jan.70298

This post in Swedish

We recommend readings

Conditions for studies of medicine safety during breastfeeding

Reliable information on medicine safety during breastfeeding is lacking for many medications. To avoid the risk of harming the baby, mothers taking medication for various diseases may be advised by their doctor to discontinue the medication while breastfeeding (or may choose to do so themselves). Alternatively, the woman may be advised to continue the medication but refrain from breastfeeding. Both options are unfortunate: the mother needs the prescribed medication, and breastfeeding has benefits for both the baby and the mother.

Why is there a lack of reliable information on medicine safety during breastfeeding? This is because breastfeeding mothers are usually excluded from clinical studies. Therefore, there is limited knowledge of the extent to which different drugs are transferred to the baby via breast milk. The lack of reliable safety information applies to both already approved and new drugs. However, since many mothers take medications while breastfeeding, it should be possible to establish lactation studies that systematically provide scientific evidence for better safety information. Which drugs can be used during breastfeeding?

A new article with Mats G. Hansson as lead author and Erica Sundell as one of the co-authors describes how, within the framework of current regulatory requirements, two breastfeeding studies have been started that can help solve the dilemma that breastfeeding mothers and their doctors often face. One study concerns a drug for diabetes, the other a drug for inflammation and rheumatic disorders. The studies are part of the European project ConcePTION, which will produce evidence on drug safety during pregnancy and breastfeeding. Breast milk samples from the mother and blood samples (plasma) from the mother and child are analyzed to measure how much of the drugs are transferred to the child during breastfeeding. The samples are stored in a biobank for future research, and the studies thus contribute to creating an infrastructure for lactation studies of medicine safety.
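
A standard way to quantify how much of a drug reaches the child is the relative infant dose (RID): the weight-adjusted dose the infant receives via milk, expressed as a percentage of the mother’s weight-adjusted dose. Whether the ConcePTION studies report exactly this measure is an assumption on my part, but a minimal sketch of the conventional calculation may make the purpose of the milk and plasma sampling concrete:

```python
# Minimal sketch of the relative infant dose (RID) calculation that
# lactation studies of this kind make possible. The metric and the
# conventional milk intake of 150 mL/kg/day are standard in lactation
# pharmacology; that these particular studies use exactly this measure
# is my assumption, not a claim from the article.

MILK_INTAKE_L_PER_KG_DAY = 0.150  # conventional estimate for a fully breastfed infant

def relative_infant_dose(milk_conc_mg_per_l: float,
                         maternal_dose_mg_per_kg_day: float) -> float:
    """RID (%) = infant dose via milk / weight-adjusted maternal dose * 100.

    A common rule of thumb treats an RID below about 10% as reassuring,
    though any clinical interpretation is outside this sketch."""
    infant_dose = milk_conc_mg_per_l * MILK_INTAKE_L_PER_KG_DAY  # mg/kg/day
    return 100 * infant_dose / maternal_dose_mg_per_kg_day

# Example: measured milk concentration 0.5 mg/L, maternal dose 2 mg/kg/day
print(f"RID = {relative_infant_dose(0.5, 2.0):.1f}%")  # prints "RID = 3.8%"
```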

Recruitment of research participants and sample collection started in the spring of 2024 and will end at the turn of the year 2025/2026. The purpose of the article is to use the experiences from setting up the two studies as a template for initiating clinical lactation studies. What should be considered? What are the conditions for this type of research? The article concisely describes relevant conditions and procedures for informed consent, sampling, transport and storage of samples, and laboratory analysis. The article also discusses the different conditions for studies of already approved drugs and for new drugs.

The article is important reading for researchers and others who can in one way or another contribute to initiating studies for better information on medicine safety during breastfeeding. Because it so concisely describes the conditions for new studies, the article is also interesting as a concrete example of how problems can be solved by starting new research.

Read the article here: Setting up mother–infant pair lactation studies with biobanking for research according to regulatory requirements.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Hansson M, Björkgren I, Svedenkrans J, et al. Setting up mother–infant pair lactation studies with biobanking for research according to regulatory requirements. British Journal of Clinical Pharmacology. 2025; 1-6. https://doi.org/10.1002/bcp.70201

This post in Swedish

Part of international collaborations

The importance of letting things take their time

To be an ethicist and philosopher is to be an advocate for time: “Wait, we need time to think this through.” This idea of letting things take their time rarely gains traction in society. It begins already at school, where the focus is often on being able to calculate quickly and recite as many words as possible in one minute. It then continues at the societal level.

A good example is technological development, which is moving faster than ever. Humans have always used more or less advanced and functional technology, always searching for better ways to solve problems. With the Industrial Revolution, things began to accelerate, and since then, the pace has only increased. We got factories, car traffic, air travel, nuclear power, genetically modified crops, and prenatal diagnostics. We got typewriters, computers, and telephones. We got different ways to play and reproduce music. Now we have artificial intelligence (AI), which it is often said will revolutionize most parts of society.

The development and implementation of AI is progressing at an unparalleled speed. Various government authorities use AI, and healthcare allows AI tools to take on more and more tasks. Schools and universities wrestle with the question of how AI should be used by students, teachers, and researchers. Teachers have been left at a loss because AI established itself so quickly, and different teachers draw different boundaries for what counts as cheating, creating great uncertainty for students about what applies. People use AI for everything from planning their day to getting help with mental health issues. AI is used as a relationship expert, but also as the very object of romantic relationships or friendships. Today, there are AI systems that can call elderly and sick people to ask how they are feeling, whether they have taken their medication, and perhaps whether they have had any social contact recently.

As with all technology, AI has advantages and disadvantages, and it can be used in both good and bad ways. It can be used to improve life for people and the environment, but it can also harm them. People and societies can do things better and more easily with AI, but it can also have negative consequences such as environmental damage, unemployment, and discrimination.

Researchers in the Netherlands have discussed the problems that arise with new technology in terms of “social experiments.” They argue that there is an important difference compared to the careful testing that, for example, new pharmaceuticals undergo before they are approved. New technologies are not tested in such a structured way.

The EU has introduced a basic legal framework for AI (the EU AI Act), which can be seen as an attempt to introduce the new technology in a way that is less experimental on people and societies: more “responsible” and “trustworthy” AI. The new law is criticized by some European tech companies, who claim that it means we will fall behind countries without such regulations, such as the USA and China. Doing things in a thoughtful and ethically sound way is apparently considered less important than quickly getting the technology in place. Caution is instead seen as risky, which says something about the concept of risk currently driving a development so rapid that perhaps not even the technology can deliver what the market expects.

Just as with previous important technologies, we need to think things through beforehand. If AI is to help us without harmful consequences, development must be allowed to take its time. This is even more important with AI than with previous technologies, because AI has an unusually large potential to affect our lives. Research in ethics points to several problems related to justice and trust. One problem is that we cannot explain why AI in, for example, healthcare reaches a certain conclusion about a specific individual. With previous technology, some human being – if not the user, then at least the developer – has always been able to explain the causality in the system. Can we trust a technology in healthcare that we cannot control or explain in essential ways?

There are technology optimists and technology pessimists. Some are enthusiastic about new technologies and believe they are the solution to all our problems. Others think the precautionary principle should apply to all new technology and do not want to accept any risks at all. Instead, we should take the middle way, which consists of letting things take their time to show their real possibilities, beyond the optimists’ and pessimists’ preconceived notions. Advocating an ethical approach is not about stopping development but about slowing down the process. We need time to reflect on where it might be appropriate to introduce AI and where we should refrain from using the technology. We should also consider how the AI we choose to use can be introduced in a good way, so that we have time to detect risks of injustice, discrimination, and reduced trust, and can minimize them.

It is not easy and not popular to be the one who says, “Wait, we need to think this through.” Yet it is so important that we take the time. We must think ahead so that things do not go wrong when they could so easily have gone right. It might be worth considering what could happen if we learned in school that it is more important to do things right than to do them quickly.

Jessica Nihlén Fahlquist

Written by…

Jessica Nihlén Fahlquist, senior lecturer in biomedical ethics and associate professor in practical philosophy at the Centre for Research Ethics & Bioethics.

This post in Swedish

Approaching future issues

Can counseling be unphilosophical?

A fascinating paper by Fredrik Andersen, Rani Lill Anjum, and Elena Rocca, “Philosophical bias is the one bias that science cannot avoid,” reminds us of something fundamental, but often forgotten, about the nature of scientific inquiry. Every scientist, whether they realize it or not, operates with fundamental assumptions about causality, determinism, reductionism, and the nature of reality itself. These “philosophical biases” are, they write, unavoidable foundations that shape how we see, interpret, and engage with the world.

The authors show us, for instance, how molecular biologists and ecologists approached GM crop safety with entirely different philosophical frameworks. Molecular biologists focused on structural equivalence between GM and conventional crops, operating from an entity-based ontology where understanding parts leads to understanding wholes. Ecologists emphasized unpredictable environmental effects, working from a process-based ontology where relationships and emergence matter more than individual components. Both approaches were scientifically rigorous. Both produced valuable insights. Yet neither could claim philosophical neutrality.

If science cannot escape philosophical presuppositions, what about counseling and psychotherapy? When a counselor sits with a client struggling with identity, purpose, or belonging, what is actually happening in that encounter? The moment guidance is offered, or even when certain questions are asked rather than others, something interesting occurs. But what exactly?

Consider five questions that might help us see what’s already present in counseling practice:

How do we understand what makes someone themselves? When a counselor helps a client explore their identity, are they working with a theory of personal continuity? When they encourage someone to “be true to yourself,” what assumptions about authenticity are at play? Even the counselor who focuses purely on behavioral techniques is making a statement about whether human flourishing can be addressed without engaging questions about what it means to exist as this particular person. Can we really separate therapeutic intervention from some implicit understanding of selfhood?

What are we assuming about the relationship between mind and body, symptom and meaning? A client arrives with anxiety. One practitioner might reach for cognitive restructuring techniques, another for somatic awareness practices, another for meaning-making conversations. Each choice reflects philosophical commitments about how mind and body relate, whether psychological and physical wellbeing can be separated, and what we’re actually addressing when we work with distress. But do these commitments disappear simply because they remain unspoken?

When we speak of human connection and belonging, what vision of relationship are we already inhabiting? Counselors regularly address questions of intimacy, community, and social bonds. In doing so, might they be operating with implicit theories about what constitutes genuine connection? When guiding someone toward “healthier relationships,” are we working with philosophical assumptions about autonomy and interdependence, about what humans fundamentally need from each other? Can therapeutic work with relationships remain neutral about what relationships fundamentally are?

What understanding of human possibility guides our sense of what can change? Every therapeutic approach carries assumptions about human agency and potential. When we help someone envision different futures, when we work with hope or despair, when we distinguish between realistic and unrealistic goals, aren’t we already operating with philosophical commitments about what enables or constrains human possibility? A therapist who insists that their work deals only with “what’s practicable” seems to be making a philosophical claim: that human existence can be adequately understood through purely pragmatic or practical categories.

How do questions of meaning, purpose, and value show up in therapeutic work, even when uninvited? A client asks not just “How can I feel less anxious?” but “Why do I feel my life lacks direction?” or “What makes any of this worthwhile?” These questions of meaning arise in therapeutic encounters even in approaches that don’t explicitly address them. When such questions surface, can a counselor respond without engaging philosophical dimensions? And if we attempt to redirect toward purely behavioral or emotional terrain, aren’t we implicitly suggesting that questions of meaning and purpose are separate from genuine wellbeing?

Just as scientists benefit from making their philosophical presuppositions explicit and debatable, might therapeutic practitioners benefit from acknowledging and refining the philosophical commitments that already shape their work?

Read the full paper: Philosophical bias is the one bias that science cannot avoid.

Written by…

Luis de Miranda, philosophical practitioner and associated researcher at the Centre for Research Ethics & Bioethics at Uppsala University.

Andersen, F., Anjum, R. L., & Rocca, E. (2019). Philosophical bias is the one bias that science cannot avoid. eLife, 8, e44929. https://doi.org/10.7554/eLife.44929

We like challenging questions

How to tell if AI has feelings when it is designed to reflect human traits?

Debates about the possibility that artificial systems can develop the capacity for subjective experiences are becoming increasingly common. The topic is indeed fascinating, and the discussion is attracting growing interest from the public. Yet the risk of ideological and imaginative rather than scientific and rational reflections is quite high. Several factors make the idea of engineering subjective experience, such as developing sentient robots, either very attractive or extremely frightening. How can we avoid getting stuck in either of these, in my opinion, equally unfortunate extremes? I believe we need a balanced and “pragmatic” analysis of both the logical conceivability and the technical feasibility of artificial consciousness. This is what we are trying to do at CRB within the framework of the CAVAA project.

In this post, I want to illustrate what I mean by a pragmatic analysis by summarizing an article I recently wrote together with Kathinka Evers. We review five strategies that researchers in the field have developed to address the issue of whether artificial systems may have the capacity for subjective experiences, either positive ones such as pleasure or negative ones such as pain. This issue is challenging enough when it comes to humans and other animals, but it becomes even more difficult for systems whose nature, architecture, and functions are very different from our own. In fact, there is an additional challenge that may be particularly tricky for artificial systems: the gaming problem. The gaming problem arises from the fact that artificial systems are trained with human-generated data to reflect human traits. Functional and behavioral markers of sentience are therefore unreliable and cannot be considered evidence of subjective experience.

We identify five strategies in the literature that may be used to face this challenge. A theory-based strategy starts from selected theories of consciousness to derive relevant indicators of sentience (structural or functional features that indicate conscious capacities), and then checks whether artificial systems exhibit them. A life-based strategy starts from the connection between consciousness and biological life; not to rule out that artificial systems can be conscious, but to argue that they must be alive in some sense in order to possibly be conscious. A brain-based strategy starts from the features of the human brain that we have identified as crucial for consciousness to then check whether artificial systems possess them or similar ones. A consciousness-based strategy searches for other forms of biological consciousness besides human consciousness, to identify what (if anything) is truly indispensable for consciousness and what is not. In this way, one aims to overcome the controversy between the many theories of consciousness and move towards identifying reliable evidence for artificial consciousness. An indicator-based strategy develops a list of indicators, features that we tend to agree characterize conscious experience, and which can be seen as indicative (probabilistic rather than definitive evidence) of the presence of consciousness in artificial systems.

In the article we describe the advantages and disadvantages of the five strategies above. For example, the theory-based strategy has the advantage of a broad base of empirically validated theories, but it is necessarily selective with respect to which theories individual proponents of the strategy draw upon. The life-based approach has the advantage of starting from the well-established fact that all known examples of conscious systems are biological, but it can be interpreted as ruling out, from the outset, the possibility of alternative forms of AI consciousness beyond the biological ones. The brain-based strategy has the advantage of being based on empirical evidence about the brain bases of consciousness. It avoids speculation about hypothetical alternative forms of consciousness, and it is pragmatic in the sense that it translates into specific approaches to testing machine consciousness. However, because the brain-based approach is limited to human-like forms of consciousness, it may lead to overlooking alternative forms of machine consciousness. The consciousness-based strategy has the advantage of avoiding anthropomorphic and anthropocentric temptations to assume that the human form of consciousness is the only possible one. One of the shortcomings of the consciousness-based approach is that it risks addressing a major challenge (identifying AI consciousness) by taking on a possibly even greater challenge (providing a comparative understanding of different forms of consciousness in nature). Finally, the indicator-based strategy has the advantage of relying on what we tend to agree characterizes conscious activity, and of remaining neutral in relation to specific theories of consciousness: it is compatible with different theoretical accounts. Yet it has the drawback that it is developed with reference to biological consciousness, so its relevance and applicability to AI consciousness may be limited.

How can we move forward towards a good strategy for addressing the gaming problem and reliably assess subjective experience in artificial systems? We suggest that the best approach is to combine the different strategies described above. We have two main reasons for this proposal. First, consciousness has different dimensions and levels: combining different strategies increases the chances of covering this complexity. Second, to address the gaming problem, it is crucial to look for as many indicators as possible, from structural to architectural, from functional to indicators related to social and environmental dimensions. 
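
To make the combination proposal concrete, here is a deliberately toy sketch of how evidence from several strategies might be aggregated. Everything in it is my own illustration under stated assumptions: the indicator scores are invented, and the article proposes combining strategies conceptually, not through this particular calculation:

```python
# Toy illustration of combining the five strategies: average the
# (hypothetical) indicator scores within each strategy, then across
# strategies, so that no single strategy dominates the estimate.
# The numbers are invented; the article does not propose this formula.

from statistics import mean

# Hypothetical indicator scores in [0, 1], grouped by the strategy
# that motivates them (names follow the five strategies in the post).
indicator_scores = {
    "theory-based":        [0.4, 0.6],
    "life-based":          [0.1],
    "brain-based":         [0.3, 0.2],
    "consciousness-based": [0.5],
    "indicator-based":     [0.6, 0.5],
}

def combined_evidence(scores: dict[str, list[float]]) -> float:
    """Mean of per-strategy means: probabilistic evidence, not proof."""
    return mean(mean(values) for values in scores.values())

print(f"combined evidence: {combined_evidence(indicator_scores):.2f}")  # 0.38
```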

The question of sentient AI is fascinating, but an important reason why the question engages more and more people is probably that the systems are trained to reflect human traits. It is so easy to imagine AI with feelings! Personally, I find it at least as fascinating to contribute to a scientific and rational approach to this question. If you are interested in reading more, you can find the preprint of our article here: Is it possible to identify phenomenal consciousness in artificial systems in the light of the gaming problem?

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, M., & Evers, K. (2025). Is it possible to identify phenomenal consciousness in artificial systems in the light of the gaming problem? https://doi.org/10.5281/zenodo.17255877

We like challenging questions

Paediatric nurses’ experiences of not being able to provide the best possible care

Inadequate staffing, competing tasks and unexpected events can sometimes make it difficult to provide patients with the best possible care. This can be particularly stressful when caring for children with severe diseases. For nurses, experiencing situations where they cannot provide children with cancer with the best possible care (which means more than just the best possible medical treatment) is an important source of stress.

To provide a basis for better support for paediatric nurses, a research group interviewed 25 nurses at three Swedish paediatric oncology units. The aim of the interview study was to understand what the nurses experienced as particularly important in situations where they felt they had not been able to provide the best possible care, and how they handled the challenges.

The most important concern for the nurses was to uphold the children’s best interests. One thing that could make this difficult was lack of time, but disagreements about the child’s best interests could also interfere with how the nurses wanted to care for the children. The researchers analyze the paediatric nurses’ handling of challenging situations as a juggling of compassion and competing demands. How do you handle a situation where someone is crying and needs comfort, while a chemotherapy machine somewhere on the ward is beeping and no colleagues are available? What do you do when the most urgent thing is not perceived as the most important?

In the analysis of how the nurses juggled compassion and competing demands, the researchers identified five strategies. One strategy was to prioritize: for example, foregoing less urgent tasks, such as providing emotional support. Another strategy was to shift up a gear: multitasking, working faster, skipping lunch. A third strategy was to settle for good enough: when you cannot provide the best possible care, you strive to at least provide good enough care. A fourth strategy was acquiescing in situations where there were different perceptions of the patient’s best interests: for example, continuing to treat a patient because the physician has decided so, even though one believes that prolonged treatment is futile. Regarding this strategy, the nurses requested better dialogue with physicians about difficult patient cases, in order to understand the decisions and avoid having to acquiesce. The fifth and final strategy was pulling together: supporting each other and working as a team with a common goal. Often there was no need to ask for support; colleagues could spontaneously show solidarity by, for example, staying after their shifts to help.

In their conclusion, the authors write that adequate staffing, collegial support and good interprofessional communication can help nurses deal with challenges in the care of children with cancer. Read the article here: Juggling Compassion and Competing Demands: A Grounded Theory Study of Pediatric Nurses’ Experiences.

While reading, it may be worth keeping in mind that the study focuses only on situations where it was felt that the best possible care could not be given. The authors point out that the interviews overflowed with descriptions of excellent care and good communication, as well as how rewarding and joyful the work of a paediatric nurse can be.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Ventovaara P, Af Sandeberg M, Blomgren K, Pergert P. Juggling Compassion and Competing Demands: A Grounded Theory Study of Pediatric Nurses’ Experiences. Journal of Pediatric Hematology/Oncology Nursing. 2025;42(3):76-84. doi:10.1177/27527530251342164

This post in Swedish

Ethics needs empirical input

Interprofessional collaboration in hospital care of patients who self-harm

Patients who are treated in hospital for self-harm can sometimes arouse strong emotions in the staff. At the same time, the patients may be dissatisfied with their care, which sometimes involves restrictions and safety measures to prevent self-harm. In addition to such tensions between patients and staff, the healthcare staff is divided into different professions with their own roles and responsibilities. These professional groups may have different perspectives and thus conflicting opinions about what care individual patients should receive. In order for patients to receive good and cohesive care, good interprofessional collaboration is therefore required between, for example, nurses and psychiatrists.

A Swedish interview study examined how nurses and psychiatrists think about their responsibility and autonomy in relation to each other in different situations on the ward. In general, they considered themselves autonomous: they could exercise their professional responsibility without being influenced by other colleagues. Both groups agreed that psychiatrists had the ultimate responsibility for the patients’ care, and it emerged that the nurses saw themselves as the patients’ advocates. If decisions made by the psychiatrist went against the patient’s wishes, they saw it as their task to convey the patient’s views, even if they did not agree with them.

However, the scope for action could sometimes be affected by decisions made by colleagues. For example, nurses could feel that their scope for taking responsibility for a patient was reduced if colleagues had already isolated the patient. In other cases, colleagues’ decisions could increase one’s responsibility, for example if decisions based on ignorance about a patient risked leading to new self-harm that the nurses had to deal with.

An important theme in the interviews was how one could sometimes relinquish some of one’s professional autonomy in order to achieve interprofessional collaboration. The interviewees agreed that one ultimately had to stand united behind decisions and set aside one’s own agendas and opinions. Consensus was considered essential and was sought even if it meant reducing one’s own autonomy and power. Consensus was achieved through discussions in the team, where participants humbly respected each other’s professional roles, knowledge and experience.

In their conclusion, the authors emphasize that the study shows how nurses and psychiatrists are prepared to set aside hierarchies and their own autonomy in order to achieve collaboration and shared responsibility in the care of patients who self-harm. Since this has not been visible in previous studies, they suggest that attitudes towards, and skills in, interprofessional collaboration may have improved. As such collaboration is essential for good cohesive care, it is important to continue to support these attitudes and skills.

If you want to see all the results from the interview study and read the authors’ discussion about responsibility and autonomy in interprofessional collaboration, you can find the article here: Navigating consensus, interprofessional collaboration between nurses and psychiatrists in hospital care for patients with deliberate self-harm.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Löfström, E. et al. (2025) ‘Navigating consensus, interprofessional collaboration between nurses and psychiatrists in hospital care for patients with deliberate self-harm’, Journal of Interprofessional Care, 39(3), pp. 479–486. doi: 10.1080/13561820.2025.2482691

This post in Swedish

We recommend readings

What is philosophical health and can it be mapped?

Philosophers such as Socrates and philosophical schools such as Stoicism have had a certain influence on psychology and psychotherapy, and thus also on human health. But if philosophy can support human health via psychology, can it not support health more directly, on its own? A growing trend today is to offer philosophical conversations as a form of philosophical practice that can support human health in existential dimensions. The trend to offer philosophical conversations is linked to a concept of health that is not only about physical and mental health, and which does not understand health as merely the absence of disease: philosophical health.

What does it mean to talk about philosophical health? Given all the health ideals that already affect self-esteem, should we now also be influenced by philosophical health ideals that make our lives feel hopelessly ill-conceived and pointless? No, on the contrary, the stress that human ideals and norms create can be an important topic to philosophize about, in peace and quiet. Instead of being burdened by more ideas about how we should live, instead of giving the appearance of fulfilling the ideals, we can freely examine this underlying stress: the daily feeling of being compelled to live the way we imagine we should. No wonder philosophical practice is a growing trend. Finally, we get time to think openly about what other trends usually make us hide: ourselves, when we do not identify with the trends, the norms and the ideals.

Philosophizing sounds heavy and demanding but can actually be the exact opposite. I have written an essay about how philosophical self-examination, in its best moments, can lighten the mind by unexpectedly illuminating our many tacit demands and expectations. Unfortunately, the essay is not published with open access, but here is the link: The Wisdom of Intellectual Asceticism.

A colleague at CRB, Luis de Miranda, has long worked with philosophical health both as a practitioner and researcher. He emphasizes that human health also includes existential dimensions such as harmony, meaning and purpose in life, and that in order to support wellbeing in these intimately universal dimensions, people also need opportunities to reflect on life. In a new article (written with six co-authors), he develops the concept in the form of a research tool that could map philosophical health: a philosophical health compass. The idea is that the compass will make it possible to study philosophical health in more quantitative ways, for example making comparisons between populations and assessing the effects of different forms of philosophical practice.

The compass consists of a questionnaire. Respondents are asked to consider statements about 6 existential dimensions of life, revolving around the body, the self, belonging, possibilities in life, purpose, and finally, their own philosophical reflection. Each dimension is explored through 8 statements. Respondents indicate on a 5-point scale how well the total of 48 statements apply to them.
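
The structure is easy to picture in code. Here is a minimal sketch of how responses might be scored, assuming each dimension is simply summarized as the mean of its eight ratings; the article may well aggregate the items differently:

```python
# Minimal sketch of scoring the compass: 6 dimensions x 8 statements,
# each rated on a 5-point scale. The dimension names follow the blog
# post; scoring each dimension as the mean of its eight ratings is my
# assumption, not necessarily the authors' published method.

DIMENSIONS = ["body", "self", "belonging", "possibilities", "purpose", "reflection"]
ITEMS_PER_DIMENSION = 8  # 6 x 8 = 48 statements in total

def dimension_scores(responses: list[int]) -> dict[str, float]:
    """Average the eight 1-5 ratings belonging to each dimension."""
    if len(responses) != len(DIMENSIONS) * ITEMS_PER_DIMENSION:
        raise ValueError("expected 48 responses")
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("each rating must lie on the 5-point scale")
    return {
        name: sum(responses[i * ITEMS_PER_DIMENSION:(i + 1) * ITEMS_PER_DIMENSION])
              / ITEMS_PER_DIMENSION
        for i, name in enumerate(DIMENSIONS)
    }

# Example: a respondent who answers 4 on every statement scores 4.0 everywhere
print(dimension_scores([4] * 48))
```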

I cannot judge how well the 48 statements are chosen, or how easy it is for people to take a position on them, but the statements are more concrete than you might think, and it will be exciting to see what happens when the compass is put to the test. Will it be able to measure the wisdom of the crowd? Can philosophically relevant differences and changes be mapped? If you want to take a closer look at the philosophical health compass, you can find the article here: The philosophical health compass: A new and comprehensive assessment tool for researching existential dimensions of wellbeing.

Luis de Miranda warns that the compass may risk not only supporting philosophical health but also undermining it, if the compass is interpreted as an ideal that determines the qualities that distinguish good philosophical health. Using the compass wisely requires great openness and sensitivity, he emphasizes. Yes, hopefully the compass will raise many philosophical questions about what philosophical health is and how it can be studied. For what is the great openness and sensitivity that Luis de Miranda emphasizes, if not philosophical inquisitiveness itself?

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Pär Segerdahl. The Wisdom of Intellectual Asceticism. Common Knowledge 2025; 31(1): 74–88. https://doi.org/10.1215/0961754X-11580693

de Miranda, L., Ingvolstad Malmgren, C., Carroll, J. E., Gould, C. S., King, R., Funke, C., & Arslan, S. (2025). The philosophical health compass: A new and comprehensive assessment tool for researching existential dimensions of wellbeing. Methodological Innovations, 0(0). https://doi.org/10.1177/20597991251352420

This post in Swedish

Thinking about thinking

What do precision medicine and AI mean for the relationship between doctor and patient?

In a sense, all care strives to be tailored to the individual patient. But the technical possibilities for obtaining large amounts of biological data from individuals have increased so significantly that people today speak of a paradigm shift and a new way of working with disease: precision medicine. Instead of giving all patients with a certain type of cancer the same standard treatment, for example, it is possible to map unique genetic changes in individual patients and determine which of several alternative treatments is likely to work best on the individual patient’s tumor. Other types of individual biological data can also be produced to identify the right treatment for the patient and avoid unnecessary side effects.
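
The decision logic just described can be pictured as a lookup from a tumor’s molecular profile to candidate treatments. The sketch below is a deliberately simplified illustration of the idea, using a few textbook colorectal cancer biomarkers I have chosen myself; it is not taken from the study discussed below and is in no way clinical guidance:

```python
# Toy illustration of precision medicine's decision logic: map a
# tumor's molecular profile to candidate treatment strategies. The
# biomarker-treatment pairs are simplified textbook examples chosen
# for illustration only; they are not from the study discussed here
# and must not be read as clinical guidance.

ILLUSTRATIVE_RULES = {
    "MSI-high":            "consider immune checkpoint inhibition",
    "KRAS/NRAS wild-type": "anti-EGFR therapy may be an option",
    "KRAS mutated":        "anti-EGFR therapy is unlikely to help",
}

def candidate_strategies(tumor_profile: list[str]) -> list[str]:
    """Return the illustrative strategies matching a tumor's biomarkers."""
    return [ILLUSTRATIVE_RULES[marker]
            for marker in tumor_profile if marker in ILLUSTRATIVE_RULES]

# Example: a hypothetical tumor profile
print(candidate_strategies(["MSI-high", "KRAS mutated"]))
```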

Of course, AI will play an increasingly important role in precision medicine. It can help identify relevant patterns in the large amount of biological data and provide support for precision medicine decisions about the treatment of individual patients. But what can all this mean for the relationship between doctor and patient?

The question is examined in a research article in BMC Medical Informatics and Decision Making, with Mirko Ancillotti as lead author. The researchers interviewed ten physicians from six European countries, all of whom worked with patients with colorectal cancer. In the interviews, the physicians highlighted, among other things, that although it is possible to compile large amounts of individual biological data, it is still difficult to tailor treatments for colorectal cancer because only a few therapies are available. The physicians also discussed the difficulties of distinguishing between experimental and conventional treatments when testing new ways to treat colorectal cancer in precision medicine.

Furthermore, the physicians generally viewed AI as a valuable future partner in the care of patients with colorectal cancer. AI can compile large amounts of data from different sources and provide new insights, make actionable recommendations and support less experienced physicians, they said in the interviews. At the same time, issues of trust were evident. For example, the physicians wondered how they can best rely on AI results when they do not know how the AI system arrived at them. They also discussed responsibility. Most said that even when AI is used, the physicians and the team are responsible for care decisions. However, they said that responsibility can sometimes be shared with AI developers and with those who decide on the use of AI in healthcare.

Finally, the physicians described challenges in communicating with patients. How do you explain the difference between experimental and conventional treatment in precision medicine? How do you explain how AI works and how it helps to tailor the patient’s treatment? How do you avoid hype and overconfidence in “new” treatments, and how do you explain that precision medicine can also mean that a patient is not offered a certain treatment?

If you would like to see more results and the authors’ discussion, you can find the article here: Exploring doctors’ perspectives on precision medicine and AI in colorectal cancer: opportunities and challenges for the doctor-patient relationship.

Some of the study’s conclusions are that good integration of AI and precision medicine requires clearer regulation and ethical guidelines, and that physicians need support to meet the challenge of explaining how AI is used to tailor patient treatment. It is also important that AI remains an auxiliary tool and not an independent decision-maker. Otherwise, patients’ trust can be eroded, as can physicians’ autonomy, the authors argue.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Ancillotti, M., Grauman, Å., Veldwijk, J. et al. Exploring doctors’ perspectives on precision medicine and AI in colorectal cancer: opportunities and challenges for the doctor-patient relationship. BMC Med Inform Decis Mak 25, 283 (2025). https://doi.org/10.1186/s12911-025-03134-0

This post in Swedish

We have a clinical perspective

Mind the gap between ethics in principle and ethics in practice

When ethical dilemmas are discussed using case descriptions or vignettes, we tend to imagine the cases as taken from reality. Of course, the vignettes are usually invented and the descriptions adapted to illustrate ethical principles, but when we discuss the cases, we tend to have the attitude that they are real. Or at least real possibilities: “What should we do if we encounter a case like this?”

Discussing ethical cases is an extremely good exercise in ethical reasoning and an important part of the education and training of healthcare professionals. But sometimes we may also need to keep in mind that these discussions are adapted exercises in the ethics gym, so to speak. Reality rarely delivers separate dilemmas that can be handled one by one. Often, life is rather a continuous flow of more or less clearly experienced challenges that change faster than we can describe them. We cannot always say what the problem situation actually looks like. Therefore, it may sometimes be wise not to decide or act immediately, but to wait for the situation to take a different and perhaps clearer form. And then the ethical problem may in practice have been partially resolved, or become more manageable, or become obsolete.

Does that sound irresponsible? Judging by two texts that I want to recommend today, responsible healthcare professionals may on the contrary experience a friction between ethics in principle and ethics in practice, and feel that it would be unethical not to take this friction seriously. The first text is an essay by Joar Björk (who is both an ethicist and a palliative care physician). In the journal Palliative Care and Social Practice, he discusses a fictional patient case. A man with disseminated prostate cancer is cared for by a palliative care team. In the vignette, the man has previously expressed that he wants complete knowledge of his situation and what his death might look like. But when the team has time to talk to him, he suddenly changes his mind and says that he does not want to know anything, and that the issue should not be raised again. How should the team act now?

According to Joar Björk, the principle-based ethical standard recommendation here would probably be the following. Respect for the patient’s autonomy requires that the team not try to carry out the conversation. Only if there is good reason to believe that a conversation can have great medical benefit can one consider trying to inform the patient in some way.

Note that the principle-based recommendation treats the situation that has arisen as a separate case: as a ready-made vignette that cannot be changed. But in practice, palliative teams care for their patients continuously over a long time: so much is constantly changing. Of course, they are aware that they cannot impose information on patients who state that they do not want it, as that would violate the principle of autonomy. But in practice, the unexpected situation is an unclear ethical challenge for the care team. What really happened, why did he change his mind? Does the man suddenly refuse to accept his situation and the proximity of death? Maybe the team should cautiously try to talk more to him rather than less? How can the team plan the man’s care – maybe a hospital bed will soon be needed in his home – if they are not allowed to talk to him about his situation? Since palliative care teams develop good listening and communication skills, the situation may very well soon look different. Everything changes!

Joar Björk’s reflections give the reader an idea of how ethical challenges in practice take on different forms than in the vignettes that are so important in ethics teaching and training. How does he deal with the gap between ethics in principle and ethics in practice? As I understand him, Joar Björk does not advocate any definite view on how to proceed. But he tries to formulate what he calls a palliative care ethos, which could provide better ethical guidance in cases such as the one just described. Several authors working in palliative care have attempted to formulate aspects of such a care ethos. In his essay, Joar Björk summarizes their efforts in 11 points. What previously sounded passive and irresponsible – to wait and see – appears in Joar Björk’s list in the form of words of wisdom such as “Everything changes” and “Adaptation and improvisation.”

Can healthcare professionals then find better ethical guidance in such practical attitudes than in well-established bioethical principles? Joar Björk tentatively discusses how the 11 points taken together could provide guidance that is more sensitive to the practical contexts of palliative care. I myself wonder, however – but I do not know – whether it would not be wise to allow the gap between ethics in principle and ethics in practice to be as wide as it is. The 11 points probably have their origin in an ethical care practice that already functions as the points describe it. The practice works that way without healthcare professionals using the points as some kind of soft guidance. Joar Björk thus describes a palliative care ethics in practice; a description that can help us think more clearly about the differences between the two forms of ethics. Reflecting on the 11 points can, for example, make healthcare professionals more aware of the specifics of their practice, so that they do not wrongly blame themselves if they do not always relate to situations that arise as if they were separate cases that illustrate ethical principles.

Perhaps it is impudent of me to suggest this possibility in a blog post that recommends reading, but Joar Björk’s reflections are so thought-provoking that I cannot help it. Read his essay here: Ethical reflection: The palliative care ethos and patients who refuse information.

If you find Joar Björk’s reflections interesting, I would also like to mention a new book that reflects on the gap between ethics in principle and ethics in practice. The book is written by Stephen Scher and Kasia Kozlowska and is published with open access. You can find it here: Revitalizing Health Care Ethics. The Clinician’s Voice.

So, I think it is difficult to see clearly the difference between ethics in principle and ethics in practice. We tend to transfer characteristics from one to the other, and we become dissatisfied when this does not work. The book by Scher and Kozlowska therefore uses the warning “Mind the gap” to draw attention to the difference. If we mind the gap between the platform and the train – if we do not imagine the train as an overly mobile platform, and the platform as an overly stationary train – then perhaps the two forms of ethics can accept and find better support in each other. More often than we think, we are dissatisfied for the simple reason that we fail to keep different things apart.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Björk J. Ethical reflection: The palliative care ethos and patients who refuse information. Palliative Care and Social Practice. 2025;19. doi:10.1177/26323524251355287

Stephen Scher, Kasia Kozlowska. 2025. Revitalizing Health Care Ethics. The Clinician’s Voice. Palgrave Macmillan Cham.

This post in Swedish

We recommend readings
