A research blog from the Centre for Research Ethics & Bioethics (CRB)


When nurses become researchers: ethical challenges in doctoral supervision

Nurses who choose to pursue a doctorate and conduct research in the nursing and health sciences contribute greatly to the development of healthcare: their dissertation projects are often carried out in collaboration with healthcare providers. However, doctoral education in the field presents challenges for both doctoral students and their supervisors. One challenge is that many combine research with part-time work in healthcare. Combining two such important and demanding professions is difficult, especially when both the doctoral student and the supervisor do so.

To get a clearer picture of these challenges and of possible strategies for dealing with them, a systematic review of English-language studies on nursing doctoral supervision was conducted. The review is authored by, among others, Tove Godskesen and Stefan Eriksson, and it can hopefully contribute to improved supervision of nurses who choose to become researchers.

One challenge described in the literature concerns the transition from a professional life with clearly defined tasks to research, which is conducted far more independently. Doctoral students may worry that supervision is unclear or difficult to access; at the same time, supervisors may feel that doctoral students have a responsibility of their own to seek support and feedback when necessary. Another challenge has already been indicated: supervisors who work part-time in healthcare may have difficulty maintaining a consistent meeting schedule with their doctoral students to provide feedback. Difficulties were also reported when the number of doctoral students is high relative to the number of available supervisors. A further challenge is that doctoral students are not always prepared for academic tasks such as writing scientific texts and applying for grants. The doctoral student’s first study can therefore be particularly time-consuming to write and to supervise.

Strategies for dealing with these challenges include clear agreements from the beginning about what the doctoral student and supervisor can expect from each other, perhaps in the form of written agreements and checklists. Training doctoral students for various academic tasks and roles was also mentioned, such as instruction in grant writing, academic publishing and research methodology. However, supervisors also need education and training to function well in their role in relation to their doctoral students. Another strategy reported in the literature was mentoring to initiate doctoral students into an academic environment.

In their discussion, the authors suggest, among other things, that the principles of bioethics (autonomy, beneficence, non-maleficence, justice) can be used as a framework for dealing with ethical challenges when supervising doctoral students in the nursing and health sciences. Ethically well-thought-out supervision is a foundation for successful doctoral education in the field, they write in their conclusion. Read the article here: Ethical Challenges and Strategies in Nursing Doctoral Supervision: A Systematic Mixed-Method Review.

The research seminar does not seem to be mentioned in the literature, I personally note. Regularly participating in a research seminar is an important part of doctoral education and effectively initiates the doctoral student into an academic culture. Not least, the seminar enables feedback from other doctoral students and from senior researchers other than the supervisors. A large group of doctoral students can actually be an advantage for the seminar: my experience is that the seminar becomes livelier with a larger proportion of doctoral students, who find it easier to make themselves heard.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Godskesen, T., M. Grandahl, A. N. Hagen, and S. Eriksson. 2025. “Ethical Challenges and Strategies in Nursing Doctoral Supervision: A Systematic Mixed-Method Review.” Journal of Advanced Nursing 1–18. https://doi.org/10.1111/jan.70298

This post in Swedish

We recommend readings

Conditions for studies of medicine safety during breastfeeding

Reliable information on medicine safety during breastfeeding is lacking for many medications. To avoid the risk of harming the baby, mothers taking medication for various diseases may be advised by their doctor to discontinue the medication while breastfeeding (or the mother may choose to discontinue it herself). Alternatively, she may be advised to continue the medication but refrain from breastfeeding. Both options are unfortunate: the mother needs the prescribed medication, and breastfeeding has benefits for both the baby and the mother.

Why is there a lack of reliable information on medicine safety during breastfeeding? This is because breastfeeding mothers are usually excluded from clinical studies. Therefore, there is limited knowledge of the extent to which different drugs are transferred to the baby via breast milk. The lack of reliable safety information applies to both already approved and new drugs. However, since many mothers take medications while breastfeeding, it should be possible to establish lactation studies that systematically provide scientific evidence for better safety information. Which drugs can be used during breastfeeding?

A new article with Mats G. Hansson as lead author and Erica Sundell as one of the co-authors describes how, within the framework of current regulatory requirements, two breastfeeding studies have been started that can help solve the dilemma that breastfeeding mothers and their doctors often face. One study concerns a drug for diabetes, the other a drug for inflammation and rheumatic disorders. The studies are part of the European project ConcePTION, which will produce evidence on drug safety during pregnancy and breastfeeding. Breast milk samples from the mother and blood samples (plasma) from the mother and child are analyzed to measure how much of the drugs are transferred to the child during breastfeeding. The samples are stored in a biobank for future research, and the studies thus contribute to creating an infrastructure for lactation studies of medicine safety.

Recruitment of research participants and sample collection started in the spring of 2024 and will end at the turn of the year 2025/2026. The purpose of the article is to use the experiences from setting up the two studies as a template for initiating clinical lactation studies. What should be considered? What are the conditions for this type of research? The article concisely describes relevant conditions and procedures for informed consent, sampling, transport and storage of samples, and laboratory analysis. The article also discusses the different conditions for studies of already approved drugs and for new drugs.

The article is important reading for researchers and others who can in one way or another contribute to initiating studies for better information on medicine safety during breastfeeding. Because it so concisely describes the conditions for new studies, the article is also interesting as a concrete example of how problems can be solved by starting new research.

Read the article here: Setting up mother–infant pair lactation studies with biobanking for research according to regulatory requirements.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Hansson M, Björkgren I, Svedenkrans J, et al. Setting up mother–infant pair lactation studies with biobanking for research according to regulatory requirements. British Journal of Clinical Pharmacology. 2025; 1-6. https://doi.org/10.1002/bcp.70201

This post in Swedish

Part of international collaborations

The importance of letting things take their time

To be an ethicist and philosopher is to be an advocate for time: “Wait, we need time to think this through.” This idea of letting things take their time rarely gains traction in society. The impatience starts already in school, where the focus is often on calculating quickly and reciting as many words as possible in one minute, and it continues at the societal level.

A good example is technological development, which is moving faster than ever. Humans have always used more or less advanced and functional technology, always searching for better ways to solve problems. With the Industrial Revolution, things began to accelerate, and since then, the pace has only increased. We got factories, car traffic, air travel, nuclear power, genetically modified crops, and prenatal diagnostics. We got typewriters, computers, and telephones. We got different ways to play and reproduce music. Now we have artificial intelligence (AI), which it is often said will revolutionize most parts of society.

The development and implementation of AI is progressing at an unparalleled speed. Various government authorities use AI, and healthcare allows AI tools to take on more and more tasks. Schools and universities wrestle with the question of how AI should be used by students, teachers, and researchers. Teachers have been left at a loss because AI established itself so quickly, and different teachers draw different boundaries for what counts as cheating, which leaves students greatly uncertain about what applies. People use AI for everything from planning their day to getting help with mental health issues. AI is used as a relationship expert, but also as the very object of romantic relationships or friendships. Today, there are AI systems that can call elderly and sick people to ask how they are feeling, whether they have taken their medication, and perhaps whether they have had any social contact recently.

As with all technology, AI has advantages and disadvantages and can be used in both good and bad ways. It can be used to improve life for people and the environment, but it can also harm them. People and societies can do things better and more easily with AI, yet it can also have negative consequences such as environmental damage, unemployment, and discrimination.

Researchers in the Netherlands have discussed the problems that arise with new technology in terms of “social experiments.” They argue that, unlike new pharmaceuticals, which undergo careful testing before they are approved, new technologies are not tested in such a structured way before being introduced into society.

The EU has introduced a basic legal framework for AI (the EU AI Act), which can be seen as an attempt to introduce the new technology in a way that is less experimental on people and societies: more “responsible” and “trustworthy” AI. The new law has been criticized by some European tech companies, who claim that it will make us fall behind countries without such regulations, such as the USA and China. Doing things in a thoughtful and ethically sound way is apparently considered less important than getting the technology in place quickly. Caution is instead seen as risky, which says something about the notion of risk that currently drives a pace of development so rapid that perhaps not even the technology can deliver what the market expects.

Just as with previous important technologies, we need to think things through beforehand. If AI is to help us without harmful consequences, its development must be allowed to take its time. This is even more important with AI than with previous technologies, because AI has an unusually large potential to affect our lives. Research in ethics points to several problems related to justice and trust. One problem is that we cannot explain why AI in, for example, healthcare reaches a certain conclusion about a specific individual. With previous technology, some human being – if not the user, then at least the developer – has always been able to explain the causality in the system. Can we trust a technology in healthcare that we cannot control or explain in essential respects?

There are technology optimists and technology pessimists. Some are enthusiastic about new technology and believe it is the solution to all our problems. Others think the precautionary principle should apply to all new technology and do not want to accept any risks at all. Instead, we should seek the middle way: letting things take their time so that they can show their real possibilities, beyond the optimists’ and pessimists’ preconceived notions. Advocating an ethical approach is not about stopping development but about slowing down the process. We need time to reflect on where it might be appropriate to introduce AI and where we should refrain from using the technology. We should also consider how the AI we do choose to use can be introduced in a good way, so that we have time to detect and minimize risks of injustice, discrimination, and reduced trust.

It is not easy and not popular to be the one who says, “Wait, we need to think this through.” Yet it is so important that we take the time. We must think ahead so that things do not go wrong when they could so easily have gone right. It might be worth considering what could happen if we learned in school that it is more important to do things right than to do them quickly.

Jessica Nihlén Fahlquist

Written by…

Jessica Nihlén Fahlquist, senior lecturer in biomedical ethics and associate professor in practical philosophy at the Centre for Research Ethics & Bioethics.

This post in Swedish

Approaching future issues