A blog from the Centre for Research Ethics & Bioethics (CRB)


Time to forget time

A theme in recent blog posts has been our need for time. Patients need time to be listened to, time to ask questions, time to decide whether they want to be included in clinical studies, and time for much more. Healthcare workers need time to understand the patients’ situation, time to find solutions to the individual problems of patients suffering from rheumatoid arthritis, and time for much more. This theme, our need for time, got me thinking about what is so great about time.

It could be tempting to conduct time and motion studies of our need for time. How much time does the patient need to spend with the doctor to feel listened to? How much time does the nurse need to spend with the patient to get the experience of providing good care? The problem with such studies is that they destroy the greatness of time. To give the patient or the nurse the measured time, prescribed by the time study, is to glance at the clock. Would you feel listened to if the person you were talking to had a stopwatch hanging around their neck? Would you be a good listener yourself if you waited for the alarm signal from the stopwatch hanging around your neck?

Time studies do not answer our question of what we need, when we need time. If it were really a certain amount of time we needed, say fifteen minutes, then it should make no difference if a ticking stopwatch hung around the neck. But it makes a difference! The stopwatch steals our time. So, what is so great about time?

I think the answer is well on its way to revealing itself, precisely because we give it time to come at its own pace. What we need when we need time, is to forget time! That is the great thing about having time. That we no longer think about it.

Again, it can be tempting to conduct time studies. How much time does the patient and the doctor need to forget time? Again, time studies ruin the greatness of time. How? They frame everything in time. They force us to think about time, even when the point is to forget it.

Our need for time is not about measured quantities of time, but about the timeless quality of not thinking about time. Thinking about time steals time from us. Since it is not really about time, it does not have to take that long.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

We challenge habits of thought

Moral stress: what does the COVID-19 pandemic teach us about the concept?

Newly formed concepts can sometimes satisfy such urgent linguistic needs that they immediately seem completely self-evident. Moral stress is probably such a concept. It is not many decades old. Nevertheless, the concept probably appeared from the beginning as an all-too-familiar reality for many healthcare workers.

An interesting aspect of these immediately self-evident concepts is that they effortlessly find their own paths through language, despite our efforts to define the right path. They are simply too striking in living spoken language to be captured in the more rigid written language of definitions. However, the first definition of moral stress was fairly straightforward. This is how Andrew Jameton defined the concept:

“Moral distress arises when one knows the right thing to do, but institutional constraints make it nearly impossible to pursue the right course of action.”

Although the definition is not complicated in the written language, it still prevents the concept from speaking freely, as it wants to. For, do we not spontaneously want to talk about moral stress in other situations as well? For example, in situations where two different actions can be perceived as the right ones, but if we choose one action it excludes the other? Or in situations where something other than “institutional constraints” prevents the right course of action? Perhaps a sudden increase in the number of patients.

Here is a later definition of moral stress, which leaves more open (by Kälvemark, Höglund and Hansson):

“Traditional negative stress symptoms that occur due to situations that involve an ethical dimension where the health care provider feels he/she is not able to preserve all interests at stake.”

This definition allows the concept to speak more freely, in more situations than the first, although it is possibly slightly more complicated in the written language. That is of course no objection. A definition has other functions than the concept being defined, it does not have to be catchy like a song chorus. But if we compare the definitions, we can notice how both express the authors’ ideas about morality, and thus about moral stress. In the first definition, the author has the idea that morality is a matter of conscience and that moral stress occurs when institutional constraints of the profession prevent the practitioner from acting as conscience demands. Roughly. In the second definition, the authors have the idea that morality is rather a kind of balancing of different ethical values and interests and that moral stress arises in situations that prevent the trade-offs from being realized. Roughly.

Why do I dwell on the written and intellectual aspects of the definitions, even though it is hardly an objection to a definition? It has to do with the relationship between our words and our ideas about our words. Successful words find their own paths in language despite our ideas about the path. In other words: despite our definitions. Jameton both coined and defined moral (di)stress, but the concept almost immediately stood, and walked, on its own feet. I simply want to remind you that spoken-language spontaneity can have its own authority, its own grounding in reality, even when it comes to newly formed concepts introduced through definitions.

An important reason why the newly formed concept of moral stress caught on so immediately is probably that it put into words pressing problems for healthcare workers. Issues that needed to be noticed, discussed and dealt with. One way to develop the definition of moral stress can therefore be to listen to how healthcare workers spontaneously use the concept about situations they themselves have experienced.

A study in BMC Medical Ethics does just this. Together with three co-authors, Martina E. Gustavsson investigated how Swedish healthcare workers (assistants, nurses, doctors, etc.) described moral stress during the COVID-19 pandemic. After answering a number of questions, the participants were asked to describe, in a free text response, situations during the pandemic in which they experienced moral stress. These free text answers were conceptually analyzed with the aim of formulating a refined definition of moral stress.

An overarching theme in the free text responses turned out to be: being prevented from providing good care to needy patients. The healthcare workers spoke of a large number of obstacles. They perceived problems that needed to be solved, but felt that they were not taken seriously, that they were inadequate or forced to act outside their areas of expertise. What stood in the way of good care? The participants in the study spoke, among other things, about unusual conditions for decision-making during the pandemic, about tensions in the work team (such as colleagues who did not dare to go to work for fear of being infected), about substandard communication with the organizational management. All this created moral stress.

But they also talked about the pandemic itself as an obstacle. The prioritization of COVID-19 patients meant that other patients received worse care and were exposed to the risk of infection. The work was also hindered by a lack of resources, such as personal protective equipment, while the protective equipment prevented staff from comforting worried patients. The visiting restrictions also forced staff to act as guards against patients’ relatives and isolate infected patients from their children and partners. Finally, the pandemic prevented good end-of-life care. This too was morally stressful.

How can the healthcare workers’ free text responses justify a refined definition of moral stress? Martina E. Gustavsson and co-authors consider the definition above by Kälvemark, Höglund and Hansson a good definition to start from. But one type of situation that the participants in the study described probably falls outside that definition, namely the situation of not being taken seriously, of feeling inadequate and powerless. The study therefore proposes the following definition, which includes these situations:

“Moral stress is the kind of stress that arises when confronted with a moral challenge, a situation in which it is difficult to resolve a moral problem and in which it is difficult to act, or feeling insufficient when you act, in accordance with your own moral values.”

Here, too, one can sense an idea of morality, and thus of moral stress. The authors think of morality as being about solving moral problems, and that moral stress arises when this endeavor encounters challenges, or when one feels inadequate in the attempts to solve the problems. The definition can be considered a refined idea of what moral stress is. It describes more precisely the relevant situations where healthcare workers spontaneously want to talk about moral stress.

Obviously, we can learn a lot about the concept of moral stress from the experience of the COVID-19 pandemic. Read the study here, which contains poignant descriptions of morally stressful situations during the pandemic: “Being prevented from providing good care: a conceptual analysis of moral stress among health care workers during the COVID-19 pandemic.”

Finally, I would like to mention two general lessons about language, which in my view the study highlights. The first is that we can learn a lot about our concepts through the difficulties of defining them. The study took this “definition resistance” seriously by listening to how healthcare workers spontaneously talk about moral stress. This created friction that helped refine the definition. The second lesson is that we often use words despite our ideas about what the words mean or should mean. Spoken language spontaneity has a natural weight and authority that we easily overlook, but from which we have much to learn – as in this empirical study.

Pär Segerdahl


Gustavsson, M.E., von Schreeb, J., Arnberg, F.K. et al. “Being prevented from providing good care: a conceptual analysis of moral stress among health care workers during the COVID-19 pandemic”. BMC Med Ethics 24, 110 (2023). https://doi.org/10.1186/s12910-023-00993-y


Two orientations of philosophical thought

There are many philosophical movements and several ways of dividing philosophy. I would like to draw attention to two orientations of philosophical thought that are never usually mentioned, but which I believe characterize philosophical thinking. Although unnamed, the two orientations are so different from each other that they can make philosophers roll their eyes when they run into each other: “What kind of nonsense is this?”

I am not referring to the division between analytic and continental philosophy, which is a known source of rolling eyes. I am referring to a division that rather applies to ourselves as thinking beings: our innermost philosophical disposition, so to speak.

So do not think of famous philosophers or of the philosophical movements they are considered to represent. Now it is just about ourselves. Think about what it is like to discuss a question that is felt to be urgent, for example: “Why has humanity failed to create a peaceful world?” How do we usually react to such questions? I dare say many of us wish we could answer them. This is the nature of a question. A question demands an answer, just as a greeting demands a greeting back. And since the answer to an important question should have the same urgency as the question, it feels very important to answer. This has the consequence that the discussion of the question soon turns into a discussion of several different answers, which compete with each other. Perhaps a few particularly committed participants argue among themselves for and against increasingly complicated answers at a speed that leaves the others behind. It feels humiliating to sit there and not be able to propose a single answer with accompanying arguments that it must be the right answer.

Many of us are probably also familiar with how afterwards, when we have time to think in peace and quiet, we can suddenly see possibilities that never occurred to us during the discussion: “So obvious! Why didn’t I see that?” When we are given time to think for ourselves, we are free from a limitation that governed the discussion. What limitation? The limitation that the question must be answered and the answer defended as the correct answer. Why were we so stimulated to find the answer to the question and defend it against the competitors? Was it a good question that gave rise to all these divergent answers, as if someone had thrown a match into a stockpile of fireworks? Already in its wording, the question blames humanity for not being able to resolve its conflicts. Is this not already a conflict? The question pits us against humanity, and when the answers and arguments start to hail, the debaters are also pitted against each other. The discussion becomes yet another example of our tendency to end up on different sides in conflicts.

If we notice how our noble philosophical discussion about world peace threatens to degenerate into the very strife we debate, and we want to seek the answer in a more responsible way, then perhaps we decide to review the answers and arguments that have been piled up. We classify them as positions and schools of thought and practice identifying them to avoid well-known fallacies, which are classified with equal philosophical rigor. This hard work, we think, will eventually lead us to the definitively correct answer. But the focus is still on the answers and the arguments, rather than on the question that ignited the entire discussion. The discussion continues to exemplify our tendency toward conflict, but now in terms of a rigorous philosophical classification of the various known positions on the issue.

The difference between the two orientations concerns where we place our emphasis: on the question or on the answer? Either we feel the question propels us, like a starting shot that makes us run for the answer at the finish line. The answer may be in terms of the human mind, the structure of society, our evolutionary history and much more. Or we feel the question paralyzes us, like an electric shock that numbs us so that we have to sit down at the starting line and examine the question. What already happened in the question? Am I not also humanity? Who am I to ask the question? Does not the question make a false distinction between me and humanity, similar to those made in all conflicts? Is that why I cannot discuss the question without becoming an example of the problem myself?

Consider the two philosophical orientations side by side. One of them experiences the question as a stimulating starting signal and runs for the answer. The other experiences the question as a numbing electric shock and remains seated at the starting line. It cannot surprise us that these two philosophical dispositions have difficulty understanding each other. If you emphasize the answer and run for it, stopping at the question seems not only irresponsible, but also unsportsmanlike and inhibiting. Is it forbidden to seek the right answer to urgent questions? If, on the other hand, you emphasize the question and stay seated at the starting line, it seems rash to run for the answer, even when the race follows a rigorously ordered pattern. Did not the starting shot go off too early so that the race should be declared invalid, even though it otherwise went according to the strict rules of the art?

When we consider the two orientations side by side, we can note another difference. Emphasizing the answer directs our attention to the subject of the question: “humanity throughout history.” Emphasizing the question directs our attention to the subject who asks it: to myself. Again, it can hardly surprise us that the two orientations have difficulty understanding each other. Both may seem to be avoiding the subject!

Here one might want to object that even this distinction between two philosophical orientations places people on different sides of a conflict. But maybe we can recognize ourselves in both tendencies, although we lean more in one direction? Is not philosophical thinking often a dialogue between these tendencies? Do we not become more peaceful when we see the two philosophical dispositions side by side? Perhaps we understand each other better when we see the possibility of emphasizing both the question and the answer. We suddenly realize why we sound so different when we philosophize, despite the fact that we are all thinking beings, and we no longer need to exclaim: “What kind of nonsense is this?”

Pär Segerdahl



When ordinary words get scientific uses

A few weeks ago, Josepine Fernow wrote an urgent blog post about science and language. She linked to a research debate about conceptual challenges for neuroscience, challenges that arise when ordinary words get specialized uses in science as technically defined terms.

In the case under debate, the word “sentience” had been imported into the scientific study of the brain. A research group reported that they were able to determine that in vitro neurons from humans and mice have learning abilities and that they exhibit “sentience” in a simulated game world. Of course, it caused quite a stir that some neurons grown in a laboratory could exhibit sentience! But the research team did not mean what attracted attention. They meant something very technical that only a specialist in the field can understand. The surprising thing about the finding was therefore the choice of words.

When the startling choice of words was questioned by other researchers, the research team defended themselves by saying that they defined the term “sentience” strictly scientifically, so that everyone should have understood what they meant, at least the colleagues in the field. Well, not all people are specialists in the relevant field. Thus the discovery – whatever it was that was discovered – raised a stir among people as if it were a discovery of sentience in neurons grown in a laboratory.

The research group’s attitude towards their own technical language is similar to an attitude I encountered long ago in a famous theorist of language, Noam Chomsky. This is what Chomsky said about the scientific study of the nature of language: “every serious approach to the study of language departs from the common-sense usage, replacing it by some technical concept.” Chomsky is of course right that linguistics defines its own technical concepts of language. But one can sense a certain hubris in the statement, because it sounds as if only a linguistic theorist could understand “language” in a way that is worthy of serious attention. This is untenable, because it raises the question of what a technical concept of language is. In what sense is a technical concept a concept of language? Is it a technical concept of language in the common sense? Or is it a technical concept of language in the same inaccessible sense? In the latter case, the serious study of language seems to degenerate into a navel-gazing that does not access language.

For a technical concept of language to be a concept of language, our ordinary notions must be taken into account. Otherwise, the technical concept ceases to be a concept of language.

This is perhaps something to consider in neuroscience as well. Namely to the extent that one wants to shed light on phenomena such as consciousness and sentience. Of course, neuroscience will define its own technical concepts of these phenomena, as in the debated case. But if the technical concepts are to function as concepts of consciousness and sentience, then one cannot neglect our ordinary uses of words.

Science is very serious and important. But if the special significance of science goes to our heads, then our attitude risks undermining the great importance of science for humanity. Here you can read the views of three neuroethicists on these important linguistic issues: Conceptual conundrums for neuroscience.

Pär Segerdahl



Resolving conflicts where they arise

I believe that many of us feel that the climate of human conversation is getting colder, that it is becoming harder for us to talk and get along with each other. Humanity feels colder than in a long time. At the same time, the global challenges are escalating. The meteorological signs speak for a warmer planet, while people speak a colder language. It should be the other way around. To cool the planet down, humanity should first get warmer.

How can humanity get warmer? How can we deal with the conflicts that make our human climate resemble a cold war on several fronts: between nations, between rich and poor, between women and men, and so on?

Observe what happens within ourselves when the question is asked and demands its answer. We immediately turn our attention to the world and to the actions we think could solve the problem there. A world government? Globally binding legislation? A common human language in a worldwide classless society that does not distinguish between woman and man, between skin colors, between friend and stranger?

Notice again what happens within ourselves when we analyze the question, either in this universalist way or in some other way. We create new conflicts between ourselves as analysts and the world where the problems are assumed to arise. The question itself is a conflict. It incriminates a world that must necessarily change. This creates new areas of conflict between people who argue for conflicting analyses and measures. One peace movement will fight another peace movement, and those who do not take the necessary stand on these enormous issues… well, how should we handle them?

Observe for the third time what happens within ourselves when we have now twice in a row directed our attention towards ourselves. First, we noted our inner tendency to react outwardly. Then we noted how this extroverted tendency created new conflicts not only between ourselves and an incriminated world that must change, but also between ourselves and other people with other analyses of an incriminated world that must change. What do we see, then, when we observe ourselves for the third time?

We see how we look for the source of all conflict everywhere but within ourselves. Even when we incriminate ourselves, we speak as if we were someone other than the one analyzing the problem and demanding action (“I should learn to shut up”). Do you see the extroverted pattern within you? It is like a mental elbow that pushes away a problematic world. Do you see how the conflicts arise within ourselves, through this constant outward reactivity? We think we take responsibility for the world around us, but we are only projecting our mental reflexes.

There was once a philosopher named Socrates. He was likened to an electric ray as he seemed to numb those he was talking to with his unexpected questions, so that they could no longer react with worldly analyses and sharp-witted arguments. He was careful to point out that he himself was equally numbed. He saw the extroverted tendency within himself. Every time he saw it, he became silent and motionless. Sometimes he could stand for hours on a street corner. He saw the source of all conflict in the human mind that always thinks it knows, that always thinks it has the analysis and all the arguments. He called this inner numbness his wisdom and he described it like this: “what I do not know, I do not think I know either.”

Naturally, a philosopher thus numbed could not harbor any conflict, because the moment it began to take shape, he would note the tendency within himself and be numbed. He mastered the art of resolving conflicts where they arise: within ourselves. Free from the will to change an incriminated world, he would thereby have revolutionized everything.

Socrates’ wisdom may seem too simple for the complex problems of our time. But given our three observations of how all conflict arises in the human mind, you see how we ourselves are the origin of all complexity. This simple wisdom can warm a humanity that has forgotten to examine itself.

Pär Segerdahl



Does the severity of an illness qualify the moral motivation to act?

I have to admit that I had a little trouble cracking the code in the article which I will now try to summarize briefly. I hope that the title I have chosen is not already a misunderstanding. Moral philosophy is not easy, but the subject of the article is urgent so I still want to try.

Illness is generally perceived as something bad, as an evil. If we are to speak in terms of value, we can say that illness has negative value. Individual cases of illness usually create a moral motivation in us to mitigate the ill person’s negative condition. How strong this motivation is depends on several factors, but the severity of the disease is a relevant factor. The motivation to act typically increases with the severity of the disease.

This of course comes as no surprise. The motivation to alleviate a person’s cold is not very strong because a cold is not a severe condition. A runny nose is nothing to complain about. But in the face of more severe conditions such as blood poisoning, diabetes and cancer, the moral drive to act increases. “This condition is very severe” we say and feel that it is very important to act.

So what is the problem that motivates the article? If I am interpreting the authors correctly, the problem is that it is not so easy to convert this obvious use of language into a rule to follow. I recently bought a kettle that came with this warning: “Do not fill the kettle with an excessive amount of water.” The warning is, in a way, self-evident. Of course, you should not fill the kettle with an excessive amount of water! The motivation to pour should have stopped before the water level got excessively high. Even though the language is perfectly obvious, the rule is not as obvious, because when is the water level excessively high? When should we stop pouring?

The problem with the word “severity” is similar, or at least that is my interpretation. “Severity” is an obvious linguistic tool when we discuss illness and the need to do something about it. But at the same time, it is difficult to define the term as a description of when conditions are (more or less) severe and when it is (more or less) motivated to do something about them. Some philosophers have therefore criticized the use of “severity” in discussions about, for example, priority setting in healthcare. The situation would become somewhat paradoxical, since an obviously relevant concept would be excluded because it is unclear how it can be transformed into a description that can be followed as if it were a simple rule.

If I understand the article correctly, the authors want to defend the concept of severity by showing that severity qualifies our moral motivation to act when someone is ill. They do this by describing six other concepts that are more generally accepted as qualifying how morally important it is to do something about a condition, including the concepts of need and lack of well-being. None of the six concepts coincides completely with the concept of severity, but when we try to assess how they affect the need to act, we will often simultaneously assess the severity. And when we assess the severity of an illness, we will often at the same time assess how the illness affects well-being, for example.

The authors’ conclusion is that the concept of severity is a morally relevant concept that should be considered in future discussions, as severity qualifies the moral motivation to act. However, I may have misunderstood the reasoning, so if you want to be on the safe side, you can read the article here: Severity as a moral qualifier of malady.

I want to end the post with a personal side note: I am inclined to say that the philosophical difficulty in defining the concept of severity (when we talk about disease) is similar to the difficulty in defining the concept of excess (when we talk about water levels). What makes these concepts so useful is their great pliability. It is difficult to say what “severe disease” or “excessively high water level” is, because it depends on so much. Pliable words like these are like tracking dogs that sensitively move through the terrain in all possible relevant directions. But if we try to reconstruct the tracking dog’s sensitivity in general intellectual terms, without access to the dog’s sense of smell, experiences and instincts, we run into great difficulties.

Should these philosophical difficulties motivate us to get rid of the dog? Of course not! Just as we can learn a great deal from following a tracking dog, we can learn a great deal from following the words “severe disease,” even if the journey is arduous. This underlines the authors’ conclusion: severity should be considered a morally significant concept that continues to deserve our attention.

Pär Segerdahl


Solberg, C.T., Barra, M., Sandman, L. et al. Severity as a moral qualifier of malady. BMC Medical Ethics 24, 25 (2023). https://doi.org/10.1186/s12910-023-00903-2


The significance of the academic seminar

Ever since I was a doctoral student in philosophy, I have experienced the seminar, usually held once a week, as the heart of the academic environment. Why is the seminar so important?

If we are to stick to the etymology of the word, we should use a different image than that of the heart. The seminar is the nursery where seeds germinate and seedlings grow strong in a favourable environment, to then be planted out. That image fits well with doctoral education. The seminar is the place where doctoral students get training in presenting and discussing their scientific work. They get the opportunity to present their studies and texts and receive constructive criticism from senior researchers and from other doctoral students. In this way, their theses will be as brilliant as possible and they can practice the academic forms of giving and receiving constructive criticism, of defending their positions and changing their minds.

But there are also other seedlings in the academy than doctoral students and thesis drafts. Even senior researchers’ studies and texts are originally seedlings. Even these need to grow before they can be planted in scientific journals or at book publishers. The seminar never ceases to be a nursery. I dare say that the seminar is just as important for established researchers as it is for doctoral students.

The seminar is also the weekly event where something finally happens together with others. Academics often work in a certain solitude, especially when writing. Colleagues who may not have met since the last seminar reunite and continue the conversation in the familiar seminar room. Is the seminar like a recurring dance event for lonely academics? Yes, the seminar probably also resembles an academic dance palace. Moreover, sometimes you can invite guest presenters to the seminar, maybe even stars, and then the event becomes truly brilliant.

The seminar is not least one of every academic institution’s most important places for discussion, where colleagues meet regularly and learn to understand each other despite working from different theoretical, methodological and linguistic starting points. The academy is not homogeneous, but is full of theories, methods and languages, even within the same discipline. If we do not meet every week and continue the conversation together, we soon become strangers who do not understand each other.

All these images reveal essential aspects of the academic seminar: the image of the nursery as well as the image of the dance palace and the image of the place of discussion. Yet they do not reveal the significance of the seminar that I experience most strongly. I must return to the image of the heart, of the life-sustaining centre. I want to say that the seminar is the place where an academic subject becomes alive and real. The subject can be philosophy or literature, mathematics or neuroscience, law or economics. What can such strange subjects mean in the heart of a human being? At the seminar, living philosophers, literary scholars, mathematicians, lawyers or economists meet each other. At the seminar, they bring their academic subjects to life, for themselves and for younger researchers in the making. Each seminar pumps new reality into the subject, which would otherwise be pale and abstract. At the seminar you can see, hear and even smell what philosophy and other academic subjects really are. They never become more real than in the seminar.

I think we could go on forever looking for different meanings of the academic seminar.

Pär Segerdahl


This post in Swedish

We care about education

Philosophers in democratic conversations about ethics, research and society

Philosophers have an ambiguous position in the knowledge society, a position that could support democratic conversations where truth and openness are united. On the one hand, philosophers are driven by a strong desire for the truth. They ask questions more often than they give answers, and they do not give answers until they have thoroughly explored the questions and judged that they can establish the truth, to speak a little pompously. On the other hand, philosophers cannot communicate their conclusions to society with the same authority that empirical scientists can communicate their findings. Philosophical reasoning, however rigorous it may appear to be, does not function as scientific evidence. It would be dubious if a philosopher said, “A very clear piece of reasoning which I recently carried out shows that…,” and expected people to accept the conclusion, as we expect people to accept the results of empirical studies.

Despite their strong desire to find the truth, philosophers can thus rarely “inform” about the truths they believe they have found, but must exercise restraint and present these truths as proposals, and then appeal to their interlocutors to judge the proposal for themselves. That is, to think for themselves. The desire to communicate one’s philosophical conclusions to others thus results in conversations on more or less equal terms, where more or less clear reasoning is developed together during the course of the conversation. The philosopher’s ambiguous position in the knowledge society can here act as a catalyst for conversations where the aspiration to think correctly, and the will to think freely, support each other.

The ambiguous position of philosophy in the knowledge society is evident in medical ethics, because here philosophy is in dialogue with patients, healthcare professionals and medical researchers. In medical ethics, there are sometimes so-called “ethics rounds,” where an ethicist visits the hospital and discusses patient cases with the staff from ethical perspectives. The role of the ethicist or philosopher in these conversations is not to draw the correct ethical conclusions and then inform the staff of the morally right thing to do. By striving for truth and by asking questions, the philosopher rather supports the staff’s own ethical reasoning. Of course, one or another of the philosopher’s own conclusions can be expressed in the conversation, but as a suggestion and as an invitation to the staff to investigate for themselves whether it can be so. Often the most important thing is to identify the crucial issues. The philosopher’s ambiguous standing can in these contexts act as a catalyst for good conversations.

Another area where the ambiguous position of philosophy in the knowledge society is evident is in research communication of ethics research, such as the research we do here at CRB. Ethicists sometimes conduct empirical studies of various kinds (surveys, interviews and experiments). They can then naturally expect people (the general public or relevant groups) to take the results to heart. But these empirical studies are usually done to shed light on some ethical difficulty and to draw ethical, normative conclusions on good grounds. Again, these conclusions can rarely be communicated as research findings, so the communicator also has to exercise restraint and present the conclusions as relevant proposals to continue thinking and talking about. Research communication becomes not only informative and explanatory, but also thoughtful. It appeals to people to think for themselves. Awareness of the ambiguous position of philosophy can thus support research communication that raises open questions, in addition to disseminating and explaining scientific findings.

Since political conclusions based on scientific studies seem to have a similar ambiguous status to ethical and philosophical conclusions, philosophy could also inspire wiser democratic conversations about how research should be implemented in society. This applies not least to controversial issues, which often polarize and encourage debaters to make strong claims to possess the best evidence and the most rigorous reasoning, which they believe justifies their positions. But such a truth authority on how we should live and organize society hardly exists, even if we strive for the truth. As soon as we talk to each other, we can only make suggestions and appeal to our interlocutors to judge the matter for themselves, just as we ourselves listen to our interlocutors’ objections, questions and suggestions.

Strong pursuit of truth requires great openness. When we philosophize, these aspects are at best united. In this way, philosophy could inspire democratic conversations where people actually talk to each other and seek the truth together. Not just make their voices heard.

Pär Segerdahl


This post in Swedish

We care about communication

Keys to more open debates

We are used to thinking that research is either theoretical or empirical, or a combination of theoretical and empirical approaches. I want to suggest that there are also studies that are neither theoretical nor empirical, even though it may seem unthinkable at first. This third possibility often occurs together with the other two, with which it is then interwoven without us particularly noticing it.

What is this third, seemingly unthinkable possibility? To think for yourself! Research rarely runs completely friction-free. At regular intervals, uncertainties appear around both theoretical and empirical starting points, which we have to clarify for ourselves. We then need to reflect on our starting points and perhaps even reconsider them. I am not referring primarily to how new scientific findings can justify re-examination of hypotheses, but to the continuous re-examinations that must be made in the research process that leads to these new findings. It happens so naturally in research work that you do not always think about the fact that you, as a researcher, also think for yourself and reconsider your starting points during the course of the work. Of course, thinking for yourself does not necessarily mean that you think alone. It often happens in conversations with colleagues or at research seminars. But in these situations there are no obvious starting points to start from. The uncertainties concern the starting points that you had taken for granted, and you are therefore thrown back on yourself, whether you think alone or with others.

This thinking, which paradoxically we do not always think we are doing, is rarely highlighted in the finished studies that are published as scientific articles. The final publication therefore does not give a completely true picture of what the research process looked like in its entirety, which is of course not an objection. On the contrary, it would be comical if autobiographical details were highlighted in scientific publications. There you cannot usually refer to informal conversations with colleagues in corridors or seminar rooms. Nevertheless, these conversations take place as soon as we encounter uncertainties. Conversations where we think for ourselves, even when it happens together. It would hardly be research otherwise.

Do you see how we ourselves get stuck in an unclear starting point when we have difficulty imagining the possibility of academic work that is neither theoretical nor empirical? We then start from a picture of scientific research that focuses on what already completed studies look like in article form. It can be said that we start from a “façade conception” of scientific work, which hides much of what happens in practice behind the façade. This can be hard to come to terms with for new PhD students, who may think that researchers just pick their theoretical and empirical starting points and then elaborate on them. A PhD student can feel inadequate as a researcher, because the work does not match the image of research you get from reading finished articles, where everything seems to go smoothly. If it did, it would hardly be research. Yet, when seeking funding and ethics approval, researchers are forced to present their project plans as if everything had already gone smoothly. That is, as if the research had already been completed and published.

If what I am writing here gives you an idea of how easily we humans get stuck in unclear starting points, then this blog post has already served as a simple example of the third possibility. In this post, we think together, for ourselves, about an unclear starting point, the façade conception, which we did not think we were starting from. We open our eyes to an assumption which at first we did not see, because we looked at everything through it, like spectacles on our nose. Such self-examination of our own starting points can sometimes be the main objective, namely in philosophical studies. There, the questions themselves are already expressions of unclear assumptions. We get entangled in our starting points. But because they sit on our noses, we also get entangled in the illusion that the questions are about something outside of us, something that can only be studied theoretically and empirically.

Today I therefore want to illustrate how differently we can work as researchers, by suggesting two publications on the same problem for reading: one publication is empirical, while the other is neither empirical nor theoretical, but purely philosophical. The empirical article is authored by colleagues at CRB; the philosophical article by me. Both articles touch on ethical issues of embryo donation for stem cell research. Research that in the future may lead to treatments for, for example, Parkinson’s disease.

The empirical study is an interview study with individuals who have undergone infertility treatment at an IVF clinic. They were interviewed about how they viewed leftover frozen embryos from IVF treatment, donation of leftover embryos in general and for cell-based treatment of Parkinson’s disease in particular, and much more. Such empirical studies are important as a basis for ethical and legal discussions about embryonic stem cell research, and about the possibility of further developing the research into treatments for diseases that today lack effective treatments. Read the interview study here: Would you consider donating your left-over embryos to treat Parkinson’s disease? Interviews with individuals who underwent IVF in Sweden.

The philosophical study examines concerns about exploitation of embryo donors to stem cell research. These concerns must be discussed openly and conscientiously. But precisely because issues of exploitation are so important, the debate about them risks being polarized around opposing starting points, which are not seen and cannot be reconsidered. Debates often risk locking positions, rather than opening our minds. The philosophical study describes such tendencies to be misled by our own concepts when we debate medical research, the pharmaceutical industry and risks of exploitation in donation to research. It wants to clarify the conditions for a more thoughtful and open discussion. Read the philosophical study here: The Invisible Patient: Concerns about Donor Exploitation in Stem Cell Research.

It is easy to see the relevance of the empirical study, as it has results to refer to in the debate. Despite the empirical nature of the study, I dare to suggest that the researchers also “philosophized” about uncertainties that appeared during the course of the work; that they thought for themselves. Perhaps it is not quite as easy to see the relevance of the purely philosophical study, since it does not result in new findings or normative positions that can be referred to in the debate. It only helps us to see how certain mental starting points limit our understanding, if they are not noticed and re-examined. Of what use are such philosophical exercises?

Perhaps the use of philosophy is similar to the use of a key that fits the lock, when we want to get out of a locked room. The only thing is that in philosophy we often need the “key” already to see that we are locked in. Philosophical keys are thus forged as needed, to help us see our attachments to unclear starting points that need to be reconsidered. You cannot refer to such keys. You must use them yourself, on yourself.

While I was writing this “key” post, diligent colleagues at CRB published another empirical study on the use of human embryonic stem cells for medical treatments. This time an online survey among a random selection of Swedish citizens (reference and link below). The authors emphasize that even empirical studies can unlock polarized debates. They do this by supplementing the views of engaged debaters, who can sometimes have great influence, with findings on the views of the public and affected groups: voices that are not always heard in the debate. Empirical studies thus also function as keys to more open and thoughtful discussions. In this case, the “keys” are findings that can be referred to in debates.

– Two types of keys, which can contribute in different ways to more open debates.

Pär Segerdahl


Bywall, K.S., Holte, J., Brodin, T. et al. Would you consider donating your left-over embryos to treat Parkinson’s disease? Interviews with individuals that underwent IVF in Sweden. BMC Med Ethics 23, 124 (2022). https://doi.org/10.1186/s12910-022-00864-y

Segerdahl, P. The Invisible Patient: Concerns about Donor Exploitation in Stem Cell Research. Health Care Analysis 30, 240–253 (2022). https://doi.org/10.1007/s10728-022-00448-2

Grauman, Å., Hansson, M., Nyholm, D. et al. Attitudes and values among the Swedish general public to using human embryonic stem cells for medical treatment. BMC Med Ethics 23, 138 (2022). https://doi.org/10.1186/s12910-022-00878-6

This post in Swedish

We recommend readings

Does the brain make room for free will?

The question of whether we have free will has been debated throughout the ages and everywhere in the world. Can we influence our future or is it predetermined? If everything is predetermined and we lack free will, why should we act responsibly and by what right do we hold each other accountable?

There have been different ideas about what predetermines the future and excludes free will. People have talked about fate and about the gods. Today, we rather imagine that it is about necessary causal relationships in the universe. It seems that the strict determinism of the material world must preclude the free will that we humans perceive ourselves to have. If we really had free will, we think, then nature would have to give us a space of our own to decide in. A causal gap where nature does not determine everything according to its laws, but allows us to act according to our will. But this seems to contradict our scientific world view.

In an article in the journal Intellectica, Kathinka Evers at CRB examines the plausibility of this choice between two extreme positions: either strict determinism that excludes free will, or free will that excludes determinism.

Kathinka Evers approaches the problem from a neuroscientific perspective. This particular perspective has historically tended to support one of the positions: strict determinism that excludes free will. How can the brain make room for free will, if our decisions are the result of electrochemical processes and of evolutionarily developed programs? Is it not right there, in the brain, that our free will is thwarted by material processes that give us no space to act?

Some authors who have written about free will from a neuroscientific perspective have at times explained away freedom as the brain’s user illusion: as a necessary illusion, as a fictional construct. Some have argued that since social groups function best when we as individuals assume ourselves to be responsible actors, we must, after all, keep this old illusion alive. Free will is a fiction that works and is needed in society!

This attitude is unsound, says Kathinka Evers. We cannot build our societies on assumptions that contradict our best knowledge. It would be absurd to hold people responsible for actions that they in fact have no ability to influence. At the same time, she agrees that the notion of free will is socially important. But if we are to retain the notion, it must be consistent with our knowledge of the brain.

One of the main points of the article is that our knowledge of the brain could actually provide some room for free will. The brain could function beyond the opposition between indeterminism and strict determinism, some neuroscientific theories suggest. This does not mean that there would be uncaused neural events. Rather, a determinism is proposed where the relationship between cause and effect is variable and contingent, not invariable and necessary, as we commonly assume. As far as I understand, it is about the fact that the brain has been shown to function much more independently, actively and flexibly than in the image of it as a kind of programmed machine. Different incoming nerve signals can stabilize different neural patterns of connections in the brain, which support the same behavioural ability. And the same incoming nerve signal can stabilize different patterns of connections in the brain that result in the same behavioural ability. Despite great variation in how individuals’ neural patterns of connections are stabilized, the same common abilities are supported. This model of the brain is thus deterministic, while being characterized by variability. It describes a kind of kaleidoscopically variable causality in the brain between incoming signals and resulting behaviours and abilities.

Kathinka Evers thus hypothetically suggests that this variability in the brain, if real, could provide empirical evidence that free will is compatible with determinism.

Read the philosophically exciting article here: Variable determinism in social applications: translating science to society

Although Kathinka Evers suggests that a certain amount of free will could be compatible with what we know about the brain, she emphasizes that neuroscience gives us increasingly detailed knowledge about how we are conditioned by inherited programs, for example, during adolescence, as well as by our conditions and experiences in childhood. We should, after all, be cautiously restrained in praising and blaming each other, she concludes the article, referring to the Stoic Epictetus, one of the philosophers who thought about free will and who rather emphasized freedom from the notion of a free will.

Pär Segerdahl


Evers, K. Variable determinism in social applications: translating science to society. In Monier, C. & Khamassi, M. (Eds.), Liberty and Cognition. Intellectica 75, 73–89 (2021).

This post in Swedish

We like challenging questions
