A blog from the Centre for Research Ethics & Bioethics (CRB)


Time to forget time

A theme in recent blog posts has been our need for time. Patients need time to be listened to, time to ask questions, time to decide whether they want to be included in clinical studies, and time for much more. Healthcare workers need time to understand the patients’ situation, time to find solutions to the individual problems of patients suffering from rheumatoid arthritis, and time for much more. This theme, our need for time, got me thinking about what is so great about time.

It could be tempting to conduct time and motion studies of our need for time. How much time does the patient need to spend with the doctor to feel listened to? How much time does the nurse need to spend with the patient to get the experience of providing good care? The problem with such studies is that they destroy the greatness of time. To give the patient or the nurse the measured time, prescribed by the time study, is to glance at the clock. Would you feel listened to if the person you were talking to had a stopwatch hanging around their neck? Would you be a good listener yourself if you waited for the alarm signal from the stopwatch hanging around your neck?

Time studies do not answer our question of what we need, when we need time. If it was really a certain amount of time we needed, say fifteen minutes, then it should make no difference if a ticking stopwatch hung around the neck. But it makes a difference! The stopwatch steals our time. So, what is so great about time?

I think the answer is well on its way to revealing itself, precisely because we give it time to come at its own pace. What we need when we need time, is to forget time! That is the great thing about having time. That we no longer think about it.

Again, it can be tempting to conduct time studies. How much time do the patient and the doctor need to forget time? Again, time studies ruin the greatness of time. How? They frame everything in time. They force us to think about time, even when the point is to forget it.

Our need for time is not about measured quantities of time, but about the timeless quality of not thinking about time. Thinking about time steals time from us. Since it is not really about time, it does not have to take that long.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

We challenge habits of thought

Moral stress: what does the COVID-19 pandemic teach us about the concept?

Newly formed concepts can sometimes satisfy such urgent linguistic needs that they immediately seem completely self-evident. Moral stress is probably such a concept. It is not many decades old. Nevertheless, the concept probably appeared from the beginning as an all-too-familiar reality for many healthcare workers.

An interesting aspect of these immediately self-evident concepts is that they effortlessly find their own paths through language, despite our efforts to define the right path. They are simply too striking in living spoken language to be captured in the more rigid written language of definitions. However, the first definition of moral stress was fairly straightforward. This is how Andrew Jameton defined the concept:

“Moral distress arises when one knows the right thing to do, but institutional constraints make it nearly impossible to pursue the right course of action.”

Although the definition is not complicated in the written language, it still prevents the concept from speaking freely, as it wants to. For, do we not spontaneously want to talk about moral stress in other situations as well? For example, in situations where two different actions can be perceived as the right ones, but if we choose one action it excludes the other? Or in situations where something other than “institutional constraints” prevents the right course of action? Perhaps a sudden increase in the number of patients.

Here is a later definition of moral stress, which leaves more open (by Kälvemark, Höglund and Hansson):

“Traditional negative stress symptoms that occur due to situations that involve an ethical dimension where the health care provider feels he/she is not able to preserve all interests at stake.”

This definition allows the concept to speak more freely, in more situations than the first, although it is possibly slightly more complicated in the written language. That is of course no objection. A definition has other functions than the concept being defined; it does not have to be catchy like a song chorus. But if we compare the definitions, we can notice how both express the authors’ ideas about morality, and thus about moral stress. In the first definition, the author has the idea that morality is a matter of conscience and that moral stress occurs when institutional constraints of the profession prevent the practitioner from acting as conscience demands. Roughly. In the second definition, the authors have the idea that morality is rather a kind of balancing of different ethical values and interests and that moral stress arises in situations that prevent the trade-offs from being realized. Roughly.

Why do I dwell on the written and intellectual aspects of the definitions, even though it is hardly an objection to a definition? It has to do with the relationship between our words and our ideas about our words. Successful words find their own paths in language despite our ideas about the path. In other words: despite our definitions. Jameton both coined and defined moral (di)stress, but the concept almost immediately stood, and walked, on its own feet. I simply want to remind you that spoken-language spontaneity can have its own authority, its own grounding in reality, even when it comes to newly formed concepts introduced through definitions.

An important reason why the newly formed concept of moral stress caught on so immediately is probably that it put into words pressing problems for healthcare workers. Issues that needed to be noticed, discussed and dealt with. One way to develop the definition of moral stress can therefore be to listen to how healthcare workers spontaneously use the concept about situations they themselves have experienced.

A study in BMC Medical Ethics does just this. Together with three co-authors, Martina E. Gustavsson investigated how Swedish healthcare workers (assistants, nurses, doctors, etc.) described moral stress during the COVID-19 pandemic. After answering a number of questions, the participants were requested to describe, in a free text response, situations during the pandemic in which they experienced moral stress. These free text answers were conceptually analyzed with the aim of formulating a refined definition of moral stress.

An overarching theme in the free text responses turned out to be: being prevented from providing good care to needy patients. The healthcare workers spoke of a large number of obstacles. They perceived problems that needed to be solved, but felt that they were not taken seriously, that they were inadequate or forced to act outside their areas of expertise. What stood in the way of good care? The participants in the study spoke, among other things, about unusual conditions for decision-making during the pandemic, about tensions in the work team (such as colleagues who did not dare to go to work for fear of being infected), about substandard communication with the organizational management. All this created moral stress.

But they also talked about the pandemic itself as an obstacle. The prioritization of COVID-19 patients meant that other patients received worse care and were exposed to the risk of infection. The work was also hindered by a lack of resources, such as personal protective equipment, while the protective equipment prevented staff from comforting worried patients. The visiting restrictions also forced staff to act as guards against patients’ relatives and isolate infected patients from their children and partners. Finally, the pandemic prevented good end-of-life care. This too was morally stressful.

How can the healthcare workers’ free text responses justify a refined definition of moral stress? Martina E. Gustavsson and co-authors consider the definition above by Kälvemark, Höglund and Hansson as a good definition to start from. But one type of situation that the participants in the study described probably falls outside that definition, namely the situation of not being taken seriously, of feeling inadequate and powerless. The study therefore proposes the following definition, which includes these situations:

“Moral stress is the kind of stress that arises when confronted with a moral challenge, a situation in which it is difficult to resolve a moral problem and in which it is difficult to act, or feeling insufficient when you act, in accordance with your own moral values.”

Here, too, one can sense an idea of morality, and thus of moral stress. The authors think of morality as being about solving moral problems, and that moral stress arises when this endeavor encounters challenges, or when one feels inadequate in the attempts to solve the problems. The definition can be considered a refined idea of what moral stress is. It describes more precisely the relevant situations where healthcare workers spontaneously want to talk about moral stress.

Obviously, we can learn a lot about the concept of moral stress from the experience of the COVID-19 pandemic. Read the study here, which contains poignant descriptions of morally stressful situations during the pandemic: “Being prevented from providing good care: a conceptual analysis of moral stress among health care workers during the COVID-19 pandemic.”

Finally, I would like to mention two general lessons about language, which in my view the study highlights. The first is that we can learn a lot about our concepts through the difficulties of defining them. The study took this “definition resistance” seriously by listening to how healthcare workers spontaneously talk about moral stress. This created friction that helped refine the definition. The second lesson is that we often use words despite our ideas about what the words mean or should mean. Spoken-language spontaneity has a natural weight and authority that we easily overlook, but from which we have much to learn – as in this empirical study.

Pär Segerdahl

Gustavsson, M.E., von Schreeb, J., Arnberg, F.K. et al. “Being prevented from providing good care: a conceptual analysis of moral stress among health care workers during the COVID-19 pandemic”. BMC Med Ethics 24, 110 (2023). https://doi.org/10.1186/s12910-023-00993-y

This post in Swedish

Minding our language

Neuroethics: don’t let the name fool you

Names easily give the impression that the named is something separate and autonomous: something to which you can attach a label. If you want to launch something and get attention – “here is something completely new to reckon with” – it is therefore a good idea to immediately create a new name that spreads the image of something very special.

Despite this, names usually lag behind what they designate. The named has already taken shape, without anyone noticing it as anything special. In the freedom from a distinctive designation, roots have had time to spread and branches to stretch far. Since everything that is given freedom to grow is not separate and autonomous, but rooted, interwoven and in exchange with its surroundings, humans eventually notice it as something interesting and therefore give it a special name. New names can thus give a misleading image of the named as newer and more separate and autonomous than it actually is. When the name arrives, almost everything is already prepared in the surroundings.

In an open peer commentary in the journal AJOB Neuroscience, Kathinka Evers, Manuel Guerrero and Michele Farisco develop a similar line of reasoning about neuroethics. They comment on an article published in the same issue that presents neuroethics as a new field only 15 years old. The authors of the article are concerned by the still unfinished and isolated nature of the field and therefore launch a vision of a “translational neuroethics,” which should resemble that tree that has had time to grow together with its surroundings. In the vision, the new version of neuroethics is thus described as integrated, inclusive and impactful.

In their commentary, Kathinka Evers and co-authors emphasize that it is only the label “neuroethics” that has existed for 15 years. The kind of questions that neuroethics works with were already dealt with in the 20th century in applied ethics and bioethics, and some of the conceptual problems have been discussed in philosophy since antiquity. Furthermore, ethics committees dealt with neuroethical issues long before the label existed. Viewed in this way, neuroethics is not a new and separate field, but rather a long-integrated and cooperating sub-discipline of neuroscience, philosophy and bioethics – depending on which surroundings we choose to emphasize.

Secondly, the commentators point out, the three characteristics of a “translational neuroethics” – integration, inclusiveness and impact – are a prerequisite for something to be considered a scientific field. An isolated field that does not include knowledge and perspectives from surrounding sciences and areas of interest, and that lacks practical impact, is hardly what we see today as a research field. The three characteristics are therefore not entirely successful as a vision of a future development of neuroethics. If the field is to deserve its name at all, the characteristics must already permeate neuroethics. Do they do that?

Yes, say the commentators, if I understand them correctly. But in order to see this we must not be deceived by the distinctive designation, which gives the image of something new, separate and autonomous. We must see that work on neuroethical issues has been going on for a long time in several different philosophical and scientific contexts. Already when the field got its distinctive name, it was integrated, inclusive and impactful, not least within the academically established discipline of bioethics. Some problematic tendencies toward isolation have indeed existed, but they were related to the distinctive label, as it was sometimes used by isolated groups to present their activities as something new and special to be reckoned with.

The open commentary is summarized by the remark that we should avoid the temptation to see neuroethics as a completely new, autonomous and separate discipline: the temptation that the name contributes to. Such an image makes us myopic, the commentators write, which paradoxically can make it more difficult to support the three objectives of the vision. It is both truer and more fruitful to consider neuroethics and bioethics as distinct but not separate fields. If this is true, we do not need to launch an even newer version of neuroethics under an even newer label.

Read the open commentary here: Neuroethics & bioethics: distinct but not separate. If you want to read the article that is commented on, you will find the reference at the bottom of this post.

Pär Segerdahl

K. Evers, M. Guerrero & M. Farisco (2023) Neuroethics & Bioethics: Distinct but Not Separate, AJOB Neuroscience, 14:4, 414-416, DOI: 10.1080/21507740.2023.2257162

Anna Wexler & Laura Specker Sullivan (2023) Translational Neuroethics: A Vision for a More Integrated, Inclusive, and Impactful Field, AJOB Neuroscience, 14:4, 388-399, DOI: 10.1080/21507740.2021.2001078

This post in Swedish

Minding our language

Two orientations of philosophical thought

There are many philosophical movements and several ways of dividing philosophy. I would like to draw attention to two orientations of philosophical thought that are hardly ever mentioned, but which I believe characterize philosophical thinking. Although unnamed, the two orientations are so different from each other that they can make philosophers roll their eyes when they run into each other: “What kind of nonsense is this?”

I am not referring to the division between analytic and continental philosophy, which is a known source of rolling eyes. I am referring to a division that rather applies to ourselves as thinking beings: our innermost philosophical disposition, so to speak.

So do not think of famous philosophers or of the philosophical movements they are considered to represent. Now it is just about ourselves. Think about what it is like to discuss a question that is felt to be urgent, for example: “Why has humanity failed to create a peaceful world?” How do we usually react to such questions? I dare say many of us wish we could answer them. This is the nature of a question. A question demands an answer, just as a greeting demands a greeting back. And since the answer to an important question should have the same urgency as the question, it feels very important to answer. This has the consequence that the discussion of the question soon turns into a discussion of several different answers, which compete with each other. Perhaps a few particularly committed participants argue among themselves for and against increasingly complicated answers at a speed that leaves the others behind. It feels humiliating to sit there and not be able to propose a single answer with accompanying arguments that it must be the right answer.

Many of us are probably also familiar with how afterwards, when we have time to think in peace and quiet, we can suddenly see possibilities that never occurred to us during the discussion: “So obvious! Why didn’t I see that?” When we are given time to think for ourselves, we are free from a limitation that governed the discussion. What limitation? The limitation that the question must be answered and the answer defended as the correct answer. Why were we so stimulated to find the answer to the question and defend it against the competitors? Was it a good question that gave rise to all these divergent answers, as if someone had thrown a match into a stockpile of fireworks? Already in its wording, the question blames humanity for not being able to resolve its conflicts. Is this not already a conflict? The question pits us against humanity, and when the answers and arguments hail down, the debaters are also pitted against each other. The discussion becomes yet another example of our tendency to end up on different sides in conflicts.

If we notice how our noble philosophical discussion about world peace threatens to degenerate into the very strife we debate, and we want to seek the answer in a more responsible way, then perhaps we decide to review the answers and arguments that have been piled up. We classify them as positions and schools of thought and practice identifying them to avoid well-known fallacies, which are classified with equal philosophical rigor. This hard work, we think, will finally lead us to the definitively correct answer. But the focus is still on the answers and the arguments, rather than on the question that ignited the entire discussion. The discussion continues to exemplify our tendency toward conflict, but now in terms of a rigorous philosophical classification of the various known positions on the issue.

The difference between the two orientations concerns where we place our emphasis: on the question or on the answer? Either we feel the question propels us, like a starting shot that makes us run for the answer at the finish line. The answer may be in terms of the human mind, the structure of society, our evolutionary history and much more. Or we feel the question paralyzes us, like an electric shock that numbs us so that we have to sit down at the starting line and examine the question. What already happened in the question? Am I not also humanity? Who am I to ask the question? Does not the question make a false distinction between me and humanity, similar to those made in all conflicts? Is that why I cannot discuss the question without becoming an example of the problem myself?

Consider the two philosophical orientations side by side. One of them experiences the question as a stimulating starting signal and runs for the answer. The other experiences the question as a numbing electric shock and remains seated at the starting line. It cannot surprise us that these two philosophical dispositions have difficulty understanding each other. If you emphasize the answer and run for it, stopping at the question seems not only irresponsible, but also unsportsmanlike and inhibiting. Is it forbidden to seek the right answer to urgent questions? If, on the other hand, you emphasize the question and stay seated at the starting line, it seems rash to run for the answer, even when the race follows a rigorously ordered pattern. Did not the starting shot go off too early so that the race should be declared invalid, even though it otherwise went according to the strict rules of the art?

When we consider the two orientations side by side, we can note another difference. Emphasizing the answer directs our attention to the subject of the question: “humanity throughout history.” Emphasizing the question directs our attention to the subject who asks it: to myself. Again, it can hardly surprise us that the two orientations have difficulty understanding each other. Both may seem to be avoiding the subject!

Here one might want to object that even this distinction between two philosophical orientations places people on different sides of a conflict. But maybe we can recognize ourselves in both tendencies, although we lean more in one direction? Is not philosophical thinking often a dialogue between these tendencies? Do we not become more peaceful when we see the two philosophical dispositions side by side? Perhaps we understand each other better when we see the possibility of emphasizing both the question and the answer. We suddenly realize why we sound so different when we philosophize, despite the fact that we are all thinking beings, and we no longer need to exclaim: “What kind of nonsense is this?”

Pär Segerdahl

This post in Swedish

Thinking about thinking

Philosophically anchored psychotherapy

Philosophy is often regarded as impractical and useless. At the same time, philosophy has a therapeutic aspect. Socrates practiced philosophy with people he met in Athens. He tried to persuade them to care not only about their bodies, their money and the affairs of the state, but to also examine themselves and take care of their soul. The same can be said of the Stoics, who emphasized that philosophy must be put into practice and actually change our ways of life. They gave public inspirational speeches about the importance of bringing order to our chaotic souls and they talked to people about how we can live completely fulfilling lives. How impractical and useless is that?

Both Socrates’ art of conversation and the life advice of the Stoics have inspired the emergence of cognitive behavioral therapies. In recent times, Asian philosophy and meditation have also inspired psychotherapy in the form of so-called mindfulness, used as a method to manage stress, anxiety and pain. However, there is a tendency to gloss over the philosophical influence behind these methods, as if philosophy were something impractical and useless! There is a risk that, in an effort to present a clinically effective facade, one covers up the philosophical depth, while the problems one tries to treat are often connected with superficial hopes for quick and effective solutions.

Can today’s psychotherapies more openly and directly draw inspiration from philosophy? Are there already such bridges to philosophy that can be strengthened? If so, what distinguishes them? These questions are investigated by Sylvia Martin, a researcher at CRB and herself a practicing psychotherapist. In a review article, she focuses on work with values in various forms of cognitive behavioral therapy as a bridge to philosophy that could be strengthened. I will give an example from the article of such work, which suggests how patients can be supported to find a more stable and fulfilling attitude to life.

Many people seek meaning in life through various objectives and projects, which they then try to realize. They believe that happiness will only come if they get to travel to Beijing, find a new job, buy a house or get a dog. But objectives do not provide stable meaning and fulfillment. On the contrary. The satisfaction when objectives are realized is short-lived and soon turns into a feeling of emptiness that must be filled by new exciting projects.

There is of course nothing wrong with travel, jobs, houses or dogs, but when chasing new objectives becomes a pattern it can be unfortunate. Soon a whole life is filled with objectives that do not give the stable fulfillment that one is really longing for. The pattern of seeking new objectives and projects that will give meaning and satisfaction becomes a self-destructive lifestyle, which it eventually becomes difficult to get out of. But through therapy, people can be helped to see the unfortunate pattern. For example, they can be given the task of imagining the objective of “traveling to Beijing”: how they save money for the trip, learn Chinese and plan the trip. They can imagine all the fun they have in Beijing. But how does it feel to come home again? To come home is to return to meaninglessness, and immediately the same old emptiness must be filled by a new project.

Values such as compassion and truth differ from objectives by being more like a road that never ends. Values can be cultivated and deepened without end. The path becomes the destination, fulfillment lies in walking it, and the elusive notion of “finally finding fulfillment” dissolves. But all of this of course assumes that the therapy is not perceived as a “trip to Beijing” that will finally bring fulfillment. There are no easy solutions to the problem of a meaningless life, such as new trips, new jobs, new houses… or new therapies.

Philosophically anchored psychotherapy can contribute to the deepening required, so that the work with values does not become another project that reinforces superficial attitudes to life. Perhaps the impression that philosophy is impractical and useless is even related to the restless attitude that a meaningful life requires objectives to be effectively realized? Philosophy is not a project, but more like a lifelong path. Read Sylvia Martin’s review article here: Using values in cognitive and behavioral therapy: a bridge back to philosophy.

Pär Segerdahl

Martin, S. Using values in cognitive and behavioral therapy: a bridge back to philosophy. Journal of Evaluation in Clinical Practice. 2023; 1-7. doi:10.1111/jep.13872

This post in Swedish

We challenge habits of thought

Encourage children to take responsibility for others?

It happens that academics write visionary texts that highlight great human challenges. I blogged about one such philosophically visionary article a few years ago: an article in which Kathinka Evers discussed the interaction between society and the brain. In the article, she developed the idea that we have a “proactive” responsibility to adapt our societies to what we know about the brain’s strengths and weaknesses. Above all, she emphasized that the knowledge we have today about the changeability of the brain gives us a proactive responsibility for our own human nature, as this nature is shaped and reshaped in interaction with the societies we build.

Today I want to recommend a visionary philosophical article by Jessica Nihlén Fahlquist, an article that I think has points of contact with Kathinka Evers’ paper. Here, too, the article highlights our responsibility for major human challenges, such as climate and, above all, public health. Here, too, human changeability is emphasized, not least during childhood. Here, too, it is argued that we have a responsibility to be proactive (although the term is not used). But where Kathinka Evers starts from neuroscience, Jessica Nihlén Fahlquist starts from virtue ethics and from social sciences that see children as social actors.

Jessica Nihlén Fahlquist points out that we live in more complex societies and face greater global challenges than ever before in human history. But humans are also complex and can under favorable circumstances develop great capacities for taking responsibility. Virtue ethics has this focus on the human being and on personal character traits that can be cultivated and developed to varying degrees. Virtue ethics is sometimes criticized for not being sufficiently action-guiding. But it is hard to imagine that we can deal with major human challenges through action-guiding rules and regulations alone. Rules are never as complex as human beings. Action-guiding rules assume that the challenges are already under some sort of control and thus are not as uncertain anymore. Faced with complex challenges with great uncertainties, we may have to learn to trust the human being. Do we dare to trust ourselves, when it is often we who created the problems?

Jessica Nihlén Fahlquist reasons in a way that brings to mind Kathinka Evers’ idea of a proactive responsibility for our societies and our human nature. Nihlén Fahlquist suggests, if I understand her correctly, that we already have a responsibility to create environments that support the development of human character traits that in the future can help us meet the challenges. We already have a responsibility to support greater abilities to take responsibility in the future, one could say.

Nihlén Fahlquist focuses on public health challenges and her reasoning is based on the pandemic and the issue of vaccination of children. Parents have a right and a duty to protect their children from risks. But parents can also reasonably be considered obliged not to be overprotective, and to consider the child’s development of agency and values. The virus that spread during the pandemic did not cause severe symptoms in children. Vaccination therefore does not significantly protect the child’s own health, but would be done with others in mind. Studies show that children may be capable of reasoning in terms of such responsibility for others. Children who participate in medical research can, for example, answer that they participate partly to help others. Do we dare to encourage capable children to take responsibility for public health by letting them reason about their own vaccination? Is it even the case that we should support children to cultivate such responsibility as a virtue?

Nihlén Fahlquist does not claim that children themselves have a responsibility to get vaccinated out of solidarity with others. But if some children prove able to reason in such a morally complex way about their own vaccination, one could say that these children’s sense of responsibility is something unexpected and admirable, something that we cannot demand from a child. By encouraging and supporting this unexpected and admirable capacity in children, it can eventually become an expected responsibility in adults, suggests Jessica Nihlén Fahlquist. Virtue ethics makes it meaningful to think in terms of such possibilities, where humans can change and their virtues can grow. Do we dare to believe in such possibilities in ourselves? If you do not expect the unexpected you will not discover it, said a visionary Greek philosopher named Heraclitus.

Jessica Nihlén Fahlquist’s article is multifaceted and innovative. In this post, I have only emphasized one of her lines of thought, which I hope has made you curious about an urgent academic text: Taking risks to protect others – pediatric vaccination and moral responsibility.

In summary, Jessica Nihlén Fahlquist argues that vaccination should be regarded as an opportunity for children to develop their sense of responsibility and that parents, schools, healthcare professionals and public health authorities should include children in debates about ethical public health issues.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Jessica Nihlén Fahlquist, Taking Risks to Protect Others – Pediatric Vaccination and Moral Responsibility, Public Health Ethics, 2023, phad005, https://doi.org/10.1093/phe/phad005

This post in Swedish

Approaching future issues

When ordinary words get scientific uses

A few weeks ago, Josepine Fernow wrote an urgent blog post about science and language. She linked to a research debate about conceptual challenges for neuroscience, challenges that arise when ordinary words get specialized uses in science as technically defined terms.

In the case under debate, the word “sentience” had been imported into the scientific study of the brain. A research group reported that they were able to determine that in vitro neurons from humans and mice have learning abilities and that they exhibit “sentience” in a simulated game world. Of course, it caused quite a stir that some neurons grown in a laboratory could exhibit sentience! But the research team did not mean what attracted attention. They meant something very technical that only a specialist in the field can understand. The surprising thing about the finding was therefore the choice of words.

When the startling choice of words was questioned by other researchers, the research team defended themselves by saying that they defined the term “sentience” strictly scientifically, so that everyone should have understood what they meant, at least the colleagues in the field. Well, not all people are specialists in the relevant field. Thus the discovery – whatever it was that was discovered – raised a stir among people as if it were a discovery of sentience in neurons grown in a laboratory.

The research group’s attitude towards their own technical language is similar to an attitude I encountered long ago in a famous theorist of language, Noam Chomsky. This is what Chomsky said about the scientific study of the nature of language: “every serious approach to the study of language departs from the common-sense usage, replacing it by some technical concept.” Chomsky is of course right that linguistics defines its own technical concepts of language. But one can sense a certain hubris in the statement, because it sounds as if only a linguistic theorist could understand “language” in a way that is worthy of serious attention. This is untenable, because it raises the question of what a technical concept of language is. In what sense is a technical concept a concept of language? Is it a technical concept of language in the common sense? Or is it a technical concept of language in the same inaccessible sense? In the latter case, the serious study of language seems to degenerate into navel-gazing that does not access language.

For a technical concept of language to be a concept of language, our ordinary notions must be taken into account. Otherwise, the technical concept ceases to be a concept of language.

This is perhaps something to consider in neuroscience as well, to the extent that one wants to shed light on phenomena such as consciousness and sentience. Of course, neuroscience will define its own technical concepts of these phenomena, as in the debated case. But if the technical concepts are to function as concepts of consciousness and sentience, then one cannot neglect our ordinary uses of words.

Science is very serious and important. But if the special significance of science goes to our heads, then our attitude risks undermining the great importance of science for humanity. Here you can read the views of three neuroethicists on these important linguistic issues: Conceptual conundrums for neuroscience.

Pär Segerdahl


This post in Swedish

Minding our language

Resolving conflicts where they arise

I believe that many of us feel that the climate of human conversation is getting colder, that it is becoming harder for us to talk and get along with each other. Humanity feels colder than it has in a long time. At the same time, the global challenges are escalating. The meteorological signs speak for a warmer planet, while people speak a colder language. It should be the other way around. To cool the planet down, humanity should first get warmer.

How can humanity get warmer? How can we deal with the conflicts that make our human climate resemble a cold war on several fronts: between nations, between rich and poor, between women and men, and so on?

Observe what happens within ourselves when the question is asked and demands its answer. We immediately turn our attention to the world and to the actions we think could solve the problem there. A world government? Globally binding legislation? A common human language in a worldwide classless society that does not distinguish between woman and man, between skin colors, between friend and stranger?

Notice again what happens within ourselves when we analyze the question, either in this universalist way or in some other way. We create new conflicts between ourselves as analysts and the world where the problems are assumed to arise. The question itself is a conflict. It incriminates a world that must necessarily change. This creates new areas of conflict between people who argue for conflicting analyses and measures. One peace movement will fight another peace movement, and those who do not take the necessary stand on these enormous issues… well, how should we handle them?

Observe for the third time what happens within ourselves when we have now twice in a row directed our attention towards ourselves. First, we noted our inner tendency to react outwardly. Then we noted how this extroverted tendency created new conflicts not only between ourselves and an incriminated world that must change, but also between ourselves and other people with other analyses of an incriminated world that must change. What do we see, then, when we observe ourselves for the third time?

We see how we look for the source of all conflict everywhere but within ourselves. Even when we incriminate ourselves, we speak as if we were someone other than the one analyzing the problem and demanding action (“I should learn to shut up”). Do you see the extroverted pattern within you? It is like a mental elbow that pushes away a problematic world. Do you see how the conflicts arise within ourselves, through this constant outward reactivity? We think we take responsibility for the world around us, but we are only projecting our mental reflexes.

There was once a philosopher named Socrates. He was likened to an electric ray as he seemed to numb those he was talking to with his unexpected questions, so that they could no longer react with worldly analyses and sharp-witted arguments. He was careful to point out that he himself was equally numbed. He saw the extroverted tendency within himself. Every time he saw it, he became silent and motionless. Sometimes he could stand for hours on a street corner. He saw the source of all conflict in the human mind that always thinks it knows, that always thinks it has the analysis and all the arguments. He called this inner numbness his wisdom and he described it like this: “what I do not know, I do not think I know either.”

Naturally, a philosopher thus numbed could not harbor any conflict, because the moment it began to take shape, he would note the tendency within himself and be numbed. He mastered the art of resolving conflicts where they arise: within ourselves. Free from the will to change an incriminated world, he would thereby have revolutionized everything.

Socrates’ wisdom may seem too simple for the complex problems of our time. But given our three observations of how all conflict arises in the human mind, you see how we ourselves are the origin of all complexity. This simple wisdom can warm a humanity that has forgotten to examine itself.

Pär Segerdahl


This post in Swedish

We care about communication

Does the severity of an illness qualify the moral motivation to act?

I have to admit that I had a little trouble cracking the code of the article that I will now try to summarize briefly. I hope that the title I have chosen is not already a misunderstanding. Moral philosophy is not easy, but the subject of the article is urgent, so I still want to try.

Illness is generally perceived as something bad, as an evil. If we are to speak in terms of value, we can say that illness has negative value. Individual cases of illness usually create a moral motivation in us to mitigate the ill person’s negative condition. How strong this motivation is depends on several factors, but the severity of the disease is a relevant factor. The motivation to act typically increases with the severity of the disease.

This of course comes as no surprise. The motivation to alleviate a person’s cold is not very strong because a cold is not a severe condition. A runny nose is nothing to complain about. But in the face of more severe conditions such as blood poisoning, diabetes and cancer, the moral drive to act increases. “This condition is very severe” we say and feel that it is very important to act.

So what is the problem that motivates the article? If I am interpreting the authors correctly, the problem is that it is not so easy to convert this obvious use of language into a rule to follow. I recently bought a kettle that came with this warning: “Do not fill the kettle with an excessive amount of water.” The warning is, in a way, self-evident. Of course, you should not fill the kettle with an excessive amount of water! The motivation to pour should have stopped before the water level got excessively high. Even though the language is perfectly obvious, the rule is not as obvious, because when is the water level excessively high? When should we stop pouring?

The problem with the word “severity” is similar, or at least that is my interpretation. “Severity” is an obvious linguistic tool when we discuss illness and the need to do something about it. But at the same time, it is difficult to define the term as a description of when conditions are (more or less) severe and when it is (more or less) motivated to do something about them. Some philosophers have therefore criticized the use of “severity” in discussions about, for example, priority setting in healthcare. The situation would become somewhat paradoxical, since an obviously relevant concept would be excluded because it is unclear how it can be transformed into a description that can be followed as if it were a simple rule.

If I understand the article correctly, the authors want to defend the concept of severity by showing that severity qualifies our moral motivation to act when someone is ill. They do this by describing six other concepts that it is more generally accepted should qualify how morally important it is to do something about a condition, including the concepts of need and lack of well-being. None of the six concepts coincides completely with the concept of severity, but when we try to assess how they affect the need to act, we will often simultaneously assess the severity. And when we assess the severity of an illness, we will often at the same time assess how the illness affects well-being, for example.

The authors’ conclusion is that the concept of severity is a morally relevant concept that should be considered in future discussions, as severity qualifies the moral motivation to act. However, I may have misunderstood the reasoning, so if you want to be on the safe side, you can read the article here: Severity as a moral qualifier of malady.

I want to end the post with a personal side note: I am inclined to say that the philosophical difficulty in defining the concept of severity (when we talk about disease) is similar to the difficulty in defining the concept of excess (when we talk about water levels). What makes these concepts so useful is their great pliability. It is difficult to say what “severe disease” or “excessively high water level” is, because it depends on so much. Pliable words like these are like tracking dogs that sensitively move through the terrain in all possible relevant directions. But if we try to reconstruct the tracking dog’s sensitivity in general intellectual terms, without access to the dog’s sense of smell, experiences and instincts, we run into great difficulties.

Should these philosophical difficulties motivate us to get rid of the dog? Of course not! Just as we learn incredibly much from following a tracking dog, we learn incredibly much from following the words “severe disease,” even if the journey is arduous. This underlines the authors’ conclusion: severity should be considered a morally significant concept that continues to deserve our attention.

Pär Segerdahl


Solberg, C.T., Barra, M., Sandman, L. et al. Severity as a moral qualifier of malady. BMC Medical Ethics 24, 25 (2023). https://doi.org/10.1186/s12910-023-00903-2

This post in Swedish

We like challenging questions

The significance of the academic seminar

Ever since I was a doctoral student in philosophy, I have experienced the seminar, usually held once a week, as the heart of the academic environment. Why is the seminar so important?

If we are to stick to the etymology of the word, we should use a different image than that of the heart. The seminar is the nursery where seeds germinate and seedlings grow strong in a favourable environment, to then be planted out. That image fits well with doctoral education. The seminar is the place where doctoral students get training in presenting and discussing their scientific work. They get the opportunity to present their studies and texts and receive constructive criticism from senior researchers and from other doctoral students. In this way, their theses will be as brilliant as possible and they can practice the academic forms of giving and receiving constructive criticism, of defending their positions and changing their minds.

But there are also other seedlings in the academy than doctoral students and thesis drafts. Even senior researchers’ studies and texts are originally seedlings. Even these need to grow before they can be planted in scientific journals or at book publishers. The seminar never ceases to be a nursery. I dare say that the seminar is just as important for established researchers as it is for doctoral students.

The seminar is also the weekly event where something finally happens together with others. Academics often work in a certain solitude, especially when writing. Colleagues who may not have met since the last seminar reunite and continue the conversation in the familiar seminar room. Is the seminar like a recurring dance arrangement for lonely academics? Yes, the seminar probably also resembles an academic dance palace. In addition, you can sometimes invite presenters to the seminar, maybe even stars, and then the event becomes really brilliant.

The seminar is not least one of every academic institution’s most important places for discussion, where colleagues meet regularly and learn to understand each other, despite working from different theoretical, methodological and linguistic starting points. The academy is not homogeneous, but is full of theories, methods and languages, even within the same discipline. If we do not meet every week and continue the conversation together, we soon become strangers who do not understand each other.

All these images reveal essential aspects of the academic seminar: the image of the nursery as well as the image of the dance palace and the image of the place of discussion. Yet they do not reveal the significance of the seminar that I experience most strongly. I must return to the image of the heart, of the life-sustaining centre. I want to say that the seminar is the place where an academic subject becomes alive and real. The subject can be philosophy or literature, mathematics or neuroscience, law or economics. What can such strange subjects mean in the heart of a human being? At the seminar, living philosophers, literary scholars, mathematicians, lawyers or economists meet each other. At the seminar, they bring their academic subjects to life, for themselves and for younger researchers in the making. Each seminar pumps new reality into the subject, which would otherwise be pale and abstract. At the seminar you can see, hear and even smell what philosophy and other academic subjects really are. They never become more real than in the seminar.

I think we could go on forever looking for different meanings of the academic seminar.

Pär Segerdahl


This post in Swedish

We care about education
