A blog from the Centre for Research Ethics & Bioethics (CRB)

Category: In the research debate

Inspired

What does it mean to be inspired by someone? Think of these inspired music albums where artists lovingly pay tribute to a great musician by making their own interpretations of the songs. These interpretations often express deep gratitude for the inspiration received from the musician. We can feel similar gratitude to inspiring people in many different areas.

Why are we inspired by inspiring people? Here is a tempting picture. The person who inspires us has something that we lack. To be inspired is to want what the inspiring person has: “I also want to be able to…”; “I want to be as good as…” and so on. That is why we imitate those who inspire us. That is why we train hard. By imitating, by practicing, the inspiring person’s abilities can be transferred to us who lack them.

This could be called a pneumatic picture of inspiration. The inspiring one is, so to speak, an air tank with overpressure. The rest of us are tanks with negative pressure. The pressure difference causes the inspiration. By imitating the inspiring person, the pressure difference is evened out. The pressure migrates from the inspiring to the inspired. We inhale the air that flows from the tank with overpressure.

This picture is certainly partly correct, but it is hardly the whole truth about inspiration. I am not a musician. There is a big difference in pressure between me and any musician. Why does this pressure difference not cause inspiration? Why do I not start imitating musicians, training hard so that some of the musicians’ overpressure is transferred to me?

The pneumatic picture is not the whole truth; other pictures of inspiration are possible. Here is one. Maybe inspiration is not aroused by difference, not by the fact that we lack what the inspiring person has. Perhaps it is aroused by similarity, by the fact that we sense a deep affinity with the one who inspires us. When we are inspired, we recognize ourselves in the one who inspires us. We discover something we did not know about ourselves. Seeds that we did not know existed in us begin to sprout when the inspiring person makes us aware that we have the same feeling, the same passion, the same creativity… At that moment, inspiration is aroused in us.

In this alternative picture of inspiration, there is no transfer of abilities from the inspiring one to the inspired ones. Rather, the abilities grow spontaneously in the inspired ones themselves, when they sense their affinity with the inspiring one. In the inspiring person, this growth has already taken place. Creativity has had time to develop and take shape, so that the rest of us can recognize ourselves in it. This alternative picture of inspiration also provides an alternative picture of human history in different areas. We are familiar with historical representations of how predecessors inspired their successors, as if the abilities of the predecessors were transferred horizontally in time. In the alternative picture, history is not just horizontal. Above all, it has a vertical depth dimension in each of us. Growth takes place vertically in each new generation, much like seeds sprout in the earth and grow towards the sky. History is, in this alternative picture, a series of vertical growths, where it is difficult to distinguish the living creativity in the depth dimension from the imitation on the surface.

Why am I writing a post about inspiration? Apart from the fact that it is inspiring to think about something as vital as inspiration, I want to show how we form pictures of facts without noticing it. We do not see that they are actually just pictures, which could be replaced by completely different ones. I learned this from the philosopher Ludwig Wittgenstein, who inspired me to examine philosophical questions myself: questions which surprisingly often arise because we are captured in our images of things. Our captivity in certain images prevents us from seeing other possibilities and obvious facts.

In addition, I want to show that it really makes a difference if we are caught in our pictures of things or open to the possibility of completely different pictures. It has been a long time since I wrote about ape language research on this blog, but the attempt to teach apes human language is an example of what a huge difference it can make, if we free ourselves from a picture that prevents us from seeing the possibility of other pictures.

Attempts to teach apes human language were based on the first picture, which highlights the difference between the one who inspires and the one who is inspired. It was thought that because apes lack the language skills that we humans have, there was only one way to teach apes human language: we would need to transfer the language skills horizontally to the apes, by training them. This “single” opportunity failed so clearly, and the failure was so well documented, that only a few researchers were subsequently open to the results of a markedly more successful and at least as well-documented experiment, which was based on the alternative picture of inspiration.

In the alternative experiment, the researchers saw an opportunity that the first picture made it difficult to see. If apes and humans live together daily in a closely united group, so that they have opportunities to sense affinities with each other, then language seeds that we did not know existed in apes could be inspired to sprout and grow spontaneously in the apes themselves. Vertically within the apes, rather than through horizontal transmission, as when humans train animals. In fact, this alternative experiment was so successful that it resulted in a series of spontaneous language growths in apes. As time went on, newborn apes were inspired not only by the humans in the group, but also by the older apes whose linguistic creativity had taken shape.

If you want to read more about this unexpected possibility of inspiration between species, which suggests unexpected affinities, as when humans are inspired by each other, you will find a book reference below. I wrote the book a long time ago with William M. Fields and Sue Savage-Rumbaugh. Both have inspired me, for which I am deeply grateful: not least in this blog post with its alternative picture of inspiration. I mention the book again because I hope the time is ripe for philosophers, psychologists, anthropologists, educationalists, linguists, neuroscientists and many others to be inspired by the unexpected possibility of human-inspired linguistic creativity in our non-human relatives.

To finally connect the threads of music and ape language research, I can tell you that two great musicians, Paul McCartney and Peter Gabriel, have visited the language-inspired apes. Both of them played music with the apes, and Peter Gabriel and Panbanisha even created a song together. Can we live without inspiration?

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Segerdahl, P., Fields, W. & Savage-Rumbaugh, S. 2005. Kanzi’s Primal Language: The Cultural Initiation of Primates into Language. Palgrave Macmillan.

Segerdahl, P. 2017. Can an Ape Become Your Co-Author? Reflections on Becoming as a Presupposition of Teaching. In: A Companion to Wittgenstein on Education. Pedagogical Investigations. Peters, M. A. and Stickney, J. (Eds.). Singapore: Springer, pp. 539-553

This post in Swedish

We write about apes

Brain-inspired AI: human narcissism again?

This is an age when Artificial Intelligence (AI) is exploding and invading almost every aspect of our lives. From entertainment to work, from economics to medicine, from education to marketing, we deal with a number of disparate AI systems that make our lives much easier than a few years ago, but that also raise new ethical issues or emphasize old, still open questions.

A basic fact about AI is that it is progressing at an impressive pace, while still being limited with regard to various specific contexts and goals. We often read, also in non-specialized journals, that AI systems are not robust (meaning they are not good at dealing with datasets that differ too much from those they were trained on, so that the risk of cyber-attacks is still pretty high), not fully transparent, and limited in their capacity to generalize, for instance. This suggests that the reliability of AI systems, in other words the possibility of using them to achieve different goals, is limited, and that we should not blindly trust them.

A strategy increasingly chosen by AI researchers to improve the systems they develop is to take inspiration from biology, and specifically from the human brain. Actually, this is not really new: the first wave of AI already took inspiration from the brain, which was (and still is) the most familiar intelligent system in the world. This trend towards brain-inspired AI is gaining much more momentum today, for two main reasons among others: big data and the very powerful technology for handling big data. And yet, brain-inspired AI raises a number of questions of an even deeper nature, which urge us to stop and think.

Indeed, when compared to the human brain, present AI reveals several differences and limitations with regard to different contexts and goals. For instance, present Machine Learning cannot generalize the abilities it achieves on the basis of specific data in order to use them in different settings and for different goals. Also, AI systems are fragile: a slight change in the characteristics of the processed data can have catastrophic consequences. These limitations arguably depend both on how AI is conceived (technically speaking, on its underlying architecture) and on how it works (on its underlying technology). I would like to introduce some reflections on the choice to use the human brain as a model for improving AI, including the apparent limitations of this choice.

Very roughly, AI researchers are looking at the human brain to infer operational principles, translate them into AI systems, and eventually make these systems better at a number of tasks. But is a brain-inspired strategy the best we can choose? What justifies it? In fact, there are already AI systems that work in ways that do not conform to the human brain. We cannot exclude a priori that AI will eventually develop more successfully along lines that do not fully conform to, or that even deviate from, the way the human brain works.

Also, we should not forget that there is no such thing as the brain: there is a huge diversity both among different people and within the brain itself. The development of our brains reflects a complex interplay between our genetic make-up and our life experiences. Moreover, the brain is a multilevel organ with different structural and functional levels.

Thus, claiming that an AI is brain-inspired without clarifying which specific brain model is used as a reference (for instance, the neurons’ action potentials rather than the connectome’s network) is possibly misleading, if not nonsensical.

There is also a more fundamental philosophical point worth considering. Postulating that the human brain is paradigmatic for AI risks implicitly endorsing a form of anthropocentrism and anthropomorphism, both of which are evidence of our intellectual self-centeredness and of our limited ability to think beyond what we think we are.

While pragmatic reasons might justify the choice to take the brain as a model for AI (after all, for many aspects, the brain is the most efficient intelligent system that we know in nature), I think we should avoid the risk of translating this legitimate technical effort into a further narcissistic, self-referential anthropological model. Our history is already full of such models, and they have not been ethically or politically harmless.

Written by…

Michele Farisco, Postdoc Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Approaching future issues

We need to care about care ethics

At some point in our lives, we will all need to be cared for. When that happens, it is of course crucial that the people who care for us have the medical competence and skills required to diagnose and treat us. But we also need professional care to be nursed back to health. Providing care requires both medical and ethical skills, for example when weighing risks against the benefits of treatment and when giving information or encouraging patients to follow advice and instructions. Patients also need to be given tools and space to exercise their autonomy when making decisions about their own treatment and care.

As a researcher in care ethics, these are the kinds of questions that I ponder: questions that matter to us throughout life. The one who brings us into this world will need care during pregnancy, birth and after delivering the baby. Newborns, premature babies and children who are injured during birth need to be cared for, together with their families. As a child, you might have an ear infection, or need patching up after falling off your bike. As adults, illness will visit us on several occasions, and being cared for at the end of life is of utmost importance. We often face difficult choices in relation to health, sickness and treatment and need support from health care professionals in order to make autonomous decisions. Care ethics encompasses all of these ethical dilemmas.

The ethical aspects of the encounter between the health care professional and the patient are at the centre of care ethics. This encounter is always asymmetrical. How can we make it a respectful encounter, given that professionals have more knowledge and patients are put in a dependent and exposed position? As individual patients in health care, we are not on home ground, while the health care professional is in a familiar work environment and practices their profession. This asymmetry places great ethical demands on how the meeting between patient and professional takes place. It is precisely in this encounter that the dilemmas of health care ethics arise. However, as a care ethics researcher, I also ask questions about how health care is organised and whether that enables good and ethically acceptable encounters.

Those who organise the health care system and the people providing care need to know something about what is best for the patient. To be able to offer concrete guidance on how to educate, budget, plan and perform care, the ethical dilemmas that arise in health care encounters need to be examined in a structured way. Care ethics offers both theoretical and empirical tools to do just that. The theoretical framework builds in part on traditional principle-based ethics, and in part on the ethics of care. In this tradition, nursing and care are seen as both value and practice. The practice includes moral values, but also gives rise to norms that can guide moral action by rejecting acts of violence and dominance towards other human beings. The ethics of care looks to the needs of the “concrete other.” It considers us as individuals in mutually dependent relationships with one another. It also ascribes emotions a moral value. But not just any emotions; mainly those that are connected to nursing and caring for others, for example compassion and empathy.

Over the years, the care ethics group at the Centre for Research Ethics and Bioethics (CRB) has worked with several different questions. Mona Petterson wrote her PhD thesis on how doctors and nurses view do-not-resuscitate orders. Amal Matar’s thesis covered ethical issues in relation to genetic screening before pregnancy, also known as preconception genetic screening. We have also worked with caregivers’ experiences of health care prioritization, how parents and children view vaccination ethics, and equal access to health care. Our approach to care ethics is rooted in clinical practice and our studies are mainly informed by empirical ethics, where ethical and philosophical reasoning is related to qualitative and quantitative empirical research. Our goal is to contribute concrete clinical guidance on how to manage the ethical dilemmas that health care is faced with.

Since we are all born, live and die, we will all require care at one point or another. In order to enable health care policy makers and administrators to make decisions that benefit patients, talking about ethics in terms of medical risk versus benefit is not enough. As patients, we are human beings in an asymmetrical relationship where we are dependent on the person offering us care. The ethical dilemmas that arise from that relationship matter for how we perceive the treatment and care we receive. They also affect the extent to which we can exercise our autonomy.

Anna T. Höglund

Written by…

Anna T. Höglund, who is Professor of Care Ethics and Gender Studies at Uppsala University’s Centre for Research Ethics & Bioethics.

This post in Swedish

In dialogue with patients

Co-authorship when not everyone’s research is included in the paper

Questions about authorship are among the most sensitive professional issues for researchers. Apart from the fact that researchers live and make careers on their publications, it is important for scientific and research ethical reasons to know who is responsible for the content of the publications.

A feature of research that can create uncertainty about who should be counted as a co-author of a scientific publication is that such publications usually report research that has mainly already been carried out when the paper is being written. Many researchers may have contributed to the research work, but only a few of them may contribute to the writing of the paper. Should everyone still be counted as an author? Or just those who contribute to the writing of the paper?

The International Committee of Medical Journal Editors (ICMJE) has formulated a recommendation that creates greater clarity. Simplified, the recommendation is the following. Authorship can be given to researchers who clearly meet four criteria. You must: (1) have made substantial contributions to the research study (e.g., designing the study, or collecting, analysing and interpreting data); (2) have contributed to drafting the paper and revising its intellectual content; (3) have approved the final version of the article; (4) have agreed to be responsible for all aspects of the work by ensuring that issues of accuracy and integrity are investigated.

Furthermore, it is recommended that researchers who meet criterion (1) should be invited to participate in the writing process, so that they can also meet criteria (2)–(4) and thus be counted as co-authors.

However, research does not always go according to plan. Sometimes the plans need to be adjusted during the research process. This may mean that one of the researchers has already made a significant research effort when the group decides not to include that research in the writing of the paper. How should co-authorship be handled in such a situation, when someone’s results are left out of the publication?

The issue is discussed by Gert Helgesson, Zubin Master and William Bülow in the journal Science and Engineering Ethics. Considering, among other things, how easily disagreement about authorship can disrupt the dynamics of a research group, it is important that there is an established order concerning authorship, which handles situations such as this.

The discussion in the article is based on an imaginary, concrete case: A research group includes three younger researchers, Ann, Bo and Choi. They have all been given individual responsibility for different parts of the planning and execution of the empirical work. They work many hours in the laboratory. When the research group sees the results, they agree on the content of the article to be written. It then turns out that Ann’s and Bo’s analyses are central to the idea in the article, while Choi’s analyses are not. Choi’s results are therefore not included in the article. Should Choi be included as a co-author?

We can easily imagine Choi contributing to the writing process, but what about criterion (1)? If Choi’s results are not described in the article, has she made a significant contribution to the published research study? Helgesson, Master and Bülow point out that the criterion is ambiguous. Of course, making a significant contribution to a research study can mean contributing to the results that are described in the article. But it can also mean contributing to the research process that leads up to the article. The former interpretation excludes Choi as co-author. The latter interpretation makes co-authorship possible for Choi.

The more inclusive interpretation is not unreasonable, as research is a partially uncertain, exploratory process. But do any strong reasons support that interpretation? Yes, say Helgesson, Master and Bülow, who give two types of reasons. Firstly, it is about transparency and accountability: what happened and who was involved? Excluding Choi would be misleading. Secondly, it is a matter of proper recognition of merit and of fairness. Choi worked as hard in the laboratory as Ann and Bo and contributed as much to the research that led to the article. Of course, the purpose of the article changed during the process and Choi’s contribution became irrelevant to the content of the article. But her efforts were still relevant to the research process that led up to the article. She also did a good job as a researcher in the group: it seems unfair if her good work, by chance, should not be recognized in the same way as the other researchers’ work.

The proposal in the article is therefore that the first criterion for authorship should be interpreted as a significant contribution to the research process leading up to the article, and that this should be clarified in the recommendation.

The article also discusses possible counter-arguments to the more inclusive interpretation of the authorship recommendation. If you want to study the reasoning more closely and form your own opinion, read the article: How to Handle Co-authorship When Not Everyone’s Research Contributions Make It into the Paper.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Helgesson, G., Master, Z. & Bülow, W. How to Handle Co-authorship When Not Everyone’s Research Contributions Make It into the Paper. Sci Eng Ethics 27, 27 (2021). https://doi.org/10.1007/s11948-021-00303-y

This post in Swedish

We recommend readings

Conceptual analysis when we get stuck in thoughts

When philosophers are asked what method we use when we philosophize, we are happy to answer: our most important method is conceptual analysis. We apply conceptual analysis to answer philosophical questions such as “What is knowledge?”, “What is justice?”, “What is truth?” We propose general definitions of the concepts, which we then fine-tune by using concrete examples to test that the definitions really capture all individual cases of the concepts and only these.

The problem is that both those who ask for the method of philosophy and those who answer “conceptual analysis” seem to assume that philosophy is not challenged by deeply disturbing problems, but defines concepts almost routinely. The general questions above are hardly even questions, other than purely grammatically. Who lies awake wondering “What is knowledge, what is justice, what is truth, what is goodness, what is…?”

In order to get insomnia from the questions, in order for the questions to become living philosophical problems, in order for us to be disturbed by them, we need more than only generally formulated questions.

Moreover, if there were such a thing as a method of answering philosophical questions, then the questions should already have been answered. I mean, if we had had a method since the days of Socrates that answers philosophical “What is?”-questions by defining concepts, then there could not be many questions left to answer. At most, we could refine the definitions, or apply the method to concepts that did not exist 2600 years ago. Basically, philosophy should not have many questions left to be challenged by. Since ancient times, we have had a well-proven method!

To understand why philosophers continue to wonder, we need to understand why questions that superficially sound so uninteresting that we fall asleep can sometimes be so deeply perplexing that we lie awake thinking. Let me give you an example that gives a glimpse of the depths of philosophy, a glimpse of that disturbing “extra” that keeps philosophers awake at night.

The example is a “Swedish” disease, which has attracted attention around the world as something very strange. I am thinking of what was first called apathy in refugee children, but which later got the name resignation syndrome. The disease affects certain groups of children seeking asylum in Sweden. Children from the former Yugoslavia and from Central Asian countries of the former Soviet Union have been overrepresented. The children lose physical and mental functions and in the end can neither move nor communicate. They become bedridden, do not respond to pain and must be fed by tube. More than 1000 children have been affected by the disease in Sweden since the 1990s.

Confronted with this disease in refugee children, it may seem natural to think that the condition must be caused by traumatic experiences in the home country and during the flight, as well as by the stress of living under threat of deportation. It is not unreasonable to think so. Trauma and stress probably contribute to the disease. There is only one problem. If this were the cause, then resignation syndrome should occur in refugee children in other parts of the world as well. Unfortunately, refugee children with traumatic experiences and stressful deportation threats are not found only in Sweden. So why are (certain groups of) refugee children affected by the syndrome in Sweden in particular?

What is resignation syndrome? Here we have a question that on the surface does not sound more challenging than any other generally formulated “What is?”-question. But the question is today a challenging philosophical problem, at least for Karl Sallin, who is writing his dissertation on the syndrome here at CRB, within the framework of the Human Brain Project. What is that “extra” element that makes the question philosophically challenging for Karl Sallin?

It may seem natural to think that the challenging aspect of the question is simply that we do not yet know the answer. We do not know all the facts. It is not unreasonable to think so. Lack of knowledge naturally contributes to the question. Again, there is only one problem. We already consider ourselves to know the answer! We think that this extreme form of despair in refugee children must, of course, be caused by traumatic experiences and by the stress that the threat of deportation entails. In the end, the children can no longer bear it, but give up! If this reasonable answer were correct, then resignation syndrome should not exist only in Sweden. The philosophical question thus arises because the only reasonable answer conflicts with obvious facts.

That is why the question is philosophically challenging. Not because we do not know the answer. But because we consider ourselves to know what the answer must be! The answer seems so reasonable that we should hardly need to do more research on the matter before we take action by alleviating the children’s stressful situation, which we think is the only possible cause of the syndrome. And that is what happened…

For some years now, the guidelines for Swedish health care staff have emphasized the family’s role in recovery, as well as the importance of working for a residence permit. The guidelines are governed by the seemingly reasonable idea that children’s recovery depends on relieving the stress that causes the syndrome. Once again, there is only one problem. The guidelines never had a positive effect on the syndrome, despite attempts to create peace and stability in the family and work for a residence permit. The syndrome continued to be a “Swedish” disease. Why is the condition so stubbornly linked to Sweden?

Do you see the philosophical problem? It is not just about lack of knowledge. It is about the fact that we already think we have knowledge. The thought that the cause must be stress is so obvious, that we hardly notice that we are thinking it. It seems immediately real. In short, we have got stuck in our own thoughts, which we repeat again and again, even though we repeatedly clash with obvious facts. Like a mosquito trying to get out of a window, but just crashing, crashing, crashing.

When Karl Sallin treats the issue of resignation syndrome as a philosophical issue, he does something extremely unusual, for which there are no routine methods. He directs his attention not only outwards towards the disease, but also inwards towards ourselves. More empirical research alone does not solve the problem, any more than continuing to collide with the glass pane solves the mosquito’s problem. We need to stop and examine ourselves.

This post has now become so long that I have to stop before I can describe Karl Sallin’s dissolution of the mystery. Maybe it is good that we are not rushing forward. Riddles need time, which our impatient intellect rarely gives them. The point about the method of philosophy has hopefully become clear. The reason why philosophers analyse concepts is that we humans sometimes get caught up in our own concepts of reality. In this case, we get stuck in our concept of resignation syndrome as a stress disorder.

Perhaps I can still mention that Karl Sallin’s conceptual analysis of our thought pattern about the syndrome dissolves the feeling of being faced with an incomprehensible mystery. The syndrome is no longer in conflict with obvious facts. He also shows that our thought patterns may have contributed to the disease becoming so prominent in Sweden. Our publicly stated belief that the disease must be caused by stress, and our attempts to cure the disease by relieving stress, created a cultural context where this “Swedish” disease became possible. The cultural context affected the mind and the brain, which affected the biology of the body. In any case, that is what Karl Sallin suggests: resignation syndrome is a culture-bound disease. This unexpected possibility frees us from the thought we were stuck in as the only alternative.

So why did Socrates ask questions in Athens 2600 years ago? Because he discovered a method that could answer philosophical questions? My guess is that he did it for the same reason that Karl Sallin does it today. Because we humans have a tendency to imagine that we already know the answers. When we clearly see that we do not know what we thought we knew, we are freed from repeatedly colliding with a reality that should be obvious.

In philosophy, it is often the answer that is the question.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Sallin, K., Evers, K., Jarbin, H., Joelsson, L., Petrovic, P. (2021) Separation and not Residency Permit Restores Function in Resignation Syndrome: A Retrospective Cohort Study. Eur Child Adolesc Psychiatry. https://doi.org/10.1007/s00787-021-01833-3

Sallin, K., Lagercrantz, H., Evers, K., Engström, I., Hjern, A., Petrovic, P. (2016) Resignation Syndrome: Catatonia? Culture-Bound? Frontiers in Behavioral Neuroscience, 10:7. https://doi.org/10.3389/fnbeh.2016.00007

This post in Swedish

We challenge habits of thought

Our individual responsibility for antibiotic resistance

Antibiotic resistance is a global threat to public health, as the chances of treating infections decrease when antibiotics lose their effect on bacterial growth. But who is responsible for antibiotic resistance, and what does that responsibility involve?

We may believe that the problem is too big and complex for us as individuals. Antibiotic resistance is a problem for governments and international organizations, we think. Nevertheless, it is not least our individual use of antibiotics that drives the development. For example, we may take antibiotics when it is not really necessary, or we do not follow the doctor’s prescription but discontinue the treatment prematurely and throw the leftover pills in the dustbin. Then we go on a journey and spread bacteria that are resistant to the antibiotic we did not use properly. Or we neglect to get vaccinated because we think that there will be antibiotics if we get sick. Well, maybe not for long!

If we have an individual moral responsibility to act with awareness of environmental problems, then it is not unreasonable to think that we also have a responsibility to act with awareness of the antibiotic problem. Mirko Ancillotti (who recently defended his dissertation at CRB) examines this possibility in an article in Bioethics. Do we have an individual moral responsibility for antibiotic resistance and how should the responsibility be understood?

Mirko Ancillotti immediately points out that not all people have the same opportunities to improve their antibiotic behaviour. Apart from the fact that many people lack information about antibiotic resistance, not everyone finds it as easy to change their antibiotic use. Some have less access than others to correctly prescribed treatments, for example, if they live far from a hospital but can easily buy antibiotics without a prescription. In addition, not everyone has the same financial means to stay at home if they are ill.

Another thing that makes it difficult to talk about individual responsibility for antibiotic resistance is that you can hardly determine how much the pills you threw in the dustbin actually contributed to the problem. We know that people die due to antibiotic resistant bacteria, but it is difficult to determine the consequences of your particular antibiotic behaviour.

For these reasons, Mirko Ancillotti proposes a virtue ethical concept of responsibility. He suggests that we as individuals cultivate personal qualities and habits which support responsible antibiotic use as a virtue. If I understand him correctly, this means cultivating certain norms about antibiotic use, which we try to meet, such as following the doctor’s prescription, not using antibiotics unless necessary, not persuading the doctor to prescribe antibiotics, and making sure that we are vaccinated. However, since the conditions for acting with this normative sensitivity vary with human circumstances, there is in many cases a need to improve the conditions and institutional support for responsible antibiotic use.

A comparison: We have learned that we should preferably not travel by air, that it is irresponsible and perhaps even shameful to fly if it is not necessary. To be able to meet this new norm, new societal conditions are needed in the form of better international train connections and simpler ticketing systems. In the same way, new normative sensitivities regarding antibiotics can be developed, while the opportunities for meeting the norms are simultaneously improved, Mirko Ancillotti suggests.

If you want to read more about Mirko Ancillotti’s virtue ethical concept of an individual responsibility for antibiotic resistance, read the article in Bioethics: Individual moral responsibility for antibiotic resistance.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Ancillotti, M., Nihlén Fahlquist, J., & Eriksson, S. (2021). Individual moral responsibility for antibiotic resistance. Bioethics, 1–7. https://doi.org/10.1111/bioe.12958

This post in Swedish

We like real-life ethics

Philosophical research communication

How do you communicate about research with people who are not researchers? The scientific results usually presuppose a complicated intellectual framework, which the researchers have acquired through long education and experience. How can we talk about their research with people who are not researchers?

At CRB, we take research communication seriously, so this question follows us daily. A common way to solve the problem is to replace researchers’ complex intellectual frameworks with simple images, which people in general are more familiar with. An example could be comparing a body cell with a small factory. We thus compare the unknown with the familiar, so that the reader gets a certain understanding: “Aha, the cell functions as a kind of factory.”

Giving research results a more comprehensible context by using images that replace the researchers’ intellectual framework often works well. We sometimes use that method ourselves here at CRB. But we also use another way of embedding the research, so that it touches people. We use philosophical reflection. We ask questions that you do not need to be an expert to wonder about. The questions lead to thoughts that you do not need to be a specialist to follow. Finally, the research results are incorporated into the reasoning. We then point out that a new article sheds light on the issues we have thought about together. In this way, the research gets an understandable context, namely, in the form of thoughts that anyone can have.

We could call this philosophical research communication. There is a significant difference between these two ways of making research understandable. When simple images are used, they only aim to make people (feel that they) understand what they are not familiar with. The images are interchangeable. If you find a better image, you immediately use it instead. The images are not essential in themselves. That we compare the body cell with a factory does not express any deep interest in factories. But the philosophical questions and reflections that we at CRB embed the research in are essential in themselves. They are sincere questions and thoughts. They cannot be replaced by other questions and reasoning for the sole purpose of effectively conveying research results. In philosophical research communication, we give research an essential context, which is not just an interchangeable pedagogical aid. The embedding is as important as what is embedded.

Philosophical research communication is particularly important to us at CRB, as we are a centre for ethics research. Our research is driven by philosophical questions and reflections, for example, within the Human Brain Project, which examines puzzling phenomena such as consciousness and artificial intelligence. Even when we perform empirical studies, the point of those studies is to shed light on ethical problems and thoughts. In our research communication, we focus on this interplay between the philosophically thought-provoking and the empirical results.

Another difference between these ways of communicating research has to do with equality. Since the simple images that are used to explain research are not essential in themselves, such research communication is, after all, somewhat unequal. The comparison, which seemed to make us equal, is not what the communication is really about. The reader’s acquaintance with factories does not help the reader to have their own views on research. Philosophical research communication is different. Because the embedding philosophical questions and thoughts are essential and meant seriously, we meet on the same level. We can wonder together about the same honest questions. When research is communicated philosophically, communicators as well as researchers and non-researchers are equal.

Philosophical research communication can thereby deepen the meaning of the research, sometimes even for the researchers themselves!

As philosophical research communication unites us around common questions and thoughts, it is important in an increasingly fragmented and specialized society. It helps us to think together, which is easier than you might believe, if we dare to open up to our own questions. Here, of course, I assume that the communication is sincere, that it comes from independently thinking people, that it is not based on any intellectually constructed thought patterns, which one must be a philosophy expert to understand.

In that case, philosophical research communicators would need to bring philosophy itself to life, by sincerely asking the most alive questions.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

We care about communication

Neuroimaging the brain without revealing the person

Three contemporary trends create great challenges for researchers. First, science is expected to become increasingly open, among other things by making collected data available to new users and new purposes. At the same time, data protection laws are being strengthened to protect privacy. Finally, artificial intelligence finds new ways to reveal the individuals behind data, where this was previously impossible.

Neuroimaging is an example of how open science, stronger data protection legislation and more powerful AI challenge the research community. You may not think it possible to identify the person whose brain has been imaged with an MRI scanner? But the image actually also depicts the shape of the skull and face, including any scars. You could thus recognize the person. To be able to share neuroimaging data without revealing the person, it has hitherto been considered sufficient to remove the shape of the skull and face from the images, or to blur the contours. The problem is the third trend: more powerful AI.

AI can learn to identify people, where human eyes fail. Brain images where the shape of the skull and face has been made unrecognizable often turn out to contain enough information for self-learning face recognition programs to be able to identify people in the defaced images. AI can thus re-identify what had been de-identified. In addition, the anatomy of the brain itself is individual. Just as our fingers have unique fingerprints, our brains have unique “brainprints.” This makes it possible to link neuroimaging data to a person, namely, if you have previously identified neuroimaging data from the person. For example, via another database, or if the person has spread their brain images via social media so that “brainprint” and person are connected.

Making the persons completely unidentifiable would change the images so drastically that they would lose their value for research. The three contemporary trends – open science, stronger data protection legislation and more powerful AI – thus seem to be on a collision course. Is it at all possible to share scientifically useful neuroimaging data in a responsible way, when AI seems to be able to reveal the people whose brains have been imaged?

Well, not everything unwanted that can happen must happen. If the world were as insidiously constructed as in a conspiracy theory, no safety measures in the world could save us from the imminent end of the world. On the contrary, such totalized safety measures would definitely undermine safety, as I recently blogged about.

So what should researchers do in practice when building international research infrastructures to share neuroimaging data (according to the first trend above)? A new article in Neuroimage: Reports presents a constructive proposal. The authors emphasize, among other things, increased and continuously updated awareness among researchers about realistic data protection risks. Researchers doing neuroimaging need to be trained to think in terms of data protection and to see this as a natural part of their research.

Above all, the article proposes several concrete measures to technically and organizationally build research infrastructures where data protection is included from the beginning, by design and by default. Because completely anonymized neuroimaging data is an impossibility (such data would lose its scientific value), pseudonymization and encryption are emphasized instead. Furthermore, technical systems of access control are proposed, as well as clear data use agreements that limit what the user may do with the data. Moreover, of course, informed consent from participants in the studies is part of the proposed measures.
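The article’s proposals are organizational and technical rather than code-level, but the core idea of pseudonymization (as opposed to anonymization) can be illustrated with a toy sketch. The identifiers, key and record fields below are hypothetical assumptions, not taken from the article: participant identifiers are replaced by stable pseudonyms using a keyed hash, where the key is held separately by the data controller, so that scans from the same person remain linkable for research while a data user who only sees the pseudonyms cannot reverse them.

```python
import hmac
import hashlib

# Hypothetical sketch of pseudonymization: the secret key is assumed to be
# stored and managed by the data controller, separately from the shared data.
SECRET_KEY = b"kept-by-the-data-controller"

def pseudonymize(participant_id: str) -> str:
    """Return a stable, non-reversible pseudonym for a participant ID."""
    digest = hmac.new(SECRET_KEY, participant_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # shortened for readability

# Illustrative records; IDs and filenames are invented for the example.
records = [
    {"participant_id": "UU-2021-0042", "scan": "T1w.nii.gz"},
    {"participant_id": "UU-2021-0042", "scan": "T2w.nii.gz"},
]

# The same participant always maps to the same pseudonym, so the two
# scans remain linkable in the shared dataset without exposing identity.
shared = [
    {"subject": pseudonymize(r["participant_id"]), "scan": r["scan"]}
    for r in records
]
assert shared[0]["subject"] == shared[1]["subject"]
```

The point of the keyed hash is that, unlike a plain hash of the ID, an outsider cannot recompute pseudonyms by guessing identifiers; yet the data controller, holding the key, can re-link data if a participant withdraws consent, which full anonymization would make impossible.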

Taken together, these safety measures, built-in from the beginning, would make it possible to construct research infrastructures that satisfy stronger data protection rules, even in a world where artificial intelligence can in principle see what human eyes cannot see. The three contemporary trends may not be on a collision course, after all. If data protection is built in from the beginning, by design and by default, researchers can share data without being forced to destroy the scientific value of the images, and people may continue to want to participate in research.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Damian Eke, Ida E.J. Aasebø, Simisola Akintoye, William Knight, Alexandros Karakasidis, Ezequiel Mikulan, Paschal Ochang, George Ogoh, Robert Oostenveld, Andrea Pigorini, Bernd Carsten Stahl, Tonya White, Lyuba Zehl. “Pseudonymisation of neuroimages and data protection: Increasing access to data while retaining scientific utility,” Neuroimage: Reports, 2021, Volume 1, Issue 4

This post in Swedish

Approaching future issues

YouTube as a source of information on paediatric cancer trials

YouTube has become an easily accessible source of information on a variety of issues, from how to fix a puncture to what Plato meant by love, and much more. Of course, YouTube can also serve as a source of health information. For certain health issues, it is important to review whether the information in the uploaded videos is reliable.

A sensitive research ethical issue is what it means for children to participate in clinical cancer trials. Parents of children with cancer can be asked to give informed consent, agreeing to let their child participate in such a study. Since the information from the researchers is difficult to understand, as is the whole situation of the family, it is conceivable that many choose to obtain information from the Internet and social media such as YouTube. If so, what kind of information do they get? Is the information ethically satisfactory?

Tove Godskesen, Sara Frygner Holm, Anna T. Höglund and Stefan Eriksson recently conducted a review of YouTube as a source of information on clinical trials for paediatric cancer. The survey was limited to videos in English posted in 2010 or later, no more than 20 minutes long and with more than 100 views. Most of the videos had been produced by centres, hospitals or foundations that conduct paediatric cancer studies. The videos were graded using an instrument (DISCERN), whose questions were adapted to the purpose of measuring the research ethical reliability of the videos. The authors found that 20 percent of the videos were useful without serious shortcomings; almost 50 percent of the videos were misleading with serious shortcomings; 30 percent were classified as inappropriate sources of information. No video could be classified as excellent.

The quality of the videos was thus generally low from a research ethical point of view. A particularly serious problem had to do with the fact that half of the videos focused on new experimental treatments or innovative early clinical trials with children whose cancer had recurred or where the standard treatment had failed. In such Phase 1 clinical trials, one mainly investigates what doses of the drug can be given without too many or too severe adverse effects. Such studies cannot be expected to have any positive therapeutic effect for these children. Instead of emphasizing this ethical difficulty in Phase 1 trials, the videos used hopeful, affective language suggesting new therapeutic possibilities for the children.

The authors draw the practical conclusion that children with cancer and their parents may need advice on the quality of the often anecdotal healthcare information that can be found in videos online. Because video is at the same time an excellent medium for informing both parents and children, the authors suggest that healthcare providers produce and upload high-quality information on clinical paediatric cancer studies.

Read the article in the journal Information, Communication & Society: YouTube as a source of information on paediatric cancer trials.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Tove Godskesen, Sara Frygner Holm, Anna T. Höglund & Stefan Eriksson (2021) YouTube as a source of information on clinical trials for paediatric cancer, Information, Communication & Society, DOI: 10.1080/1369118X.2021.1974515

This post in Swedish

We care about communication

Securing the future already from the beginning

Imagine if there was a reliable method for predicting and managing future risks, such as anything that could go wrong with new technology. Then we could responsibly steer clear of all future dangers, we could secure the future already now.

Of course, it is just a dream. If we had a “reliable method” for excluding future risks from the beginning, time would soon rush past that method, which would then prove unreliable in a new era. Because we trusted the method, this method of managing future risks would soon become a future risk in itself!

It is therefore impossible to secure the future from the beginning. Does this mean that we must give up all attempts to take responsibility for the future, because every method will fail to foresee something unpredictably new and therefore cause misfortune? Is it perhaps better not to try to take any responsibility at all, so as not to risk causing accidents through our imperfect safety measures? Strangely enough, it is just as impossible to be irresponsible for the future as it is to be responsible. You would need to make a meticulous effort so that you do not happen to cook a healthy breakfast or avoid a car collision. Soon you will wish you had a “safe method” that could foresee all the future dangers that you must avoid if you want to live completely irresponsibly. Your irresponsibility for the future would become an insurmountable responsibility.

Sorry if I push the notions of time and responsibility beyond their breaking point, but I actually think that many of us have a natural inclination to do so, because the future frightens us. A current example is the tendency to think that someone in charge should have foreseen the pandemic and implemented powerful countermeasures from the beginning, so that we never had a pandemic. I do not want to deny that there are cases where we can reason like that – “someone in charge should have…” – but now I want to emphasize the temptation to instinctively reason in such a way as soon as something undesirable occurs. As if the future could be secured already from the beginning and unwanted events would invariably be scandals.

Now we are in a new situation. Due to the pandemic, it has become irresponsible not to prepare (better than before) for risks of pandemics. This is what our responsibility for the future looks like. It changes over time. Our responsibility rests in the present moment, in our situation today. Our responsibility for the future has its home right here. It may sound irresponsible to speak in such a way. Should we sit back and wait for the unwanted to occur, only to then get the responsibility to avoid it in the future? The problem is that this objection once again pushes concepts beyond their breaking point. It plays around with the idea that the future can be secured already now, a thought pattern that in itself can be a risk. A society where each public institution must secure the future within its area of responsibility risks kicking people out of the secured order: “Our administration demands that we ensure that…, therefore we need a certificate and a personal declaration from you, where you…” Many would end up outside the secured order, which hardly secures any order. And because the troublemakers are defined by contrived criteria, which may be implemented in automated administration systems, these systems will not only risk making systematic mistakes in meeting real people. They will also invite cheating with the systems.

So how do we take responsibility for the future in a way that is responsible in practice? Let us first calm down. We have pointed out that it is impossible not to take responsibility! Just breathing means taking responsibility for the future, or cooking breakfast, or steering the car. Taking responsibility is so natural that no one needs to take responsibility for it. But how do we take responsibility for something as dynamic as research and innovation? They are already in the future, it seems, or at least at the forefront. How can we place the responsibility for a brave new world in the present moment, which seems to be in the past already from the beginning? Does not responsibility have to be just as future-oriented, just as much at the forefront, since research and innovation are constantly moving towards the future, where they make the future different from the already past present moment?

Once again, the concepts are pushed beyond their breaking point. Anyone who reads this post carefully can, however, note a hopeful contradiction. I have pointed out that it is impossible to secure the future already now, from the beginning. Simultaneously, I point out that it is in the present moment that our responsibility for the future lies. It is only here that we take responsibility for the future, in practice. How can I be so illogical?

The answer is that the first remark is directed at our intellectual tendency to push the notions of time and responsibility beyond their limits, when we fear the future and wish that we could control it right now. The second remark reminds us of how calmly the concepts of time and responsibility work in practice, when we take responsibility for the future. The first remark thus draws a line for the intellect, which hysterically wants to control the future totally and already from the beginning. The second remark opens up the practice of taking responsibility in each moment.

When we take responsibility for the future, we learn from history as it appears in current memory, as I have already indicated. The experiences from the pandemic make it possible at present to take responsibility for the future in a different way than before. The not always positive experiences of artificial intelligence make it possible at present to take better responsibility for future robotics. The strange thing, then, is that taking responsibility presupposes that things sometimes go wrong and that we are interested in the failures. Otherwise we would have nothing to learn from in order to prepare responsibly for the future. It is really obvious. Responsibility is possible only in a world that is not fully secured from the beginning, a world where the undesirable happens. Life is contradictory. We can never purify security according to the one-sided demands of the intellect, for security presupposes the uncertain and the undesirable.

Against this philosophical background, I would like to recommend an article in the Journal of Responsible Innovation, which discusses responsible research and innovation in a major European research project, the Human Brain Project (HBP): From responsible research and innovation to responsibility by design. The article describes how one has tried to be foresighted and take responsibility for the dynamic research and innovation within the project. The article reflects not least on the question of how to continue to be responsible even when the project ends, within the European research infrastructure that is planned to be the project’s product: EBRAINS.

The authors are well aware that specific regulated approaches easily become a source of problems when they encounter the new and unforeseen. Responsibility for the future cannot be regulated. It cannot be reduced to contrived criteria and regulations. One of the most important conclusions is that responsibility from the beginning needs to be an integral part of research and innovation, rather than an external framework. Responsibility for the future requires flexibility, openness, anticipation, engagement and reflection. But what is all that?

Personally, I want to say that it is partly about accepting the basic ambiguity of life. If we never have the courage to soar in uncertainty, but always demand security and nothing but security, we will definitely undermine security. By being sincerely interested in the uncertain and the undesirable, responsibility can become an integral part of research and innovation.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Bernd Carsten Stahl, Simisola Akintoye, Lise Bitsch, Berit Bringedal, Damian Eke, Michele Farisco, Karin Grasenick, Manuel Guerrero, William Knight, Tonii Leach, Sven Nyholm, George Ogoh, Achim Rosemann, Arleen Salles, Julia Trattnig & Inga Ulnicane. From responsible research and innovation to responsibility by design. Journal of Responsible Innovation. (2021) DOI: 10.1080/23299460.2021.1955613

This post in Swedish

Approaching future issues
