A blog from the Centre for Research Ethics & Bioethics (CRB)


Ethics Council at Uppsala Region: Healthcare workers shouldn’t have to report undocumented patients

Last week, the Ethics Council in Region Uppsala sent a letter to the Ministry of Justice in which the Council dissociates itself from a proposal in the Tidö Agreement, a political agreement between four parties in the Swedish Parliament. The part of the agreement that the Council objects to is a proposed obligation for healthcare professionals to report patients who are undocumented migrants to the authorities.

The Ethics Council writes that such a duty would conflict with both international and national conventions and laws. It is also contrary to the ethics of all healthcare professions and would pose a serious threat to patient safety. Healthcare workers have not signed up to enforce decisions on expulsion or refusal of entry. Their task, and their expertise, is to assess patients’ needs and to provide the best available care with those needs as the starting point.

In a reflection on the Swedish healthcare legislation, the Ethics Council also writes that an obligation to report undocumented migrants is contrary to the principle of human dignity. The principle states that all human beings have equal value and the same right to care. This includes everyone, regardless of whether they have a right to stay in Sweden or not.

The Chair of the Ethics Council, Niklas Juth, today publishes a post in our Swedish language version of this blog, which also contains the entire letter sent to the Ministry of Justice. If you read Swedish, you can find his blog post here: Etikrådet i Region Uppsala tar avstånd från förslaget om anmälningsplikt för vårdpersonal.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

We recommend readings

Human enhancement: Time for ethical guidance!

Perhaps you also dream about being more than you are: faster, better, bolder, stronger, smarter, and maybe more attractive? Until recently, technology to improve and enhance our abilities was mostly science fiction, but today we can augment our bodies and minds in ways that challenge our notions of normal and abnormal, blurring the lines between treatments and enhancements. Very few scientists and companies that develop medicines, prosthetics, and implants would say that they are in the human enhancement business. Yet the technologies they develop still manage to move from one domain to another. Our bodies allow for physical and cosmetic alterations. And there are attempts to make us live longer. Our minds can also be enhanced in several ways: our feelings and thoughts, perhaps also our morals, could be improved, or corrupted.

We recognise this tension from familiar debates about more common uses of enhancements: doping in sports, or students using ADHD medicines to study for exams. But there are other examples of technologies that can be used to enhance abilities. In the military context, altering our morals or using cybernetic implants could give us ‘super soldiers’. Using neuroprostheses to replace or improve memory damaged by neurological disease would be considered a treatment. But what happens when the same technology is repurposed to improve memory or another cognitive function in the healthy?

There have been calls for regulation and ethical guidance, but because very few of the researchers and engineers who develop these technologies would call themselves enhancers, the efforts have not been very successful. Perhaps now is a good time to develop guidelines? But what is the best approach? Should we aim for a set of self-contained general ethical guidelines, or is the field so disparate that it requires field- or domain-specific guidance?

The SIENNA project (Stakeholder-Informed Ethics for New technologies with high socio-ecoNomic and human rights impAct) has been tasked with developing this kind of ethical guidance for Human Enhancement, Human Genetics, and Artificial Intelligence & Robotics, three very different technological domains. Not surprisingly, given how hard the field is to delineate, human enhancement has proved by far the most challenging. For almost three years, the SIENNA project mapped the field, analysed the ethical implications and legal requirements, surveyed how research ethics committees address the ethical issues, and proposed ways to improve existing regulation. We have received input from stakeholders, experts, and publics. Industry representatives, academics, policymakers and ethicists have participated in workshops and reviewed documents. Focus groups in five countries and surveys with 11,000 people in 11 countries in Europe, Africa, Asia, and the Americas have also provided insight into the public’s attitudes to using different technologies to enhance abilities or performance. This work resulted in an ethical framework, outlining several options for how to translate it into practical ethical guidance.

The framework for human enhancement is built on three case studies that can bring some clarity to what is at stake in a very diverse field: antidepressants, dementia treatment, and genetics. These case studies have shed some light on the kinds of issues that are likely to appear, and on the difficulties involved in the complex task of developing ethical guidelines for human enhancement technologies.

Many of these technologies, their applications, and their enhancement potentials are in their infancy. So perhaps this is the right time to promote ways for research ethics committees to inform researchers about the ethical challenges associated with human enhancement, and to encourage them to reflect on the potential enhancement impacts of their work in ethics self-assessments.

And perhaps it is time for ethical guidance for human enhancement after all? At least there is now an opportunity for you and others to give input in a public consultation in mid-January 2021! If you want to give input to SIENNA’s proposals for human enhancement, human genomics, artificial intelligence, and robotics, visit www.sienna-project.eu to sign up for news.

The public consultation will launch on January 11, and the deadline to submit a response is January 25, 2021.

Josepine Fernow

Written by…

Josepine Fernow, Coordinator at the Centre for Research Ethics & Bioethics (CRB), and communications leader for the SIENNA project.


This post in Swedish

Ethics as renewed clarity about new situations

An article in the journal Big Data & Society criticizes the form of ethics that has come to dominate research and innovation in artificial intelligence (AI). The authors question the same “framework interpretation” of ethics that you could read about on the Ethics Blog last week, but with one disquieting difference. Rather than functioning as a fence that can set the necessary boundaries for development, the framework risks being used for ethics washing by AI companies that want to avoid legal regulation. By referring to ethical self-regulation – beautiful declarations of principles, values and guidelines – one hopes to avoid legal regulation, which could set important limits for AI.

The problem with AI ethics as “soft ethics legislation” is not just that it can be used to avoid necessary legal regulation of the area. The problem is above all, according to the SIENNA researchers who wrote the article, that a “law conception of ethics” does not help us to think clearly about new situations. What we need, they argue, is an ethics that constantly renews our ability to see the new. This is because AI is constantly confronting us with new situations: new uses of robots, new opportunities for governments and companies to monitor people, new forms of dependence on technology, new risks of discrimination, and many other challenges that we may not easily anticipate.

The authors emphasize that such eye-opening AI ethics requires close collaboration with the social sciences. That, of course, is true. Personally, I want to emphasize that an ethics that renews our ability to see the new must also be philosophical in the deepest sense of the word. To see the new and unexpected, you cannot rest comfortably in your professional competence, with its established methods, theories and concepts. You have to question your own disciplinary framework. You have to think for yourself.

Read the article, which has already attracted well-deserved attention.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Anaïs Rességuier, Rowena Rodrigues. 2020. AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data & Society

This post in Swedish

We like critical thinking

Ethical fitness apps for high performance morality

In an article that is unusually rhetorical for a scientific journal, the image is drawn of a humanity that frees itself from moral weakness by downloading ethical fitness apps.

The authors claim that the maxim “Know thyself!” from the temple of Apollo at Delphi is answered today more thoroughly than ever. Never has humanity known more about itself. Ethically, we are almost fully educated. We also know more than ever about the moral weaknesses that prevent us from acting in accordance with the ethical principles we finally know so well. Research is discovering more and more mechanisms in the brain and in our psychology that lie behind humanity’s moral shortcomings.

Given this enormous and growing self-knowledge, why do we not develop artificial intelligence that supports a morally limping humanity? Why spend so many resources on developing even more intelligent artificial intelligence, which takes our jobs and might one day threaten humanity in the form of an uncontrollable superintelligence? Why do we behave so unwisely when we could develop artificial intelligence to help us humans become superethical?

How can AI make morally weak humans super-ethical? The authors suggest a comparison with the fitness apps that help people to exercise more efficiently and regularly than they otherwise would. The authors’ suggestion is that our ethical knowledge of moral theories, combined with our growing scientific knowledge of moral weaknesses, can support the technological development of moral crutches: wise objects that support people precisely where we know that we are morally limping.

My personal assessment of this utopian proposal is that it might easily be realized in less utopian form. AI is already widely used as a support in decision-making. One could imagine mobile apps that support consumers in making ethical food choices in the grocery shop. Or computer games where consumers are trained to weigh different ethical considerations against one another, such as animal welfare, climate effects, ecological effects and much more. Nice-looking presentations of the issues and encouraging music could make it fun to be moral.

The philosophical question I ask is whether such artificial decision support in shops and other situations really can be said to make humanity wiser and more ethical. Imagine a consumer who chooses among the vegetables, eagerly looking for decision support on their smartphone. What do you see? A human who, thanks to the mobile app, has become wiser than Socrates, who lived long before we knew as much about ourselves as we do today?

Ethical fitness apps are conceivable. However, the risk is that they spread a form of self-knowledge that flies above ourselves: self-knowledge suspiciously similar to the moral vice of self-satisfied presumptuousness.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Pim Haselager & Giulio Mecacci (2020) Superethics Instead of Superintelligence: Know Thyself, and Apply Science Accordingly, AJOB Neuroscience, 11:2, 113-119, DOI: 10.1080/21507740.2020.1740353

The temptation of rhetoric

This post in Swedish

Autonomous together

Autonomy is such a cherished concept in ethics that I hardly dare to write about it. The fact that the concept cherishes the individual does not make my task any easier. The slightest error in my use of the term, and I risk being identified as an enemy perhaps not of the people but of the individual!

In ethics, autonomy means personal autonomy: individuals’ ability to govern their own lives. This ability is constantly at risk of being undermined. It is undermined if others unduly influence your decisions, if they control you. It is also undermined if you are not sufficiently well informed and rational. For example, if your decisions are based on false or contradictory information, or if your decisions result from compulsions or weakness of the will. It is your faculty of reason that should govern your life!

In an article in BMC Medical Ethics, Amal Matar, who completed her PhD at CRB, discusses decision-making situations in healthcare where this individual-centered concept of autonomy seems less useful. It concerns decisions made not by individuals alone, but by people together: by couples planning to become parents.

A couple planning a pregnancy together is expected to make joint decisions. Maybe about genetic tests and measures to be taken if the child risks developing a genetic disease. Here, as always, the healthcare staff is responsible for protecting the patients’ autonomy. However, how is this feasible if the decision is not made by individuals but jointly by a couple?

Personal autonomy is an idealized concept. No man is an island, it is said. This is especially evident when a couple is planning a life together. If one partner begins to emphasize his or her personal autonomy, the relationship is probably about to disintegrate. One attempt to correct the lack of realism in the idealized concept has been to develop ideas about relational autonomy. These ideas emphasize how individuals who govern their lives are essentially related to others. However, as you can probably hear, relational autonomy remains tied to the individual. Amal Matar therefore finds it urgent to take a further step towards realism concerning joint decisions made by couples.

Can we talk about autonomy not only at the level of the individual, but also at the level of the couple? Can a couple planning a pregnancy together govern their life by making decisions that are autonomous not only for each one of them individually, but also for them together as a couple? This is Amal Matar’s question.

Inspired by how linguistic meaning is conceptualized in linguistic theory as existing not only at the level of the word, but also at the level of the sentence (where words are joined together), Amal Matar proposes a new concept of couple autonomy. She suggests that couples can make joint decisions that are autonomous at both the individual and the couple’s level.

She proposes a three-step definition of couple autonomy. First, both partners must be individually autonomous. Then, the decision must be reached via a communicative process that meets a number of criteria (no partner dominates, sufficient time is given, the decision is unanimous). Finally, the definition allows one partner to autonomously transfer aspects of the decision to the other partner.

The purpose of the definition is not a philosophical revolution in ethics. The purpose is practical. Amal Matar wants to help couples and healthcare professionals to speak realistically about autonomy when the decision is a couple’s joint decision. Pretending that separate individuals make decisions in parallel makes it difficult to realistically assess and support the decision-making process, which is about interaction.

Amal Matar concludes the article, written together with Anna T. Höglund, Pär Segerdahl and Ulrik Kihlbom, by describing two cases. The cases show concretely how her definition can help healthcare professionals to assess and support autonomous decision-making at the level of the couple. In one case, the couple’s autonomy is undermined; in the other, probably not.

Read the article as an example of how we sometimes need to modify cherished concepts to enable a realistic use of them. 

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Matar, A., Höglund, A.T., Segerdahl, P. and Kihlbom, U. Autonomous decisions by couples in reproductive care. BMC Med Ethics 21, 30 (2020). https://doi.org/10.1186/s12910-020-00470-w

We like challenging questions

This post in Swedish

Inspiration for responsible research and innovation

Our attitude to science is changing. Can we still talk solemnly about it as a unified endeavor, or even about the sciences? It seems more apt to talk about research activities that produce useful and applicable knowledge.

Science has been dethroned, it seems. In the past, we revered it as a free and independent search for truth. We esteemed it as our tribunal of truth, as the last arbiter of truth. Today, we demand that it bring benefits and adapt to society. The change is full of tension, because we still want to use scientific expertise as a higher intellectual authority. Should we bow to the experts, or correct them if they do not deliver the “right knowledge” or the “desirable facts”?

Responsible Research and Innovation (RRI) is an attempt to manage this risky change by adapting science to new social requirements. As you can hear from the name, RRI is partly an expression of the same basic change in attitude. One could perhaps view RRI as the responsible dethroning of science.

Some mourn the dethroning, others rejoice. Here I just want to link RRI to the changed attitude to science. RRI handles a change that is basically affirmed. The ambiguous attitude to scientific expertise, mentioned above, shows how important it is that we take responsibility for people’s trust in what is now called research and innovation. For why should we listen to representatives of a sector with such an unholy designation?

RRI has been introduced into European research through the Horizon 2020 programme. Several projects are specifically about implementing and studying RRI. Important aspects of RRI are gender equality, open access publishing, science education, research communication, public engagement and ethics. It is about adapting research and innovation to a society with new hopes for, and demands on, what we proudly called science.

A new book describes experiences of implementing RRI in a number of bioscience organizations around the world. The book was written within the EU project STARBIOS2. In collaboration with partners in Europe, Africa and the Americas, this project planned and implemented several RRI initiatives and reflected on the work process. The purpose of STARBIOS2 has been to change organizations durably and structurally. The book aims to help readers formulate their own action plans and initiate structural changes in their organizations.

The cover describes the book as guidelines. However, you will not find explicitly formulated guidelines. What you will find, and what might be more helpful, is self-reflection on concrete examples of how to work with RRI action plans. You will find suggestions on how to emphasize responsibility in research and development. Thus, you can read about efforts to support gender equality, improve exchange with the public and with society, support open access publication, and improve ethics. Read and be inspired!

Finally, I would like to mention that the Ethics Blog, as well as our ethics activities here at CRB, could be regarded as examples of RRI. I plan to return later with a post on research communication.

The STARBIOS2 project is organising a virtual final event on 29 May! Have a look at the preliminary programme!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Declich, Andrea. 2019. RRI implementation in bioscience organisations: Guidelines from the STARBIOS2 project.

We recommend readings

This post in Swedish

Proceed carefully with vaccine against covid-19

Pharmaceutical companies want to quickly manufacture a vaccine against covid-19, with human testing and market launch as soon as possible. In a debate article, Jessica Nihlén Fahlquist at CRB warns of the risk of losing the larger risk perspective: “Tests on people and a potential premature mass vaccination entail risks. It is easy to forget about similar situations in the past,” she writes.

It may take time for side effects to appear. Unfortunately, this means it also takes time to develop new, safe vaccines. We need a vaccine, but even with new vaccines, caution is needed.

The article is in Swedish. If you want to Google translate: Proceed carefully with vaccine against covid-19

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

We have a clinical perspective

What is a moral machine?

I recently read an article about so-called moral robots, which I found clarifying in many ways. The philosopher John-Stewart Gordon points out pitfalls that non-ethicists – robotics researchers and AI programmers – may fall into when they try to construct moral machines, simply because they lack ethical expertise.

The first pitfall is rookie mistakes. One might naively identify ethics with certain famous bioethical principles, as if ethics could be nothing but so-called “principlism.” Or one might believe that computer systems, through automated analysis of individual cases, can “learn” ethical principles and “become moral,” as if morality could be discovered experientially or empirically.

The second challenge has to do with the fact that the ethics experts themselves disagree about the “right” moral theory. There are several competing ethical theories (utilitarianism, deontology, virtue ethics and more). What moral template should programmers use when getting computers to solve moral problems and dilemmas that arise in different activities? (Consider self-driving cars in difficult traffic situations.)

The first pitfall can be addressed with more knowledge of ethics. How do we handle the second challenge? Should we allow programmers to choose moral theory as it suits them? Should we allow both utilitarian and deontological robot cars on our streets?

John-Stewart Gordon’s suggestion is that so-called machine ethics should focus on the similarities between different moral theories regarding what one should not do. Robots should be provided with a binding list of things that must be avoided as immoral. With this restriction, the robots then have leeway to use and balance the plurality of moral theories to solve moral problems in a variety of ways.

In conclusion, researchers and engineers in robotics and AI should consult the ethics experts so that they can avoid the rookie mistakes and understand the methodological problems that arise when not even the experts in the field can agree about the right moral theory.

All this seems both wise and clarifying in many ways. At the same time, I feel genuinely confused about the very idea of “moral machines” (although the article is not intended to discuss the idea, but focuses on ethical challenges for engineers). What does the idea mean? Not that I doubt that we can design artificial intelligence according to ethical requirements. We may not want robot cars to avoid collisions in city traffic by turning onto sidewalks where many people walk. In that sense, there may be ethical software, much like there are ethical funds. We could talk about moral and immoral robot cars as straightforwardly as we talk about ethical and unethical funds.

Still, as I mentioned, I feel uncertain. Why? I started by writing about “so-called” moral robots. I did so because I am not comfortable talking about moral machines, although I am open to suggestions about what it could mean. I think that what confuses me is that moral machines are largely mentioned without qualifying expressions, as if everyone ought to know what it should mean. Ethical experts disagree on the “right” moral theory. However, they seem to agree that moral theory determines what a moral decision is, much like grammar determines what a grammatical sentence is. With that faith in moral theory, one need not contemplate what a moral machine might be. It is simply a machine that makes decisions according to accepted moral theory. However, do machines make decisions in the same sense as humans do?

Maybe it is about emphasis. We talk about ethical funds without feeling dizzy because a stock fund is said to be ethical (“Can they be humorous too?”). There is no mythological emphasis in the talk of ethical funds. In the same way, we can talk about ethical robot cars without feeling dizzy as if we faced something supernatural. However, in the philosophical discussion of machine ethics, moral machines are sometimes mentioned in a mythological way, it seems to me. As if a centaur, a machine-human, would soon see the light of day. At the same time, we are not supposed to feel dizzy concerning these brave new centaurs, since the experts can spell out exactly what they are talking about. Having all the accepted templates in their hands, they do not need any qualifying expressions!

I suspect that ethical expertise, too, can be a philosophical pitfall when we intellectually approach so-called moral machines. The expert attitude can silence the confusing questions that we all need time to contemplate when honest doubts rebel against the claim to know.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Gordon, J. Building Moral Robots: Ethical Pitfalls and Challenges. Sci Eng Ethics 26, 141–157 (2020).

We recommend readings

This post in Swedish

What shall we eat? An ethical framework for food choices (By Anna T. Höglund)

To reflect ethically on what we eat has been part of Western culture for centuries. In pre-modern times, the focus was mainly on the consumption of food, although it varied whether the emphasis was on the amount of food one should eat (as in ancient Greece) or on what kind of food one was allowed to eat (as in the Old Testament).

Modern food ethics has instead focused on the production of food, emphasizing aspects of animal ethics and environmental ethics. In a new article, I take a broader perspective and discuss both the production and consumption of food and further incorporate the meal as an important part of my food ethics analysis.

I identify four affected parties in relation to the production and consumption of food, namely, animals, nature, producers and consumers. What ethical values can be at stake for these parties?

For animals, an important value is welfare: not being exposed to pain or stress, but being provided with opportunities for natural behavior. For nature, important values are a low negative impact on the environment and a sustainable climate. For producers, the ethical values at stake concern fair salaries and safe working conditions. For consumers, finally, important values are access to healthy food and the right to autonomous food choices. Apart from that, food can also be seen as an important value in the pursuit of a good life.

Evidently, several ethical values are at stake when it comes to the production and consumption of food. Furthermore, these values often conflict when food choices are to be made. In such situations, a thorough weighing of values must be performed in order to find out which value should be given priority over another.

A problem with today’s food debate is that we tend to concentrate on one value at a time, without putting it in the perspective of other aspects. The question of how our food choices affect the climate has gained a lot of interest, at the expense of almost all other aspects of food ethics.

Many have learned that beef production can affect the climate negatively, since grazing cattle give rise to high levels of methane. They therefore choose to avoid that kind of meat. On the other hand, grazing animals can contribute to biodiversity as they keep the landscape open, which is good for the environment. Raising chickens produces low levels of methane, but here the challenges concern animal welfare, natural behavior and the use of chemicals in the production of bird feed.

Replacing meat with vegetables can be good for your health, but imported fruits and vegetables may be produced using toxins if they are not organically farmed. Long transports can also affect the climate negatively.

For these reasons, it can be ethically problematic to choose food based on only one perspective. Ethics is not that simple. We need to develop our ability to identify what values are at stake when it comes to food, and find good reasons for why we choose one sort of food instead of another. In the article, I develop a more comprehensive food ethical outlook by combining four well-known ethical concepts, namely, duties, consequences, virtues and care.

Duties and consequences are often part of ethical arguments. However, by also including virtues and care in my reasoning, the meal and the sense of community it gives rise to appear as important ethical values. Unfortunately, the latter values are at risk today, when more and more people have their own individualized food preferences. During a meal, relations are developed, which the ethics of care emphasizes, but the meal is also an arena for developing virtues, such as solidarity, communication and respect.

It is hard to be an ethically aware consumer today, partly because there are so many aspects to take into account and partly because it is difficult to get reliable and trustworthy information upon which we can base our decisions. However, that does not mean that it is pointless to reflect on what is good and right when it comes to food ethical dilemmas.

If we think through our food choices thoroughly and avoid wasting food, we can do a lot to reach well-grounded food choices. Apart from that, we also need brave political decisions that can reduce factory farming, toxins, transports and emissions, and support small-scale and organic food production. Through such efforts, we might all feel a little more secure in the grocery shop, when we reflect on the question: What shall we eat?

Anna T. Höglund

Written by…

Anna T. Höglund, who is Associate Professor of Ethics at the Centre for Research Ethics & Bioethics and recently wrote a book on food ethics.

Höglund, Anna T. (2020) What shall we eat? An ethical framework for well-grounded food choices. Journal of Agricultural and Environmental Ethics. DOI: 10.1007/s10806-020-09821-4

We like real-life ethics

This post in Swedish

Communicating thought provoking research in our common language

After having been the editor of the Ethics Blog for eight years, I would like to describe the research communication that usually occurs on this blog.

The Ethics Blog wants to avoid the popular scientific style that sometimes occurs in the media, which reports research results in the form, “We have traditionally believed that…, but a recent scientific study shows that…” This is partly because the Ethics Blog is run by a research center in ethics, CRB. Although ethics may involve empirical studies (for example, interviews and surveys), it is not least a matter of thinking. If you, as an ethicist, want to develop new recommendations on informed consent, you must think clearly and thoroughly. However, no matter how rigorously you think, you can never say, “We have traditionally believed that it is ethically important to inform patients about…, but recent philosophical thoughts show that we should avoid doing that.”

Thinking does not provide the authority that empirical research gives. As an ethicist or a philosopher, I cannot report my conclusions as if they were research results. Nor can I invoke “recent thoughts” as evidence. Thoughts give no evidence. Ethicists therefore present their entire thinking on different issues to the critical gaze of readers. They present their conclusions as open suggestions to the reader: “Here is how I honestly think about this issue, can you see it that way too?”

The Ethics Blog therefore avoids merely disseminating research results. Of course, it informs about new findings, but it emphasizes their thought provoking aspects. It chooses to reflect on what is worth thinking about in the research. This allows research communication to work more on equal terms with the reader, since the author and the reader meet in thinking about aspects that make both wonder. Moreover, since each post tries to stand on its own, without invoking intellectual authority (“the ethicists’ most recent thoughts show that…”), the reader can easily question the blogger’s attempts to think independently.

In short: By communicating research in a philosophical spirit, science can meet people on more equal terms than when they are informed about “recent scientific findings.” By focusing on the thought provoking aspects of the research, research communication can avoid a patronizing attitude to the reader. At least that is the ambition of the Ethics Blog.

Another aspect of the research communication at CRB, also beyond the Ethics Blog, is that we want to use our ordinary language as far as possible. Achieving a simple style of writing, however, is not easy! Why are we making this effort, which is almost doomed to fail when it comes to communicating academic research? Why do Anna Holm, Josepine Fernow and I try to communicate research without using strange words?

Of course, we have reflected on our use of language. Not only do we want to reach many different groups: the public, patients and their relatives, healthcare staff, policy makers, researchers, geneticists and more. We also want these groups to understand each other a little better. Our common language accommodates more human agreement than we usually believe.

Moreover, ethics research often highlights the difficulties that different groups have in understanding each other. It can be about patients’ difficulties in understanding genetic risk information, or about geneticists’ difficulties in understanding how patients think about genetic risk. It may be about cancer patients’ difficulties in understanding what it means to participate in clinical trials, or about cancer researchers’ difficulties in understanding how patients think.

If ethics identifies our human difficulties in understanding each other as important ethical problems, then research communication will have a particular responsibility for clarifying things. Otherwise, research communication risks creating more communication difficulties, in addition to those identified by ethics! Ethics itself would become a communication problem. We therefore want to write as clearly and simply as we can, to reach the groups that according to the ethicists often fail to reach each other.

We hope that our communication on thought provoking aspects of ethics research stimulates readers to think for themselves about ethical issues. Everyone can wonder. Non-understanding is actually a source of wisdom, if we dare to admit it.

Pär Segerdahl

This post in Swedish

We care about communication
