Bioethics without doctrines

September 24, 2019

Ever since this blog started, I have regularly described how bioethical discussions are often driven by our own psychology. On the surface, the debates appear to be purely rational investigations of the truthfulness of certain claims. The claims may be about the risks of genetically modified organisms (GMOs), the private nature of genetic information, the moral status of the human embryo, or the exploitation of egg donors for stem cell research. The topics are, as you can probably tell, sensitive. Behind the rational surface of the debates, one can sense deeply human emotions and reactions: fear, anger, anxiety.

Have you ever been afraid? Then you know how easily fear turns into anger towards whatever you think is causing your fear. What happens to the anger? It tends, in turn, to express itself in the form of clever arguments against what you think is causing your fear. You want to prove how wrong the thing that frightens you is. It must be condemned, it must cease, it must be prohibited. This is how debates often begin.

The debates hide the emotions that drive them. Fear hides behind anger, which hides behind clever arguments. This hiding in several steps creates the shiny rational surface. It sounds as if we were discussing the truth of purely intellectual doctrines about reality. Doctrines that must be defended or criticized rationally.

As academics, we have a responsibility to contribute to debates, to contribute with our expertise and our ability to reason correctly. This is good. Debates need objectivity and clear logic. The only risk is that sometimes, when the debates are rooted in fear, we contribute to hiding the human emotions even more deeply below the rational surface. I think I can see this happening in at least some bioethical debates.

What we need to do in these cases, I think, is to recognize the emotions that drive the debates. We need to see them and handle them gently. Here, too, objectivity and clear logic are required. However, we do not direct our objectivity at pure doctrines. Rather, we direct it more thoughtfully at the emotions and their expressions. Much like we can talk compassionately with a worried child, without trying to disprove the child as if the child’s worries were deduced from false doctrines about reality.

If our objectivity does not acknowledge emotions, if it does not take them seriously, then the emotions will continue to drive endlessly polarizing debates. But if our objectivity is kindly directed to the emotions, to the psychological engine behind the polarization, then we can pause the sensitive mechanism and examine it in detail. At least we can make it react a little slower.

We habitually distinguish between reason and feeling. As soon as a conflict emerges, we hope that reason will pick out the right position for us. We do not consider the possibility that we can direct reason directly at the emotions and their expressions. It is as if we thought that feelings were so irrational that we must suppress them, hide them. As parents, however, this is precisely how we reason wisely: we talk to the child’s feelings. Sometimes we need to handle our own feelings the same way. We need to acknowledge them and take good care of them.

In such a compassionate spirit, we can turn our objectivity and our wisdom towards ourselves. Not just in bioethics, but everywhere where human vulnerability turns into relentless argumentation.

By gently dissolving the doctrines that lock the positions and reinforce the hidden emotions, we can begin the process of undoing the mental deadlocks. Then we may talk more clearly and objectively about genetics and stem cell research.

Pär Segerdahl

This post in Swedish

We want to be just - the Ethics Blog


Learning from the difficulties

September 11, 2019

In popular scientific literature, research can sometimes appear deceptively simple: “In the past, people believed that … But when researchers looked more closely, they found that …” It may seem as if researchers need not do much more than visit archives or laboratories. There, they take a closer look at things and discover amazing results.

There is nothing wrong with this popular scientific prose. It is exciting to read about new research results. However, the prose often hides the difficulties of the research work, the orientation towards questions and problems. As I said, there is nothing wrong with this. Readers of popular science rarely need to know how physicists or sociologists struggle daily to formulate their questions and delve into the problems. Readers are more interested in new findings about our fascinating world.

However, there are academic fields where the questions affect us all more directly, and where the questions are at the center of the research process from beginning to end. Two examples are philosophy and ethics. Here, identifying the difficult questions can be the important thing. Today, for example, genetics is developing rapidly. That means it affects more people; it affects us all. Genetic tests can now be purchased on the internet and more and more patients may be genetically tested in healthcare to individualize their treatment.

Identifying ethical issues around this development, delving into the problems, becoming aware of the difficulties, can be the main element of ethics research. Such difficulty-oriented work can make us better prepared, so that we can act more wisely.

In addition, ethical problems often arise in the encounter between living human beings and new technological opportunities. Identifying these human issues may require that philosophy and ethics use a less specialized language, one that speaks to all of us, whether we are experts or not. Therefore, many of the posts on the Ethics Blog attempt to speak directly to the human being in all of us.

It may seem strange that research that delves into questions can help us act wisely. Do we not rather become paralyzed by all the questions and problems? Do we not need clear ethical guidelines in order to act wisely?

Well, sometimes we need guidelines. But they must not be exaggerated. Think about how much better you function when you do something for the second time (when you become a parent for the second time, for example). Why do we function better the second time? Is it because the second time we are following clear guidelines?

We grow through being challenged by difficulties. Philosophy and ethics delve into the difficulties for this very reason. To help us to grow, mature, become wiser. Individually and together, as a society. I do not know anyone who matured as a human being through reading guidelines.

Pär Segerdahl

This post in Swedish

We like challenging questions - the Ethics Blog

 


How can we set future ethical standards for ICT, Big Data, AI and robotics?

July 11, 2019

Do you use Google Maps to navigate in a new city? Ask Siri, Alexa or OK Google to play your favourite song? To help you find something on Amazon? To read a text message from a friend while you are driving your car? Perhaps your car is fitted with a semi-autonomous adaptive cruise control system… If any software or machine is going to perform in any autonomous way, it needs to collect data. About you, where you are going, what songs you like, your shopping habits, who your friends are and what you talk about. This raises the question: are we willing to give up part of our privacy and personal liberty to enjoy the benefits technology offers?

It is difficult to predict the consequences of developing and using new technology. Policymakers struggle to assess the ethical, legal and human rights impacts of using different kinds of IT systems in research, in industry and in our homes. Good policy should be helpful for everyone who holds a stake. We might want it to protect ethical values and human rights, make research and development possible, allow technology transfer from academia to industry, make sure both large and smaller companies can develop their business, and make sure that there is social acceptance for technological development.

The European Union is serious about developing policy on the basis of sound research, rigorous empirical data and wide stakeholder consultation. In recent years, the Horizon 2020 programme has invested €10 million in three projects looking at the ethics and human rights implications of emerging digital technologies: PANELFIT, SHERPA and SIENNA.

The first project, PANELFIT (short for Participatory Approaches to a New Ethical and Legal Framework for ICT), will develop guidelines on the ethical and legal issues of ICT research and innovation. The second, SHERPA (short for Shaping the ethical dimensions of Smart Information Systems (SIS) – A European Perspective), will develop tools to identify and address the ethical dimensions of smart information systems (SIS), the combination of artificial intelligence (AI) and big data analytics. The third, SIENNA (short for Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact), will develop research ethics protocols, professional ethical codes, and better ethical and legal frameworks for AI and robotics, human enhancement technologies, and human genomics.

All three projects involve experts, publics and stakeholders in co-creating outputs, in different ways. They also support the European Union’s vision of Responsible Research and Innovation (RRI). SIENNA, SHERPA and PANELFIT recently published an editorial in the Orbit Journal, inviting stakeholders and publics to engage with the projects and contribute to the work.

Want to read more? Rowena Rodrigues and Anaïs Resseguier have written about some of the issues raised by the use of artificial intelligence on Ethics Dialogues (The underdog in the AI and ethical debate: human autonomy), and you can find out more about the SIENNA project in a previous post on the Ethics Blog (Ethics, human rights and responsible innovation).

Want to know more about the collaboration between SIENNA, SHERPA and PANELFIT? Read the editorial in Orbit (Setting future ethical standards for ICT, Big Data, AI and robotics: The contribution of three European Projects), or watch a video from our joint webinar on May 20, 2019 on YouTube (SIENNA, SHERPA, PANELFIT: Setting future ethical standards for ICT, Big Data, SIS, AI & Robotics).

Want to know how SIENNA views the ethical impacts of AI and robotics? Download infographic (pdf) and read our state-of-the-art review for AI & robotics (deliverable report).

Josepine Fernow

This post in Swedish

We want solid foundations - the Ethics Blog

 


Promoting public health requires responsibility, compassion and humility

June 10, 2019

Public health focuses on the prevention of disease and the promotion of health on a collective level, that is, the health of the population. This distinguishes public health from medical care and the doctor-patient relationship.

In a clinical setting, the doctor discusses treatments with the patient directly, and risks and benefits are assessed in relation to that individual. In contrast, public health agencies need to base their analysis on a collectivist risk-weighing principle, weighing risks to the population against benefits for the population. One example could be taxation of cigarettes or information concerning ways to reduce obesity.

Although the generalizations involved and the collectivist focus are necessary in public health, and although the overall intentions are good, there is always a risk that individual interests, values and rights are threatened. One example is the way current national and international breastfeeding policy affects non-breastfeeding mothers and possibly gay and adoptive parents. The norm to breastfeed is very pervasive, and studies show that women who cannot breastfeed feel that they may harm the baby or that they are inadequate as parents. It is also possible to think of a couple who, because of their values, want to share parenthood equally and for that reason choose to bottle-feed their baby. The collectivist focus is based on a utilitarian rationale, in which the consequences in terms of health-related benefits for the population are the primary goal of successful interventions. In such efforts, the most important value is efficacy.

In addition to the underlying utilitarian perspective on health, there is also a somewhat contrasting human rights perspective in public health: the idea that all humans have certain rights, and that the rights to life and health are of utmost importance. Finally, health is also discussed in terms of local and global justice, especially since inequalities related to socio-economic and educational differences have been acknowledged in recent years.

One could conclude that all aspects of the ethics of public health are covered by these different approaches. However, I would argue that there is one dimension missing in these analyses, namely, virtue ethics, and more specifically the virtues of responsibility, compassion and humility.

As mentioned above, there is a risk that the interests, values and rights of particular individuals and minorities are neglected by ever so well-intended collectivist policies. The power involved in more or less coercive public health policies calls for a certain measure of responsibility. A balance should be struck between the aim to promote the collective good and the respect for the choices and values of individuals.

In addition, a certain measure of compassion is needed. Compassion could be seen as a disposition to think and act in an emotionally engaged way in order to understand and acknowledge the effects of policy on individuals. This is clear when reflecting on the effects of breastfeeding policy on individuals who cannot breastfeed their babies.

Finally, since public health policy is not only a matter of evidence and science, but also of values, a certain degree of humility should be exercised, acknowledging also the provisional character of scientific evidence. This is the case with measles vaccination. The safety and efficacy of the vaccine can be, and have been, established by science. However, the question whether to introduce mandatory vaccination is a matter of values. It should be possible to acknowledge and respect the values and perspectives of individuals without compromising what scientific evidence suggests in terms of safety and efficacy.

The virtues of responsibility, compassion and humility could be understood in terms of values of public health professionals, and they should be encouraged by the agencies for which such professionals work.

Jessica Nihlén Fahlquist

This post in Swedish

We like ethics : www.ethicsblog.crb.uu.se

 


Genetic risk entails genetic responsibility

March 5, 2019

Intellectual optimists have seen genetic risk information as a human victory over nature. The information gives us power over our future health. What previously would have been our fate, genetics now transforms into matters of personal choice.

Reality, however, is not as rosy as in this dream of intellectual power over life. Where there is risk there is responsibility, Silke Schicktanz writes in an article on genetic risk and responsibility. This is probably how people experience genetic risk information when they face it. Genetic risk gives us new forms of responsibility, rather than liberating us from nature.

Silke Schicktanz describes how responsibility emerges in situations where genetic risk is investigated, communicated and managed. The analysis exceeds what I can reproduce in a short blog post. However, I can give the reader a sense of how genetic risk information entails a broad spectrum of responsibilities. Sometimes in the individual who receives the information. Sometimes in the professional who provides the information. Sometimes in the family affected by the information. The examples are versions of the cases discussed in the article:

Suppose you have become strangely forgetful. You do a genetic test to determine if you have a gene associated with Alzheimer’s disease. You have the gene! The test result immediately makes you responsible for yourself. What can you do to delay or alleviate the disease? What practical measures can be taken at home to help you live with the disease? You can also feel responsibility for your family. Have you passed the gene on to your children and grandchildren? Should you urge them to test themselves? What can they do to protect themselves? The professional who administered the test also becomes responsible. Should she tell you that the validity of the test is low? Maybe you should not have been burdened with such a worrying test result, when the validity is so low?

Suppose you have colorectal cancer. The surgeon offers you the opportunity to participate in a research study in which a genetic test of the tumor cells will allow individualized treatment. Here, the surgeon becomes responsible for explaining research in personalized medicine, which is not easy. There is also the responsibility of not presenting your participation in the study as an optimization of your treatment. You yourself may feel a responsibility to participate in research, as patients have done in the past. They contributed to the care you receive today. Now you can contribute to the use of genetic information in future cancer care. Moreover, the surgeon may have a responsibility to counteract a possible misunderstanding of the genetic test. You can easily believe that the test says something about disease genes that you may have passed on, and that the information should be relevant to your children. However, the test concerns mutations in the cancer cells. The test provides information only about the tumor.

Suppose you have an unusual neurological disorder. A geneticist informs you that you have a gene sequence that may be the cause of the disease. Here we can easily imagine that you feel responsibility for your family and children. Your 14-year-old son has started to show symptoms, but your 16-year-old daughter is healthy. Should she do a genetic test? You discuss the matter with your ex-partner. You explain how you found the genetic information helpful: you worry less, you have started going on regular check-ups and you have taken preventive measures. Together, you decide to tell your daughter about your test results, so that she can decide for herself if she wants to test herself.

These three examples are sufficient to illustrate how genetic risk entails genetic responsibility. How wonderful it would have been if the information simply allowed us to triumph over nature, without this burdensome genetic responsibility! A pessimist could object that the responsibility becomes overpowering instead of empowering. We must surrender to the course of nature; we cannot control everything but must accept our fate.

Neither optimists nor pessimists tend to be realistic. The article by Silke Schicktanz can help us look more realistically at the responsibilities entailed by genetic risk information.

Pär Segerdahl

Schicktanz, S. 2018. Genetic risk and responsibility: reflections on a complex relationship. Journal of Risk Research 21(2): 236-258.

This post in Swedish

We like real-life ethics : www.ethicsblog.crb.uu.se


Neuroethics goes global

February 12, 2019

The complicated meaning, powerful assumptions, and boundless hopes about what can be revealed through neuroscience have made this discipline a national funding priority around the globe. A growing cohort of large-scale brain research initiatives aims to unravel the mysteries of the basis of feelings, thinking, and ultimately the mind. Questions formerly in the domain of the philosophical world have become part and parcel of neuroscience.

Just as science has so clearly become a global enterprise, ethics must keep pace. Cultural misunderstandings have nontrivial consequences for the scientific enterprise. Gaps in understanding negatively impact opportunities for collaboration and sharing, ultimately slowing scientific advancement. Too narrow a view of science can limit our ability to reap the benefits of discoveries and, perhaps most damning for science, can result in a failure to anticipate and recognize the full consequences and risks of research.

To date, neuroethics discussions have been dominated by Western influences. However, the rapid neuroscientific development in East Asia in particular, and the not-so-gradual relocation of a number of cutting-edge research projects from the West to East Asia, have made it clear that exploration and understanding of the ethics and cultural values informing research will be critical in engaging science as a collaborative global enterprise.

The Neuroethics Workgroup of the International Brain Initiative comprises members of each of the existing and emerging large-scale brain research initiatives. Leveraging the fellowship of the IBI and using an intentional, culturally aware approach to guide its work, the Neuroethics Workgroup completes rapid deliverables in the near term (within one year) and the short term (within two years).

At the inaugural 2017 summit, leading scientists, ethicists, and humanists co-created a universal list of neuroethics questions, Neuroethics Questions for Neuroscientists (NeQN), that should be addressed by scientists in each brain project. These NeQN were published in Neuron in 2018.

The neuroethics questions themselves were not necessarily unfamiliar; however, these NeQN were designed to be adapted and informed by the cultural values and frameworks of each country.

The 2018 meeting served as a workshop, where each of the brain projects discussed why and how they will integrate neuroethics into their projects, with particular recognition of the five questions from the 2018 Neuroethics Questions for Neuroscientists (NeQN) featured in Neuron. The product is the first neuroethics special issue in a high-impact neuroscience journal.

Each perspective offers topics and context for their engagement with and practice of neuroethics. The issue features the seven existing and emerging large-scale brain research projects organized in alphabetical order.

The Australian Brain Alliance describes how neuroethics has been integrated into their research ethos, as featured in their public outreach and advocacy efforts as well as their explorations in public domains such as neurolaw and industry. A key component for the Australian project is diversity and inclusion, and there is a particular interest in engaging brain health with vulnerable Indigenous populations in Australia.

The Canadian Brain Research Strategy paper illustrates the rich historical efforts in pioneering neuroethics and future plans of a national collaboration to carefully consider public discourse and patient engagement as they pursue deeper knowledge of how the brain learns, remembers, and adapts. A fundamental recognition of the neuroethics backbone of the Canadian project is that “The powerful ability of the brain to change or rewire itself in response to experience is the foundation of human identity.”

The China Brain Project discusses potential models for important public outreach campaigns and the balance of considering traditional Chinese culture and philosophy, particularly in the areas of brain death, conceptualizations of personhood and individual rights, and stigma around mental illness. The authors describe commitments for integrating neuroethics as the China Brain Project is being designed.

The EU Human Brain Project outlines its bold leadership and addresses the conceptual and philosophical issues of neuroethics and the implementation of philosophical insights as an iterative process for neuroscience research. A project with an extremely sophisticated neuroethics infrastructure, this paper provides examples of managing issues related to the moral status of engineered entities, how interventions could impact autonomy and agency, and dual use.

The Japan Brain/MINDS paper describes plans to reinvigorate historical efforts in neuroethics leadership as it expands the scope of its research and launches Japan Brain/MINDS Beyond. In particular, the project will integrate neuroethics to address issues related to privacy and data collection as well as in considering stigma and biological models of psychiatric disease.

The Korea Brain Initiative paper nicely demonstrates how advocacy for neuroscience and neuroethics at the government and policy levels go hand in hand. As Korea aims to advance its neuroscience community, the Korean government has seen neuroethics as integral to neuroscientists’ development. The Korea Brain Initiative is exploring ethical issues related to “intelligent” brain technologies, brain banking, cognitive enhancement, and neural privacy in the milieu of traditional and contemporary cultural traditions in Korea.

The US BRAIN Initiative outlines its efforts in building an infrastructure for neuroethics in research and policy and for funding research as it plans its roadmap for the next phase of BRAIN to 2025. Examples of ethical issues that arise from the project’s goals of understanding neural circuitry include the moral relevance and status of ex vivo brain tissue and organoids, as well as unique ethical concerns around informed consent in brain recording and stimulation in humans.

Each project illustrates that neuroethics is important regardless of the scope and methodologies inherent in its research goals and demonstrates the utility of the NeQNs for today’s and future scientists within and beyond the large-scale neuroscience research projects.

Karen Rommelfanger

PhD, Director of the Neuroethics Program, Emory Center for Ethics; Co-chair, International Brain Initiative Neuroethics Workgroup


Larger and smaller sized ethics

January 29, 2019

Ethics can be about big, almost religious questions. Should scientists be allowed to harvest stem cells from human embryos and then destroy the embryos? Ethics can also be about narrower, almost professional issues. How should the development of embryonic stem cell lines be regulated? The latter question is similar to the question: How should the aircraft industry be regulated?

Larger and smaller ethics can have difficulties understanding each other, even though they often need to talk. For example, larger ethics can be suspicious of medical research and the pharmaceutical industry, and overlook how meticulously responsible they most often are. And how rigorously supervised they are, like the aircraft industry. Neither the drug industry nor the aircraft industry can be carefree about safety issues!

Smaller ethics can also be suspicious of larger ethics. Medical research and industry, with their professional attitudes, can experience larger ethical questions as being as vague and distant as nebulae. This fact, that larger and smaller ethics have difficulties even hearing each other, creates the need for a simpler, more sincerely questioning attitude, which never settles within any limits, whether they are narrower or wider. Remember that even larger perspectives often degenerate into regulations of how people should think. They shrink.

Medical research and industry need regulation; it is as important as the safety work in the aircraft industry. However, we also need to think big about human life and life in general. In order to keep ethics alive, a beginner’s attitude is needed, a constantly renewed sincerity. Does it sound difficult? All we need to do is ask the questions we really wonder about, instead of hiding them behind a confident facade.

Nothing could be easier. The question is if we dare. The sincerest questions open up the biggest perspectives.

Pär Segerdahl

This post in Swedish

We like challenging questions - the Ethics Blog

