Proceed carefully with vaccine against covid-19

April 4, 2020

Pär Segerdahl

Pharmaceutical companies want to manufacture a vaccine against covid-19 quickly, testing it on humans and launching it on the market as soon as possible. In a debate article, Jessica Nihlén Fahlquist at CRB warns of the risk of losing the larger risk perspective: “Tests on people and a potential premature mass vaccination entail risks. It is easy to forget about similar situations in the past,” she writes.

Side effects may take time to appear. Unfortunately, it therefore also takes time to develop safe new vaccines. We need to develop a vaccine, but even with new vaccines, caution is needed.

The article is in Swedish. If you want to Google translate: Proceed carefully with vaccine against covid-19

Pär Segerdahl

We participate in debates - the Ethics Blog


What is a moral machine?

April 1, 2020

Pär Segerdahl

I recently read an article about so-called moral robots, which I found clarifying in many ways. The philosopher John-Stewart Gordon points out pitfalls that non-ethicists – robotics researchers and AI programmers – may fall into when they try to construct moral machines. Simply because they lack ethical expertise.

The first pitfall is rookie mistakes. One might naively identify ethics with certain famous bioethical principles, as if ethics could be nothing but so-called “principlism.” Or one might believe that computer systems, through automated analysis of individual cases, can “learn” ethical principles and “become moral,” as if morality could be discovered experientially or empirically.

The second challenge has to do with the fact that the ethics experts themselves disagree about the “right” moral theory. There are several competing ethical theories (utilitarianism, deontology, virtue ethics and more). What moral template should programmers use when getting computers to solve moral problems and dilemmas that arise in different activities? (Consider self-driving cars in difficult traffic situations.)

The first pitfall can be addressed with more knowledge of ethics. How do we handle the second challenge? Should we allow programmers to choose moral theory as it suits them? Should we allow both utilitarian and deontological robot cars on our streets?

John-Stewart Gordon’s suggestion is that so-called machine ethics should focus on the similarities between different moral theories regarding what one should not do. Robots should be provided with a binding list of things that must be avoided as immoral. With this restriction, the robots then have leeway to use and balance the plurality of moral theories to solve moral problems in a variety of ways.
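Gordon’s proposal can be pictured as a simple two-step procedure: first filter candidate actions against a binding deny-list of the impermissible, and only then let whichever moral theory is in play rank what remains. The toy sketch below illustrates this structure; all action names, rules and scores are hypothetical, invented purely for illustration, and no claim is made that real machine ethics works this way.

```python
# Toy sketch of a deny-list-first decision procedure (illustrative only).
# A binding list rules out what the moral theories agree is impermissible;
# the remaining leeway is filled by a pluggable, theory-specific ranking.

FORBIDDEN = {"harm_bystanders", "deceive_user"}  # hypothetical deny-list


def utilitarian_score(action):
    # Hypothetical utility estimates per action.
    return {"swerve_left": 2, "brake_hard": 5, "harm_bystanders": 9}.get(action, 0)


def deontological_score(action):
    # Hypothetical duty-based ranking of the same actions.
    return {"swerve_left": 4, "brake_hard": 3}.get(action, 0)


def choose(actions, score):
    # Step 1: the deny-list is binding, regardless of any theory's score.
    permitted = [a for a in actions if a not in FORBIDDEN]
    if not permitted:
        raise ValueError("no permissible action available")
    # Step 2: within the permitted leeway, rank by the chosen theory.
    return max(permitted, key=score)


actions = ["swerve_left", "brake_hard", "harm_bystanders"]
print(choose(actions, utilitarian_score))    # brake_hard
print(choose(actions, deontological_score))  # swerve_left
```

Note that “harm_bystanders” is excluded even though the utilitarian scorer rates it highest: the deny-list constrains every theory alike, while the theories are free to disagree within the remaining leeway.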

In conclusion, researchers and engineers in robotics and AI should consult ethics experts so that they can avoid rookie mistakes and understand the methodological problems that arise when not even the experts in the field can agree on the right moral theory.

All this seems both wise and clarifying in many ways. At the same time, I feel genuinely confused about the very idea of “moral machines” (although the article is not intended to discuss the idea, but focuses on ethical challenges for engineers). What does the idea mean? Not that I doubt that we can design artificial intelligence according to ethical requirements. We do not want robot cars to avoid collisions in city traffic by turning onto sidewalks where many people walk. In that sense, there may be ethical software, much like there are ethical funds. We could talk about moral and immoral robot cars as straightforwardly as we talk about ethical and unethical funds.

Still, as I mentioned, I feel uncertain. Why? I started by writing about “so-called” moral robots. I did so because I am not comfortable talking about moral machines, although I am open to suggestions about what it could mean. I think that what confuses me is that moral machines are largely mentioned without qualifying expressions, as if everyone ought to know what it should mean. Ethical experts disagree on the “right” moral theory. However, they seem to agree that moral theory determines what a moral decision is; much like grammar determines what a grammatical sentence is. With that faith in moral theory, one need not contemplate what a moral machine might be. It is simply a machine that makes decisions according to accepted moral theory. However, do machines make decisions in the same sense as humans do?

Maybe it is about emphasis. We talk about ethical funds without feeling dizzy because a stock fund is said to be ethical (“Can they be humorous too?”). There is no mythological emphasis in the talk of ethical funds. In the same way, we can talk about ethical robot cars without feeling dizzy as if we faced something supernatural. However, in the philosophical discussion of machine ethics, moral machines are sometimes mentioned in a mythological way, it seems to me. As if a centaur, a machine-human, will soon see the light of day. At the same time, we are not supposed to feel dizzy concerning these brave new centaurs, since the experts can spell out exactly what they are talking about. Having all the accepted templates in their hands, they do not need any qualifying expressions!

I suspect that ethical expertise, too, can be a philosophical pitfall when we intellectually approach so-called moral machines. The expert attitude can silence the confusing questions that we all need time to contemplate when honest doubts rebel against the claim to know.

Pär Segerdahl

Gordon, J. Building Moral Robots: Ethical Pitfalls and Challenges. Sci Eng Ethics 26, 141–157 (2020). https://doi.org/10.1007/s11948-019-00084-5

This post in Swedish

We like challenging questions - the ethics blog


Artificial intelligence and living consciousness

March 2, 2020

Pär Segerdahl

The Ethics Blog will publish several posts on artificial intelligence in the future. Today, I just want to make a small observation about something remarkable.

The last century was marked by fear of human consciousness. Our mind seemed as mystical as the soul, as superfluous in a scientific age as God. In psychology, behaviorism flourished, which defined psychological words in terms of bodily behavior that could be studied scientifically in the laboratory. Our living consciousness was treated as a relic from bygone superstitious ages.

What is so remarkable about artificial intelligence? Suddenly, one seems to idolize consciousness. One wallows in previously sinful psychological words, at least when one talks about what computers and robots can do. These machines can see and hear; they can think and speak. They can even learn by themselves.

Does this mean that the fear of consciousness has ceased? Hardly, because when artificial intelligence employs psychological words such as seeing and hearing, thinking and understanding, the words cease to be psychological. Computer “learning,” for example, is a technical term that computer experts define in their laboratories.

When artificial intelligence embellishes machines with psychological words, then, one repeats how behaviorism defined mind in terms of something else. Psychological words take on new machine meanings that overshadow the meanings the words have among living human beings.

Remember this next time you wonder if robots might become conscious. The development exhibits fear of consciousness. What you are really wondering, then, is not whether robots can become conscious, but whether your own consciousness might be superstition. Remarkable, right?

Pär Segerdahl

This post in Swedish

We challenge habits of thought - the Ethics Blog


Neuroethics as foundational

January 28, 2020

Pär Segerdahl

As neuroscience expands, the need for ethical reflection also expands. A new field has emerged, neuroethics, which celebrated its 15th anniversary last year. This was noted in the journal AJOB Neuroscience through an article about the area’s current and future challenges.

In one of the published comments, three researchers from the Human Brain Project and CRB emphasize the importance of basic conceptual analysis in neuroethics. The new field of neuroethics is more than just a kind of ethical mediator between neuroscience and society. Neuroethics can and should contribute to the conceptual self-understanding of neuroscience, according to Arleen Salles, Kathinka Evers and Michele Farisco. Without such self-understanding, the ethical challenges become unclear, sometimes even imaginary.

Foundational conceptual analysis can sound stiff. However, if I understand the authors, it is just the opposite. Conceptual analysis is needed to make concepts agile, when habitual thinking made them stiff. One example is the habitual thinking that facts about the brain can be connected with moral concepts, so that, for example, brain research can explain to us what it “really” means to be morally responsible for our actions. Such habitual thinking about the role of the brain in human life may suggest purely imaginary ethical concerns about the expansion of neuroscience.

Another example the authors give is the external perspective on consciousness in neuroscience. Neuroscience does not approach consciousness from a first-person perspective, but from a third-person perspective. Neuroscience may need to be reminded of this and similar conceptual limitations, to better understand the models that one develops of the brain and human consciousness, and the conclusions that can be drawn from the models.

Conceptual neuroethics is needed to free concepts from intellectual deadlocks arising with the expansion of neuroscience. Thus, neuroethics can contribute to deepening the self-understanding of neuroscience as a science with both theoretical and practical dimensions. At least that is how I understand the spirit of the authors’ comment in AJOB Neuroscience.

Pär Segerdahl

Emerging Issues Task Force, International Neuroethics Society (2019) Neuroethics at 15: The Current and Future Environment for Neuroethics, AJOB Neuroscience, 10:3, 104-110, DOI: 10.1080/21507740.2019.1632958

Arleen Salles, Kathinka Evers & Michele Farisco (2019) The Need for a Conceptual Expansion of Neuroethics, AJOB Neuroscience, 10:3, 126-128, DOI: 10.1080/21507740.2019.1632972

This post in Swedish

We want solid foundations - the Ethics Blog


Ethical issues when gene editing approaches humanity

December 2, 2019

Pär Segerdahl

Gene editing technology, which is already used to develop genetically modified organisms (GMOs), could in the future also be used clinically in humans. One such application could be genetic modification of human embryos, editing genes that would otherwise cause disease.

Of course, the scenario of clinical uses of genetic modification in humans arouses deep concern and heated debate. In addition to questions about efficacy and safety for the people who would be directly affected by the treatments, huge issues are raised about the fate of humanity. When gene editing is performed on germ cells, the changes are passed on to future generations.

What is often overlooked in the debate are ethical questions about the research that would have to precede such clinical applications. In order to develop genetic techniques that are effective and safe for humans, much research is required. One must, for example, test the techniques on human embryos. However, since genetic editing is best done at the time of fertilization (if done on the embryo, not all cells are always modified), a large number of donated gametes are probably required, where the eggs are fertilized in the laboratory to create genetically modified embryos.

Emilia Niemiec and Heidi Carmen Howard, both at CRB, draw attention to these more immediate ethical concerns. They point out that the research preceding clinical applications must already be carefully considered and debated. It raises its own ethical issues.

In a letter to Nature, they highlight the large number of donated eggs that such research is likely to need. Egg donation involves stress and risks for women. Furthermore, the financial compensation they are offered can function as an undue incentive for economically disadvantaged women.

Emilia Niemiec and Heidi Carmen Howard write that women who decide on egg donation should be given the opportunity to understand the ethical issues, so that they can make an informed decision and participate in the debate about gene editing. I think they have a good point when they emphasize that many ethical issues are raised already by the research work that would precede clinical applications.

A question I ask myself is how we can communicate with each other about deeply worrying future scenarios. How do we distinguish between image and reality when the anxiety starts a whole chain reaction of frightening images, which seem verified by the anxiety they trigger? How do we cool down this psychological reactivity without quenching the critical mind?

In short, how do we think and talk wisely about urgent future issues?

Pär Segerdahl

Niemiec, E. and Howard, H.C. 2019. Include egg donors in CRISPR gene-editing debate. Nature 575: 51

This post in Swedish

Approaching future issues - the Ethics Blog


Broad and deep consent for biobanks

November 18, 2019

Pär Segerdahl

A new article on consent for biobanks manages to surprise me. How? By pointing out what ought to be obvious! If we want to judge what kind of consent works best for biobanks, then we should look at today’s biobanks and not look back at more traditional medical research.

The risks in traditional medical research are mainly physical. Testing new substances and interventions on human subjects can harm them. Potential research participants must therefore be informed about these physical risks, which are unique to each specific project. For this reason, study-specific informed consent is essential in traditional medical research.

In biobank research, however, the risks are primarily informational. Personal data may end up in the wrong hands. The risks here are not so much linked to the specific projects that use material from the biobank. The risks are rather linked to the biobank itself, to how it is governed and controlled. If we want to give biobank participants ethical protection through informed consent, it is information about the biobank they need, not about specific projects.

In the debate on consent for biobanks, study-specific consent has figured as a constant norm for what informed consent must be. In the context of biobanks, however, that requirement risks placing an irrelevant demand on them. Participants would receive the wrong protection! What to do?

Instead of looking back, as if study-specific consent were an absolute norm for medical research, the authors formulate three requirements that are relevant to today’s biobanks. First, potential participants should be informed about relevant risks and benefits. Second, they should be given an opportunity to assess whether research on the biobank material is in line with their own values. Finally, they should be given ethical protection as long as they participate, as well as opportunities to regularly reconsider their participation.

In their comparison of the various forms of consent that have figured in the debate, the authors conclude that broad consent particularly well satisfies the first criterion. Since the risks are not physical but concern the personal data that the biobank stores, information to participants about the biobank itself is more relevant than information about the specific projects that use the services of the biobank. That is what broad consent delivers.

However, the authors argue that broad consent fails to meet the latter two criteria. If potential participants are not informed about specific projects, it becomes difficult to judge whether the biobank material is used according to their values. In addition, over time (biobank material can be saved for decades) participants may even forget that they have provided samples and data to the biobank. This undermines the value of their right to withdraw consent.

Again, what to do? The authors propose a deepened form of broad consent, meant to satisfy all three requirements. First, the information provided to participants should include a clear scope of the research that is allowed to use the biobank material, so that participants can judge whether it is consistent with their own values, and so that future ethical review can assess whether specific projects fall within the scope. Secondly, participants should be regularly informed about the activities of the biobank, as well as reminded of the fact that they still participate and still have a right to withdraw consent.

Ethical reasoning is difficult to summarize. If you want to judge for yourself the authors’ conclusion that broad and deep consent is best when it comes to biobanks, I must refer you to the article.

In this post, I mainly wanted to highlight the originality of the authors’ way of discussing consent: they formulate new relevant criteria to free us from old habits of thought. The obvious is often the most surprising.

Pär Segerdahl

Rasmus Bjerregaard Mikkelsen, Mickey Gjerris, Gunhild Waldemar & Peter Sandøe. Broad consent for biobanks is best – provided it is also deep. BMC Medical Ethics volume 20, Article number: 71 (2019)

This post in Swedish

We challenge habits of thought - the Ethics Blog


Why should we care about the environment and climate change?

October 8, 2019

Jessica Nihlén Fahlquist

To most of us, it is self-evident that we, as human beings and societies, should care about the environment and climate change. Greta Thunberg has, in a remarkable way, spurred political interest and engagement in climate change. This effort has affected our thoughts and emotions concerning environmental policy. However, when we dig deeper into the philosophical debate, there are different ideas on why we should care about the environment. That is, even though we agree on the need to care, there are various arguments as to why and how we should do that.

First, some scholars argue that we should care about nature because we need it and what we get from it. Nature is crucial to us, for example, because it provides us with water and food as well as air to breathe. Without nature and a good climate, we simply cannot live on planet Earth. Unless we make a substantial effort, our lifestyle will lead to flooding, unmanageable migration and many other enormous challenges. Furthermore, it will affect poorer people and poorer regions the most, making it a crucial issue of justice.

Second, some philosophers argue that it is wrong to base our concern for nature and the environment on the needs of, and effects on, human beings. The anthropocentric assumptions are wrong, they argue. Even without human beings, nature has a value. Its value is intrinsic and not merely instrumental. Proponents of this view often claim that animals have values, and possibly even rights, that should be protected. They disagree on whether it is individual animals, species or even ecosystems that should be protected.

Environmental philosophy consists of many different theoretical schools, and the notions they defend underlie societal debate, explicitly or merely implicitly. Some notions are based on consequentialist ethics and others on deontological ethics. In addition to these two schools of thought, virtue ethics has become influential in the philosophical debate.

Environmental Virtue Ethics holds that it is inadequate to focus on consequences, duties and rights, or on rules and legislation. Our respect and reverence for nature rest on the virtues we ought to develop as human beings, and society should encourage such virtues. Virtue ethics focuses on the character traits, the dispositions to act, and the attitudes and emotions that are relevant to a certain area, in this case the environment. It is a richer, more complex theory than the other two mentioned. Even though virtues were first discussed in Antiquity, and the concept might seem obsolete, they are highly relevant in our time. Through reflection, experience and role models, we can all develop virtues crucial to environmental protection and sustainability. The idea is not only that society needs these virtuous people, but that human beings blossom as individuals when they develop these virtues. Environmental virtue ethicists also argue that it is wrong to see nature as a commodity belonging to us. Instead, we are part of nature and have a special relationship with it. This relationship should be the focus of the debate.

Whereas Environmental Virtue Ethics focuses on ethical virtues, that is, on how we should relate to nature through our development into virtuous individuals, a related school of thought focuses on the aesthetic value of nature. Not only does nature have ethical value, it also has aesthetic value in virtue of its beauty. We should spend time in nature in order to fully appreciate this value.

All of the mentioned schools of thought agree that we should care about the environment and climate. They also hold that sustainability is an important national and global goal. Interestingly, what is beneficial from a sustainability perspective is not necessarily beneficial from a climate perspective. For instance, nuclear energy could be considered good for the climate due to its marginal emissions, but it is doubtful that it is good for sustainability, considering the problem of nuclear waste.

Finally, it is important to include the discussion of moral responsibility. If we agree that it is crucial to save the environment, then the question arises as to who should take responsibility for realizing this goal. One could argue that individuals bear a personal responsibility to, for example, reduce consumption and use sustainable transportation. However, one could also argue that the greatest share of responsibility should be taken by political institutions, primarily states. In addition, a great share of responsibility might be ascribed to private actors and industries.

We could also ask whether, and to what extent, responsibility is about blame for past events, for example, the western world having emitted too much carbon in the past. Alternatively, we could focus on what needs to be done now, regardless of causation and blame. According to this line of thinking, the most important question to ask is who has the resources and capacity to make the necessary changes. The questions of responsibility could be conceptualized as questions of individual versus collective responsibility and backward-looking versus forward-looking responsibility.

As we can see, there are many philosophically interesting aspects and discussions concerning the question why we should care about the environment. Hopefully, these discussions can contribute to making the challenges more comprehensible and manageable. Ideally, they can assist in the tremendous work done by Greta Thunberg and others like her so that it can lead to agreement on what needs to be done by individuals, nations and the world.

Jessica Nihlén Fahlquist

Nihlén Fahlquist, J. 2018. Moral Responsibility and Risk in Modern Society – Examples from emerging technologies, public health and environment. Routledge Earth Scan Risk in Society series: London.

Van de Poel, I., Nihlén Fahlquist, J., Doorn, N., Zwart, S., Royakkers, L. and di Lima, T. 2011. The problem of many hands: climate change as an example. Science and Engineering Ethics.

Nihlén Fahlquist, J. 2009. Moral responsibility for environmental problems – individual or institutional? Journal of Agricultural and Environmental Ethics 22(2): 109-124.

This post in Swedish

Approaching future issues - the Ethics Blog

