Responsibly planned research communication

May 25, 2020

Academic research is driven by dissemination of results to peers at conferences and through publication in scientific journals. However, research results belong not only to the research community. They also belong to society. Therefore, results should reach not only your colleagues in the field or the specialists in adjacent fields. They should also reach outside the academy.

Who is out there? A homogeneous public? No, it is not that simple. Communicating research is not two activities: first communicating the science to peers and then telling the popular scientific story to the public. Outside the academy, we find engineers, entrepreneurs, politicians, government officials, teachers, students, research funders, taxpayers, healthcare professionals… We are all out there with our different experiences, functions and skills.

Research communication is therefore a strategically more complicated task than just “reaching the public.” Why do you want to communicate your results; why are they important? Who will find your results important? How do you want to communicate them? When is the best time to communicate? There is not just one task here. You have to think through what the task is in each particular case. For the task varies with the answers to these questions. Only when you can think strategically about the task can you communicate research responsibly.

Josepine Fernow is a skilled and experienced research communications officer at CRB. She works with communication in several research projects, including the Human Brain Project and STARBIOS2. In the latter project, which is about Responsible Research and Innovation (RRI), she contributes to a new book with arguments for responsibly planned research communication: Achieving impact: some arguments for designing a communications strategy.

Josepine Fernow’s contribution is, in my view, more than a convincing argument. It is an eye-opening text that helps researchers see more clearly their diverse relationships to society, and thereby their responsibilities. The academy is not a rock of knowledge in a sea of ignorant lay people. Society consists of experienced people who, because of what they know, can benefit from your research. It is easier to think strategically about research communication when you survey your relations to a diversified society that is already knowledgeable. Josepine Fernow’s argumentation helps and motivates you to do that.

Josepine Fernow also warns against exaggerating the significance of your results. Bioscience has the potential to give us effective treatments for serious diseases, new crops that meet specific demands, and much more. Since we are all potential beneficiaries of such research, as future patients and consumers, we may want to believe the wishful stories that some overly ambitious researchers want to tell. We then participate in a dangerous game of increasingly unrealistic hopes.

The name of this dangerous game is hype. Research hype can make it difficult for you to continue your research in the future, because of eroded trust. It can also make you prone to take unethical shortcuts. The “huge potential benefit” obscures your judgment as a responsible researcher.

In some research fields, it is extra difficult to avoid research hype, as exaggerated hopes seem inscribed in the very language of the field. An example is artificial intelligence (AI), where the use of psychological and neuroscientific vocabulary about machines can create the impression that one has already fulfilled the hopes. Anthropomorphic language can make it sound as if some machines already thought like humans and functioned like brains.

Responsible research communication is as important as it is difficult. Therefore, the task deserves our greatest attention. Read Josepine Fernow’s argumentation for carefully planned communication strategies. It will help you see your responsibility more clearly.

Finally, a reminder for those interested: the STARBIOS2 project organizes its final event via Zoom on Friday, May 29, 2020.

Pär Segerdahl

Fernow, J. (2019). Note #11: Achieving impact: Some arguments for designing a communications strategy. In A. Declich (Ed.), RRI implementation in bioscience organisations: Guidelines from the STARBIOS2 project (pp. 177-180). Uppsala University. ISBN: 978-91-506-2811-1

This post in Swedish

We recommend readings - the Ethics Blog


Inspiration for responsible research and innovation

April 27, 2020

Our attitude to science is changing. Can we still talk solemnly about it as a unified endeavor, or even about the sciences? It seems more apt to talk about research activities that produce useful and applicable knowledge.

Science has been dethroned, it seems. In the past, we revered it as a free and independent search for truth. We esteemed it as our tribunal of truth, as the final arbiter of truth. Today, we demand that it deliver benefits and adapt to society. The change is full of tension, because we still want to use scientific expertise as a higher intellectual authority. Should we bow to the experts, or correct them if they do not deliver the “right knowledge” or the “desirable facts”?

Responsible Research and Innovation (RRI) is an attempt to manage this risky change by adapting science to new social requirements. As the name suggests, RRI is partly an expression of the same basic change in attitude. One could perhaps view RRI as the responsible dethroning of science.

Some mourn the dethroning, others rejoice. Here I just want to link RRI to the changed attitude to science. RRI handles a change that is fundamentally affirmed. The ambiguous attitude to scientific expertise mentioned above shows how important it is that we take responsibility for people’s trust in what is now called research and innovation. For why should we listen to the representatives of a sector with such an unholy designation?

RRI has been introduced into European research through the Horizon 2020 programme. Several projects are specifically about implementing and studying RRI. Important aspects of RRI are gender equality, open access publishing, science education, research communication, public engagement and ethics. It is about adapting research and innovation to a society with new hopes about, and new demands on, what we proudly called science.

A new book describes experiences of implementing RRI in a number of bioscience organizations around the world. The book was written within the EU project STARBIOS2. In collaboration with partners in Europe, Africa and the Americas, this project planned and implemented several RRI initiatives and reflected on the work process. The purpose of STARBIOS2 has been to change organizations durably and structurally. The book aims to help readers formulate their own action plans and initiate structural changes in their organizations.

The cover describes the book as guidelines. However, you will not find formulated guidelines. What you will find, and what might be more helpful, is self-reflection on concrete examples of how to work with RRI action plans. You will find suggestions on how to emphasize responsibility in research and development. Thus, you can read about efforts to support gender equality, improve exchange with the public and with society, support open access publication, and improve ethics. Read and be inspired!

Finally, I would like to mention that the Ethics Blog, as well as our ethics activities here at CRB, could be regarded as examples of RRI. I plan to return later with a post on research communication.

Pär Segerdahl

Declich, A. (Ed.). (2019). RRI implementation in bioscience organisations: Guidelines from the STARBIOS2 project. Uppsala University. ISBN: 978-91-506-2811-1

The STARBIOS2 project is organising a virtual final event on 29 May! Have a look at the preliminary programme!

This post in Swedish

We recommend readings - the Ethics Blog


Anthropomorphism in AI can limit scientific and technological development

April 15, 2020

Anthropomorphism almost seems inscribed in research on artificial intelligence (AI). Ever since the beginning of the field, machines have been portrayed in terms that normally describe human abilities, such as understanding and learning. The emphasis is on similarities between humans and machines, while differences are downplayed. As when it is claimed that machines can perform the same psychological tasks that humans perform, such as making decisions and solving problems, with the supposedly insignificant difference that machines do it “automated.”

You can read more about this in an enlightening discussion of anthropomorphism in and around AI, written by Arleen Salles, Kathinka Evers and Michele Farisco, all at CRB and the Human Brain Project. The article is published in AJOB Neuroscience.

The article draws particular attention to so-called brain-inspired AI research, where technology development draws inspiration from what we know about the functioning of the brain. Here, close relationships between AI and neuroscience are emphasized: bonds that are considered decisive for developments in both fields of research. Neuroscience needs inspiration from AI research, it is claimed, just as AI research needs inspiration from brain research.

The article warns that this idea of a close relationship between the two fields presupposes an anthropomorphic interpretation of AI. In fact, brain-inspired AI multiplies the conceptual double exposures by projecting not only psychological but also neuroscientific concepts onto machines. AI researchers talk about artificial neurons, synapses and neural networks in computers, as if they incorporated artificial brain tissue into the machines.
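
To see how thin the projection can be, it may help to look at what an “artificial neuron” actually is in code. The following is only my own minimal sketch, not an example from the article: the “neuron” is a weighted sum of numbers passed through a squashing function, and the “synapses” are just the weights.

```python
import math

def artificial_neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """A weighted sum of the inputs, squashed by a sigmoid. Nothing more is going on."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# The "synapses" here are three numbers; the "neuron" is ordinary arithmetic.
print(artificial_neuron([0.2, 0.7, 0.1], weights=[0.5, -1.0, 2.0], bias=0.1))
```

However we choose to name it, what the computer does here is arithmetic.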

An overlooked risk of anthropomorphism in AI, according to the authors, is that it can conceal essential characteristics of the technology that make it fundamentally different from human intelligence. In fact, anthropomorphism risks limiting scientific and technological development in AI, since it binds AI to the human brain as a privileged source of inspiration. Anthropomorphism can also entice brain research to uncritically use AI as a model for how the brain works.

Of course, the authors do not deny that AI and neuroscience mutually support each other and should cooperate. However, in order for cooperation to work well, and not limit scientific and technological development, philosophical thinking is also needed. We need to clarify conceptual differences between humans and machines, brains and computers. We need to free ourselves from the tendency to exaggerate similarities, which can be more verbal than real. We also need to pay attention to deep-rooted differences between humans and machines, and learn from the differences.

Anthropomorphism in AI risks encouraging irresponsible research communication, the authors further write. This is because exaggerated hopes (hype) seem intrinsic to the anthropomorphic language. When we talk about computers in psychological and neurological terms, it sounds as if these machines already essentially functioned as human brains. The authors speak of an anthropomorphic hype around neural network algorithms.

Philosophy can thus also contribute to responsible research communication about artificial intelligence. Such communication draws attention to exaggerated claims and hopes inscribed in the anthropomorphic language of the field. It counteracts the tendency to exaggerate similarities between humans and machines, which rarely go as deep as the projected words make them sound.

In short, differences can be as important and instructive as similarities. Not only in philosophy, but also in science, technology and responsible research communication.

Pär Segerdahl

Arleen Salles, Kathinka Evers & Michele Farisco (2020) Anthropomorphism in AI, AJOB Neuroscience, 11:2, 88-95, DOI: 10.1080/21507740.2020.1740350

This post in Swedish

Minding our language - the Ethics Blog


What is a moral machine?

April 1, 2020

I recently read an article about so-called moral robots, which I found clarifying in many ways. The philosopher John-Stewart Gordon points out pitfalls that non-ethicists – robotics researchers and AI programmers – may fall into when they try to construct moral machines, simply because they lack ethical expertise.

The first pitfall is rookie mistakes. One might naively identify ethics with certain famous bioethical principles, as if ethics could not be anything but so-called “principlism.” Or one might believe that computer systems, through automated analysis of individual cases, can “learn” ethical principles and “become moral,” as if morality could be discovered experientially or empirically.

The second challenge has to do with the fact that the ethics experts themselves disagree about the “right” moral theory. There are several competing ethical theories (utilitarianism, deontology, virtue ethics and more). What moral template should programmers use when getting computers to solve moral problems and dilemmas that arise in different activities? (Consider self-driving cars in difficult traffic situations.)

The first pitfall can be addressed with more knowledge of ethics. How do we handle the second challenge? Should we allow programmers to choose whatever moral theory suits them? Should we allow both utilitarian and deontological robot cars on our streets?

John-Stewart Gordon’s suggestion is that so-called machine ethics should focus on the similarities between different moral theories regarding what one should not do. Robots should be provided with a binding list of things that must be avoided as immoral. With this restriction, the robots then have leeway to use and balance the plurality of moral theories to solve moral problems in a variety of ways.
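
To picture the proposal, here is a minimal sketch of such a decision loop. It is my own illustration, not code from Gordon’s article, and every name and scoring function in it is invented: prohibited actions are filtered out first, and only then does a weighted plurality of theories rank what remains.

```python
from typing import Callable, Iterable

# A binding list of action types that must never be chosen, whatever any theory says.
PROHIBITED = {"harm_bystander", "deceive_user", "break_law"}

def choose_action(
    candidates: Iterable[dict],
    theories: list[Callable[[dict], float]],
    weights: list[float],
) -> dict:
    """Filter out prohibited actions first, then let weighted theory scores rank the rest."""
    permissible = [a for a in candidates if a["type"] not in PROHIBITED]
    if not permissible:
        raise RuntimeError("No permissible action available")
    return max(permissible, key=lambda a: sum(w * t(a) for w, t in zip(weights, theories)))

# Toy scoring functions standing in for moral theories.
def utilitarian(action: dict) -> float:
    return action.get("expected_benefit", 0.0)

def deontological(action: dict) -> float:
    return 1.0 if action.get("keeps_promise", False) else 0.0

actions = [
    {"type": "harm_bystander", "expected_benefit": 10.0},
    {"type": "brake_and_warn", "expected_benefit": 3.0, "keeps_promise": True},
]
# The high-benefit but prohibited action is removed before any weighing takes place.
print(choose_action(actions, [utilitarian, deontological], [0.5, 0.5]))
```

The point of the sketch is only the order of operations: the binding prohibitions are checked before any theory-based weighing, so no calculated benefit can override them.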

In conclusion, researchers and engineers in robotics and AI should consult the ethics experts so that they can avoid the rookie mistakes and understand the methodological problems that arise when not even the experts in the field can agree about the right moral theory.

All this seems both wise and clarifying in many ways. At the same time, I feel genuinely confused about the very idea of “moral machines” (although the article is not intended to discuss the idea, but focuses on ethical challenges for engineers). What does the idea mean? Not that I doubt that we can design artificial intelligence according to ethical requirements. We may not want robot cars to avoid collisions in city traffic by turning onto sidewalks where many people walk. In that sense, there may be ethical software, much like there are ethical funds. We could talk about moral and immoral robot cars as straightforwardly as we talk about ethical and unethical funds.

Still, as I mentioned, I feel uncertain. Why? I started by writing about “so-called” moral robots. I did so because I am not comfortable talking about moral machines, although I am open to suggestions about what it could mean. I think that what confuses me is that moral machines are largely mentioned without qualifying expressions, as if everyone ought to know what the term should mean. Ethical experts disagree on the “right” moral theory. However, they seem to agree that moral theory determines what a moral decision is, much as grammar determines what a grammatical sentence is. With that faith in moral theory, one need not contemplate what a moral machine might be. It is simply a machine that makes decisions according to accepted moral theory. However, do machines make decisions in the same sense as humans do?

Maybe it is a matter of emphasis. We talk about ethical funds without feeling dizzy because a stock fund is said to be ethical (“Can they be humorous too?”). There is no mythological emphasis in the talk of ethical funds. In the same way, we can talk about ethical robot cars without feeling dizzy, as if we faced something supernatural. However, in the philosophical discussion of machine ethics, moral machines are sometimes mentioned in a mythological way, it seems to me. As if a centaur, a machine-human, would soon see the light of day. At the same time, we are not supposed to feel dizzy about these brave new centaurs, since the experts can spell out exactly what they are talking about. Having all the accepted templates in their hands, they do not need any qualifying expressions!

I suspect that ethical expertise, too, can be a philosophical pitfall when we intellectually approach so-called moral machines. The expert attitude can silence the confusing questions that we all need time to contemplate when honest doubts rebel against the claim to know.

Pär Segerdahl

Gordon, J. Building Moral Robots: Ethical Pitfalls and Challenges. Sci Eng Ethics 26, 141–157 (2020). https://doi.org/10.1007/s11948-019-00084-5

This post in Swedish

We like challenging questions - the Ethics Blog


Herb Terrace on the chimpanzee Nim – do you see the contradiction?

March 23, 2020

Have you seen small children make repeated attempts to squeeze a square object through a round hole (a plastic toy for the little ones)? You get puzzled: Do they not see that it is impossible? The object and the hole have different shapes!

Sometimes adults are just as puzzling. Our intellect does not always fit reality. Yet, we force our thoughts onto reality, even when they have different shapes. Maybe we are extra stubborn precisely when it is not possible. This post is about such a case.

Herb Terrace is known as the psychologist who proved that apes cannot learn language. He himself tried to teach sign language to the chimpanzee Nim, but failed according to his own judgement. When Terrace took a closer look at the videotapes, where Nim interacted with his human sign-language teachers, he saw how Nim merely imitated the teachers’ signs, to get his reward.

I recently read a blog post by Terrace in which he repeats the claim that his research demonstrates that apes cannot learn language. The strange thing is that he also criticizes his own research severely. He writes that he used the wrong method with Nim, namely, that of giving him rewards when the teacher judged that he made the right signs. The reasoning becomes even more puzzling when Terrace writes that not even a human child could learn language with such a method.

To me, this is as puzzling as a child’s insistence on squeezing a square object through a round hole. If Terrace used the wrong method, which would not work even on a human child, then how can he conclude that Project Nim demonstrates that apes cannot learn language? Nevertheless, he insists on reasoning that way, without feeling that he contradicts himself. Nor does anyone who reads him seem to experience any contradiction. Why?

Perhaps because most of us think that humans cannot teach animals anything at all, unless we train them with rewards. Therefore, since Nim did not learn language with this training method, apes cannot learn language. Better methods do not work on animals, we think. If Terrace failed, then everyone must fail, we think.

However, one researcher actually did try a better method in ape language research. She used an approach to young apes that works with human children. She stopped training the apes via a system of rewards. She lived with the apes, as a parent with her children. And it succeeded!

Terrace almost never mentions the name of the successful ape language researcher. After all, she used a method that is impossible with animals: she did not train them. Therefore, she cannot have succeeded, we think.

I can tell you that the name of the successful researcher is Sue Savage-Rumbaugh. To see a round reality beyond square thinking, we need to rethink our thought patterns. If you want to read a book that attempts such rethinking about apes, humans and language, I recommend a philosophical self-critique that I wrote with Savage-Rumbaugh and her colleague William Fields.

To philosophize is to learn to stop imposing our insane thoughts on reality. Then we finally see reality as it is.

Pär Segerdahl

Segerdahl, P., Fields, W., & Savage-Rumbaugh, S. (2005). Kanzi’s Primal Language: The Cultural Initiation of Primates into Language. Palgrave Macmillan.

This post in Swedish

Understanding enculturated apes - the ethics blog


Artificial intelligence and living consciousness

March 2, 2020

The Ethics Blog will publish several posts on artificial intelligence in the future. Today, I just want to make a small observation about something remarkable.

The last century was marked by fear of human consciousness. Our mind seemed as mystical as the soul, as superfluous in a scientific age as God. In psychology, behaviorism flourished, defining psychological words in terms of bodily behavior that could be studied scientifically in the laboratory. Our living consciousness was treated as a relic from bygone superstitious ages.

What is so remarkable about artificial intelligence? Suddenly, one seems to idolize consciousness. One wallows in previously sinful psychological words, at least when one talks about what computers and robots can do. These machines can see and hear; they can think and speak. They can even learn by themselves.

Does this mean that the fear of consciousness has ceased? Hardly, because when artificial intelligence employs psychological words such as seeing and hearing, thinking and understanding, the words cease to be psychological. Computer “learning,” for example, is a technical term that computer experts define in their laboratories.
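
What does the technical term cover? A minimal sketch, entirely my own illustration: a program “learns” by repeatedly adjusting a number so that a measured error shrinks.

```python
def learn_slope(data: list[tuple[float, float]], steps: int = 1000, lr: float = 0.01) -> float:
    """Fit y = w * x by gradient descent: the 'learning' is arithmetic on the number w."""
    w = 0.0
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # the "learning" step
    return w

# The program "learns" that y is roughly three times x.
print(learn_slope([(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]))
```

Whatever we call it, it is defined by the procedure in the laboratory, not by what learning means among living human beings.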

When artificial intelligence embellishes machines with psychological words, then, it repeats behaviorism’s move of defining the mind in terms of something else. Psychological words take on new machine meanings that overshadow the meanings the words have among living human beings.

Remember this the next time you wonder whether robots might become conscious. The development exhibits a fear of consciousness. Therefore, what you are wondering is not whether robots can become conscious. You are wondering whether your own consciousness might be superstition. Remarkable, right?

Pär Segerdahl

This post in Swedish

We challenge habits of thought - the Ethics Blog


Neuroethics as foundational

January 28, 2020

As neuroscience expands, the need for ethical reflection also expands. A new field has emerged, neuroethics, which celebrated its 15th anniversary last year. This was noted in the journal AJOB Neuroscience through an article about the field’s current and future challenges.

In one of the published comments, three researchers from the Human Brain Project and CRB emphasize the importance of basic conceptual analysis in neuroethics. The new field of neuroethics is more than just a kind of ethical mediator between neuroscience and society. Neuroethics can and should contribute to the conceptual self-understanding of neuroscience, according to Arleen Salles, Kathinka Evers and Michele Farisco. Without such self-understanding, the ethical challenges become unclear, sometimes even imaginary.

Foundational conceptual analysis can sound stiff. However, if I understand the authors, it is just the opposite. Conceptual analysis is needed to make concepts agile when habitual thinking has made them stiff. One example is the habitual assumption that facts about the brain can be connected with moral concepts, so that, for example, brain research can explain to us what it “really” means to be morally responsible for our actions. Such habitual thinking about the role of the brain in human life may suggest purely imaginary ethical concerns about the expansion of neuroscience.

Another example the authors give is the external perspective on consciousness in neuroscience. Neuroscience does not approach consciousness from a first-person perspective, but from a third-person perspective. Neuroscience may need to be reminded of this and similar conceptual limitations, to better understand the models it develops of the brain and human consciousness, and the conclusions that can be drawn from those models.

Conceptual neuroethics is needed to free concepts from intellectual deadlocks arising with the expansion of neuroscience. Thus, neuroethics can contribute to deepening the self-understanding of neuroscience as a science with both theoretical and practical dimensions. At least that is how I understand the spirit of the authors’ comment in AJOB Neuroscience.

Pär Segerdahl

Emerging Issues Task Force, International Neuroethics Society (2019) Neuroethics at 15: The Current and Future Environment for Neuroethics, AJOB Neuroscience, 10:3, 104-110, DOI: 10.1080/21507740.2019.1632958

Arleen Salles, Kathinka Evers & Michele Farisco (2019) The Need for a Conceptual Expansion of Neuroethics, AJOB Neuroscience, 10:3, 126-128, DOI: 10.1080/21507740.2019.1632972

This post in Swedish

We want solid foundations - the Ethics Blog

