A blog from the Centre for Research Ethics & Bioethics (CRB)


Taking care of the legacy: curating responsible research and innovation practice

Responsible research and innovation, or RRI as it is often called in EU-project language, is both scholarship and practice. Over the last decade, the Human Brain Project has used structured and strategic approaches to embed responsible research and innovation practices across the project. The efforts to curate the legacy of this work include the development of an online Ethics & Society toolkit. But how does that work? And what does a toolkit need in order to ensure it has a role to play?

A recent paper by Lise Bitsch and Bernd Stahl in Frontiers in Research Metrics and Analytics explores whether this kind of toolkit can help embed the legacy of RRI activities in a large research project. According to them, a toolkit has the potential to play an important role in preserving RRI legacy. But they also point out that this potential can only be realised if we have organisational structures and funding in place to make sure that the legacy is retained. Because like all resources, it needs to be maintained, shared, used and curated to play a role in the long term.

Even though this particular toolkit is designed to integrate insights and practices of responsible research and innovation in the Human Brain Project, there are lessons to be learned for other efforts to ensure acceptability, desirability and sustainability of processes and outcomes of research and innovation activities. The Human Brain Project is a ten-year European Flagship project that has gone through several phases. Bernd Stahl is the ethics director of the Human Brain Project, and Lise Bitsch has led the project’s responsible research and innovation work stream for the past three years. And there is a lot to be learned. For projects that are considering developing similar tools, they describe the process of designing and developing the toolkit.

But there are parts of the RRI legacy that cannot fit in a toolkit. The impact of the ethical, social and reflective work in the Human Brain Project is visible in governance structures, in how the project manages and handles data, and in its publications and communications. The authors are part of those structures.

In addition to the Ethics & Society toolkit, the work has been published in journals and shared on the Ethics Dialogues blog (where a first version of this post was published) and via the HBP Society Twitter handle, offering more opportunities to engage and discuss in the EBRAINS community Ethics & Society space. The capacity building efforts carried out for the project and the EBRAINS research infrastructure have been developed into an online ethics & society training resource, and the work on gender and diversity has resulted in a toolkit for equality, diversity and inclusion in project themes and teams.

Read the paper by Bernd Carsten Stahl and Lise Bitsch: Building a responsible innovation toolkit as project legacy.

(A first version of this post was originally published on the Ethics Dialogues blog, March 13, 2023)

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects

Bernd Carsten Stahl and Lise Bitsch: Building a responsible innovation toolkit as project legacy, Frontiers in Research Metrics and Analytics, 13 March 2023, Sec. Research Policy and Strategic Management, Volume 8 – 2023, https://doi.org/10.3389/frma.2023.1112106


Anthropomorphism in AI can limit scientific and technological development

Anthropomorphism almost seems inscribed in research on artificial intelligence (AI). Ever since the beginning of the field, machines have been portrayed in terms that normally describe human abilities, such as understanding and learning. The emphasis is on similarities between humans and machines, while differences are downplayed. This happens, for example, when it is claimed that machines can perform the same psychological tasks that humans perform, such as making decisions and solving problems, with the supposedly insignificant difference that machines do it “automated.”

You can read more about this in an enlightening discussion of anthropomorphism in and around AI, written by Arleen Salles, Kathinka Evers and Michele Farisco, all at CRB and the Human Brain Project. The article is published in AJOB Neuroscience.

The article draws particular attention to so-called brain-inspired AI research, where technology development draws inspiration from what we know about the functioning of the brain. Here, close relationships are emphasized between AI and neuroscience: bonds that are considered to be decisive for developments in both fields of research. Neuroscience needs inspiration from AI research, it is claimed, just as AI research needs inspiration from brain research.

The article warns that this idea of a close relationship between the two fields presupposes an anthropomorphic interpretation of AI. In fact, brain-inspired AI multiplies the conceptual double exposures by projecting not only psychological but also neuroscientific concepts onto machines. AI researchers talk about artificial neurons, synapses and neural networks in computers, as if they incorporated artificial brain tissue into the machines.

An overlooked risk of anthropomorphism in AI, according to the authors, is that it can conceal essential characteristics of the technology that make it fundamentally different from human intelligence. In fact, anthropomorphism risks limiting scientific and technological development in AI, since it binds AI to the human brain as a privileged source of inspiration. Anthropomorphism can also entice brain research to uncritically use AI as a model for how the brain works.

Of course, the authors do not deny that AI and neuroscience mutually support each other and should cooperate. However, in order for cooperation to work well, and not limit scientific and technological development, philosophical thinking is also needed. We need to clarify conceptual differences between humans and machines, brains and computers. We need to free ourselves from the tendency to exaggerate similarities, which can be more verbal than real. We also need to pay attention to deep-rooted differences between humans and machines, and learn from the differences.

Anthropomorphism in AI risks encouraging irresponsible research communication, the authors further write. This is because exaggerated hopes (hype) seem intrinsic to the anthropomorphic language. By talking about computers in psychological and neurological terms, it sounds as if these machines already essentially function as human brains. The authors speak of an anthropomorphic hype around neural network algorithms.

Philosophy can thus also contribute to responsible research communication about artificial intelligence. Such communication draws attention to exaggerated claims and hopes inscribed in the anthropomorphic language of the field. It counteracts the tendency to exaggerate similarities between humans and machines, which rarely go as deep as the projected words make them sound.

In short, differences can be as important and instructive as similarities. Not only in philosophy, but also in science, technology and responsible research communication.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Arleen Salles, Kathinka Evers & Michele Farisco (2020) Anthropomorphism in AI, AJOB Neuroscience, 11:2, 88-95, DOI: 10.1080/21507740.2020.1740350
