Science and society: a changing framework and the role of RRI

May 18, 2020

Guest blogger Daniele Mezzana from the STARBIOS2 coordination team

The STARBIOS2 project has carried out its activities in the context of the profound transformations affecting contemporary societies, and now we are all facing the Covid-19 pandemic. Science and society have always coevolved; they are interconnected entities, but their relationship is changing and has been for some time. This shift from modern to so-called postmodern society affects all social institutions in similar ways, whether their work is in politics, religion, family, state administration, or bioscience.

We can find a wide range of phenomena connected to this trend in the literature, for instance: globalization; the weakening of previous social “structures” (rules, models of action, values and beliefs); a greater capacity and power of individuals to think and act freely (thanks also to new communication technologies); exposure to risks of different kinds (climate change, weakening of welfare, etc.); great social and cultural diversification; and the weakening of traditional boundaries between spheres of life.

In this context, we are witnessing the diminishing authority and prestige of political, religious, and even scientific institutions, together with a decline in people’s trust in these institutions. One example would be the anti-vaccination movement.

Meanwhile, scientific research is also undergoing profound transformations, experiencing a transition that has been examined in various ways and given various names. At the heart of this transformation is the relationship between research and the society it belongs to. We can observe a set of global trends in science.

Such trends include increasingly close relationships between universities, governments and industry; the emergence of approaches aimed at “opening” science to society, such as citizen science; the diffusion of cooperative practices in scientific production; the increasing relevance of transdisciplinarity; the increasing expectation that scientific results should have economic, social, and environmental impacts; increasingly competitive access to public funds for research; the growing importance attached to quantitative evaluation systems based on publications, often with distorting effects and questionable results; and the emergence on the international economic and technological scene of actors such as India, China, Brazil, South Africa and others. These trends produce risks and opportunities for both science and society.

Critical concerns for science include career difficulties for young researchers and women in the scientific sector; the cost of publishing and the difficulty of publishing open access; and the protection of intellectual property rights.

Of course, these trends and issues manifest themselves in different ways and with different intensities, according to the political, social and cultural contexts in which they exist.

After the so-called “biological revolution”, and within the context of the “fourth industrial revolution” and its “converging technologies” (genetics, robotics, information technology, neuroscience, nanotechnology, biotechnology, and artificial intelligence), the biosciences are at a crossroads in their relationship to society.

In this new context, more and more of the knowledge produced and the technological solutions developed require a deeper understanding of their status, limits, and ethical and social acceptability (take organoids, to name one example). Moreover, food security, the clean energy transition, climate change, and pandemics are all challenges where bioscience can play a crucial role, while new legal, ethical, and social questions arise that need to be dealt with.

These processes have been under way for years, albeit in different ways, and national and international decision-makers have been paying attention. Various forms of governance have been developed and implemented over time to re-establish and harmonize the relationship between scientific and technological research and the rest of society. These include more general European strategies and approaches, such as Smart Specialization, Open Innovation, Open Science and Responsible Research and Innovation, as well as strategies related to specific social aspects of science (such as ethics or gender).

Adopting an approach such as RRI is not simply morally recommendable, but indispensable for attempting a re-alignment between scientific research and the needs of society. Starting from the areas of the life of scientific communities that are most crucial to science-society relations (the 5+1 RRI keys: Science education, Gender equality, Public engagement, Ethics, Open access, and the cross-cutting sixth key, Governance) and taking the four RRI dimensions into account (anticipation, inclusiveness, responsiveness, and reflexivity) can provide useful guidance on how to activate and drive change in research organisations and research systems.

We elaborate and experiment, in search of the most effective and most relevant solutions, while at the same time there is a need to encourage the mainstreaming of the most substantial solutions, to root them more deeply and sustainably in the complex fabric of scientific organisations and networks. This leads us to ask ourselves: in this context, how can we mainstream RRI and its application in the field of bioscience?

Based on what we know, and on experiences from the STARBIOS2 project, RRI and similar approaches need to be promoted and supported by specific policies and contextualised on at least four levels.

  • Organizational contextualization
    Where mainstreaming takes place through the promotion of a deeper embedding of RRI, or similar approaches, within individual research organizations such as universities, national institutes, private centres, etc.
  • Disciplinary or sectoral contextualization
    Where mainstreaming consists of adapting the responsible research and innovation approach to a specific discipline − for example, biotechnology − or to an entire “sector” in a broad sense, such as bioscience.
  • Geopolitical and cultural contextualization
    Where mainstreaming aims to identify forms of adapting, or rather reshaping, RRI or similar approaches to various geopolitical and cultural contexts, taking into account elements such as the features of national research systems, the economy, territorial dynamics, local philosophy and traditions, etc.
  • Historical contextualization
    Where RRI mainstreaming is related to the ability of science to respond to the challenges that history poses from time to time − of which the COVID-19 pandemic is only the latest, serious example − and to prevent them as much as possible.

During the course of the STARBIOS2 project, we have developed a set of guidelines and a sustainable model for RRI implementation in bioscience research institutions. Over the course of four years, six bioscience research institutions in Europe, and three outside Europe, worked together to achieve structural change towards RRI in their own research institutions, with the goal of achieving responsible biosciences. We were looking forward to revealing and discussing our results in April, but with the Covid-19 outbreak, neither that event nor our Cape Town workshop was a possibility. Luckily, we have adapted and will now share our findings online, at our final event on 29 May. We hope to see you there.

As a final remark: the Covid-19 pandemic is challenging our societies and our political and economic systems, and we recognise that scientists are also being challenged, by the coronavirus itself as well as by contextual challenges. The virus is testing their ability to play a key role for the public, to share information and to produce relevant knowledge. But when we go back to “normal”, the challenge of changing science-society relations will persist. We remain convinced that RRI and similar approaches will be a valuable contribution to addressing these challenges, now and in the future.

Daniele Mezzana

Daniele Mezzana is a social researcher working in the STARBIOS2 project (Structural Transformation to Attain Responsible BIOSciences) as part of the coordination team at the University of Rome Tor Vergata.

This text is based on the Discussion Note for the STARBIOS2 final event on 29 May 2020. 


The STARBIOS2 project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 709517. The contents of this text and the view expressed are the sole responsibility of the author and under no circumstances can be regarded as reflecting the position of the European Union.

Part of international collaborations - the Ethics Blog


Inspiration for responsible research and innovation

April 27, 2020

Our attitude to science is changing. Can we still talk solemnly about it as a unified endeavor, or even about the sciences? It seems more apt to talk about research activities that produce useful and applicable knowledge.

Science has been dethroned, it seems. In the past, we revered it as a free and independent search for truth. We esteemed it as our tribunal of truth, as the last arbiter of truth. Today, we demand that it brings benefits and adapts to society. The change is full of tension, because we still want to use scientific expertise as a higher intellectual authority. Should we bow to the experts, or correct them if they do not deliver the “right knowledge” or the “desirable facts”?

Responsible Research and Innovation (RRI) is an attempt to manage this risky change by adapting science to new social requirements. As the name suggests, RRI is partly an expression of the same basic change in attitude. One could perhaps view RRI as the responsible dethroning of science.

Some mourn the dethroning, others rejoice. Here I just want to link RRI to the changed attitude to science. RRI handles a change that is basically affirmed. The ambiguous attitude to scientific expertise, mentioned above, shows how important it is that we take responsibility for people’s trust in what is now called research and innovation. For why should we listen to representatives of a sector with such an unholy designation?

RRI has been introduced into European research within the Horizon 2020 programme. Several projects are specifically about implementing and studying RRI. Important aspects of RRI are gender equality, open access publishing, science education, research communication, public engagement and ethics. It is about adapting research and innovation to a society with new hopes and demands on what we proudly called science.

A new book describes experiences of implementing RRI in a number of bioscience organizations around the world. The book is written within the EU project STARBIOS2. In collaboration with partners in Europe, Africa and the Americas, this project planned and implemented several RRI initiatives and reflected on the work process. The purpose of STARBIOS2 has been to change organizations durably and structurally. The book aims to help readers formulate their own action plans and initiate structural changes in their organizations.

The cover describes the book as guidelines. However, you will not find formulated guidelines. What you will find, and what might be more helpful, is self-reflection on concrete examples of how to work with RRI action plans. You will find suggestions on how to emphasize responsibility in research and development. Thus, you can read about efforts to support gender equality, improve exchange with the public and with society, support open access publication, and improve ethics. Read and be inspired!

Finally, I would like to mention that the Ethics Blog, as well as our ethics activities here at CRB, could be regarded as examples of RRI. I plan to return later with a post on research communication.

Pär Segerdahl

Declich, Andrea. 2019. RRI implementation in bioscience organisations: Guidelines from the STARBIOS2 project.

The STARBIOS2 project is organising a virtual final event on 29 May! Have a look at the preliminary programme!

This post in Swedish

We recommend readings - the Ethics Blog


Anthropomorphism in AI can limit scientific and technological development

April 15, 2020

Anthropomorphism almost seems inscribed in research on artificial intelligence (AI). Ever since the beginning of the field, machines have been portrayed in terms that normally describe human abilities, such as understanding and learning. The emphasis is on similarities between humans and machines, while differences are downplayed. Like when it is claimed that machines can perform the same psychological tasks that humans perform, such as making decisions and solving problems, with the supposedly insignificant difference that machines do it “automated.”

You can read more about this in an enlightening discussion of anthropomorphism in and around AI, written by Arleen Salles, Kathinka Evers and Michele Farisco, all at CRB and the Human Brain Project. The article is published in AJOB Neuroscience.

The article draws particular attention to so-called brain-inspired AI research, where technology development draws inspiration from what we know about the functioning of the brain. Here, close relationships are emphasized between AI and neuroscience: bonds that are considered decisive for developments in both fields of research. Neuroscience needs inspiration from AI research, it is claimed, just as AI research needs inspiration from brain research.

The article warns that this idea of ​​a close relationship between the two fields presupposes an anthropomorphic interpretation of AI. In fact, brain-inspired AI multiplies the conceptual double exposures by projecting not only psychological but also neuroscientific concepts onto machines. AI researchers talk about artificial neurons, synapses and neural networks in computers, as if they incorporated artificial brain tissue into the machines.
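
To see how thin the borrowed brain vocabulary can be, consider what an “artificial neuron” actually amounts to in code. The following minimal sketch is my own illustration, not taken from the article; it shows that the “neuron” is ordinary arithmetic: a weighted sum passed through a squashing function.

```python
import math

def artificial_neuron(inputs, weights, bias):
    # A so-called 'neuron': nothing but a weighted sum of numbers,
    # passed through a squashing function (the 'activation').
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))

# Three input 'signals' and three 'synaptic' weights: plain multiplication
# and addition, however tissue-like the words may sound.
print(artificial_neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1))
```

Nothing in the computation resembles tissue; the neuroscientific vocabulary is a description laid over elementary arithmetic.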

An overlooked risk of anthropomorphism in AI, according to the authors, is that it can conceal essential characteristics of the technology that make it fundamentally different from human intelligence. In fact, anthropomorphism risks limiting scientific and technological development in AI, since it binds AI to the human brain as a privileged source of inspiration. Anthropomorphism can also entice brain research to uncritically use AI as a model for how the brain works.

Of course, the authors do not deny that AI and neuroscience mutually support each other and should cooperate. However, in order for cooperation to work well, and not limit scientific and technological development, philosophical thinking is also needed. We need to clarify conceptual differences between humans and machines, brains and computers. We need to free ourselves from the tendency to exaggerate similarities, which can be more verbal than real. We also need to pay attention to deep-rooted differences between humans and machines, and learn from the differences.

Anthropomorphism in AI risks encouraging irresponsible research communication, the authors further write. This is because exaggerated hopes (hype) seem intrinsic to the anthropomorphic language. When we talk about computers in psychological and neurological terms, it sounds as if these machines already essentially functioned as human brains. The authors speak of an anthropomorphic hype around neural network algorithms.

Philosophy can thus also contribute to responsible research communication about artificial intelligence. Such communication draws attention to exaggerated claims and hopes inscribed in the anthropomorphic language of the field. It counteracts the tendency to exaggerate similarities between humans and machines, which rarely go as deep as the projected words make them sound.

In short, differences can be as important and instructive as similarities. Not only in philosophy, but also in science, technology and responsible research communication.

Pär Segerdahl

Arleen Salles, Kathinka Evers & Michele Farisco (2020) Anthropomorphism in AI, AJOB Neuroscience, 11:2, 88-95, DOI: 10.1080/21507740.2020.1740350

This post in Swedish

Minding our language - the Ethics Blog


We cannot control everything: the philosophical dimensions of life

April 9, 2020

Life always surpasses us. We thought we were in control, but then something unexpected happens that seems to upset the order. A storm, a forest fire, a pandemic. Life appears as a drawing in sand, the contours of which suddenly dissolve.

Of course, it is not that definitive. Even a storm, a forest fire, a pandemic, will pass. The contours of life return, in somewhat new forms. However, the unexpected reminded us that life is greater than our ability to control it. My question in this post is how we balance the will to control life against the knowledge that life always surpasses us.

That life is greater than our ability to control it is evident not only in the form of storms, forest fires and pandemics. It is evident also in the form of pleasantly varying weather, growing forests and good health. Certainly, medicine contributes to better health. Nevertheless, it is not thanks to any pills that blood circulates in our bodies and food becomes nourishment for our cells. We are rightly grateful to medicine, which helps the sick. However, maybe we could sometimes devote a grateful thought to life itself. Is not the body fantastic, developing immunity in contact with viruses? Are not the forests and the climate wonderful, providing oxygen, sun and rain? And consider nature, on which we are like outgrowths, almost as fruits on a tree.

Many people probably want to object that it is pointless to philosophize about things that we cannot change. Why waste time reflecting on the uncontrollable dimensions of life, when we can develop new medicines? Should we not focus all our efforts on improving the world?

I just point out that we then reason like the artist who thought himself capable of painting only the foreground, without background. As though the background were a distraction from the foreground. However, if you want to emphasize the foreground, you must also pay attention to the background. Then the foreground appears. The foreground needs to be embraced by the background. Small and large presuppose each other.

Our desire to control life works more wisely, I believe, if we acknowledge our inevitable dependence on a larger, embracing background. As I said, we cannot control everything, just as an artist cannot paint only the foreground. I want to suggest that we can view philosophy as an activity that reminds us of that. It helps us see the controllable in the light of the uncontrollable. It reminds us of the larger context: the background that the human intellect does not master, but must presuppose and interact with wisely.

It does not have to be dramatic. Even everyday life has philosophical dimensions that exceed our conscious control. Children learn to talk beyond their parents’ control, without either curricula or examinations. No language teacher in the world can teach a toddler to talk through lessons in a classroom. It can only happen spontaneously and boundlessly, in the midst of life. Only those who already speak can learn language through lessons in a classroom.

The ability to talk is thus the background to language teaching in the classroom. A language teacher can plan the lessons in detail. The youngest children’s language acquisition, on the other hand, is so inextricably linked to what it is to live as a human being that it exceeds the intellect’s ability to organize and govern. We can only remind ourselves of the difference between foreground and background in language. Here follows such a philosophical reminder. A parent of a schoolchild can say, “Now you’ve been studying French for two hours and need a break: go out and play.” However, a parent of a small child who is beginning to talk cannot say, “Now you’ve been talking for two hours and need a break: go out and play!” The child talks constantly. It learns in the midst of playing, in the midst of life, beyond control. Therefore, the child has no breaks.

Had Herb Terrace seen the difference between foreground and background in language, he would never have used the insane method of training sign language with the chimpanzee Nim in a special classroom, as if Nim were a schoolchild who could already speak. Sometimes we need a bit of philosophy (a bit of reason) for our projects to work. Foreground and background interact everywhere. Our welfare systems do not work unless we fundamentally live by our own power, or by life’s own power. Pandemics hardly subside without the virus moving through sufficiently many of our bodies, which thereafter become immune – under controlled forms that protect groups at risk and provide care for the severely ill. Everywhere, foreground and background, controllable and uncontrollable, interact.

The dream of complete intellectual control is therefore a pitfall when we philosophize. At least if we need philosophy to elucidate the living background of what lies within human control. Then we cannot strive to define life as a single intellectually controllable foreground. A bit of philosophy can help us see the interplay between foreground and background. It can help us live actively and act wisely in the zone between controllable and uncontrollable.

Pär Segerdahl

This post in Swedish

We like broad perspectives : www.ethicsblog.crb.uu.se


Proceed carefully with vaccine against covid-19

April 4, 2020

Pharmaceutical companies want to quickly manufacture a vaccine against covid-19, with human testing and launch on the market as soon as possible. In a debate article, Jessica Nihlén Fahlquist at CRB warns of the risk of losing the larger risk perspective: “Tests on people and a potential premature mass vaccination entail risks. It is easy to forget about similar situations in the past,” she writes.

It may take time for side effects to appear. Unfortunately, it therefore also takes time to develop new safe vaccines. We need to develop a vaccine, but even with new vaccines, caution is needed.

The article is in Swedish. If you want to Google translate: Proceed carefully with vaccine against covid-19

Pär Segerdahl

We participate in debates - the Ethics Blog


What is a moral machine?

April 1, 2020

I recently read an article about so-called moral robots, which I found clarifying in many ways. The philosopher John-Stewart Gordon points out pitfalls that non-ethicists – robotics researchers and AI programmers – may fall into when they try to construct moral machines, simply because they lack ethical expertise.

The first pitfall is rookie mistakes. One might naively identify ethics with certain famous bioethical principles, as if ethics could not be anything but so-called “principlism.” Or one might believe that computer systems, through automated analysis of individual cases, can “learn” ethical principles and “become moral,” as if morality could be discovered experientially or empirically.

The second challenge has to do with the fact that the ethics experts themselves disagree about the “right” moral theory. There are several competing ethical theories (utilitarianism, deontology, virtue ethics and more). What moral template should programmers use when getting computers to solve moral problems and dilemmas that arise in different activities? (Consider self-driving cars in difficult traffic situations.)

The first pitfall can be addressed with more knowledge of ethics. How do we handle the second challenge? Should we allow programmers to choose a moral theory as it suits them? Should we allow both utilitarian and deontological robot cars on our streets?

John-Stewart Gordon’s suggestion is that so-called machine ethics should focus on the similarities between different moral theories regarding what one should not do. Robots should be provided with a binding list of things that must be avoided as immoral. With this restriction, the robots then have leeway to use and balance the plurality of moral theories to solve moral problems in a variety of ways.
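
As a thought experiment, Gordon’s proposal might be sketched in code roughly as follows. Everything here is hypothetical (the action names, the forbidden list and the toy theory scores are my own inventions, not from the article); the sketch only illustrates the structure: binding prohibitions first, then leeway among competing theories.

```python
# Hypothetical sketch: a binding list of prohibited actions, plus leeway
# to balance several moral theories over whatever actions remain.

FORBIDDEN = {"harm_bystander", "deceive_user"}  # the binding 'must avoid' list

def utilitarian_score(action):
    # Toy stand-in for scoring by expected overall well-being.
    return {"brake_hard": 0.7, "swerve_left": 0.4}.get(action, 0.0)

def deontological_score(action):
    # Toy stand-in for scoring by conformity to duties.
    return {"brake_hard": 0.9, "swerve_left": 0.2}.get(action, 0.0)

def choose_action(candidates):
    # The prohibitions are absolute: forbidden actions are never considered.
    permitted = [a for a in candidates if a not in FORBIDDEN]
    if not permitted:
        raise ValueError("no morally permissible action available")
    # Within the permitted leeway, the theories are weighed together.
    return max(permitted, key=lambda a: utilitarian_score(a) + deontological_score(a))

print(choose_action(["harm_bystander", "brake_hard", "swerve_left"]))  # brake_hard
```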

In conclusion, researchers and engineers in robotics and AI should consult the ethics experts so that they can avoid the rookie mistakes and understand the methodological problems that arise when not even the experts in the field can agree about the right moral theory.

All this seems both wise and clarifying in many ways. At the same time, I feel genuinely confused about the very idea of ​​”moral machines” (although the article is not intended to discuss the idea, but focuses on ethical challenges for engineers). What does the idea mean? Not that I doubt that we can design artificial intelligence according to ethical requirements. We may not want robot cars to avoid collisions in city traffic by turning onto sidewalks where many people walk. In that sense, there may be ethical software, much like there are ethical funds. We could talk about moral and immoral robot cars as straightforwardly as we talk about ethical and unethical funds.

Still, as I mentioned, I feel uncertain. Why? I started by writing about “so-called” moral robots. I did so because I am not comfortable talking about moral machines, although I am open to suggestions about what it could mean. I think that what confuses me is that moral machines are largely mentioned without qualifying expressions, as if everyone ought to know what it should mean. Ethical experts disagree on the “right” moral theory. However, they seem to agree that moral theory determines what a moral decision is, much like grammar determines what a grammatical sentence is. With that faith in moral theory, one need not contemplate what a moral machine might be. It is simply a machine that makes decisions according to accepted moral theory. However, do machines make decisions in the same sense as humans do?

Maybe it is about emphasis. We talk about ethical funds without feeling dizzy because a stock fund is said to be ethical (“Can they be humorous too?”). There is no mythological emphasis in the talk of ethical funds. In the same way, we can talk about ethical robot cars without feeling dizzy as if we faced something supernatural. However, in the philosophical discussion of machine ethics, moral machines are sometimes mentioned in a mythological way, it seems to me. As if a centaur, a machine-human, will soon see the light of day. At the same time, we are not supposed to feel dizzy concerning these brave new centaurs, since the experts can spell out exactly what they are talking about. Having all the accepted templates in their hands, they do not need any qualifying expressions!

I suspect that ethical expertise, too, can be a philosophical pitfall when we intellectually approach so-called moral machines. The expert attitude can silence the confusing questions that we all need time to contemplate when honest doubts rebel against the claim to know.

Pär Segerdahl

Gordon, J. Building Moral Robots: Ethical Pitfalls and Challenges. Sci Eng Ethics 26, 141–157 (2020). https://doi.org/10.1007/s11948-019-00084-5

This post in Swedish

We like challenging questions - the ethics blog


Artificial intelligence and living consciousness

March 2, 2020

The Ethics Blog will publish several posts on artificial intelligence in the future. Today, I just want to make a little observation of something remarkable.

The last century was marked by fear of human consciousness. Our mind seemed as mystical as the soul, as superfluous in a scientific age as God. In psychology, behaviorism flourished, defining psychological words in terms of bodily behavior that could be studied scientifically in the laboratory. Our living consciousness was treated as a relic from bygone superstitious ages.

What is so remarkable about artificial intelligence? Suddenly, one seems to idolize consciousness. One wallows in previously sinful psychological words, at least when one talks about what computers and robots can do. These machines can see and hear; they can think and speak. They can even learn by themselves.

Does this mean that the fear of consciousness has ceased? Hardly, because when artificial intelligence employs psychological words such as seeing and hearing, thinking and understanding, the words cease to be psychological. The idea of computer “learning,” for example, is a technical term that computer experts define in their laboratories.
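
A minimal sketch can show what the technical sense amounts to. In the toy example below (my own illustration, not drawn from any particular textbook), a machine “learns” to double numbers; the word names nothing more than a repeated numerical adjustment.

```python
# Toy illustration: machine 'learning' as repeated numerical adjustment.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # pairs (x, y) where y = 2x

w = 0.0              # the single parameter to be 'learned'
learning_rate = 0.05

for _ in range(200):
    for x, y in data:
        error = w * x - y               # distance between prediction and target
        w -= learning_rate * error * x  # nudge w so the error shrinks

print(round(w, 3))  # close to 2.0: the machine has 'learned' to double
```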

When artificial intelligence embellishes machines with psychological words, then, one repeats how behaviorism defined mind in terms of something else. Psychological words take on new machine meanings that overshadow the meanings the words have among living human beings.

Remember this next time you wonder if robots might become conscious. The development exhibits fear of consciousness. Therefore, what you are wondering is not if robots can become conscious. You wonder if your own consciousness can be superstition. Remarkable, right?

Pär Segerdahl

This post in Swedish

We challenge habits of thought : the Ethics Blog

