Articles that turn out to be based on fraudulent or flawed research are, of course, retracted by the journals that published them. The fact that there is a clearly stated policy for retracting fraudulent research is extremely important. Science as well as its societal applications must be able to trust that published findings are correct and not fabricated or distorted.
However, how should we handle articles that turn out to be based on unethical research? For example, research on the bodies of executed prisoners? Or research that exposes participants to unreasonable risks? Or research supported by unacceptable sources of funding?
In a new article, William Bülow, Tove E. Godskesen, Gert Helgesson and Stefan Eriksson examine whether academic journals have clearly formulated policies for retracting papers that are based on unethical research. The review shows that many journals lack such policies. This introduces arbitrariness and uncertainty into the system, the authors argue. Readers cannot trust that published research is ethical. They also do not know on what grounds articles are retracted or remain in the journal.
To motivate a clearly stated policy, the authors discuss four possible arguments for retracting unethical research papers. Two arguments are considered particularly conclusive. The first is that such a policy communicates that unethical research is unacceptable, which can deter researchers from acting unethically. The second argument is that journals that make it possible to complete unethical research by publishing it and that benefit from it become complicit in the unethical conduct.
Retraction of research papers is a serious matter and very compromising for researchers. Therefore, it is essential to clarify which forms and degrees of unethical conduct are sufficient to justify retraction. The authors cite as examples research based on serious violations of human rights, unfree research and research with unacceptable sources of funding.
The article concludes by recommending that scientific journals introduce a clearly stated policy for retracting unethical research, as clear as the policy for fraudulent research. Among other things, all retractions should be marked in the journal and the reasons behind them should be specified in terms of both the kind and degree of unethical conduct.
Bülow, W., Godskesen, T. E., Helgesson, G., Eriksson, S. Why unethical papers should be retracted. Journal of Medical Ethics, Published Online First: 13 August 2020. doi: 10.1136/medethics-2020-106140
An article in the journal Big Data & Society criticizes the form of ethics that has come to dominate research and innovation in artificial intelligence (AI). The authors question the same “framework interpretation” of ethics that you could read about on the Ethics Blog last week. However, with one disquieting difference. Rather than functioning as a fence that can set the necessary boundaries for development, the framework risks being used as ethics washing by AI companies that want to avoid legal regulation. By referring to ethical self-regulation – beautiful declarations of principles, values and guidelines – one hopes to be able to avoid legal regulation, which could set important limits for AI.
The problem with AI ethics as “soft ethics legislation” is not just that it can be used to avoid necessary legal regulation of the area. The problem is above all, according to the SIENNA researchers who wrote the article, that a “law conception of ethics” does not help us to think clearly about new situations. What we need, they argue, is an ethics that constantly renews our ability to see the new. This is because AI is constantly confronting us with new situations: new uses of robots, new opportunities for governments and companies to monitor people, new forms of dependence on technology, new risks of discrimination, and many other challenges that we may not easily anticipate.
The authors emphasize that such eye-opening AI ethics requires close collaboration with the social sciences. That, of course, is true. Personally, I want to emphasize that an ethics that renews our ability to see the new must also be philosophical in the deepest sense of the word. To see the new and unexpected, you cannot rest comfortably in your professional competence, with its established methods, theories and concepts. You have to question your own disciplinary framework. You have to think for yourself.
Read the article, which has already attracted well-deserved attention.
The word ethical framework evokes the idea of something rigid and separating, like the fence around the garden. The research that emerges within the framework is dynamic and constantly new. However, to ensure safety, it is placed in an ethical framework that sets clear boundaries for what researchers are allowed to do in their work.
The article questions not only the image of ethical frameworks as static boundaries for dynamic research activities. Inspired by ideas within so-called responsible research and innovation (RRI), it also questions the image of research as something that can be separated from ethics and society.
Researchers tend to regard research as their own concern. However, there are tendencies towards increasing collaboration not only across disciplinary boundaries, but also with stakeholders such as patients, industry and various forms of extra-scientific expertise. These tendencies make research an increasingly dispersed, common concern. Not only in retrospect in the form of applications, which presupposes that the research effort can be separated, but already when research is initiated, planned and carried out.
This could sound threatening, as if foreign powers were influencing the free search for truth. Nevertheless, there may also be something hopeful in the development. To see the hopeful aspect, however, we need to free ourselves from the image of ethical frameworks as static boundaries, separate from dynamic research.
With examples from the Human Brain Project, Arleen Salles and Michele Farisco try to show how ethical challenges in neuroscience projects cannot always be controlled in advance, through declared principles, values and guidelines. Even ethical work is dynamic and requires living intelligent attention. The authors also try to show how ethical attention reaches all the way into the neuroscientific issues, concepts and working conditions.
When research on the human brain is not aware of its own cultural and societal conditions, but takes them for granted, it may mean that relevant questions are not asked and that research results do not always have the validity that one assumes they have.
We thus have good reasons to see ethical and societal reflections as living parts of neuroscience, rather than as rigid frameworks around it.
Arleen Salles & Michele Farisco (2020) Of Ethical Frameworks and Neuroethics in Big Neuroscience Projects: A View from the HBP, AJOB Neuroscience, 11:3, 167-175, DOI: 10.1080/21507740.2020.1778116
The covid-19 pandemic forced many of us to work online from home. The change contained surprises, both positive and negative. We learned that it is possible to have digital staff meetings, seminars and coffee breaks, and that working from home can sometimes mean less interference than working in the office. We also discovered how much better the office chair and desk are, how difficult it is to try to be professional online from an untidy home, and that working from home often means more interference than working in the office!
The European Human Brain Project (HBP) has extensive experience of collaborating digitally, with regular online meetings. This is how they worked long before the pandemic struck, since the project is a collaboration between more than 100 partner institutions in almost 20 countries, some of them outside Europe. As part of the project’s investment in responsible research and innovation, special efforts are now being made to digitally include everyone, when so much of the work has moved to the internet.
In the Journal of Responsible Technology, Karin Grasenick and Manuel Guerrero from HBP formulate recommendations based on experiences from the project. Their recommendations concern four areas: How do we facilitate social and family life? How do we reduce stress and anxiety? How do we handle career stages, roles and responsibilities? How do we support team spirit and virtual cooperation?
Read the concise article! You will recognize your work situation and be inspired by the suggestions. Online collaboration will remain with us even after the pandemic.
Karin Grasenick, Manuel Guerrero, Responsible Research and Innovation & Digital Inclusiveness during Covid-19 Crisis in the Human Brain Project (HBP), Journal of Responsible Technology (2020), doi: https://doi.org/10.1016/j.jrt.2020.06.001
In an article that is unusually rhetorical for a scientific journal, the authors paint the image of a humanity that frees itself from moral weakness by downloading ethical fitness apps.
Given this enormous and growing self-knowledge, why do we not develop artificial intelligence that supports a morally limping humanity? Why spend so many resources on developing ever more intelligent artificial intelligence, which takes our jobs and might one day threaten humanity in the form of uncontrollable superintelligence? Why do we behave so unwisely when we could develop artificial intelligence to help us humans become superethical?
How can AI make morally weak humans super-ethical? The authors suggest a comparison with the fitness apps that help people to exercise more efficiently and regularly than they otherwise would. The authors’ suggestion is that our ethical knowledge of moral theories, combined with our growing scientific knowledge of moral weaknesses, can support the technological development of moral crutches: wise objects that support people precisely where we know that we are morally limping.
My personal assessment of this utopian proposal is that it might easily be realized in less utopian form. AI is already widely used as a support in decision-making. One could imagine mobile apps that support consumers in making ethical food choices in the grocery shop. Or computer games where consumers are trained to weigh different ethical considerations against one another, such as animal welfare, climate effects, ecological effects and much more. Nice-looking presentations of the issues and encouraging music that make it fun to be moral.
The philosophical question I ask is whether such artificial decision support in shops and other situations really can be said to make humanity wiser and more ethical. Imagine a consumer who chooses among the vegetables, eagerly looking for decision support on the smartphone. What do you see? A human who, thanks to the mobile app, has become wiser than Socrates, who lived long before we knew as much about ourselves as we do today?
Ethical fitness apps are conceivable. However, the risk is that they spread a form of self-knowledge that flies above ourselves: self-knowledge suspiciously similar to the moral vice of self-satisfied presumptuousness.
Autonomy is such a cherished concept in ethics that I hardly dare to write about it. The fact that the concept cherishes the individual does not make my task any easier. The slightest error in my use of the term, and I risk being identified as an enemy perhaps not of the people but of the individual!
In ethics, autonomy means personal autonomy: individuals’ ability to govern their own lives. This ability is constantly at risk of being undermined. It is undermined if others unduly influence your decisions, if they control you. It is also undermined if you are not sufficiently well informed and rational. For example, if your decisions are based on false or contradictory information, or if your decisions result from compulsions or weakness of the will. It is your faculty of reason that should govern your life!
In an article in BMC Medical Ethics, Amal Matar, who received her PhD at CRB, discusses decision-making situations in healthcare where this individual-centered concept of autonomy seems less useful. It is about decisions made not by individuals alone, but by people together: by couples planning to become parents.
A couple planning a pregnancy together is expected to make joint decisions. Maybe about genetic tests and measures to be taken if the child risks developing a genetic disease. Here, as always, the healthcare staff is responsible for protecting the patients’ autonomy. However, how is this feasible if the decision is not made by individuals but jointly by a couple?
Personal autonomy is an idealized concept. No man is an island, it is said. This is especially evident when a couple is planning a life together. If a partner begins to emphasize his or her personal autonomy, the relationship is probably about to disintegrate. An attempt to correct the lack of realism in the idealized concept has been to develop ideas about relational autonomy. These ideas emphasize how individuals who govern their lives are essentially related to others. However, as you can probably hear, relational autonomy remains tied to the individual. Amal Matar therefore finds it urgent to take a further step towards realism concerning joint decisions made by couples.
Can we talk about autonomy not only at the level of the individual, but also at the level of the couple? Can a couple planning a pregnancy together govern their life by making decisions that are autonomous not only for each one of them individually, but also for them together as a couple? This is Amal Matar’s question.
Inspired by how linguistic meaning is conceptualized in linguistic theory as existing not only at the level of the word, but also at the level of the sentence (where words are joined together), Amal Matar proposes a new concept of couple autonomy. She suggests that couples can make joint decisions that are autonomous at both the individual and the couple’s level.
She proposes a three-step definition of couple autonomy. First, both partners must be individually autonomous. Then, the decision must be reached via a communicative process that meets a number of criteria (no partner dominates, sufficient time is given, the decision is unanimous). Finally, the definition allows one partner to autonomously transfer aspects of the decision to the other partner.
The purpose of the definition is not a philosophical revolution in ethics. The purpose is practical. Amal Matar wants to help couples and healthcare professionals to speak realistically about autonomy when the decision is a couple’s joint decision. Pretending that separate individuals make decisions in parallel makes it difficult to realistically assess and support the decision-making process, which is about interaction.
Amal Matar concludes the article, written together with Anna T. Höglund, Pär Segerdahl and Ulrik Kihlbom, by describing two cases. The cases show concretely how her definition can help healthcare professionals to assess and support autonomous decision-making at the level of the couple. In one case, the couple’s autonomy is undermined; in the other case, probably not.
Read the article as an example of how we sometimes need to modify cherished concepts to enable a realistic use of them.
Matar, A., Höglund, A.T., Segerdahl, P. and Kihlbom, U. Autonomous decisions by couples in reproductive care. BMC Med Ethics 21, 30 (2020). https://doi.org/10.1186/s12910-020-00470-w
Academic research is driven by dissemination of results to peers at conferences and through publication in scientific journals. However, research results belong not only to the research community. They also belong to society. Therefore, results should reach not only your colleagues in the field or the specialists in adjacent fields. They should also reach outside the academy.
Who is out there? A homogeneous public? No, it is not that simple. Communicating research is not two activities: first communicating the science to peers and then telling the popular scientific story to the public. Outside the academy, we find engineers, entrepreneurs, politicians, government officials, teachers, students, research funders, taxpayers, healthcare professionals… We are all out there with our different experiences, functions and skills.
Research communication is therefore a strategically more complicated task than just “reaching the public.” Why do you want to communicate your results; why are they important? Who will find your results important? How do you want to communicate them? When is the best time to communicate? There is not just one task here. You have to think through what the task is in each particular case. For the task varies with the answers to these questions. Only when you can think strategically about the task can you communicate research responsibly.
Josepine Fernow’s contribution is, in my view, more than a convincing argument. It is an eye-opening text that helps researchers see more clearly their diverse relationships to society, and thereby their responsibilities. The academy is not a rock of knowledge in a sea of ignorant lay people. Society consists of experienced people who, because of what they know, can benefit from your research. It is easier to think strategically about research communication when you survey your relations to a diversified society that is already knowledgeable. Josepine Fernow’s argumentation helps and motivates you to do that.
Josepine Fernow also warns against exaggerating the significance of your results. Bioscience has potential to give us effective treatments for serious diseases, new crops that meet specific demands, and much more. Since we are all potential beneficiaries of such research, as future patients and consumers, we may want to believe the excessively wishful stories that some excessively ambitious researchers want to tell. We participate in a dangerous game of increasingly unrealistic hopes.
The name of this dangerous game is hype. Research hype can make it difficult for you to continue your research in the future, because of eroded trust. It can also make you prone to take unethical shortcuts. The “huge potential benefit” obscures your judgment as a responsible researcher.
Responsible research communication is as important as it is difficult. Therefore, these tasks deserve our greatest attention. Read Josepine Fernow’s argumentation for carefully planned communication strategies. It will help you see your responsibility more clearly.
The STARBIOS2 project has carried out its activities in a context of the profound transformations that affect contemporary societies, and now we are all facing the Covid-19 pandemic. Science and society have always coevolved, they are interconnected entities, but their relationship is changing and it has been for some time. This shift from modern to so-called postmodern society affects all social institutions in similar ways, whether their work is in politics, religion, family, state administration, or bioscience.
We can find a wide range of phenomena connected to this trend in the literature, for instance: globalization; weakening of previous social “structures” (rules, models of action, values and beliefs); more capacity and power of individuals to think and act more freely (thanks also to new communication technologies); exposure to risks of different kinds (climate change, weakening of welfare, etc.); great social and cultural diversification; and weakening of traditional boundaries and spheres of life, etc.
In this context, we are witnessing the diminishing authority and prestige of all political, religious, even scientific institutions, together with a decline in people’s trust towards these institutions. One example would be the anti-vaccination movement.
Meanwhile, scientific research is also undergoing profound transformations, experiencing a transition that has been examined in various ways and called various names. At the heart of this transformation is the relationship between research and the society it belongs to. We can observe a set of global trends in science.
Such trends include the increasing relationship between universities, governments and industries; the emergence of approaches aimed at “opening” science to society, such as citizen science; the diffusion of cooperative practices in scientific production; the increasing relevance of transdisciplinarity; the increasing expectation that scientific results have economic, social, and environmental impacts; the increasingly competitive access to public funds for research; the growing importance attached to quantitative evaluation systems based on publications, often with distorting effects and questionable results; and the emergence on the international economic and technological scene of actors such as India, China, Brazil, South Africa and others. These trends produce risks and opportunities for both science and society.
Critical concerns for science include career difficulties for young researchers and women in the scientific sector; the cost of publishing and the difficulty of publishing open access; and the protection of intellectual property rights.
Of course, these trends and issues manifest in different ways and intensities according to the different political, social and cultural contexts they exist in.
After the so-called “biological revolution”, and within the context of the “fourth industrial revolution” with its “converging technologies” such as genetics, robotics, info-digital technologies, neuroscience, nanotechnology, biotechnology and artificial intelligence, the biosciences are at a crossroads in their relationship to society.
In this new context, the knowledge produced and the technological solutions developed require a deeper understanding of their status, limits, and ethical and social acceptability (take organoids, to name one example). Moreover, food security, the clean energy transition, climate change and pandemics are all challenges where bioscience can play a crucial role, while new legal, ethical and social questions arise that need to be dealt with.
These processes have been running for years, albeit in different ways, and national and international decision-makers have been paying attention. Various forms of governance have been developed and implemented over time, to re-establish and harmonize the relationship between scientific and technological research and the rest of society, including more general European strategies and approaches such as Smart Specialization, Open Innovation, Open Science and Responsible Research and Innovation as well as strategies related to specific social aspects of science (such as ethics or gender).
Taking on an approach such as RRI is not simply morally recommendable, but indispensable for attempting a re-alignment between scientific research and the needs of society. Starting from the areas of the life of the scientific communities that are most crucial to science-society relations (The 5+1 RRI keys: Science education, Gender equality, Public engagement, Ethics, Open access, and the cross-cutting sixth key: Governance) and taking the four RRI dimensions into account (anticipation, inclusiveness, responsiveness, and reflexivity) can provide useful guidance for how to activate and drive change in research organisations and research systems.
We elaborate and experiment, in search of the most effective and most relevant solution. While at the same time, there is a need to encourage mainstreaming of the most substantial solutions, to root them more deeply and sustainably in the complex fabric of scientific organisations and networks. Which leads us to ask ourselves: in this context, how can we mainstream RRI and its application in the field of bioscience?
Based on what we know, and on experiences from the STARBIOS2 project, RRI and similar approaches need to be promoted and supported by specific policies and contextualised on at least four levels.
Organizational contextualization: mainstreaming takes place through the promotion of a greater embedment of RRI, or similar approaches, within individual research organizations such as universities, national institutes, private centres, etc.
Disciplinary or sectoral contextualization: mainstreaming consists of adapting the responsible research and innovation approach to a specific discipline, for example biotechnology, or to an entire “sector” in a broad sense, such as bioscience.
Geopolitical and cultural contextualization: mainstreaming aims to identify forms of adaptation, or rather reshaping, of RRI or similar approaches in various geopolitical and cultural contexts, taking into account elements such as the features of the national research systems, the economy, territorial dynamics, local philosophy and traditions, etc.
Historical contextualization: RRI mainstreaming is related to the ability of science to respond to the challenges that history poses from time to time, of which the COVID-19 pandemic is only the latest, serious example, and to prevent them as much as possible.
During the course of the STARBIOS2 project, we have developed a set of guidelines and a sustainable model for RRI implementation in bioscience research institutions. Over the course of 4 years, 6 bioscience research institutions in Europe, and 3 outside Europe, worked together to achieve structural change towards RRI in their own research institutions, with the goal of achieving responsible biosciences. We were looking forward to revealing and discussing our results in April, but with the Covid-19 outbreak, neither that event nor our Cape Town workshop was a possibility. Luckily, we have adapted and will now share our findings online, at our final event on 29 May. We hope to see you there.
For our final remark: as the Covid-19 pandemic challenges our societies and our political and economic systems, we recognise that scientists are also being challenged, by the coronavirus as well as by contextual challenges. The virus is testing their ability to play a key role for the public, to share information and to produce relevant knowledge. But when we go back to “normal”, the challenge of changing science-society relations will persist. And we remain convinced that RRI and similar approaches will be a valuable contribution to addressing these challenges, now and in the future.
Written by…
Daniele Mezzana, a social researcher working in the STARBIOS2 project (Structural Transformation to Attain Responsible BIOSciences) as part of the coordination team at University of Rome – Tor Vergata.
This text is based on the Discussion Note for the STARBIOS2 final event on 29 May 2020.
The STARBIOS2 project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 709517. The contents of this text and the view expressed are the sole responsibility of the author and under no circumstances can be regarded as reflecting the position of the European Union.
How do we know? That is the recurring question in a scientific culture. Do we have support for what we claim or is it just an opinion? Is there evidence?
The development of new cancer treatments provides many examples of the recurring question. The pharmaceutical company would like to be able to claim that the new treatment is more effective than existing alternatives and that the recommended dosages are effective without excessive side effects. However, first we must answer the question: How do we know?
It is not enough to ask the question just once. We must repeat the question for every aspect of the treatment. Any claim on efficacy, side effects and dosages must be supported by answers to the question. How do we arrive at these answers? How do we check that it is not mere opinions? Through clinical trials conducted with cancer patients who agree to be research subjects.
A new research ethical study shows, however, that an ethically sensitive claim is often repeated in cancer research, without first asking and answering the question “How do we know?” in a satisfying way. Which claim? It is the claim that cancer patients are better off as participants in clinical trials than as regular patients who receive standard treatment. The claim is ethically sensitive because it can motivate patients to participate in trials.
In a large interview study, the authors first investigated whether the claim occurs among physicians and nurses working with clinical trials. Then, through a systematic literature review, they examined whether there is scientific evidence supporting the claim. The startling answer to the questions is: Yes, the claim is common. No, the claim lacks support.
Patients recruited for clinical trials are thus at risk of being misled by the common but unfounded opinion that research participation means better treatment. Of course, it is conceivable that patients who participate in trials will at least get indirect positive effects through increased attention: better follow-ups, more frequent sampling, closer contacts with physicians and nurses. However, indirect positive effects on outcomes should have been visible in the literature study. Regarding subjective effects, the article points out that such effects will vary with the patients’ conditions and preferences. It is not always positive for a very sick patient to provide the many samples that research needs. In general, then, we cannot claim that research participation has indirect positive effects.
An ethically important conclusion drawn in the article is the following. If we suggest to patients who consent to participation in trials that research means better treatment, then they receive misleading information. Instead, altruistic research participation should be emphasized. By participating in studies, patients support new knowledge that can enable better cancer treatments for future patients.
The article examines a case where the question “How do we know?” has the answer, “We do not know, it is just an opinion.” Then at least we know that we do not know! How do we know? Through the studies presented in the article – read it!
Our attitude to science is changing. Can we talk solemnly about it anymore as a unified endeavor, or even about the sciences? It seems more apt to talk about research activities that produce useful and applicable knowledge.
Science has been dethroned, it seems. In the past, we revered it as free and independent search for the truth. We esteemed it as our tribunal of truth, as the last arbiter of truth. Today, we demand that it brings benefits and adapts to society. The change is full of tension because we still want to use scientific expertise as a higher intellectual authority. Should we bow to the experts or correct them if they do not deliver the “right knowledge” or the “desirable facts”?
Responsible Research and Innovation (RRI) is an attempt to manage this risky change, adapting science to new social requirements. As the name suggests, RRI is partly an expression of the same basic change of attitude. One could perhaps view RRI as the responsible dethroning of science.
Some mourn the dethroning, others rejoice. Here I just want to link RRI to the changed attitude to science. RRI handles a change that is basically affirmed. The ambiguous attitude to scientific expertise, mentioned above, shows how important it is that we take responsibility for people’s trust in what is now called research and innovation. For why should we listen to representatives of a sector with such unholy designation?
RRI has been introduced into European research within the Horizon 2020 programme. Several projects are specifically about implementing and studying RRI. Important aspects of RRI are gender equality, open access publishing, science education, research communication, public engagement and ethics. It is about adapting research and innovation to a society with new hopes and demands on what we proudly called science.
A new book describes experiences of implementing RRI in a number of bioscience organizations around the world. The book is written within the EU-project, STARBIOS2. In collaboration with partners in Europe, Africa and the Americas, this project planned and implemented several RRI initiatives and reflected on the work process. The purpose of STARBIOS2 has been to change organizations durably and structurally. The book aims to help readers formulate their own action plans and initiate structural changes in their organizations.
The cover describes the book as guidelines. However, you will not find formulated guidelines. What you will find, and which might be more helpful, is self-reflection on concrete examples of how to work with RRI action plans. You will find suggestions on how to emphasize responsibility in research and development. Thus, you can read about efforts to support gender equality, improve exchange with the public and with society, support open access publication, and improve ethics. Read and be inspired!
Finally, I would like to mention that the Ethics Blog, as well as our ethics activities here at CRB, could be regarded as examples of RRI. I plan to return later with a post on research communication.
The STARBIOS2 project is organising a virtual final event on 29 May! Have a look at the preliminary programme!
During the last phase of the Human Brain Project, the activities on this blog received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. HBP SGA3 - Human Brain Project Specific Grant Agreement 3 (945539). The views and opinions expressed on this blog are the sole responsibility of the author(s) and do not necessarily reflect the views of the European Commission.