The Covid-19 pandemic forced many of us to work online from home. The change brought surprises, both positive and negative. We learned that it is possible to have digital staff meetings, seminars and coffee breaks, and that working from home can sometimes mean fewer interruptions than working in the office. We also discovered how much better the office chair and desk are, how difficult it is to try to be professional online from an untidy home, and that working from home often means more interruptions than working in the office!
The European Human Brain Project (HBP) has extensive experience of collaborating digitally, with regular online meetings. This is how they worked long before the pandemic struck, since the project is a collaboration between more than 100 partner institutions in almost 20 countries, some of them outside Europe. As part of the project’s investment in responsible research and innovation, special efforts are now being made to include everyone digitally, now that so much of the work has moved to the internet.
In the Journal of Responsible Technology, Karin Grasenick and Manuel Guerrero from HBP formulate recommendations based on experiences from the project. Their recommendations concern four areas: How do we facilitate social and family life? How do we reduce stress and anxiety? How do we handle career stages, roles and responsibilities? How do we support team spirit and virtual cooperation?
Read the concise article! You will recognize your work situation and be inspired by the suggestions. Even after the pandemic, online collaboration will remain part of our work.
Karin Grasenick, Manuel Guerrero, Responsible Research and Innovation & Digital Inclusiveness during Covid-19 Crisis in the Human Brain Project (HBP), Journal of Responsible Technology (2020), doi: https://doi.org/10.1016/j.jrt.2020.06.001
In an article that is unusually rhetorical for a scientific journal, the image is drawn of a humanity that frees itself from moral weakness by downloading ethical fitness apps.
Given this enormous and growing self-knowledge, why do we not develop artificial intelligence that supports a morally limping humanity? Why spend so many resources on developing ever more intelligent artificial intelligence, which takes our jobs and might one day threaten humanity in the form of uncontrollable superintelligence? Why do we behave so unwisely when we could develop artificial intelligence to help us humans become superethical?
How can AI make morally weak humans superethical? The authors suggest a comparison with the fitness apps that help people exercise more efficiently and regularly than they otherwise would. The authors’ suggestion is that our ethical knowledge of moral theories, combined with our growing scientific knowledge of moral weaknesses, can support the technological development of moral crutches: wise objects that support us precisely where we know that we are morally limping.
My personal assessment of this utopian proposal is that it might easily be realized in less utopian form. AI is already widely used as a support in decision-making. One could imagine mobile apps that help consumers make ethical food choices in the grocery shop. Or computer games where consumers are trained to weigh different ethical considerations against one another, such as animal welfare, climate effects, ecological effects and much more. Nice-looking presentations of the issues and encouraging music that make it fun to be moral.
The philosophical question I ask is whether such artificial decision support in shops and other situations really can be said to make humanity wiser and more ethical. Imagine a consumer who chooses among the vegetables, eagerly looking for decision support in the smartphone. What do you see? A human who, thanks to the mobile app, has become wiser than Socrates, who lived long before we knew as much about ourselves as we do today?
Ethical fitness apps are conceivable. However, the risk is that they spread a form of self-knowledge that flies above ourselves: self-knowledge suspiciously similar to the moral vice of self-satisfied presumptuousness.
Allegedly, there are over 12,000 so-called predatory journals out there. Instead of supporting readers and science, these journals serve their own economic interests first and at best offer dubious merits for scholars. We believe that scholars working in any academic discipline have a professional interest and a responsibility to keep track of these journals. It is our job to warn the young or inexperienced about journals where a publication or editorship could be detrimental to their career and where science is not served.
We have seen “predatory” publishing take off in a big way and noticed how colleagues have started to turn up in the pages of some of these journals. While many have assumed that this phenomenon is mainly a problem for low-status universities, there are strong indications that predatory publishing is part of a major trend towards the industrialization of misconduct and that it affects many top-flight research institutions (see Priyanka Pulla: “In India, elite institutes in shady journals”, Science 354(6319): 1511-1512).
This trend, referred to by some as the dark side of publishing, needs to be reversed. That is why we first published this blog post in 2016. This is our fourth annual update (the first version can be found here). At first, we relied heavily on the work of Jeffrey Beall, a librarian at the University of Colorado, who ran blacklists of “potential, possible, or probable” predatory publishers and journals. His lists have since been removed, although they live on in new, anonymous form at the Stop Predatory Journals site (SPJ) and can also be found archived.
The latest effort to create a thorough blacklist comes from Cabells, who distinguish around 70 different unacceptable violations and employ a whole team to review journals. These lists are not, however, the final say on the matter, as it is impossible for one person or a limited group to reliably judge actors in every academic discipline. Moreover, since only questionable journals are listed, the good journals must be found elsewhere.
A gatekeeping response needs to be anchored in each discipline and among the scholars who make up that discipline. As a suitable response in bioethics, we have chosen, first, to collect a few authoritative lists of recommended bioethics journals that anyone in bioethics can consult to find good journals to publish with.
For our first post, we recommended a list of journals ourselves, which brought on some well-deserved questions and criticism about the criteria for inclusion. Unfortunately, our list then drew attention away from other parts of the message that we were more concerned to get across. Besides, there are many other parties making such lists. We have therefore dropped this feature. Instead, we have enlarged the collection of good journal lists as a service to our readers. They are all of great use when further exploring the reputable journals available:
It is of prime importance to list the journals that are potentially or possibly predatory, or of such low quality that it might be dishonoring to engage with them. We have listed all 50 of them alphabetically (eleven new entries for 2019; two have ceased operation and been removed), and provided both the homepage URL and links to any professional discussion of these journals that we have found (which most often alerted us to their existence in the first place).
Each of these journals asks scholars for manuscripts in, or claims to publish papers in, bioethics or related areas (such as practical philosophy). They have been reviewed by the authors of this blog post as well as by a group of reference scholars whom we have asked for advice on the list. The journals listed have unanimously been agreed to be journals that – in light of the criticism put forth and the quality we see – we would not deem acceptable for us to publish in. Typical signs as to why a journal could fall in this category, such as extensive spamming, publishing in almost any subject, or fake data being included on the website, are listed here:
We have started to evaluate the journals more systematically against the 25 defining characteristics we outlined in the article linked to above (with the help of science and technology PhD students). The results will be added as they become available.
We would love to hear your views on this blog post, and we would be especially grateful for pointers to journals engaging in sloppy or bad publishing practices. The list is not meant as a checklist but as a starting point for any bioethics scholar to ponder for themselves where to publish.
Also, anyone thinking that a journal on our list should be given due reconsideration might post their reasons for this as a comment to the blog post or send an email to us. Journals might start out with some sloppy practices but shape up over time, and we will be happy to hear about it. You can make an appeal against the inclusion of a journal and we will deal with it promptly and publicly.
Please spread the content of this blog as much as you can and check back for updates (we will do a major update annually and continually add any further information found).
Note to readers: The list contained on Stop Predatory Journals has been down for a while and the domain now seems to be for sale. From 2022, any reference to journals/publishers being included on SPJ refers to their previous inclusion. We will gradually check for inclusion in the most prominent list presently available, Cabells’ Predatory Reports, as an alternative.
WHERE NOT TO PUBLISH IN BIOETHICS – THE 2020 LIST
Advanced Humanities & Social Sciences (Consortium Publisher) Critical remark (2018): It has been claimed that behind this journal you find OMICS, the most discussed publisher of this kind, see http://ottawacitizen.com/news/local-news/predatory-publisher-expanding-empire-in-canada. The only article published in 2016 is very badly edited, all the references are lost in the text, and the paper would not pass an exam at our departments. The 2017 volume again contains only one article. The publisher is listed on SPJ. Critical remark (2022): After a complaint from the publisher, we have checked the latest volume. An article like this one shows no evident editorial work on the paper at all, so we still regard the journal as a low-quality outlet for research.
Advances In Medical Ethics (Longdom Publishing) Critical remark (2019): When asked, one editor attested that his editorship was forged. The publisher was on Beall’s list and is now listed at Cabells with 5 violations. A thorough review December 2019 concludes that it exhibits at least 7 of the 25 criteria for “predatory” journals. A more recent review (2022) concludes that it exhibits about 17 such criteria.
American Open Ethics Journal (Research and Knowledge Publication) Critical remark (2019): Listed on Cabells with 7 violations. A thorough review February 2020 concludes that it exhibits at least 11 of the 25 criteria for “predatory” journals.
Annals of Bioethics & Clinical Applications (Medwin Publishers) Criticism 1 │ Criticism 2 Critical remark (2019): Publisher was on Beall’s list and is on many other lists of these journals. They say that they are “accepting all type of original works that is related to the disciplines of the journal” and indeed the flow chart of manuscript handling does not have a reject route. Indexed by alternative indexes. Critical remark (2020): Listed on Cabells with 5 violations. A thorough review October 2020 concludes that it exhibits at least 9 of the 25 criteria for “predatory” journals.
Austin Journal of Genetics and Genomic Research (Austin Publishing Group) Criticism 1 │ Criticism 2 │ Criticism 3 Critical remark (2017): Spam e-mail about special issue on bioethics; Listed by SPJ; Romanian editorial member is said to be from a university in “Europe”; Another editorial board member is just called “Michael”; APG has been sued by the International Association for Dental Research and the American Association of Neurological Surgeons for infringing on their IP rights. Student reviews conclude that the journal is not suitable to publish in, one finding that the journal exhibits at least 16 of the 25 criteria for “predatory” journals. Critical remark (2019): Listed by Cabells with 10 violations. Critical remark (2021): A thorough review concludes that the journal exhibits at least 13 of the 25 criteria for “predatory” journals.
British Open Journal of Ethics (British Open Research Publications) Critical remark (2019): Listed by Cabells with 6 violations. Critical remark (2022): A thorough review concludes that the journal exhibits many criteria for “predatory” journals, for example that no editorial board exists and the journal is not indexed, and that it is strongly recommended to avoid “publishing” with this journal.
Creative Education (Scientific Research Publishing – SCIRP) Criticism 1 │ Criticism 2 Critical remark (2017): Listed by SPJ; They misleadingly claim to be indexed by ISI, but this only relates to being among cited articles – they are not indexed. A thorough review May 2017 concludes that it exhibits at least 5 of the 25 criteria for “predatory” journals.
East European Scientific Journal (East European Research Alliance) Critical remark (2017): Listed by SPJ; Criticised by Beall for having a bogus editorial board; Claims to be indexed by ISI, but that is not the well-known Institute for Scientific Information (now Thomson Reuters) but rather the so-called International Scientific Indexing. Thorough reviews November 2018 and February 2019 conclude that it exhibits at least 13 or 14 of the 25 criteria for “predatory” journals.
Ethics Today Journal (Franklin Publishing) Critical remark (2019): Listed by Cabells with 9 violations.
European Academic Research (Kogaion Publishing Center, formerly Bridge Center) Critical remark (2017): Listed by SPJ; Uses impact factor from Universal Impact Factor (now defunct); A thorough review May 2017 concludes that it exhibits at least 15 of the 25 criteria for “predatory” journals.
European Scientific Journal (European Scientific Institute) Critical remark (2017): Listed by SPJ; Use of alternative indexes. A thorough review May 2017 concludes that it exhibits at least 9 of the 25 criteria for “predatory” journals.
International Journal of Contemporary Research & Review Critical remark (2017): Listed by SPJ; Indexed by Index Copernicus; Despite claims they seem not to be indexed by either Chemical Abstracts or DOAJ. A thorough review June 2017 concludes that it exhibits at least 9 of the 25 criteria for “predatory” journals.
International Journal of Current Research Criticism 1 Critical remark (2017): Listed by SPJ; Uses IF from SJIF, Index Copernicus and more. It wrongly claims, among other things, to be indexed by Thomson Reuters and ORCID and to have a DOI. A thorough review January 2018 concludes that it exhibits at least 12 of the 25 criteria for “predatory” journals.
International Journal of Current Research and Academic Review (Excellent Publishers) Critical remark (June 2018): Listed by SPJ and Cabells because of misleading claims about credentials, metrics, and too quick review; alternative indexing; publishes in almost any field imaginable; the editor-in-chief is head of the “Excellent Education and Researh Institute” (sic), which does not seem to exist even when spelled right. A thorough review in December 2019 concludes that it exhibits at least 12 of the 25 criteria for “predatory” journals.
International Journal of Ethics (Nova Science Publishers) Criticism 1 │ Criticism 2 Critical remark (2022): The Wikipedia article on Nova notes that librarians have been critical of this publisher; a Ms. Alexandra Columbus is the owner of, business manager for and customer contact of Nova.
International Journal of Ethics & Moral Philosophy (Journal Network) Critical remark (2017): Listed by SPJ; Publisher was criticized by Beall when launching 350 journals at once; After several years not one associate editor has signed up and no article has been published; No editorial or contact details available. Thorough reviews in May 2019 and February 2020 conclude that it exhibits at least 10 to 12 of the 25 criteria for “predatory journals”.
International Journal of Ethics in Engineering & Management Education Critical remark (2019): Papers from almost any field; Claims to have a 5.4 Impact Factor (from IJEEE); Indexed by GJIF etc.; A non-existent address in “Varginia”, US (sic!); Open access but asks for the copyright; Claims to be indexed in Scopus cannot be verified. Thorough reviews February 2018 and February 2020 conclude that it exhibits at least 16-17 of the 25 criteria for “predatory” journals. Listed by Cabells with 11 violations found.
International Journal of Humanities and Social Science Invention Criticism 1 Critical remark (2017): Listed by SPJ and is on many other lists of blacklisted journals; An IF of 4.5 given by African Quality Centre for Journals; Open access but asks for the copyright; Publishes any subject; Says that the journal is indexed in DOAJ which it does not seem to be. A thorough review February 2018 concludes that it exhibits at least 13 of the 25 criteria for “predatory” journals.
International Journal of Humanities and Social Sciences Critical remark (2017): Listed by SPJ; Has an amazing fast-track review option for $100 that guarantees “the review, editorial decision, author notification and publication” to take place “within 2 weeks”. “Editors” claim that repeated requests to be removed from the list of editors result in nothing. Thorough reviews in February and June 2018 conclude that it seems to exhibit at least 7 to 10 of the 25 criteria for “predatory” journals.
International Journal of Humanities & Social Studies Critical remark (2017): Listed by SPJ; IF from International Impact Factor Services; States that there “is no scope of correction after the paper publication”. Critical remark (2018): They write that the “review process will be completed expectedly within 3-4 days”. Critical remark (2020): A thorough review in October 2020 concludes that it seems to exhibit at least 7-8 of the 25 criteria for “predatory” journals.
International Journal of Legal, Ethical and Regulatory Issues (Jacobs Publishers) Criticism 1 Critical remark (2019): Spamming with invitation to publish. They are unsure of their own name; in the e-mail they call the journal “International Journal of Legal, Ethical and Regulatory Affairs“! Publisher listed on SPJ. Editor-in-chief and editorial board are missing. Claims that material is “written by leading scholars” which is obviously false.
International Journal of Philosophy (SciencePG) Criticism 1 │ Criticism 2 Critical remark (2017): Listed by SPJ; Alternative indexing and also IF from Universal Impact Factor (now defunct); Promises a two-week peer review. Thorough reviews in April and November 2018 conclude that it seems to exhibit at least 10 or 8 of the 25 criteria for “predatory” journals and also find obvious examples of pseudo-science among the published articles.
International Journal of Philosophy and Theology (American Research Institute for Policy Development) Criticism 1 │ Criticism 2 │ Criticism 3 Critical remark: A thorough review in June 2018 concludes that “there are grounds to believe that the American Research Institute never intended to create a serious scientific periodical and that, on the contrary, its publications are out-and-out predatory journals.” Update (2022): A thorough review in June concludes that it seems to exhibit at least 9 of the 25 criteria for “predatory” journals. However, the website could not be accessed on June 21.
International Journal of Research in Humanities and Social Studies (Sryahwa Publications) Critical remark (2017): Listed on SPJ; Open access but asks for the copyright. A thorough review in April 2018 concludes that it seems to exhibit at least 9 of the 25 criteria for “predatory” journals. Update (2022): A June review again confirmed that it seems to exhibit at least 9 of the 25 criteria for “predatory” journals.
International Journal of Social Science and Humanities Research (Research Publish Journals) Critical remark (2017): Listed on SPJ; On their homepage they state that in order to get a high IF their journals are “indexed in top class organisation around the world” although no major index is used. A thorough review in 2020 concludes that it seems to exhibit at least 14 of the 25 criteria for “predatory” journals.
International Open Journal of Philosophy (Academic and Scientific Publishing) Critical remark (2017): Listed on SPJ and was heavily criticized on Beall’s blog; The editorial board consists of one person from Iran; Although boasting 12 issues a year, they have published only 1 article in the journal’s first four years; A thorough review March 1 2017 concludes that it exhibits 17 of the 25 criteria for “predatory” journals and one in March 2019 that it exhibits at least 13 criteria.
International Researchers Critical remark (2017): Listed on SPJ; Indexed by e.g. Index Copernicus; Claims that it is “Monitor by Thomson Reuters” but is not part of the TR journal citation reports; Several pages are not working at the time of review; A thorough review April 24 2017 concludes that it exhibits at least 6 of the 25 criteria for “predatory” journals.
Journal of Academic and Business Ethics (Academic and Business Research Institute) Critical remark (2017): Listed on SPJ as well as several other blacklists; The journal seems uncertain about its own name – the header curiously says “Journal of ethical and legal issues”. Update 2021: A thorough review May 2021 concludes that it exhibits at least 7 of the 25 criteria for “predatory” journals.
Journal of Bioethics and Applications (Sci Forschen) Critical remark (2018): Brand new journal with no articles yet. The publisher has been criticized for spamming more than once, has a bad record at Scam Analyze, and is listed on SPJ. Critical remark (2022): A thorough review March 2022 concludes that it exhibits at least 7 of the 25 criteria for “predatory” journals.
Journal of Philosophy and Ethics (Sryahwa Publications) Critical remark (2019): Listed by Cabells for 7 violations. Critical remark (2020): A thorough review October 2020 concludes that it exhibits at least 11 of the 25 criteria for “predatory” journals.
Journal of Research in Philosophy and History (Scholink) Criticism 1 Critical remark (June 2018): Listed on several lists of predatory publishers. They only do “peer review” through their own editorial board, a flowchart states. They claim to check for plagiarism but the first 2018 article abstract run by us through a checker turned out to be self-plagiarized from a book and it looks to have been published many times over. Unfortunately, the next paper checked in the same issue was also published the previous year by another journal listed here… Critical remark (March 2021): A thorough review concludes that it exhibits at least 14 of the 25 criteria for “predatory” journals.
Journal of Social Sciences and Humanities (AASCIT) Criticism 1 │ Criticism 2 │ Criticism 3 Critical remark (2019): From law to religion, this journal publishes it all. Though publisher claims to be “American”, it has only two editors, both from India. The list from Cabells includes 13 journals from this publisher. The AASCIT Code of Ethics apparently plagiarizes the INCOSE Code of Ethics.
Journal of Studies in Social Sciences and Humanities Critical remark (2017): Listed on SPJ; Alternative indexing; Uses several alternative IF providers. A thorough review October 2017 concludes that it exhibits at least 9 of the 25 criteria for “predatory” journals. Critical remark (2020): A thorough review October 2020 concludes that it exhibits at least 4 of the 25 criteria for “predatory” journals.
JSM Health Education and Primary Health Care Spamming with invitation to special issue on ‘Bioethics’. The publisher is listed on SPJ, and criticized and exposed here. It is indexed by the spoof indexer Directory of Research Journals Indexing, among others (whose website is now gone, by the way). Update 2019: Access denied because of a non-secure connection.
Medical Ethics and Communication (Avid Science) Criticism 1 Critical remarks (2017): Listed on SPJ; Spamming researchers with offer of eBook publication for $350. Update: In June 2022, the journal cannot be accessed online.
Nova Journal of Humanities and Social Sciences Criticism 1 Critical remark (2018): This publisher was on Beall’s list; Uses alternative impact factors and indexing; Publishes in less than 30 days; Curiously, it says no fee is charged for publication. Update: In June 2022, the journal cannot be accessed online.
Open Journal for Studies in Philosophy (Center for Open Access in Science) Critical remark (2020): Cabells found 8 violations. Update: Thorough reviews May-June 2022 conclude that it exhibits at least 8-9 of the 25 criteria for “predatory” journals.
Philosophical Papers and Review (Academic Journals) Critical remark (2017): Listed on SPJ and blacklisted by the Ministry of Higher Education of Malaysia. Update (2021): Latest article in press was accepted the same day it was sent in – and it happened back in 2018! Update: A thorough review April 2022 concludes that it exhibits at least 10 of the 25 criteria for “predatory” journals.
Philosophy Study (David Publishing Company) Criticism 1 │ Criticism 2 Critical remark (2017): Listed on SPJ. A thorough review October 2019 concludes that it exhibits approx. 8 of the 25 criteria for “predatory” journals.
The Recent Advances in Academic Science Journal (Swedish Scientific Publications) Critical remark (2018): Despite the publisher’s name it seems based in India. The only Swedish editor’s existence cannot be verified. Website quality is lacking. Listed on SPJ. A thorough review October 2017 concludes that it exhibits at least 15 of the 25 criteria for “predatory” journals.
Universal Open Ethics Journal (Adyan Academic Press) Critical remark (2019): Listed by Cabells for 7 violations. Update: Thorough reviews in May 2022 conclude that it exhibits 13 to 20 of the 25 criteria for “predatory” journals.
World Journal of Social Sciences and Humanities (Science and Education Publishing, SciEP) Criticism 1 │Criticism 2 Critical remark (2017): Listed on SPJ as well as many other blacklists. A thorough review in May 2019 concludes that it exhibits at least 7 of the 25 criteria for “predatory” journals.
End remark:
In light of recent legal action taken against people trying to warn others about dubious publishers and journals – see here and here, for example – we want to stress that this blog post is about where we would like our articles to show up; it is about quality, and as such it is an expression of professional judgement intended to help authors find good journals to publish with.
Indirectly, this may also help readers to be more discerning about the articles they read. As such, it is no different from other rankings that can be found for various products and services everywhere. Our list of where not to publish implies no accusation of deception or fraud, but claims to identify journals that experienced bioethicists would usually not find to be of high quality. The criticisms we link to might be more upfront or confrontational; our linking to them does not imply an endorsement of any objectionable statement made therein. We would also like to point out that individual papers published in these journals might of course nevertheless be perfectly acceptable contributions to the scholarly literature of bioethics.
Essential resources on so-called predatory publishing and open access:
The Rise of Junk Science. Fake publications are corrupting the world of research – and influencing real news (the latest developments presented in 2019 by Alex Gillis)
Autonomy is such a cherished concept in ethics that I hardly dare to write about it. The fact that the concept cherishes the individual does not make my task any easier. The slightest error in my use of the term, and I risk being identified as an enemy perhaps not of the people but of the individual!
In ethics, autonomy means personal autonomy: individuals’ ability to govern their own lives. This ability is constantly at risk of being undermined. It is undermined if others unduly influence your decisions, if they control you. It is also undermined if you are not sufficiently well informed and rational. For example, if your decisions are based on false or contradictory information, or if your decisions result from compulsions or weakness of the will. It is your faculty of reason that should govern your life!
In an article in BMC Medical Ethics, Amal Matar, who has a PhD at CRB, discusses decision-making situations in healthcare where this individual-centered concept of autonomy seems less useful. It is about decisions made not by individuals alone, but by people together: by couples planning to become parents.
A couple planning a pregnancy together is expected to make joint decisions. Maybe about genetic tests and measures to be taken if the child risks developing a genetic disease. Here, as always, the healthcare staff is responsible for protecting the patients’ autonomy. However, how is this feasible if the decision is not made by individuals but jointly by a couple?
Personal autonomy is an idealized concept. No man is an island, it is said. This is especially evident when a couple is planning a life together. If a partner begins to emphasize his or her personal autonomy, the relationship is probably about to disintegrate. An attempt to correct the lack of realism in the idealized concept has been to develop ideas about relational autonomy. These ideas emphasize how individuals who govern their lives are essentially related to others. However, as you can probably hear, relational autonomy remains tied to the individual. Amal Matar therefore finds it urgent to take a further step towards realism concerning joint decisions made by couples.
Can we talk about autonomy not only at the level of the individual, but also at the level of the couple? Can a couple planning a pregnancy together govern their life by making decisions that are autonomous not only for each one of them individually, but also for them together as a couple? This is Amal Matar’s question.
Inspired by how linguistic meaning is conceptualized in linguistic theory as existing not only at the level of the word, but also at the level of the sentence (where words are joined together), Amal Matar proposes a new concept of couple autonomy. She suggests that couples can make joint decisions that are autonomous at both the individual and the couple’s level.
She proposes a three-step definition of couple autonomy. First, both partners must be individually autonomous. Then, the decision must be reached via a communicative process that meets a number of criteria (no partner dominates, sufficient time is given, the decision is unanimous). Finally, the definition allows one partner to autonomously transfer aspects of the decision to the other partner.
The purpose of the definition is not a philosophical revolution in ethics. The purpose is practical. Amal Matar wants to help couples and healthcare professionals to speak realistically about autonomy when the decision is a couple’s joint decision. Pretending that separate individuals make decisions in parallel makes it difficult to realistically assess and support the decision-making process, which is about interaction.
Amal Matar concludes the article, written together with Anna T. Höglund, Pär Segerdahl and Ulrik Kihlbom, by describing two cases. The cases show concretely how her definition can help healthcare professionals to assess and support autonomous decision-making at the level of the couple. In one case, the couple’s autonomy is undermined; in the other case, probably not.
Read the article as an example of how we sometimes need to modify cherished concepts to enable a realistic use of them.
Matar, A., Höglund, A.T., Segerdahl, P. and Kihlbom, U. Autonomous decisions by couples in reproductive care. BMC Med Ethics 21, 30 (2020). https://doi.org/10.1186/s12910-020-00470-w
Academic research is driven by dissemination of results to peers at conferences and through publication in scientific journals. However, research results belong not only to the research community. They also belong to society. Therefore, results should reach not only your colleagues in the field or the specialists in adjacent fields. They should also reach outside the academy.
Who is out there? A homogeneous public? No, it is not that simple. Communicating research is not two activities: first communicating the science to peers and then telling the popular scientific story to the public. Outside the academy, we find engineers, entrepreneurs, politicians, government officials, teachers, students, research funders, taxpayers, healthcare professionals… We are all out there with our different experiences, functions and skills.
Research communication is therefore a strategically more complicated task than just “reaching the public.” Why do you want to communicate your results; why are they important? Who will find your results important? How do you want to communicate them? When is the best time to communicate? There is not just one task here. You have to think through what the task is in each particular case. For the task varies with the answers to these questions. Only when you can think strategically about the task can you communicate research responsibly.
Josepine Fernow’s contribution is, in my view, more than a convincing argument. It is an eye-opening text that helps researchers see more clearly their diverse relationships to society, and thereby their responsibilities. The academy is not a rock of knowledge in a sea of ignorant lay people. Society consists of experienced people who, because of what they know, can benefit from your research. It is easier to think strategically about research communication when you survey your relations to a diversified society that is already knowledgeable. Josepine Fernow’s argumentation helps and motivates you to do that.
Josepine Fernow also warns against exaggerating the significance of your results. Bioscience has potential to give us effective treatments for serious diseases, new crops that meet specific demands, and much more. Since we are all potential beneficiaries of such research, as future patients and consumers, we may want to believe the excessively wishful stories that some excessively ambitious researchers want to tell. We participate in a dangerous game of increasingly unrealistic hopes.
The name of this dangerous game is hype. Research hype can make it difficult for you to continue your research in the future, because of eroded trust. It can also make you prone to take unethical shortcuts. The “huge potential benefit” obscures your judgment as a responsible researcher.
Responsible research communication is as important as it is difficult. Therefore, these tasks deserve our greatest attention. Read Josepine Fernow’s argumentation for carefully planned communication strategies. It will help you see your responsibility more clearly.
The STARBIOS2 project has carried out its activities in a context of the profound transformations that affect contemporary societies, and now we are all facing the Covid-19 pandemic. Science and society have always coevolved, they are interconnected entities, but their relationship is changing and it has been for some time. This shift from modern to so-called postmodern society affects all social institutions in similar ways, whether their work is in politics, religion, family, state administration, or bioscience.
We can find a wide range of phenomena connected to this trend in the literature, for instance: globalization; weakening of previous social “structures” (rules, models of action, values and beliefs); more capacity and power of individuals to think and act more freely (thanks also to new communication technologies); exposure to risks of different kinds (climate change, weakening of welfare, etc.); great social and cultural diversification; and weakening of traditional boundaries and spheres of life, etc.
In this context, we are witnessing the diminishing authority and prestige of all political, religious, even scientific institutions, together with a decline in people’s trust towards these institutions. One example would be the anti-vaccination movement.
Meanwhile, scientific research is also undergoing profound transformations, experiencing a transition that has been examined in various ways and called various names. At the heart of this transformation is the relationship between research and the society it belongs to. We can observe a set of global trends in science.
Such trends include the increasing relationship between universities, governments and industries; the emergence of approaches aimed at “opening” science to society, such as citizen science; the diffusion of cooperative practices in scientific production; the increasing relevance of transdisciplinarity; the increasing expectation that scientific results have economic, social, and environmental impacts; the increasingly competitive access to public funds for research; the growing importance attached to quantitative evaluation systems based on publications, often with distorting effects and questionable results; and the emergence on the international economic and technological scene of actors such as India, China, Brazil, South Africa and others. These trends produce risks and opportunities for both science and society.
Critical concerns for science include career difficulties for young researchers and women in the scientific sector; the cost of publishing and the difficulty of publishing open access; and the protection of intellectual property rights.
Of course, these trends and issues manifest in different ways and intensities according to the different political, social and cultural contexts they exist in.
After the so-called “biological revolution”, within the context of the “fourth industrial revolution” and with “converging technologies” such as genetics, robotics, info-digital technologies, neurosciences, nanotechnologies, biotechnologies and artificial intelligence, the biosciences are at a crossroads in their relationship to society.
In this new context, more and more of the knowledge produced and the technological solutions developed require a deeper understanding of their status, limits, and ethical and social acceptability (take organoids, to name one example). Moreover, food security, the clean energy transition, climate change and pandemics are all challenges where bioscience can play a crucial role, while new legal, ethical and social questions arise that need to be dealt with.
These processes have been running for years, albeit in different ways, and national and international decision-makers have been paying attention. Various forms of governance have been developed and implemented over time, to re-establish and harmonize the relationship between scientific and technological research and the rest of society, including more general European strategies and approaches such as Smart Specialization, Open Innovation, Open Science and Responsible Research and Innovation as well as strategies related to specific social aspects of science (such as ethics or gender).
Taking on an approach such as RRI is not simply morally recommendable, but indispensable for attempting a re-alignment between scientific research and the needs of society. Starting from the areas of the life of the scientific communities that are most crucial to science-society relations (The 5+1 RRI keys: Science education, Gender equality, Public engagement, Ethics, Open access, and the cross-cutting sixth key: Governance) and taking the four RRI dimensions into account (anticipation, inclusiveness, responsiveness, and reflexivity) can provide useful guidance for how to activate and drive change in research organisations and research systems.
We elaborate and experiment, in search of the most effective and most relevant solutions. At the same time, there is a need to encourage mainstreaming of the most substantial solutions, to root them more deeply and sustainably in the complex fabric of scientific organisations and networks. Which leads us to ask ourselves: in this context, how can we mainstream RRI and its application in the field of bioscience?
Based on what we know, and on experiences from the STARBIOS2 project, RRI and similar approaches need to be promoted and supported by specific policies and contextualised on at least four levels.
Organizational contextualization Where mainstreaming takes place through the promotion of a greater embedment of RRI, or similar approaches, within the individual research organizations such as universities, national institutes, private centres, etc.
Disciplinary or sectoral contextualization Where mainstreaming consists of adapting the responsible research and innovation approach to a specific discipline − for example, biotechnology − or to an entire “sector” in a broad sense, such as bioscience.
Geopolitical and cultural contextualization Where mainstreaming aims to identify forms of adaptation, or rather reshaping, RRI or similar approaches, in various geopolitical and cultural contexts, taking into account elements such as the features of the national research systems, the economy, territorial dynamics, local philosophy and traditions, etc.
Historical contextualization Where RRI mainstreaming is related to the ability of science to respond to the challenges that history poses from time to time − and of which the COVID-19 pandemic is only the last, serious example − and to prevent them as much as possible.
During the course of the STARBIOS2 project, we have developed a set of guidelines and a sustainable model for RRI implementation in bioscience research institutions. Over the course of 4 years, 6 bioscience research institutions in Europe, and 3 outside Europe, worked together to achieve structural change towards RRI in their own research institutions, with the goal of achieving responsible biosciences. We were looking forward to revealing and discussing our results in April, but with the Covid-19 outbreak, neither that event nor our Cape Town workshop was a possibility. Luckily, we have adapted and will now share our findings online, at our final event on 29 May. We hope to see you there.
For our final remark, as the Covid-19 pandemic is challenging our societies and our political and economic systems, we recognise that scientists are also being challenged: by the coronavirus as well as by contextual challenges. The virus is testing their ability to play a key role for the public, to share information and to produce relevant knowledge. But when we go back to “normal”, the challenge of changing science-society relations will persist. And we remain convinced that RRI and similar approaches will be a valuable contribution to addressing these challenges, now and in the future.
Written by…
Daniele Mezzana, a social researcher working in the STARBIOS2 project (Structural Transformation to Attain Responsible BIOSciences) as part of the coordination team at University of Rome – Tor Vergata.
This text is based on the Discussion Note for the STARBIOS2 final event on 29 May 2020.
The STARBIOS2 project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 709517. The contents of this text and the views expressed are the sole responsibility of the author and under no circumstances can be regarded as reflecting the position of the European Union.
How do we know? That is the recurring question in a scientific culture. Do we have support for what we claim or is it just an opinion? Is there evidence?
The development of new cancer treatments provides many examples of the recurring question. The pharmaceutical company would like to be able to claim that the new treatment is more effective than existing alternatives and that the dosages recommended give good effect without excessive side effects. However, first we must answer the question, How do we know?
It is not enough to ask the question just once. We must repeat the question for every aspect of the treatment. Any claim on efficacy, side effects and dosages must be supported by answers to the question. How do we arrive at these answers? How do we check that it is not mere opinions? Through clinical trials conducted with cancer patients who agree to be research subjects.
A new research ethics study shows, however, that an ethically sensitive claim is often repeated in cancer research, without first asking and answering the question “How do we know?” in a satisfying way. Which claim? It is the claim that cancer patients are better off as participants in clinical trials than as regular patients who receive standard treatment. The claim is ethically sensitive because it can motivate patients to participate in trials.
In a large interview study, the authors first investigated whether the claim occurs among physicians and nurses working with clinical trials. Then, through a systematic literature review, they examined whether there is scientific evidence supporting the claim. The startling answer to the questions is: Yes, the claim is common. No, the claim lacks support.
Patients recruited for clinical trials are thus at risk of being misled by the common but unfounded opinion that research participation means better treatment. Of course, it is conceivable that patients who participate in trials will at least get indirect positive effects through increased attention: better follow-ups, more sample taking, closer contacts with physicians and nurses. However, indirect positive effects on outcomes should have been visible in the literature study. Regarding subjective effects, it is pointed out in the article that such effects will vary with the patients’ conditions and preferences. It is not always positive for a very sick patient to provide the many samples that research needs. In general, then, we cannot claim that research participation has indirect positive effects.
An ethically important conclusion drawn in the article is the following. If we suggest to patients who consent to participation in trials that research means better treatment, then they receive misleading information. Instead, altruistic research participation should be emphasized. By participating in studies, patients support new knowledge that can enable better cancer treatments for future patients.
The article examines a case where the question “How do we know?” has the answer, “We do not know, it is just an opinion.” Then at least we know that we do not know! How do we know? Through the studies presented in the article – read it!
Our attitude to science is changing. Can we talk solemnly about it anymore as a unified endeavor, or even about sciences? It seems more apt to talk about research activities that produce useful and applicable knowledge.
Science has been dethroned, it seems. In the past, we revered it as free and independent search for the truth. We esteemed it as our tribunal of truth, as the last arbiter of truth. Today, we demand that it brings benefits and adapts to society. The change is full of tension because we still want to use scientific expertise as a higher intellectual authority. Should we bow to the experts or correct them if they do not deliver the “right knowledge” or the “desirable facts”?
Responsible Research and Innovation (RRI) is an attempt to manage this risky change, adapting science to new social requirements. As you hear from the name, RRI is partly an expression of the same basic attitude change. One could perhaps view RRI as the responsible dethroning of science.
Some mourn the dethroning, others rejoice. Here I just want to link RRI to the changed attitude to science. RRI handles a change that is basically affirmed. The ambiguous attitude to scientific expertise, mentioned above, shows how important it is that we take responsibility for people’s trust in what is now called research and innovation. For why should we listen to representatives of a sector with such an unholy designation?
RRI is introduced in European research within the Horizon 2020 programme. Several projects are specifically about implementing and studying RRI. Important aspects of RRI are gender equality, open access publishing, science education, research communication, public engagement and ethics. It is about adapting research and innovation to a society with new hopes and demands on what we proudly called science.
A new book describes experiences of implementing RRI in a number of bioscience organizations around the world. The book is written within the EU-project, STARBIOS2. In collaboration with partners in Europe, Africa and the Americas, this project planned and implemented several RRI initiatives and reflected on the work process. The purpose of STARBIOS2 has been to change organizations durably and structurally. The book aims to help readers formulate their own action plans and initiate structural changes in their organizations.
The cover describes the book as guidelines. However, you will not find formulated guidelines. What you will find, and which might be more helpful, is self-reflection on concrete examples of how to work with RRI action plans. You will find suggestions on how to emphasize responsibility in research and development. Thus, you can read about efforts to support gender equality, improve exchange with the public and with society, support open access publication, and improve ethics. Read and be inspired!
Finally, I would like to mention that the Ethics Blog, as well as our ethics activities here at CRB, could be regarded as examples of RRI. I plan to return later with a post on research communication.
The STARBIOS2 project is organising a virtual final event on 29 May! Have a look at the preliminary programme!
Anthropomorphism almost seems inscribed in research on artificial intelligence (AI). Ever since the beginning of the field, machines have been portrayed in terms that normally describe human abilities, such as understanding and learning. The emphasis is on similarities between humans and machines, while differences are downplayed. Like when it is claimed that machines can perform the same psychological tasks that humans perform, such as making decisions and solving problems, with the supposedly insignificant difference that machines do it “automated.”
The article draws particular attention to so-called brain-inspired AI research, where technology development draws inspiration from what we know about the functioning of the brain. Here, close relationships are emphasized between AI and neuroscience: bonds that are considered to be decisive for developments in both fields of research. Neuroscience needs inspiration from AI research, it is claimed, just as AI research needs inspiration from brain research.
The article warns that this idea of a close relationship between the two fields presupposes an anthropomorphic interpretation of AI. In fact, brain-inspired AI multiplies the conceptual double exposures by projecting not only psychological but also neuroscientific concepts onto machines. AI researchers talk about artificial neurons, synapses and neural networks in computers, as if they incorporated artificial brain tissue into the machines.
An overlooked risk of anthropomorphism in AI, according to the authors, is that it can conceal essential characteristics of the technology that make it fundamentally different from human intelligence. In fact, anthropomorphism risks limiting scientific and technological development in AI, since it binds AI to the human brain as a privileged source of inspiration. Anthropomorphism can also entice brain research to uncritically use AI as a model for how the brain works.
Of course, the authors do not deny that AI and neuroscience mutually support each other and should cooperate. However, in order for cooperation to work well, and not limit scientific and technological development, philosophical thinking is also needed. We need to clarify conceptual differences between humans and machines, brains and computers. We need to free ourselves from the tendency to exaggerate similarities, which can be more verbal than real. We also need to pay attention to deep-rooted differences between humans and machines, and learn from the differences.
Anthropomorphism in AI risks encouraging irresponsible research communication, the authors further write. This is because exaggerated hopes (hype) seem intrinsic to the anthropomorphic language. By talking about computers in psychological and neurological terms, it sounds as if these machines already essentially functioned as human brains. The authors speak of an anthropomorphic hype around neural network algorithms.
Philosophy can thus also contribute to responsible research communication about artificial intelligence. Such communication draws attention to exaggerated claims and hopes inscribed in the anthropomorphic language of the field. It counteracts the tendency to exaggerate similarities between humans and machines, which rarely go as deep as the projected words make it sound.
In short, differences can be as important and instructive as similarities. Not only in philosophy, but also in science, technology and responsible research communication.
I recently read an article about so-called moral robots, which I found clarifying in many ways. The philosopher John-Stewart Gordon points out pitfalls that non-ethicists – robotics researchers and AI programmers – may fall into when they try to construct moral machines. Simply because they lack ethical expertise.
The first pitfall is the rookie mistakes. One might naively identify ethics with certain famous bioethical principles, as if ethics could not be anything but so-called “principlism.” Or, it is believed that computer systems, through automated analysis of individual cases, can “learn” ethical principles and “become moral,” as if morality could be discovered experientially or empirically.
The second challenge has to do with the fact that the ethics experts themselves disagree about the “right” moral theory. There are several competing ethical theories (utilitarianism, deontology, virtue ethics and more). What moral template should programmers use when getting computers to solve moral problems and dilemmas that arise in different activities? (Consider self-driving cars in difficult traffic situations.)
The first pitfall can be addressed with more knowledge of ethics. How do we handle the second challenge? Should we allow programmers to choose moral theory as it suits them? Should we allow both utilitarian and deontological robot cars on our streets?
John-Stewart Gordon’s suggestion is that so-called machine ethics should focus on the similarities between different moral theories regarding what one should not do. Robots should be provided with a binding list of things that must be avoided as immoral. With this restriction, the robots then have leeway to use and balance the plurality of moral theories to solve moral problems in a variety of ways.
In conclusion, researchers and engineers in robotics and AI should consult the ethics experts so that they can avoid the rookie mistakes and understand the methodological problems that arise when not even the experts in the field can agree about the right moral theory.
All this seems both wise and clarifying in many ways. At the same time, I feel genuinely confused about the very idea of “moral machines” (although the article is not intended to discuss the idea, but focuses on ethical challenges for engineers). What does the idea mean? Not that I doubt that we can design artificial intelligence according to ethical requirements. We may not want robot cars to avoid collisions in city traffic by turning onto sidewalks where many people walk. In that sense, there may be ethical software, much like there are ethical funds. We could talk about moral and immoral robot cars as straightforwardly as we talk about ethical and unethical funds.
Still, as I mentioned, I feel uncertain. Why? I started by writing about “so-called” moral robots. I did so because I am not comfortable talking about moral machines, although I am open to suggestions about what it could mean. I think that what confuses me is that moral machines are largely mentioned without qualifying expressions, as if everyone ought to know what it should mean. Ethical experts disagree on the “right” moral theory. However, they seem to agree that moral theory determines what a moral decision is; much like grammar determines what a grammatical sentence is. With that faith in moral theory, one need not contemplate what a moral machine might be. It is simply a machine that makes decisions according to accepted moral theory. However, do machines make decisions in the same sense as humans do?
Maybe it is about emphasis. We talk about ethical funds without feeling dizzy because a stock fund is said to be ethical (“Can they be humorous too?”). There is no mythological emphasis in the talk of ethical funds. In the same way, we can talk about ethical robot cars without feeling dizzy as if we faced something supernatural. However, in the philosophical discussion of machine ethics, moral machines are sometimes mentioned in a mythological way, it seems to me. As if a centaur, a machine-human, will soon see the light of day. At the same time, we are not supposed to feel dizzy concerning these brave new centaurs, since the experts can spell out exactly what they are talking about. Having all the accepted templates in their hands, they do not need any qualifying expressions!
I suspect that also ethical expertise can be a philosophical pitfall when we intellectually approach so-called moral machines. The expert attitude can silence the confusing questions that we all need time to contemplate when honest doubts rebel against the claim to know.
We write for researchers, healthcare staff, officials, politicians, patient organisations and anyone interested in ethics.
We comment on the research ethics and bioethics debate and discuss our own research. We address issues from current debates in the research community and in the press.
Visit Etikbloggen where we write about national issues, in Swedish.
During the last phase of the Human Brain Project, the activities on this blog received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. HBP SGA3 - Human Brain Project Specific Grant Agreement 3 (945539). The views and opinions expressed on this blog are the sole responsibility of the author(s) and do not necessarily reflect the views of the European Commission.