Two of our doctoral students at CRB recently defended their dissertations successfully. Both dissertations reflect a trend in bioethics away from purely theoretical studies towards also including empirical studies of people's perceptions of bioethical issues.
Åsa Grauman’s dissertation explores the public’s view of risk information about cardiovascular disease. The risk of cardiovascular disease depends on many factors; both lifestyle and heredity influence it. Many find it difficult to understand such risk information, and many underestimate their risk, while others worry unnecessarily. For risk information to make sense to people, it must be designed so that recipients can benefit from it in practice. That requires knowing more about their perspective on risk, how health information affects them, and what they consider important and unimportant in risk information about cardiovascular disease. One of Åsa Grauman’s conclusions from her studies of these issues is that people often estimate their risk on the basis of self-assessed health and family history. As this can lead them to underestimate their risk, she argues that health examinations are important: they can nuance individuals’ risk assessments and draw their attention to risk factors that they themselves can influence.
Mirko Ancillotti’s dissertation explores the Swedish public’s view of antibiotic resistance and our responsibility to reduce its prevalence. The rise of antibiotic-resistant bacteria is one of the major global threats to public health. The increase is related to our often careless overuse of antibiotics in society. The problem needs to be addressed both nationally and internationally, both collectively and individually. Mirko Ancillotti focuses on our individual responsibility for antibiotic resistance. He examines how such a responsibility can be supported through more effective health communication and improved institutional conditions that can help people to use antibiotics more judiciously. Such support requires knowledge of the public’s beliefs, values and preferences regarding antibiotics, which may affect their willingness and ability to take responsibility for their own use of antibiotics. One of the studies in the dissertation indicates that people are prepared to make significant sacrifices to reduce their contribution to antibiotic resistance.
Perhaps you also dream about being more than you are: faster, better, bolder, stronger, smarter, and maybe more attractive? Until recently, technology to improve and enhance our abilities was mostly science fiction, but today we can augment our bodies and minds in ways that challenge our notions of normal and abnormal, blurring the lines between treatments and enhancements. Very few scientists and companies that develop medicines, prosthetics, and implants would say that they are in the human enhancement business. But the technologies they develop still manage to move from one domain to another. Our bodies allow for physical and cosmetic alterations, and there are attempts to make us live longer. Our minds can also be enhanced in several ways: our feelings and thoughts, perhaps also our morals, could be improved, or corrupted.
We recognise this tension from familiar debates about more common uses of enhancements: doping in sports, or students using ADHD medicines to study for exams. But there are other examples of technologies that can be used to enhance abilities. In the military context, altering our morals, or using cybernetic implants could give us ‘super soldiers’. Using neuroprostheses to replace or improve memory that was damaged by neurological disease would be considered a treatment. But what happens when it is repurposed for the healthy to improve memory or another cognitive function?
There have been calls for regulation and ethical guidance, but because very few of the researchers and engineers who develop these technologies would call themselves enhancers, the efforts have not been very successful. Perhaps now is a good time to develop guidelines? But what is the best approach? A set of self-contained general ethical guidelines, or is the field so disparate that it requires field- or domain-specific guidance?
The SIENNA project (Stakeholder-Informed Ethics for New technologies with high socio-ecoNomic and human rights impAct) has been tasked with developing this kind of ethical guidance for Human Enhancement, Human Genetics, and Artificial Intelligence and Robotics: three very different technological domains. Not surprisingly, given how hard the field is to delineate, human enhancement has proved by far the most challenging. For almost three years, the SIENNA project mapped the field, analysed the ethical implications and legal requirements, surveyed how research ethics committees address the ethical issues, and proposed ways to improve existing regulation. We have received input from stakeholders, experts, and publics. Industry representatives, academics, policymakers and ethicists have participated in workshops and reviewed documents. Focus groups in five countries and surveys with 11,000 people in 11 countries in Europe, Africa, Asia, and the Americas have also provided insight into the public’s attitudes to using different technologies to enhance abilities or performance. This resulted in an ethical framework outlining several options for how to approach the process of translating it into practical ethical guidance.
The framework for human enhancement is built on three case studies that can bring some clarity to what is at stake in a very diverse field: antidepressants, dementia treatment, and genetics. These case studies have shed some light on the kinds of issues that are likely to appear, and the difficulties involved in the complex task of developing ethical guidelines for human enhancement technologies.
A lot of these technologies, their applications, and their enhancement potentials are in their infancy. So perhaps this is the right time to promote ways for research ethics committees to inform researchers about the ethical challenges associated with human enhancement, and to encourage them to reflect on the potential enhancement impacts of their work in ethics self-assessments.
And perhaps it is time for ethical guidance for human enhancement after all? At least now there is an opportunity for you and others to give input in a public consultation in mid-January 2021! If you want to give input on SIENNA’s proposals for human enhancement, human genomics, artificial intelligence, and robotics, visit the website to sign up for news: www.sienna-project.eu.
Allegedly, there are over 12,000 so-called predatory journals out there. Instead of supporting readers and science, these journals serve their own economic interests first and at best offer dubious merits for scholars. We believe that scholars working in any academic discipline have a professional interest in, and a responsibility for, keeping track of these journals. It is our job to warn the young or inexperienced of journals where a publication or editorship could be detrimental to their career and where science is not served.
We have seen “predatory” publishing take off in a big way and noticed how colleagues start to turn up in the pages of some of these journals. While many have assumed that this phenomenon is mainly a problem for low-status universities, there are strong indications that predatory publishing is part of a major trend towards the industrialization of misconduct and that it affects many top-flight research institutions (see Priyanka Pulla: “In India, elite institutes in shady journals”, Science 354(6319): 1511-1512).
The latest effort to create a thorough blacklist comes from Cabells, who distinguish around 70 different unacceptable violations and employ a whole team to review journals. These lists are not, however, the final say on the matter, as it is impossible for one person or a limited group to reliably judge actors in every academic discipline. Moreover, since only questionable journals are listed, the good journals must be found elsewhere.
A response of gatekeeping needs to be anchored in each discipline and the scholars who make up that discipline. As a suitable response in bioethics, we have chosen to, first, collect a few authoritative lists of recommended bioethics journals that can be consulted by anyone in bioethics to find good journals to publish with.
For our first post, we recommended a list of journals ourselves, which brought on some well-deserved questions and criticism about the criteria for inclusion. Unfortunately, our list ultimately drew attention away from other parts of the message that we were more concerned to get across. Besides, there are many other parties making such lists. We have therefore dropped this feature. Instead, we have enlarged the collection of good journal lists as a service to our readers. They are all of great use when further exploring the reputable journals available:
It is of prime importance to list the journals that are potentially predatory or of such low quality that it might be discrediting to engage with them. We have listed all 50 of them alphabetically (eleven new entries for 2019; two have ceased operation and been removed), and provided both the homepage URL and links to any professional discussion of these journals that we have found (which most often alerted us to their existence in the first place).
Each of these journals solicits manuscripts from scholars in, or claims to publish papers in, bioethics or related areas (such as practical philosophy). They have been reviewed by the authors of this blog post as well as by a group of reference scholars whom we have asked for advice on the list. It has been unanimously agreed that the journals listed are ones that – in light of the criticism put forth and the quality we see – we would not deem acceptable for us to publish in. Typical signs that a journal could fall in this category, such as extensive spamming, publishing in almost any subject, or fake data being included on the website, are listed here:
We have started to evaluate the journals more systematically against the 25 defining characteristics we outlined in the article linked to above (with the help of science and technology PhD students). The results will be added as they become available.
We would love to hear your views on this blog post, and would be especially grateful for pointers to journals engaging in sloppy or bad publishing practices. The list is not meant as a check-list but as a starting point for any bioethics scholar to ponder for him- or herself where to publish.
Also, anyone thinking that a journal in our list should be reconsidered might post their reasons as a comment to the blog post or send an email to us. Journals might start out with some sloppy practices but shape up over time, and we will be happy to hear about it. You can appeal against the inclusion of a journal and we will deal with it promptly and publicly.
Please spread the content of this blog as much as you can and check back for updates (we will do a major update annually and continually add any further information found).
Advances In Medical Ethics (Longdom Publishing) Critical remark (2019): When asked, one editor attested that his editorship was forged. Publisher was on Beall’s list. A thorough review December 2019 concludes that it exhibits at least 7 of the 25 criteria for “predatory” journals.
American Open Ethics Journal (Research and Knowledge Publication) Critical remark (2019): Listed on Cabells with 7 violations. A thorough review February 2020 concludes that it exhibits at least 11 of the 25 criteria for “predatory” journals.
Annals of Bioethics & Clinical Applications (Medwin Publishers) Criticism 1 │ Criticism 2 Critical remark (2019): Publisher was on Beall’s list and is on many other lists of these journals. They say that they are “accepting all type of original works that is related to the disciplines of the journal” and indeed the flow chart of manuscript handling does not have a reject route. Indexed by alternative indexes. Critical remark (2020): Listed on Cabells with 5 violations. A thorough review October 2020 concludes that it exhibits at least 9 of the 25 criteria for “predatory” journals.
Austin Journal of Genetics and Genomic Research (Austin Publishing Group) Criticism 1 │ Criticism 2 │ Criticism 3 Critical remark (2017): Spam e-mail about special issue on bioethics; Listed by SPJ; Romanian editorial member is said to be from a university in “Europe”; Another editorial board member is just called “Michael”; APG has been sued by the International Association for Dental Research and The American Association of Neurological Surgeons for infringing on their IP rights. Student reviews conclude that the journal is not suitable to publish in, one finding that the journal exhibits at least 16 of the 25 criteria for “predatory” journals. Critical remark (2019): Listed by Cabells with 10 violations. Critical remark (2021): A thorough review concludes that the journal exhibits at least 13 of the 25 criteria for “predatory” journals.
Creative Education (Scientific Research Publishing – SCIRP) Criticism 1 │ Criticism 2 Critical remark (2017): Listed by SPJ; They claim misleadingly to be indexed by ISI, but this relates only to being among cited articles – they are not indexed. A thorough review May 2017 concludes that it exhibits at least 5 of the 25 criteria for “predatory” journals.
East European Scientific Journal (East European Research Alliance) Critical remark (2017): Listed by SPJ; Criticised by Beall for having a bogus editorial board; Claims to be indexed by ISI, but that is not the well-known Institute for Scientific Information (now Thomson Reuters), but rather the so-called International Scientific Indexing. Thorough reviews November 2018 and February 2019 conclude that it exhibits at least 13 or 14 of the 25 criteria for “predatory” journals.
Ethics Today Journal (Franklin Publishing) Critical remark (2019): Listed by Cabells with 9 violations.
European Academic Research (Kogaion Publishing Center, formerly Bridge Center) Critical remark (2017): Listed by SPJ; Uses impact factor from Universal Impact Factor (now defunct); A thorough review May 2017 concludes that it exhibits at least 15 of the 25 criteria for “predatory” journals.
European Scientific Journal (European Scientific Institute) Critical remark (2017): Listed by SPJ; Use of alternative indexes. A thorough review May 2017 concludes that it exhibits at least 9 of the 25 criteria for “predatory” journals.
International Journal of Contemporary Research & Review Critical remark (2017): Listed by SPJ; Indexed by Index Copernicus; Despite claims they seem not to be indexed by either Chemical Abstracts or DOAJ. A thorough review June 2017 concludes that it exhibits at least 9 of the 25 criteria for “predatory” journals.
International Journal of Current Research Criticism 1 Critical remark (2017): Listed by SPJ; Uses IF from SJIF and Index Copernicus and more. It wrongly claims to be indexed by Thomson Reuters, ORCID and having a DOI among other things. A thorough review January 2018 concludes that it exhibits at least 12 of the 25 criteria for “predatory” journals.
International Journal of Current Research and Academic Review (Excellent Publishers) Critical remark (June 2018): Listed by SPJ and Cabells because of misleading claims about credentials, metrics, and too quick review; alternative indexing; publishes in almost any field imaginable; the editor-in-chief is head of the “Excellent Education and Researh Institute” (sic), which does not seem to exist even when spelled right. A thorough review in December 2019 concludes that it exhibits at least 12 of the 25 criteria for “predatory” journals.
International Journal of Ethics & Moral Philosophy (Journal Network) Critical remark (2017): Listed by SPJ; Publisher was criticized by Beall when launching 350 journals at once; After several years not one associate editor has signed up and no article has been published; No editorial or contact details available. Thorough reviews in May 2019 and February 2020 conclude that it exhibits at least 10 to 12 of the 25 criteria for “predatory journals”.
International Journal of Ethics in Engineering & Management Education Critical remark (2019): Papers from almost any field; Claims to have a 5.4 Impact factor (from IJEEE); Indexed by GJIF etc.; A non-existent address in “Varginia”, US (sic!); Open access but asks for the copyright; Claims to be indexed in Scopus cannot be verified. Thorough reviews February 2018 and February 2020 conclude that it exhibits at least 16-17 of the 25 criteria for “predatory” journals. Listed by Cabells with 11 violations found.
International Journal of Humanities and Social Sciences Critical remark (2017): Listed by SPJ; Has an amazing fast-track review option for $100 that guarantees “the review, editorial decision, author notification and publication” to take place “within 2 weeks”. “Editors” claim that repeated requests to be removed from the list of editors result in nothing. Thorough reviews in February and June 2018 conclude that it seems to exhibit at least 7 to 10 of the 25 criteria for “predatory” journals.
International Journal of Humanities & Social Studies Critical remark (2017): Listed by SPJ; IF from International Impact Factor Services; States that there “is no scope of correction after the paper publication”. Critical remark (2018): They write that the “review process will be completed expectedly within 3-4 days”. Critical remark (2020): A thorough review in October 2020 concludes that it seems to exhibit at least 7-8 of the 25 criteria for “predatory” journals.
International Journal of Legal, Ethical and Regulatory Issues (Jacobs Publishers) Criticism 1 Critical remark (2019): Spamming with invitation to publish. They are unsure of their own name; in the e-mail they call the journal “International Journal of Legal, Ethical and Regulatory Affairs“! Publisher listed on SPJ. Editor-in-chief and editorial board are missing. Claims that material is “written by leading scholars” which is obviously false.
International Journal of Philosophy (SciencePG) Criticism 1 │ Criticism 2 Critical remark (2017): Listed by SPJ; Alternative indexing and also IF from Universal Impact Factor (now defunct); Promises a two-week peer review. Thorough reviews in April and November 2018 conclude that it seems to exhibit at least 10 or 8 of the 25 criteria for “predatory” journals and also find obvious examples of pseudo-science among the published articles.
International Journal of Philosophy and Theology (American Research Institute for Policy Development) Criticism 1 │ Criticism 2 │ Criticism 3 Critical remark: A thorough review in June 2018 concludes that “there are grounds to believe that the American Research Institute never intended to create a serious scientific periodical and that, on the contrary, its publications are out-and-out predatory journals.”
International Journal of Social Science and Humanities Research (Research Publish Journals) Critical remark (2017): Listed on SPJ; On their homepage they state that in order to get a high IF their journals are “indexed in top class organisation around the world” although no major index is used. A thorough review in 2020 concludes that it seems to exhibit at least 14 of the 25 criteria for “predatory” journals.
International Open Journal of Philosophy (Academic and Scientific Publishing) Critical remark (2017): Listed on SPJ and was heavily criticized on Beall’s blog; The editorial board consists of one person from Iran; Although boasting 12 issues a year, they published only 1 article in the journal’s first four years; A thorough review March 1 2017 concludes that it exhibits 17 of the 25 criteria for “predatory” journals, and one in March 2019 that it exhibits at least 13 criteria.
International Researchers Critical remark (2017): Listed on SPJ; Indexed by e.g. Index Copernicus; Claims that it is “Monitor by Thomson Reuters” but is not part of the TR journal citation reports; Several pages are not working at time of review; A thorough review April 24 2017 concludes that it exhibits at least 6 of the 25 criteria for “predatory” journals.
Journal of Academic and Business Ethics (Academic and Business Research Institute) Critical remark (2017): Listed on SPJ as well as several other blacklists; Journal seems uncertain about its own name, the header curiously says “Journal of ethical and legal issues”. Update 2021: A thorough review May 2021 concludes that it exhibits at least 7 of the 25 criteria for “predatory” journals.
Journal of Philosophy and Ethics (Sryahwa Publications) Critical remark (2019): Listed by Cabells for 7 violations. Critical remark (2020): A thorough review October 2020 concludes that it exhibits at least 11 of the 25 criteria for “predatory” journals.
Journal of Research in Philosophy and History (Scholink) Criticism 1 Critical remark (June 2018): Listed on several lists of predatory publishers. They only do “peer review” through their own editorial board, a flowchart states. They claim to check for plagiarism but the first 2018 article abstract run by us through a checker turned out to be self-plagiarized from a book and it looks to have been published many times over. Unfortunately, the next paper checked in the same issue was also published the previous year by another journal listed here… Critical remark (March 2021): A thorough review concludes that it exhibits at least 14 of the 25 criteria for “predatory” journals.
Journal of Studies in Social Sciences and Humanities Critical remark (2017): Listed on SPJ; Alternative indexing; Uses several alternative IF providers. A thorough review October 2017 concludes that it exhibits at least 9 of the 25 criteria for “predatory” journals. Critical remark (2020): A thorough review October 2020 concludes that it exhibits at least 4 of the 25 criteria for “predatory” journals.
JSM Health Education and Primary Health Care Critical remark: Spamming with invitation to special issue on ‘Bioethics’. The publisher is listed on SPJ, and criticized and exposed here. It is indexed by spoof indexer Directory of Research Journals Indexing, among others (whose website is now gone). Update 2019: Access denied because of non-secure connection.
Philosophy Study (David Publishing Company) Criticism 1 │ Criticism 2 Critical remark (2017): Listed on SPJ. A thorough review October 2019 concludes that it exhibits approx. 8 of the 25 criteria for “predatory” journals.
The Recent Advances in Academic Science Journal (Swedish Scientific Publications) Critical remark (2018): Despite the publisher’s name it seems based in India. The only Swedish editor’s existence cannot be verified. Website quality is lacking. Listed on SPJ. A thorough review October 2017 concludes that it exhibits at least 15 of the 25 criteria for “predatory” journals.
In light of recent legal action taken against people trying to warn others about dubious publishers and journals – see here and here, for example – we want to stress that this blog post is about where we would like our articles to show up, it is about quality, and as such it is an expression of a professional judgement intended to help authors find good journals with which to publish.
Indirectly, this may also help readers to be more discerning about the articles they read. As such it is no different from other rankings that can be found for various products and services everywhere. Our list of where not to publish implies no accusation of deception or fraud but claims to identify journals that experienced bioethicists would usually not find to be of high quality. Those criticisms linked to might be more upfront or confrontational; us linking to them does not imply an endorsement of any objectionable statement made therein. We would also like to point out that individual papers published in these journals might of course nevertheless be perfectly acceptable contributions to the scholarly literature of bioethics.
Essential resources on so-called predatory publishing and open access:
The STARBIOS2 project has carried out its activities in the context of the profound transformations that affect contemporary societies, and now we are all facing the Covid-19 pandemic. Science and society have always coevolved; they are interconnected entities, but their relationship is changing, and it has been for some time. This shift from modern to so-called postmodern society affects all social institutions in similar ways, whether their work is in politics, religion, family, state administration, or bioscience.
We can find a wide range of phenomena connected to this trend in the literature, for instance: globalization; weakening of previous social “structures” (rules, models of action, values and beliefs); more capacity and power of individuals to think and act more freely (thanks also to new communication technologies); exposure to risks of different kinds (climate change, weakening of welfare, etc.); great social and cultural diversification; and weakening of traditional boundaries and spheres of life, etc.
In this context, we are witnessing the diminishing authority and prestige of all political, religious, even scientific institutions, together with a decline in people’s trust towards these institutions. One example would be the anti-vaccination movement.
Meanwhile, scientific research is also undergoing profound transformations, experiencing a transition that has been examined in various ways and given various names. At the heart of this transformation is the relationship between research and the society it belongs to. We can observe a set of global trends in science.
Such trends include the increasing relationship between universities, governments and industries; the emergence of approaches aimed at “opening” science to society, such as citizen science; the diffusion of cooperative practices in scientific production; the increasing relevance of transdisciplinarity; the increasing expectation that scientific results have economic, social, and environmental impacts; the increasingly competitive access to public funds for research; the growing importance attached to quantitative evaluation systems based on publications, often with distorting effects and questionable results; and the emergence on the international economic and technological scene of actors such as India, China, Brazil, South Africa and others. These trends produce risks and opportunities for both science and society.
Critical concerns for science include career difficulties for young researchers and women in the scientific sector; the cost of publishing and the difficulty of publishing open access; and the protection of intellectual property rights.
Of course, these trends and issues manifest in different ways and intensities according to the different political, social and cultural contexts they exist in.
After the so-called “biological revolution”, and within the context of the “fourth industrial revolution” and of “converging technologies” like genetics, robotics, info-digital technologies, neurosciences, nanotechnologies, biotechnologies, and artificial intelligence, the biosciences are at a crossroads in their relationship to society.
In this new context, more and more of the knowledge produced and the technological solutions developed require a deeper understanding of their status, limits, and ethical and social acceptability (take organoids, to name one example). Moreover, food security, clean energy transition, climate change, and pandemics are all challenges where bioscience can play a crucial role, while new legal, ethical, and social questions arise that need to be dealt with.
These processes have been running for years, albeit in different ways, and national and international decision-makers have been paying attention. Various forms of governance have been developed and implemented over time, to re-establish and harmonize the relationship between scientific and technological research and the rest of society, including more general European strategies and approaches such as Smart Specialization, Open Innovation, Open Science and Responsible Research and Innovation as well as strategies related to specific social aspects of science (such as ethics or gender).
Taking on an approach such as RRI is not simply morally recommendable, but indispensable for attempting a re-alignment between scientific research and the needs of society. Starting from the areas of the life of the scientific communities that are most crucial to science-society relations (The 5+1 RRI keys: Science education, Gender equality, Public engagement, Ethics, Open access, and the cross-cutting sixth key: Governance) and taking the four RRI dimensions into account (anticipation, inclusiveness, responsiveness, and reflexivity) can provide useful guidance for how to activate and drive change in research organisations and research systems.
We elaborate and experiment, in search of the most effective and most relevant solutions. At the same time, there is a need to encourage mainstreaming of the most substantial solutions, to root them more deeply and sustainably in the complex fabric of scientific organisations and networks. Which leads us to ask ourselves: in this context, how can we mainstream RRI and its application in the field of bioscience?
Based on what we know, and on experiences from the STARBIOS2 project, RRI and similar approaches need to be promoted and supported by specific policies and contextualised on at least four levels.
Organizational contextualization Where mainstreaming takes place through the promotion of a greater embedment of RRI, or similar approaches, within the individual research organizations such as universities, national institutes, private centres, etc.
Disciplinary or sectoral contextualization Where mainstreaming consists of adapting the responsible research and innovation approach to a specific discipline − for example, biotechnology − or to an entire “sector” in a broad sense, such as bioscience.
Geopolitical and cultural contextualization Where mainstreaming aims to identify forms of adaptation, or rather reshaping, RRI or similar approaches, in various geopolitical and cultural contexts, taking into account elements such as the features of the national research systems, the economy, territorial dynamics, local philosophy and traditions, etc.
Historical contextualization Where RRI mainstreaming is related to the ability of science to respond to the challenges that history poses from time to time − and of which the COVID-19 pandemic is only the last, serious example − and to prevent them as much as possible.
During the course of the STARBIOS2 project, we have developed a set of guidelines and a sustainable model for RRI implementation in bioscience research institutions. Over the course of 4 years, 6 bioscience research institutions in Europe, and 3 outside Europe, worked together to achieve structural change towards RRI in their own research institutions, with the goal of achieving responsible biosciences. We were looking forward to revealing and discussing our results in April, but with the Covid-19 outbreak, neither that event nor our Cape Town workshop was a possibility. Luckily, we have adapted and will now share our findings online, at our final event on 29 May. We hope to see you there.
For our final remark, as the Covid-19 pandemic is challenging our societies, our political and economic systems, we recognise that scientists are also being challenged, by the coronavirus as well as by contextual challenges. The virus is testing their ability to play a key role for the public, to share information and to produce relevant knowledge. But when we go back to “normal”, the challenge of changing science-society relations will persist. And we remain convinced that RRI and similar approaches will be a valuable contribution to addressing these challenges, now and in the future.
Daniele Mezzana, a social researcher working in the STARBIOS2 project (Structural Transformation to Attain Responsible BIOSciences) as part of the coordination team at University of Rome – Tor Vergata.
This text is based on the Discussion Note for the STARBIOS2 final event on 29 May 2020.
The STARBIOS2 project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 709517. The contents of this text and the view expressed are the sole responsibility of the author and under no circumstances can be regarded as reflecting the position of the European Union.
Our attitude to science is changing. Can we still talk solemnly about science as a unified endeavor, or even about the sciences? It seems more apt to talk about research activities that produce useful and applicable knowledge.
Science has been dethroned, it seems. In the past, we revered it as a free and independent search for the truth. We esteemed it as our tribunal of truth, the final arbiter. Today, we demand that it bring benefits and adapt to society. The change is full of tension, because we still want to use scientific expertise as a higher intellectual authority. Should we bow to the experts, or correct them if they do not deliver the “right knowledge” or the “desirable facts”?
Responsible Research and Innovation (RRI) is an attempt to manage this risky change, adapting science to new social requirements. As the name suggests, RRI is partly an expression of the same basic change in attitude. One could perhaps view RRI as the responsible dethroning of science.
Some mourn the dethroning, others rejoice. Here I just want to link RRI to the changed attitude to science. RRI manages a change that it basically affirms. The ambiguous attitude to scientific expertise, mentioned above, shows how important it is that we take responsibility for people’s trust in what is now called research and innovation. For why should we listen to representatives of a sector with such an unholy designation?
RRI has been introduced into European research through the Horizon 2020 programme. Several projects are specifically about implementing and studying RRI. Important aspects of RRI are gender equality, open access publishing, science education, research communication, public engagement and ethics. It is about adapting research and innovation to a society with new hopes for, and demands on, what we proudly called science.
A new book describes experiences of implementing RRI in a number of bioscience organizations around the world. The book is written within the EU-project, STARBIOS2. In collaboration with partners in Europe, Africa and the Americas, this project planned and implemented several RRI initiatives and reflected on the work process. The purpose of STARBIOS2 has been to change organizations durably and structurally. The book aims to help readers formulate their own action plans and initiate structural changes in their organizations.
The cover describes the book as guidelines. However, you will not find formulated guidelines. What you will find, and what might be more helpful, is self-reflection on concrete examples of how to work with RRI action plans. You will find suggestions on how to emphasize responsibility in research and development. Thus, you can read about efforts to support gender equality, improve exchange with the public and with society, support open access publication, and improve ethics. Read and be inspired!
Finally, I would like to mention that the Ethics Blog, as well as our ethics activities here at CRB, could be regarded as examples of RRI. I plan to return later with a post on research communication.
The STARBIOS2 project is organising a virtual final event on 29 May! Have a look at the preliminary programme!
Anthropomorphism almost seems inscribed in research on artificial intelligence (AI). Ever since the beginning of the field, machines have been portrayed in terms that normally describe human abilities, such as understanding and learning. The emphasis is on similarities between humans and machines, while differences are downplayed, as when it is claimed that machines can perform the same psychological tasks that humans perform, such as making decisions and solving problems, with the supposedly insignificant difference that machines do it “automated.”
The article draws particular attention to so-called brain-inspired AI research, where technology development draws inspiration from what we know about the functioning of the brain. Here, close relationships are emphasized between AI and neuroscience: bonds that are considered to be decisive for developments in both fields of research. Neuroscience needs inspiration from AI research, it is claimed, just as AI research needs inspiration from brain research.
The article warns that this idea of a close relationship between the two fields presupposes an anthropomorphic interpretation of AI. In fact, brain-inspired AI multiplies the conceptual double exposures by projecting not only psychological but also neuroscientific concepts onto machines. AI researchers talk about artificial neurons, synapses and neural networks in computers, as if they incorporated artificial brain tissue into the machines.
An overlooked risk of anthropomorphism in AI, according to the authors, is that it can conceal essential characteristics of the technology that make it fundamentally different from human intelligence. In fact, anthropomorphism risks limiting scientific and technological development in AI, since it binds AI to the human brain as a privileged source of inspiration. Anthropomorphism can also entice brain research to uncritically use AI as a model for how the brain works.
Of course, the authors do not deny that AI and neuroscience mutually support each other and should cooperate. However, in order for cooperation to work well, and not limit scientific and technological development, philosophical thinking is also needed. We need to clarify conceptual differences between humans and machines, brains and computers. We need to free ourselves from the tendency to exaggerate similarities, which can be more verbal than real. We also need to pay attention to deep-rooted differences between humans and machines, and learn from the differences.
Anthropomorphism in AI risks encouraging irresponsible research communication, the authors further write. This is because exaggerated hopes (hype) seem intrinsic to the anthropomorphic language. By talking about computers in psychological and neurological terms, it sounds as if these machines already essentially functioned as human brains. The authors speak of an anthropomorphic hype around neural network algorithms.
Philosophy can thus also contribute to responsible research communication about artificial intelligence. Such communication draws attention to exaggerated claims and hopes inscribed in the anthropomorphic language of the field. It counteracts the tendency to exaggerate similarities between humans and machines, which rarely go as deep as the projected words make it sound.
In short, differences can be as important and instructive as similarities. Not only in philosophy, but also in science, technology and responsible research communication.
Life always surpasses us. We thought we were in control, but then something unexpected happens that seems to upset the order. A storm, a forest fire, a pandemic. Life appears as a drawing in sand, the contours of which suddenly dissolve.
Of course, it is not that definitive. Even a storm, a forest fire, a pandemic, will pass. The contours of life return, in somewhat new forms. However, the unexpected reminded us that life is greater than our ability to control it. My question in this post is how we balance the will to control life against the knowledge that life always surpasses us.
That life is greater than our ability to control it is evident not only in the form of storms, forest fires and pandemics. It is evident also in the form of nice varying weather, growing forests and good health. Certainly, medicine contributes to better health. Nevertheless, it is not thanks to any pills that blood circulates in our bodies and food becomes nourishment for our cells. We are rightly grateful to medicine, which helps the sick. However, maybe we could sometimes spare a grateful thought for life itself. Is not the body fantastic, developing immunity in contact with viruses? Are not the forests and the climate wonderful, providing oxygen, sun and rain? And consider nature, on which we grow like outgrowths, almost as fruits on a tree.
Many people probably want to object that it is pointless to philosophize about things that we cannot change. Why waste time reflecting on the uncontrollable dimensions of life, when we can develop new medicines? Should we not focus all our efforts on improving the world?
I would just point out that we then reason like an artist who thinks he can paint only the foreground, without a background. As though the background were a distraction from the foreground. However, if you want to emphasize the foreground, you must also pay attention to the background. Then the foreground appears. The foreground needs to be embraced by the background. Small and large presuppose each other.
Our desire to control life works more wisely, I believe, if we acknowledge our inevitable dependence on a larger, embracing background. As I said, we cannot control everything, just as an artist cannot paint only the foreground. I want to suggest that we can view philosophy as an activity that reminds us of that. It helps us see the controllable in the light of the uncontrollable. It reminds us of the larger context: the background that the human intellect does not master, but must presuppose and interact with wisely.
It does not have to be dramatic. Even everyday life has philosophical dimensions that exceed our conscious control. Children learn to talk beyond their parents’ control, without either curricula or examinations. No language teacher in the world can teach a toddler to talk through lessons in a classroom. It can only happen spontaneously and boundlessly, in the midst of life. Only those who already speak can learn language through lessons in a classroom.
The ability to talk is thus the background to language teaching in the classroom. A language teacher can plan the lessons in detail. The youngest children’s language acquisition, on the other hand, is so inextricably linked to what it is to live as a human being that it exceeds the intellect’s ability to organize and govern. We can only remind ourselves of the difference between foreground and background in language. Here follows such a philosophical reminder. A parent of a schoolchild can say, “Now you’ve been studying French for two hours and need a break: go out and play.” However, a parent of a small child who is beginning to talk cannot say, “Now you’ve been talking for two hours and need a break: go out and play!” The child talks constantly. It learns in the midst of playing, in the midst of life, beyond control. Therefore, the child has no breaks.
Had Herb Terrace seen the difference between foreground and background in language, he would never have used the insane method of training sign language with the chimpanzee Nim in a special classroom, as if Nim were a schoolchild who could already speak. Sometimes we need a bit of philosophy (a bit of reason) for our projects to work. Foreground and background interact everywhere. Our welfare systems do not work unless we fundamentally live by our own power, or by life’s own power. Pandemics hardly subside unless the virus moves through sufficiently many of our, thereafter immune, bodies – under controlled forms that protect risk groups and provide care for the severely ill. Everywhere, foreground and background, controllable and uncontrollable, interact.
The dream of complete intellectual control is therefore a pitfall when we philosophize. At least if we need philosophy to elucidate the living background of what lies within human control. Then we cannot strive to define life as a single intellectually controllable foreground. A bit of philosophy can help us see the interplay between foreground and background. It can help us live actively and act wisely in the zone between controllable and uncontrollable.
Pharmaceutical companies want to quickly manufacture a vaccine against covid-19, with human testing and launch on the market as soon as possible. In a debate article, Jessica Nihlén Fahlquist at CRB warns of the risk of losing the larger risk perspective: “Tests on people and a potential premature mass vaccination entail risks. It is easy to forget about similar situations in the past,” she writes.
It may take time for side effects to appear. Unfortunately, it therefore also takes time to develop new safe vaccines. We need to develop a vaccine, but even with new vaccines, caution is needed.
I recently read an article about so-called moral robots, which I found clarifying in many ways. The philosopher John-Stewart Gordon points out pitfalls that non-ethicists – robotics researchers and AI programmers – may fall into when they try to construct moral machines. Simply because they lack ethical expertise.
The first pitfall is rookie mistakes. One might naively identify ethics with certain famous bioethical principles, as if ethics could be nothing but so-called “principlism.” Or one might believe that computer systems, through automated analysis of individual cases, can “learn” ethical principles and “become moral,” as if morality could be discovered experientially or empirically.
The second challenge has to do with the fact that the ethics experts themselves disagree about the “right” moral theory. There are several competing ethical theories (utilitarianism, deontology, virtue ethics and more). What moral template should programmers use when getting computers to solve moral problems and dilemmas that arise in different activities? (Consider self-driving cars in difficult traffic situations.)
The first pitfall can be addressed with more knowledge of ethics. How do we handle the second challenge? Should we allow programmers to choose moral theory as it suits them? Should we allow both utilitarian and deontological robot cars on our streets?
John-Stewart Gordon’s suggestion is that so-called machine ethics should focus on the similarities between different moral theories regarding what one should not do. Robots should be provided with a binding list of things that must be avoided as immoral. With this restriction, the robots then have leeway to use and balance the plurality of moral theories to solve moral problems in a variety of ways.
In conclusion, researchers and engineers in robotics and AI should consult the ethics experts so that they can avoid the rookie mistakes and understand the methodological problems that arise when not even the experts in the field can agree about the right moral theory.
All this seems both wise and clarifying in many ways. At the same time, I feel genuinely confused about the very idea of “moral machines” (although the article is not intended to discuss the idea, but focuses on ethical challenges for engineers). What does the idea mean? Not that I doubt that we can design artificial intelligence according to ethical requirements. We presumably do not want robot cars that avoid collisions in city traffic by turning onto sidewalks where many people walk. In that sense, there may be ethical software, much like there are ethical funds. We could talk about moral and immoral robot cars as straightforwardly as we talk about ethical and unethical funds.
Still, as I mentioned, I feel uncertain. Why? I started by writing about “so-called” moral robots. I did so because I am not comfortable talking about moral machines, although I am open to suggestions about what it could mean. I think that what confuses me is that moral machines are largely mentioned without qualifying expressions, as if everyone ought to know what it should mean. Ethical experts disagree on the “right” moral theory. However, they seem to agree that moral theory determines what a moral decision is; much like grammar determines what a grammatical sentence is. With that faith in moral theory, one need not contemplate what a moral machine might be. It is simply a machine that makes decisions according to accepted moral theory. However, do machines make decisions in the same sense as humans do?
Maybe it is about emphasis. We talk about ethical funds without feeling dizzy because a stock fund is said to be ethical (“Can they be humorous too?”). There is no mythological emphasis in the talk of ethical funds. In the same way, we can talk about ethical robot cars without feeling dizzy as if we faced something supernatural. However, in the philosophical discussion of machine ethics, moral machines are sometimes mentioned in a mythological way, it seems to me. As if a centaur, a machine-human, will soon see the light of day. At the same time, we are not supposed to feel dizzy concerning these brave new centaurs, since the experts can spell out exactly what they are talking about. Having all the accepted templates in their hands, they do not need any qualifying expressions!
I suspect that also ethical expertise can be a philosophical pitfall when we intellectually approach so-called moral machines. The expert attitude can silence the confusing questions that we all need time to contemplate when honest doubts rebel against the claim to know.
The Ethics Blog will publish several posts on artificial intelligence in the future. Today, I just want to make a little observation of something remarkable.
The last century was marked by fear of human consciousness. Our mind seemed as mystic as the soul, as superfluous in a scientific age as God. In psychology, behaviorism flourished, which defined psychological words in terms of bodily behavior that could be studied scientifically in the laboratory. Our living consciousness was treated as a relic from bygone superstitious ages.
What is so remarkable about artificial intelligence? Suddenly, one seems to idolize consciousness. One wallows in previously sinful psychological words, at least when one talks about what computers and robots can do. These machines can see and hear; they can think and speak. They can even learn by themselves.
Does this mean that the fear of consciousness has ceased? Hardly, because when artificial intelligence employs psychological words such as seeing and hearing, thinking and understanding, the words cease to be psychological. The idea of computer “learning,” for example, is a technical term that computer experts define in their laboratories.
When artificial intelligence embellishes machines with psychological words, it thus repeats how behaviorism defined the mind in terms of something else. Psychological words take on new machine meanings that overshadow the meanings the words have among living human beings.
Remember this next time you wonder if robots might become conscious. The development exhibits fear of consciousness. Therefore, what you are wondering is not if robots can become conscious. You wonder if your own consciousness can be superstition. Remarkable, right?