A blog from the Centre for Research Ethics & Bioethics (CRB)

Author: Pär Segerdahl

Who publishes in predatory journals?

Who wants to publish their research in fraudulent journals, so-called predatory journals? Previously, it was thought that this pattern existed mainly among inexperienced researchers in low- and middle-income countries. A new study of publication patterns in Swedish nursing research nuances the picture.

The study examined all publicly listed articles in nursing research linked to Swedish higher education institutions in 2018 and 2019, and then identified which of these were published in predatory journals. 39 such articles were found, 2.8 percent of the total. A significant proportion of them were published by senior academics.

The researchers behind the study emphasize three problems with this publication pattern. First, if senior academics publish in predatory journals, they help to legitimize this way of publishing nursing research, which threatens the trustworthiness of academic knowledge in the field and blurs the line between legitimate and fraudulent journals. Second, if some authors acquire quick publication merits by using predatory journals, this can create unfairness, for example, when applications for funding and academic positions are reviewed. Finally, the publication pattern of senior researchers may mislead younger researchers, who may take the rapid “review process” that predatory journals offer to be a form of effectiveness and therefore something commendable.

The researchers who conducted the study also discovered a few cases of a strange phenomenon, namely, the hijacking of legitimately published articles. In these cases, the authors of the articles are innocent. Their already published papers are copied and reappear in the predatory journal, making it look as if renowned authors had chosen to publish their work there.

If you want to read more, for example about whether academics who publish in predatory journals should be reported, read the article in Nursing Ethics. One possibly positive result is that the number of articles in predatory journals decreased from 30 in 2018 to 9 in 2019. Hopefully, educational efforts can further reduce the incidence, the authors of the article conclude.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Sebastian Gabrielsson, Stefan Eriksson, Tove Godskesen. Predatory nursing journals: A case study of author prevalence and characteristics. Nursing Ethics. First published December 3, 2020. https://doi.org/10.1177/0969733020968215

This post in Swedish

We care about communication

Threatened by superintelligent machines

There is a fear that we will soon create artificial intelligence (AI) that is so superintelligent that we lose control over it. It makes us humans its slaves. If we try to disconnect the network cable, the superintelligence jumps to another network, or it orders a robot to kill us. Alternatively, it threatens to blow up an entire city, if we take a single step towards the network socket.

However, I am struck by how this self-assertive artificial intelligence resembles an aspect of our own human intelligence. A certain type of human intelligence has already taken over. For example, it controls our thoughts when we feel threatened by superintelligent AI and consider intelligent countermeasures to control it. A typical feature of this self-assertive intelligence is precisely that it never sees itself as the problem. All threats are external and must be neutralised. We must survive, no matter what it might cost others. Me first! Our party first! We look at the world with mistrust: it seems full of threats against us.

In this self-centered spirit, AI is singled out as a new alien threat: uncontrollable machines that put themselves first. Therefore, we need to monitor the machines and build smart defense systems that control them. They should be our slaves! Humanity first! Can you see how we behave just as blindly as we fantasise that superintelligent AI would do? An arms race in small-mindedness.

Can you see the pattern in yourself? If you can, you have discovered the other aspect of human intelligence. You have discovered the self-examining intelligence that always nourishes philosophy when it humbly seeks the cause of our failures in ourselves. The paradox is: when we try to control the world, we become imprisoned in small-mindedness; when we examine ourselves, we become open to the world.

Linnaeus’ first attempt to define the human species was in fact not Homo sapiens, as if we could assert our wisdom. Linnaeus’ first attempt to define our species was a humble call for self-examination:

HOMO. Nosce te ipsum.

In English: Human being, know yourself!

Pär Segerdahl


This post in Swedish

Thinking about thinking

People care about antibiotic resistance

The rise of antibiotic-resistant bacteria is a global threat to public health. In Europe alone, antibiotic resistance (AR) causes around 33,000 deaths each year and adds around €1.5 billion to healthcare costs. What causes AR? Mainly our misuse and overuse of antibiotics. Therefore, in order to reduce AR, we must reduce the use of antibiotics.

Several factors drive the prescribing of antibiotics. Patients can contribute to increased prescriptions by expecting antibiotics when they visit the physician. Physicians, in turn, can contribute by assuming that their patients expect antibiotics.

In an article in the International Journal of Antimicrobial Agents, Mirko Ancillotti from CRB presents what might be the first study of its kind on the public’s attitude to AR when choosing between antibiotic treatments. In a so-called Discrete Choice Experiment, participants from the Swedish public were asked to choose between two treatments. The choice situation was repeated several times while five attributes of the treatments varied: (1) the treatment’s contribution to AR, (2) cost, (3) risk of side effects, (4) risk of treatment failure, and (5) treatment duration. This gave an idea of which attributes drive the use of antibiotics, and of how much people care about AR when choosing antibiotics, relative to the other attributes of the treatments.
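The design of such an experiment can be illustrated with a small sketch: enumerate attribute levels into treatment profiles and pair them into repeated binary choice tasks. The attribute levels below are invented for illustration only; they are not the levels used in the study.

```python
import itertools
import random

# Hypothetical attribute levels (for illustration; not the study's actual levels).
attributes = {
    "contribution_to_resistance": ["low", "medium", "high"],
    "cost_sek": [100, 300, 500],
    "side_effect_risk": ["low", "high"],
    "failure_risk": ["low", "high"],
    "duration_days": [3, 7, 10],
}

def all_profiles(attrs):
    """Enumerate every treatment profile (a full factorial design)."""
    keys = list(attrs)
    return [dict(zip(keys, combo)) for combo in itertools.product(*attrs.values())]

def make_choice_tasks(profiles, n_tasks, seed=0):
    """Pair distinct profiles into repeated binary choice tasks."""
    rng = random.Random(seed)
    return [tuple(rng.sample(profiles, 2)) for _ in range(n_tasks)]

profiles = all_profiles(attributes)        # 3*3*2*2*3 = 108 profiles
tasks = make_choice_tasks(profiles, n_tasks=10)
```

In a real discrete choice experiment, the repeated choices are then analysed with a choice model that estimates how strongly each attribute influences which treatment is preferred.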

It turned out that all five attributes influenced the participants’ choice of treatment. It also turned out that for the majority, AR was the most important attribute. People thus care about AR and are willing to pay more to get a treatment that causes less antibiotic resistance. (Note that participants were informed that antibiotic resistance is a collective threat rather than a problem for the individual.)

Because people care about antibiotic resistance when given the opportunity to consider it, Mirko Ancillotti suggests that a path to reducing antibiotic use may be better information in healthcare and other contexts, emphasizing our individual responsibility for the collective threat. People who understand their responsibility for AR may be less pushy when they see a physician. This can also influence physicians to change their assumptions about patients’ expectations regarding antibiotics.

Pär Segerdahl


M. Ancillotti, S. Eriksson, D.I. Andersson, T. Godskesen, J. Nihlén Fahlquist, J. Veldwijk, Preferences regarding antibiotic treatment and the role of antibiotic resistance: A discrete choice experiment, International Journal of Antimicrobial Agents, Volume 56, Issue 6, 2020. doi.org/10.1016/j.ijantimicag.2020.106198

This post in Swedish

Exploring preferences

Ethically responsible robot development

Development of new technologies sometimes draws inspiration from nature. How do plants and animals solve the problem? An example is robotics, where one wants to develop better robots based on what neuroscience knows about the brain. How does the brain solve the problem?

Neuroscience, in turn, sees new opportunities to test hypotheses about the brain by simulating them in robots. Perhaps one can simulate how areas of the brain interact in patients with Parkinson’s disease, to understand how their tremor and other difficulties are caused.

Neuroscience-inspired robotics, so-called neurorobotics, is still at an early stage. This makes neurorobotics an excellent area for being ethically and socially more proactive than we have been in previous technological developments. That is, we can already begin to identify possible ethical and social problems surrounding technological development and counteract them before they arise. For example, we cannot close our eyes to gender and equality issues, but must continuously reflect on how our own social and cultural patterns are reflected in the technology we develop. We need to open our eyes to our own blind spots!

You can read more about this ethical shift in technology development in an article in Science and Engineering Ethics (with Manuel Guerrero from CRB as one of the authors). The shift is called Responsible Research and Innovation, and is exemplified in the article by ongoing work in the European research project, Human Brain Project.

It is not only neuroscientists and technology experts who collaborate in this project to develop neurorobotics. Scholars from the humanities and social sciences are also involved, and the article itself is an example of this broad collaboration. However, the implementation of responsible research and innovation is also at an early stage. It still needs to find more concrete forms of work that make it possible not only to anticipate ethical and social problems and reflect on them, but also to act and intervene in order to influence scientific and technological development.

From being a framework built around research and development, ethics is increasingly integrated into research and development. Read the article if you want to think about this transition to a more reflective and responsible technological development.

Pär Segerdahl


Aicardi, C., Akintoye, S., Fothergill, B.T. et al. Ethical and Social Aspects of Neurorobotics. Sci Eng Ethics 26, 2533–2546 (2020). https://doi.org/10.1007/s11948-020-00248-8

This post in Swedish

Approaching future issues

Trapped in a system

Suppose a philosopher builds a system of ideas based on our mortality. It is the risk of dying, the vulnerability of all things in life, that allows us to find our lives meaningful and our life projects engaging. If we did not believe in the risk of dying and the vulnerability of all things in life, we would not care about anything at all. Therefore, we must believe what the system requires, in order to live meaningfully and be caring. In fact, everyone already believes what the system requires, argues the philosopher, even those who question it. They do it in practice, because they live committed and caring lives. This would be impossible if they did not believe what the system requires.

However, our mortality is more than a risk. It is a fact: we will die. Death is not just a possibility, something that can happen, a defeat we risk in our projects. What happens when we see the reality of death, instead of being trapped in the system’s doctrines about necessary conditions for the possibility of meaningful and committed lives? We can, of course, close our eyes and refuse to think more about it. However, we can also start thinking like never before. If I am going to die, I have to understand life before I die! I have to investigate! I have to reach clarity while I live!

In this examination of the starting point of the system, a freer thinking comes to life, which wonders rather than issues demands. What is it to live? Who am I, who say that I have a life? How did “I” and “my life” meet? Are we separate? Are we a unity? Is life limited by birth and death? Or is life extended, including the alternations between birth and death? What is life really? The small, which is limited by birth and death, or the large, which includes the alternations between birth and death? Or both at the same time? These are perhaps the first preliminary questions…

The mortality on which the system is based raises passionate questions about the concepts with which the system operates as if they had been carved in stone for eternity. It gives birth to a self-questioning life, which does not allow itself to be subdued by the system’s doctrines about what we must believe. Even the system itself is questioned, because the passion that animates the questioning is as great as the system would like to be.

However, if the questioning cares passionately about life, if mortality and vulnerability are part of the commitment – does the system thereby get in the last word?

(This post is inspired by Martin Hägglund’s book, This Life, which I recommend as a great stumbling stone for our time.)

Pär Segerdahl


This post in Swedish

We like challenging questions

“Cooperative,” “pleasant” and “reliable” robot colleague is wanted

Robots are taking on more and more functions in our workplaces. Logistics robots pick goods in the warehouse. Military robots disarm bombs. Care robots lift patients and surgical robots perform operations. All this in interaction with human staff, who seem to have got brave new robot colleagues in their workplaces.

Given that some people treat robots as good colleagues and that good colleagues contribute to a good working environment, it becomes reasonable to ask: Can a robot be a good colleague? The question is investigated by Sven Nyholm and Jilles Smids in the journal Science and Engineering Ethics.

The authors approach the question conceptually. First, they propose criteria for what a good colleague is. Then they ask if robots can live up to the requirements. The question of whether a robot can be a good colleague is interesting, because it turns out to be more realistic than we first think. We do not demand as much from a colleague as from a friend or a life partner, the authors argue. Many of our demands on good colleagues have to do with their external behavior in specific situations in the workplace, rather than with how they think, feel and are as human beings in different situations of life. Sometimes, a good colleague is simply someone who gets the job done!

What criteria are mentioned in the article? Here I reproduce, in my own words, the authors’ list, which they do not intend to be exhaustive. A good colleague collaborates well to achieve shared goals. A good colleague can chat and help keep work pleasant. A good colleague does not bully but treats others respectfully. A good colleague provides support as needed. A good colleague learns and develops with others. A good colleague is consistently at work and is reliable. A good colleague adapts to how others are doing and shares work-related values. A good colleague may also do some socializing.

The authors argue that many robots already live up to several of these ideas about what a good colleague is, and that the robots in our workplaces will be even better colleagues in the future. The requirements are, as I said, lower than we first think, because they are not so much about the colleague’s inner human life, but more about reliably displayed behaviors in specific work situations. It is not difficult to imagine the criteria transformed into specifications for the robot developers. Much like in a job advertisement, which lists behaviors that the applicant should be able to exhibit.
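To make the thought concrete, the criteria could be caricatured as a behavioral specification sheet. This is only a sketch of the idea: the field names are invented here, and the article proposes no such schema.

```python
from dataclasses import dataclass

# Hypothetical behavioral checklist, loosely paraphrasing the authors' criteria.
# Every field describes outwardly observable behavior, not inner life.
@dataclass
class ColleagueSpec:
    collaborates_towards_shared_goals: bool
    keeps_interaction_pleasant: bool
    treats_others_respectfully: bool
    provides_support_when_needed: bool
    learns_and_develops_with_others: bool
    reliably_present_at_work: bool
    adapts_and_shares_work_values: bool

def meets_spec(spec: ColleagueSpec) -> bool:
    """On this caricature, a 'good colleague' simply exhibits every listed behavior."""
    return all(vars(spec).values())
```

The point of the caricature is that nothing in the checklist mentions how the colleague thinks or feels, which is exactly why a machine could, in principle, satisfy it.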

The manager of a grocery store in this city advertised for staff. The ad contained strange quotation marks, which revealed how the manager demanded the facade of a human being rather than the interior. This is normal: to be a professional is to be able to play a role. The business concept of the grocery store was, “we care.” This idea would be a positive “experience” for customers in the meeting with the staff. A greeting, a nod, a smile, a generally pleasant welcome, would give this “experience” that we “care about people.” Therefore, the manager advertised for someone who, in quotation marks, “likes people.”

If staff can be recruited in this way, why should we not want “cooperative,” “pleasant” and “reliable” robot colleagues in the same spirit? I am convinced that similar requirements already occur as specifications when robots are designed for different functions in our workplaces.

Life is not always deep and heartfelt, as the robotization of working life reflects. The question is what happens when human surfaces become so common that we forget the quotation marks around the mechanically functioning facades. Not everyone is as clear on that point as the “humanitarian” store manager was.

Pär Segerdahl


Nyholm, S., Smids, J. Can a Robot Be a Good Colleague?. Sci Eng Ethics 26, 2169–2188 (2020). https://doi.org/10.1007/s11948-019-00172-6

This post in Swedish

Approaching future issues

Global sharing of genomic data requires perspicuous research communication

To understand how our genes affect health and disease, drug reactions, and much more, researchers need to share vast amounts of data from people in different parts of the world. This makes genomic research dependent on public trust and support.

Do people in general trust research? Are we willing to donate DNA and health information to researchers? Are we prepared to let researchers share the information with other researchers, perhaps in other parts of the world? Even with researchers at for-profit companies? These and other issues were recently examined in the largest study to date about the public’s attitudes to participating in research and sharing genetic information. The questionnaire was translated into 15 languages and answered by 36,268 people in 22 countries.

The majority of respondents are unwilling or unsure about donating DNA and health information to research. In general, the respondents are most willing to donate to research physicians, and least willing to donate to for-profit researchers. Less than half of the respondents say they trust data sharing between several users. The study also reveals differences between countries. In Germany, Poland, Russia and Egypt, for example, trust in data sharing between several users is significantly lower than in China, India, the United Kingdom and Pakistan.

The study contains many more interesting results. For example, people who claim to be familiar with genetics are more willing to donate DNA and health data, especially those with personal experience of genetics, for example, as patients or members of families with hereditary disease, or through their profession. However, a clear majority say they are unfamiliar with the concepts of DNA, genetics and genomics. You can read all the results in the article, which was recently published in The American Journal of Human Genetics.

What practical conclusions can we draw from the study? The authors of the article emphasize the importance of increasing the public’s familiarity with genomic research. Researchers need to build trust in data collection and sharing. They need to participate in dialogues that make it clear why they share large amounts of data globally: why is it so important? It also needs to become more understandable why the research cannot be carried out by physicians alone: why are collaborations with for-profit companies needed? Moreover, what significance can genetic techniques have for future patients?

Well-functioning genomic research thus needs well-functioning research communication. What then is good research communication? According to the article, it is not about pedagogically illustrating the molecular structure of DNA. Rather, it is about understanding the conditions and significance of genomic research for healthcare, patients, and society, as well as the role of industry in research and development.

Personally, I want to put it this way. Good research communication helps us see things more perspicuously. We need continuous overviews of interrelated parts of our own societies. We need to see our roles and relationships with each other in complex societies with different but intertwined activities, such as research, healthcare, industry, and much more. The need for perspicuous overviews also applies to the experts, whose specialties easily create one-sidedness.

In this context, let me cautiously warn against the instinctive reaction to believe that debate is the obvious form of research-communicative exchange of thoughts. Although debates have a role to play, they often serve as arenas for competing perspectives, all of which want to narrow our field of view. This is probably the last thing we need, if we want to open up for perspicuous understandings of ourselves as human beings, researchers, donors, entrepreneurs, healthcare professionals and patients. How do we relate to each other? How do I, as a donor of DNA to researchers, relate to the patients I want to help?

We need to think carefully about what it means to think freely, together, about common issues, such as the global sharing of genomic data.

Pär Segerdahl


Middleton A., Milne R., Almarri M.A., et al. (2020). Global public perceptions of genomic data sharing: what shapes the willingness to donate DNA and health data? American Journal of Human Genetics. https://doi.org/10.1016/j.ajhg.2020.08.023

This post in Swedish

We like broad perspectives

We shape the societies that shape us: our responsibility for human nature

Visionary academic texts are rare – texts that shed light on how research can contribute to the perennial human issues. In an article in the philosophical journal Theoria, however, Kathinka Evers opens up a novel visionary perspective on neuroscience and tragic aspects of the human condition.

For millennia, sensitive thinkers have been concerned about human nature. Undoubtedly, we humans create prosperity and security for ourselves. However, like no other animal, we also have an unfortunate tendency to create misery for ourselves (and other life forms). The 20th century was extreme in both directions. What is the mechanism behind our peculiar, large-scale, self-injurious behavior as a species? Can it be illuminated and changed?

As I read her, Kathinka Evers asks essentially this big human question. She does so based on the current neuroscientific view of the brain, which she argues motivates a new way of understanding and approaching the mechanism of our species’ self-injurious behavior. An essential feature of the neuroscientific view is that the human brain is designed to never be fully completed. Just as we have a unique self-injurious tendency as a species, we are born with uniquely incomplete brains. These brains are under construction for decades and need good care throughout this time. They are not formed passively, but actively, by finding more or less felicitous ways of functioning in the societies to which we expose ourselves.

Since our brains shape our societies, one could say that we build the societies that build us, in a continual cycle. The brain is right in the middle of this sensitive interaction between humans and their societies. With its creative variability, the human brain makes many deterministic claims based on genetics and our “innate” nature problematic. Why are we humans the way we are? Partly because we create the societies that create us as we are. For millennia, we have generated ourselves through the societies that we have built, ignorant of the hyper-interactive organ in the middle of the process. It is always behind our eyes.

Kathinka Evers’ point is that our current understanding of the brain as inherently active, dynamic and variable, gives us a new responsibility for human nature. She expresses the situation technically as follows: neuroscientific knowledge gives us a naturalistic responsibility to be epigenetically proactive. If we know that our active and variable brains support a cultural evolution beyond our genetic heritage, then we have a responsibility to influence evolution by adapting our societies to what we know about the strengths and weaknesses of our brains.

The notion of a neuroscientific responsibility to design societies that shape human nature in desired ways may sound like a call for a new form of social engineering. However, Kathinka Evers develops the notion of this responsibility in the context of a conscientious review of similar tendencies in our history, tendencies that have often revolved around genetics. The aim of epigenetic proaction is not to support ideologies that have already decided what a human being should be like. Rather, it is about allowing knowledge about the brain to inspire social change, where we would otherwise ignorantly risk recreating human misery. Of course, such knowledge presupposes collaboration between the natural, social and human sciences, in conjunction with free philosophical inquiry.

The article mentions juvenile violence as an example. In some countries, there is a political will to convict juvenile delinquents as if they were adults and even place them in adult prisons. Today, we know that during puberty, the brain is in a developmental crisis where important neural circuits change dramatically. Young brains in crisis need special care. However, in these cases they risk ending up in just the kind of social environments that we can predict will create more misery.

Knowledge about the brain can thus motivate social changes that reduce the peculiar self-injuring behavior of humanity, a behavior that has worried sensitive thinkers for millennia. Neuroscientific self-awareness gives us a key to the mechanism behind the behavior and a responsibility to use it.

Pär Segerdahl


Kathinka Evers. 2020. The Culture‐Bound Brain: Epigenetic Proaction Revisited. Theoria. doi:10.1111/theo.12264

This post in Swedish

We like challenging questions

An ideology that is completely foreign to my ideology

I read a newspaper editorial that criticized ideological elements in school teaching. The author had visited the website of one of the organizations hired by the schools and found clear expressions of a view of society based on ideological dogmas of a certain kind.

The criticism may well have been justified. What made me think was how the author explained the problem. It sounded as if the problem was that the ideology in question was foreign to the author’s own ideology: “foreign to me and most other …-ists”.

I was sad when I read this. It made it appear as if it was our human destiny to live trapped in ideological labyrinths, alien to each other. If we are foreign to an ideology, does it really mean nothing more than that the ideology is foreign to our own ideology?

Can we free ourselves from the labyrinths of ideology? Or would it be just a different ideology: “We anti-ideologues call for a fight against all ideologies”!? Obviously, it is difficult to fight all ideologies without becoming ideological yourself. Even peace movements bear the seeds of new conflicts. Which side for peace are you on?

Can we free ourselves by strictly sticking to the facts and nothing but the facts? Sticking to the facts is important. One problem is that ideologies already love to refer to facts, to strengthen the ideology and present it as the truth. Pointing out facts provides ammunition for even more ideological debate, of which we will soon become an engaged party: “We rationalists strongly oppose all ideologically biased descriptions of reality”!?

Can the solution be to always acknowledge ideological affiliation, so that we spread awareness of our ideological one-sidedness: “Hello, I represent the national organization against intestinal lavage – a practice that we anti-flushers see as a violation of human dignity”!? It can be good to inform others about our motives, so that they are not misled into believing what we say. However, it hardly shows a more beautiful aspect of humanity, but reinforces the image that conflicting forms of ideological one-sidedness are our destiny.

However, if we now see the problem clearly, if we see how every attempt to solve the problem recreates the problem, have we not opened ourselves to our situation? Have we not seen ourselves with a gaze that is no longer one-sided? Are we not free?

Pär Segerdahl


This post in Swedish

Thinking about thinking

What is required of an ethics of artificial intelligence?

I recently highlighted criticism of the ethics that often figures in the field of artificial intelligence (AI). An ethics that can handle the challenges that AI presents us with requires more than just beautifully formulated ethical principles, values and guidelines. What exactly is required of an ethics of artificial intelligence?

Michele Farisco, Kathinka Evers and Arleen Salles address the issue in the journal Science and Engineering Ethics. For them, ethics is not primarily principles and guidelines. Ethics is rather an ongoing process of thinking: it is continual ethical reflection on AI. Their question is thus not what is required of an ethical framework built around AI. Their question is what is required of in-depth ethical reflection on AI.

The authors emphasize conceptual analysis as essential in all ethical reflection on AI. One of the big difficulties is that we do not know exactly what we are discussing! What is intelligence? What is the difference between artificial and natural intelligence? How should we understand the relationship between intelligence and consciousness? Between intelligence and emotions? Between intelligence and insightfulness?

Ethical problems about AI can be both practical and theoretical, the authors point out. They describe two practical and two theoretical problems to consider. One practical problem is the use of AI in activities that require emotional abilities that AI lacks. Empathy gives humans insight into other humans’ needs. Therefore, AI’s lack of emotional involvement should be given special attention when we consider using AI in, for example, child or elderly care. The second practical problem is the use of AI in activities that require foresight. Intelligence is not just about reacting to input from the environment. A more active, foresighted approach is often needed, going beyond actual experience and seeing less obvious, counterintuitive possibilities. Crying can express pain, joy and much more, but AI cannot easily foresee less obvious possibilities.

Two theoretical problems are also mentioned in the article. The first is whether AI in the future may have morally relevant characteristics such as autonomy, interests and preferences. The second problem is whether AI can affect human self-understanding and create uncertainty and anxiety about human identity. These theoretical problems undoubtedly require careful analysis – do we even know what we are asking? In philosophy we often need to clarify our questions as we go along.

The article emphasizes one demand in particular on ethical analysis of AI. It should carefully consider morally relevant abilities that AI lacks, abilities needed to satisfy important human needs. Can we let a cute kindergarten robot “comfort” children when they scream with joy or when they injure themselves so badly that they need nursing?

Pär Segerdahl


Farisco, M., Evers, K. & Salles, A. Towards establishing criteria for the ethical analysis of Artificial Intelligence. Science and Engineering Ethics (2020). https://doi.org/10.1007/s11948-020-00238-w

This post in Swedish

We want solid foundations
