A blog from the Centre for Research Ethics & Bioethics (CRB)

Category: In the research debate

Who publishes in predatory journals?

Who wants to publish their research in fraudulent, so-called predatory journals? It was previously assumed that this occurs mainly among inexperienced researchers in low- and middle-income countries. A new study of publication patterns in Swedish nursing research nuances the picture.

The study examined all publicly listed articles in nursing research linked to Swedish higher education institutions in 2018 and 2019, and then identified which of them had been published in predatory journals. 39 such articles were found, 2.8 percent of the total. A significant proportion of these articles were published by senior academics.

The researchers behind the study emphasise three problems with this publication pattern. First, if senior academics publish in predatory journals, they help to legitimise this way of publishing nursing research, which threatens the trustworthiness of academic knowledge in the field and blurs the line between legitimate and fraudulent journals. Second, if some authors acquire quick publication merits by using predatory journals, it can create injustice, for example, when applications for funding and academic positions are reviewed. Finally, the publication pattern of senior researchers may mislead younger researchers, who may come to regard the rapid “review process” that predatory journals offer as a form of efficiency and therefore something commendable.

The researchers who conducted the study also discovered a few cases of a strange phenomenon, namely, the hijacking of legitimately published articles. In these cases, the authors of the articles are innocent. Their already published papers are copied and end up in the predatory journal, which makes it look as if renowned authors chose to publish their work in the journal.

A possibly positive result is that the number of articles in predatory journals decreased from 30 in 2018 to 9 in 2019. Hopefully, educational efforts can further reduce the incidence, the authors conclude. If you want to read more, for example about whether academics who publish in predatory journals should be reported, read the article in Nursing Ethics.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Sebastian Gabrielsson, Stefan Eriksson, Tove Godskesen. Predatory nursing journals: A case study of author prevalence and characteristics. Nursing Ethics. First published December 3, 2020. https://doi.org/10.1177/0969733020968215

This post in Swedish

We care about communication

Human enhancement: Time for ethical guidance!

Perhaps you also dream about being more than you are: faster, better, bolder, stronger, smarter, and maybe more attractive? Until recently, technology to improve and enhance our abilities was mostly science fiction, but today we can augment our bodies and minds in ways that challenge our notions of normal and abnormal, blurring the lines between treatment and enhancement. Very few scientists and companies that develop medicines, prosthetics, and implants would say that they are in the human enhancement business. But the technologies they develop still manage to move from one domain to another. Our bodies allow for physical and cosmetic alterations. And there are attempts to make us live longer. Our minds can also be enhanced in several ways: our feelings and thoughts, perhaps also our morals, could be improved, or corrupted.

We recognise this tension from familiar debates about more common uses of enhancements: doping in sports, or students using ADHD medicines to study for exams. But there are other examples of technologies that can be used to enhance abilities. In the military context, altering our morals or using cybernetic implants could give us ‘super soldiers’. Using neuroprostheses to replace or improve memory that was damaged by neurological disease would be considered a treatment. But what happens when the same technology is repurposed for the healthy, to improve memory or some other cognitive function?

There have been calls for regulation and ethical guidance, but because very few of the researchers and engineers who develop these technologies would call themselves enhancers, such efforts have not been very successful. Perhaps now is a good time to develop guidelines? But what is the best approach? A self-contained set of general ethical guidelines, or is the field so disparate that it requires field- or domain-specific guidance?

The SIENNA project (Stakeholder-Informed Ethics for New technologies with high socio-ecoNomic and human rights impAct) has been tasked with developing this kind of ethical guidance for Human Enhancement, Human Genetics, and Artificial Intelligence and Robotics, three very different technological domains. Not surprisingly, given how difficult the field is to delineate, human enhancement has proved by far the most challenging. For almost three years, the SIENNA project mapped the field, analysed the ethical implications and legal requirements, surveyed how research ethics committees address the ethical issues, and proposed ways to improve existing regulation. We have received input from stakeholders, experts, and publics. Industry representatives, academics, policymakers and ethicists have participated in workshops and reviewed documents. Focus groups in five countries and surveys with 11,000 people in 11 countries in Europe, Africa, Asia, and the Americas have also provided insight into the public’s attitudes to using different technologies to enhance abilities or performance. This resulted in an ethical framework, outlining several options for how to translate it into practical ethical guidance.

The framework for human enhancement is built on three case studies that can bring some clarity to what is at stake in a very diverse field: antidepressants, dementia treatment, and genetics. These case studies have shed some light on the kinds of issues that are likely to appear, and on the difficulties involved in the complex task of developing ethical guidelines for human enhancement technologies.

Many of these technologies, their applications, and their enhancement potentials are still in their infancy. So perhaps this is the right time to promote ways for research ethics committees to inform researchers about the ethical challenges associated with human enhancement, and to encourage them to reflect on the potential enhancement impacts of their work in ethics self-assessments.

And perhaps it is time for ethical guidance for human enhancement after all? At least there is now an opportunity for you and others to give input in a public consultation in mid-January 2021! If you want to give input to SIENNA’s proposals for human enhancement, human genomics, artificial intelligence, and robotics, visit www.sienna-project.eu to sign up for news.

The public consultation will launch on January 11, 2021; the deadline to submit a response is January 25, 2021.

Josepine Fernow

Written by…

Josepine Fernow, Coordinator at the Centre for Research Ethics & Bioethics (CRB), and communications leader for the SIENNA project.


This post in Swedish

Research for responsible governance of our health data

Do you use your smartphone to collect and analyse your performance at the gym? This is one example of how new health-related technologies are being integrated into our lives. This development leads to a growing need to collect, use and share health data electronically. Healthcare and medical research, as well as technological and pharmaceutical companies, are increasingly dependent on collecting and sharing electronic health data, both to develop healthcare and to create new medical and technical products.

This trend towards more and more sharing of personal health information raises several privacy issues. Previous studies suggest that people are willing to share their health information if the overall purpose is improved health. However, they are less willing to share their information with commercial enterprises and insurance companies, whose purposes may be unclear or may not meet people’s expectations. It is therefore important to investigate how individuals’ perceptions and attitudes change depending on the context in which their health data is used, what type of information is collected, and which control mechanisms are in place to govern data sharing. In addition, there is a difference between what people say is important and what their actual behaviour reveals. In surveys, individuals often indicate that they value their personal information. At the same time, they share their personal information online despite little or no benefit to themselves or society.

Do you recognise yourself? Do you just click the “I agree” button when installing a health app that you want to use? At first glance, this behaviour may suggest that people do not value their personal information very much. Is that a correct conclusion? Previous studies may not have taken into account the complexity of privacy decisions, where context-specific factors play a major role. For example, people may value sharing health data via a physical activity app on their phone differently from sharing the same data in other contexts. We have therefore chosen to conduct a study that uses a sophisticated multi-method approach that takes context-specific factors into account. It is an advantage in cybersecurity and privacy research, we believe, to combine qualitative methods with a quantitative stated preference method, such as a discrete choice experiment (DCE). Such a mixed-methods approach can contribute to ethically improved practices and governance mechanisms in the digital world, where people’s health data are shared for multiple purposes.

You can read more about our research if you visit the website of our research team. Currently, we are analysing survey data from 2,000 participants from Sweden, Norway, Iceland, and the UK. The research group has expertise in law, philosophy, ethics and social sciences. On this broad basis, we explore people’s expectations and preferences, while identifying possible gaps within the ethical and legal frameworks. In this way, we want to contribute to making the growing use and sharing of electronic health data ethically informed, socially acceptable and in line with people’s expectations.

Written by…

Jennifer Viberg Johansson, Postdoc researcher at the Centre for Research Ethics & Bioethics, working in the projects Governance of health data in cyberspace and PREFER.

This post in Swedish

Part of international collaborations

People care about antibiotic resistance

The rise of antibiotic-resistant bacteria is a global threat to public health. In Europe alone, antibiotic resistance (AR) causes around 33,000 deaths and around €1.5 billion in healthcare costs each year. What, then, causes AR? Mainly our misuse and overuse of antibiotics. Therefore, in order to reduce AR, we must reduce the use of antibiotics.

Several factors drive the prescribing of antibiotics. Patients can contribute to increased prescriptions by expecting antibiotics when they visit the physician. Physicians, in turn, can contribute by assuming that their patients expect antibiotics.

In an article in the International Journal of Antimicrobial Agents, Mirko Ancillotti from CRB presents what might be the first study of its kind on the public’s attitude to AR when choosing between antibiotic treatments. In a so-called Discrete Choice Experiment, participants from the Swedish public were asked to choose between two treatments. The choice situation was repeated several times while five attributes of the treatments varied: (1) the treatment’s contribution to AR, (2) cost, (3) risk of side effects, (4) risk of failed treatment effect, and (5) treatment duration. In this way, the researchers gained an idea of which attributes drive the use of antibiotics, and of how much people care about AR when choosing antibiotics, relative to other attributes of the treatments.

It turned out that all five attributes influenced the participants’ choice of treatment. It also turned out that for the majority, AR was the most important attribute. People thus care about AR and are willing to pay more to get a treatment that causes less antibiotic resistance. (Note that participants were informed that antibiotic resistance is a collective threat rather than a problem for the individual.)
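
For readers curious about the mechanics behind such results, here is a minimal sketch of how a discrete choice experiment of this kind can be analysed with a conditional logit model, a standard model for choice data. Everything below (the data, the attribute coding, the numbers) is simulated and hypothetical; it illustrates the general technique, not the actual analysis in the article.

    # A hypothetical conditional logit analysis of simulated DCE data.
    # Five attributes per treatment alternative, as described above:
    # contribution to AR, cost, risk of side effects, risk of failed
    # treatment, and treatment duration.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n_tasks = 1000                             # repeated choice tasks
    X_a = rng.uniform(0, 1, (n_tasks, 5))      # attribute levels, option A
    X_b = rng.uniform(0, 1, (n_tasks, 5))      # attribute levels, option B

    # Assumed "true" weights used only to simulate choices (all negative:
    # higher AR contribution, cost, risks and duration lower the utility).
    true_beta = np.array([-2.0, -1.0, -0.8, -1.2, -0.3])
    diff = (X_a - X_b) @ true_beta             # utility difference U_A - U_B
    chose_a = rng.random(n_tasks) < 1 / (1 + np.exp(-diff))

    def neg_log_likelihood(beta):
        # Conditional logit: P(A) = exp(U_A) / (exp(U_A) + exp(U_B))
        d = (X_a - X_b) @ beta
        log_p_a = -np.log1p(np.exp(-d))        # log P(choose A)
        log_p_b = -np.log1p(np.exp(d))         # log P(choose B)
        return -np.sum(np.where(chose_a, log_p_a, log_p_b))

    beta_hat = minimize(neg_log_likelihood, np.zeros(5), method="BFGS").x

    # Willingness to pay for a one-unit reduction in the AR attribute:
    # the marginal rate of substitution between AR and cost.
    wtp_ar = beta_hat[0] / beta_hat[1]
    print("estimated attribute weights:", np.round(beta_hat, 2))
    print("implied WTP to reduce AR contribution:", round(wtp_ar, 2))

In a real study, it is the relative sizes of such estimated weights that justify claims like “AR was the most important attribute,” and the ratio to the cost coefficient that expresses how much more people are willing to pay for a treatment that contributes less to resistance.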

Because people care about antibiotic resistance when given the opportunity to consider it, Mirko Ancillotti suggests that a path to reducing antibiotic use may be better information in healthcare and other contexts, emphasizing our individual responsibility for the collective threat. People who understand their responsibility for AR may be less pushy when they see a physician. This can also influence physicians to change their assumptions about patients’ expectations regarding antibiotics.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

M. Ancillotti, S. Eriksson, D.I. Andersson, T. Godskesen, J. Nihlén Fahlquist, J. Veldwijk. Preferences regarding antibiotic treatment and the role of antibiotic resistance: A discrete choice experiment. International Journal of Antimicrobial Agents, Volume 56, Issue 6, 2020. https://doi.org/10.1016/j.ijantimicag.2020.106198

This post in Swedish

Exploring preferences

Are you conscious? Looking for reliable indicators

How can we be sure that a person in front of us is conscious? This might seem like a naïve question, but it has actually given rise to one of the trickiest and most intriguing philosophical problems, classically known as “the other minds problem.”

Yet this is more than just a philosophical game: reliable detection of conscious activity is among the main neuroscientific and technological enterprises today. Moreover, it is a problem that touches our daily lives. Think, for instance, of animals: we are (at least today) inclined to attribute a certain level of consciousness to animals, depending on the behavioural complexity they exhibit. Or think of Artificial Intelligence, which exhibits astonishing practical abilities, even superior to humans in some specific contexts.

Both examples above raise a fundamental question: can we rely on behaviour alone in order to attribute consciousness? Is that sufficient?

It is now clear that it is not. The case of patients with devastating neurological impairments, like disorders of consciousness (unresponsive wakefulness syndrome, minimally conscious state, and cognitive-motor dissociation) is highly illustrative. A number of these patients might retain residual conscious abilities although they are unable to show them behaviourally. In addition, subjects with locked-in syndrome have a fully conscious mind even if they do not exhibit any behaviours other than blinking.

We can conclude that absence of behavioural evidence for consciousness is not evidence for the absence of consciousness. If so, what other indicators can we rely on in order to attribute consciousness?

The identification of indicators of consciousness is necessarily both a conceptual and an empirical task: we need a clear idea of what to look for in order to define appropriate empirical strategies. Accordingly, we (a group of two philosophers and one neuroscientist) conducted joint research, eventually publishing a list of six indicators of consciousness. These indicators do not rely on behaviour alone, but can also be assessed through technological and clinical approaches:

  1. Goal directed behaviour (GDB) and model-based learning. In GDB I am driven by the expected consequences of my action, and I know that my action is causal in obtaining a desirable outcome. Model-based learning depends on my ability to have an explicit model of myself and the world surrounding me.
  2. Brain anatomy and physiology. Since the consciousness of mammals depends on the integrity of particular cerebral systems (i.e., thalamocortical systems), it is reasonable to think that similar structures indicate the presence of consciousness.
  3. Psychometrics and meta-cognitive judgement. If I can detect and discriminate stimuli, and can make some meta-cognitive judgements about perceived stimuli, I am probably conscious.
  4. Episodic memory. If I can remember events (“what”) I experienced at a particular place (“where”) and time (“when”), I am probably conscious.
  5. Acting out one’s subjective, situational survey: illusion and multistable perception. If I am susceptible to illusions and perceptual ambiguity, I am probably conscious.
  6. Acting out one’s subjective, situational survey: visuospatial behaviour. If I perceive objects as stably positioned even when I move in my environment and scan it with my eyes, I am probably conscious.

This list is conceived to be provisional and heuristic but also operational: it is not a definitive answer to the problem, but it is sufficiently concrete to help identify consciousness in others.

The second step in our task is to explore the clinical relevance of the indicators and their ethical implications. For this reason, we selected disorders of consciousness as a case study. We are now working together with cognitive and clinical neuroscientists, as well as computer scientists and modellers, in order to explore the potential of the indicators to quantify to what extent consciousness is present in affected patients, and eventually improve diagnostic and prognostic accuracy. The results of this research will be published in what the Human Brain Project Simulation Platform defines as a “live paper,” which is an interactive paper that allows readers to download, visualize or simulate the presented results.

Written by…

Michele Farisco, Postdoc Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Pennartz CMA, Farisco M and Evers K (2019) Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach. Front. Syst. Neurosci. 13:25. https://doi.org/10.3389/fnsys.2019.00025

We transgress disciplinary borders

Ethically responsible robot development

Development of new technologies sometimes draws inspiration from nature. How do plants and animals solve the problem? One example is robotics, where developers want to build better robots based on what neuroscience knows about the brain. How does the brain solve the problem?

Neuroscience, in turn, sees new opportunities to test hypotheses about the brain by simulating them in robots. Perhaps one can simulate how areas of the brain interact in patients with Parkinson’s disease, to understand what causes their tremor and other difficulties.

Neuroscience-inspired robotics, so-called neurorobotics, is still at an early stage. This makes neurorobotics an excellent area for being ethically and socially more proactive than we have been in previous technological developments. That is, we can already begin to identify possible ethical and social problems surrounding technological development and counteract them before they arise. For example, we cannot close our eyes to gender and equality issues, but must continuously reflect on how our own social and cultural patterns are reflected in the technology we develop. We need to open our eyes to our own blind spots!

You can read more about this ethical shift in technology development in an article in Science and Engineering Ethics (with Manuel Guerrero from CRB as one of the authors). The shift is called Responsible Research and Innovation, and is exemplified in the article by ongoing work in the European research project, Human Brain Project.

It is not only neuroscientists and technology experts who collaborate in this project to develop neurorobotics. Scholars from the humanities and social sciences are also involved in the work. The article itself is an example of this broad collaboration. However, the implementation of responsible research and innovation is also at an early stage. It still needs to find more concrete forms of work that make it possible not only to anticipate ethical and social problems and reflect on them, but also to act and intervene to influence scientific and technological development.

Ethics is thus shifting from being a framework built around research and development to being increasingly integrated into research and development. Read the article if you want to think about this transition to a more reflective and responsible technological development.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Aicardi, C., Akintoye, S., Fothergill, B.T. et al. Ethical and Social Aspects of Neurorobotics. Sci Eng Ethics 26, 2533–2546 (2020). https://doi.org/10.1007/s11948-020-00248-8

This post in Swedish

Approaching future issues

“Cooperative,” “pleasant” and “reliable” robot colleague is wanted

Robots are getting more and more functions in our workplaces. Logistics robots pick goods in warehouses. Military robots disarm bombs. Care robots lift patients, and surgical robots perform operations. All this in interaction with human staff, who seem to have got brave new robot colleagues in their workplaces.

Given that some people treat robots as good colleagues and that good colleagues contribute to a good working environment, it becomes reasonable to ask: Can a robot be a good colleague? The question is investigated by Sven Nyholm and Jilles Smids in the journal Science and Engineering Ethics.

The authors approach the question conceptually. First, they propose criteria for what a good colleague is. Then they ask if robots can live up to the requirements. The question of whether a robot can be a good colleague is interesting, because the idea turns out to be more realistic than we might first think. We do not demand as much from a colleague as from a friend or a life partner, the authors argue. Many of our demands on good colleagues have to do with their external behavior in specific situations in the workplace, rather than with how they think, feel and are as human beings in different situations of life. Sometimes, a good colleague is simply someone who gets the job done!

What criteria are mentioned in the article? Here I reproduce, in my own words, the authors’ list, which they do not intend to be exhaustive. A good colleague works well with others to achieve goals. A good colleague can chat and help keep work pleasant. A good colleague does not bully but treats others respectfully. A good colleague provides support as needed. A good colleague learns and develops with others. A good colleague is consistently at work and is reliable. A good colleague adapts to how others are doing and shares work-related values. A good colleague may also do some socializing.

The authors argue that many robots already live up to several of these ideas about what a good colleague is, and that the robots in our workplaces will be even better colleagues in the future. The requirements are, as I said, lower than we first think, because they are not so much about the colleague’s inner human life, but more about reliably displayed behaviors in specific work situations. It is not difficult to imagine the criteria transformed into specifications for the robot developers. Much like in a job advertisement, which lists behaviors that the applicant should be able to exhibit.

The manager of a grocery store in this city advertised for staff. The ad contained strange quotation marks, which revealed how the manager demanded the facade of a human being rather than the interior. This is normal: to be a professional is to be able to play a role. The business concept of the grocery store was “we care.” This idea was to become a positive “experience” for customers in their encounter with the staff. A greeting, a nod, a smile, a generally pleasant welcome, would give this “experience” that we “care about people.” Therefore, the manager advertised for someone who, in quotation marks, “likes people.”

If staff can be recruited in this way, why should we not want “cooperative,” “pleasant” and “reliable” robot colleagues in the same spirit? I am convinced that similar requirements already occur as specifications when robots are designed for different functions in our workplaces.

Life is not always deep and heartfelt, as the robotization of working life reflects. The question is what happens when human surfaces become so common that we forget the quotation marks around the mechanically functioning facades. Not everyone is as clear on that point as the “humanitarian” store manager was.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Nyholm, S., Smids, J. Can a Robot Be a Good Colleague?. Sci Eng Ethics 26, 2169–2188 (2020). https://doi.org/10.1007/s11948-019-00172-6

This post in Swedish

Approaching future issues

Global sharing of genomic data requires perspicuous research communication

To understand how our genes affect health and disease, drug reactions, and much more, researchers need to share vast amounts of data from people in different parts of the world. This makes genomic research dependent on public trust and support.

Do people in general trust research? Are we willing to donate DNA and health information to researchers? Are we prepared to let researchers share the information with other researchers, perhaps in other parts of the world? Even with researchers at for-profit companies? These and other issues were recently examined in the largest study to date about the public’s attitudes to participating in research and sharing genetic information. The questionnaire was translated into 15 languages and answered by 36,268 people in 22 countries.

The majority of respondents are unwilling or unsure about donating DNA and health information to research. In general, the respondents are most willing to donate to research physicians, and least willing to donate to for-profit researchers. Less than half of the respondents say they trust data sharing between several users. The study also reveals differences between countries. In Germany, Poland, Russia and Egypt, for example, trust in data sharing between several users is significantly lower than in China, India, the United Kingdom and Pakistan.

The study contains many more interesting results. For example, people who claim to be familiar with genetics are more willing to donate DNA and health data. This is especially true of those with personal experience of genetics, for example as patients, as members of families with hereditary disease, or through their profession. However, a clear majority say they are unfamiliar with the concepts of DNA, genetics and genomics. You can read all the results in the article, which was recently published in The American Journal of Human Genetics.

What practical conclusions can we draw from the study? The authors of the article emphasize the importance of increasing the public’s familiarity with genomic research. Researchers need to build trust in data collection and sharing. They need to participate in dialogues that make it clear why they share large amounts of data globally, and why this is so important. It also needs to become more understandable why the research cannot be carried out by physicians alone. Why are collaborations with for-profit companies needed? Moreover, what significance can genetic techniques have for future patients?

Well-functioning genomic research thus needs well-functioning research communication. What then is good research communication? According to the article, it is not about pedagogically illustrating the molecular structure of DNA. Rather, it is about understanding the conditions and significance of genomic research for healthcare, patients, and society, as well as the role of industry in research and development.

Personally, I want to put it this way. Good research communication helps us see things more perspicuously. We need continuous overviews of interrelated parts of our own societies. We need to see our roles and relationships with each other in complex societies with different but intertwined activities, such as research, healthcare, industry, and much more. The need for perspicuous overviews also applies to the experts, whose specialties easily create one-sidedness.

In this context, let me cautiously warn against the instinctive assumption that debate is the obvious form for the exchange of thoughts in research communication. Although debates have a role to play, they often serve as arenas for competing perspectives, all of which want to narrow our field of view. This is probably the last thing we need if we want to open up perspicuous understandings of ourselves as human beings, researchers, donors, entrepreneurs, healthcare professionals and patients. How do we relate to each other? How do I, as a donor of DNA to researchers, relate to the patients I want to help?

We need to think carefully about what it means to think freely, together, about common issues, such as the global sharing of genomic data.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Middleton A., Milne R., Almarri M.A., et al. (2020). Global public perceptions of genomic data sharing: what shapes the willingness to donate DNA and health data? American Journal of Human Genetics. https://doi.org/10.1016/j.ajhg.2020.08.023

This post in Swedish

We like broad perspectives

We shape the societies that shape us: our responsibility for human nature

Visionary academic texts are rare – texts that shed light on how research can contribute to the perennial human issues. In an article in the philosophical journal Theoria, however, Kathinka Evers opens up a novel visionary perspective on neuroscience and tragic aspects of the human condition.

For millennia, sensitive thinkers have been concerned about human nature. Undoubtedly, we humans create prosperity and security for ourselves. However, like no other animal, we also have an unfortunate tendency to create misery for ourselves (and other life forms). The 20th century was extreme in both directions. What is the mechanism behind our peculiar, large-scale, self-injurious behavior as a species? Can it be illuminated and changed?

As I read her, Kathinka Evers asks essentially this big human question. She does so based on the current neuroscientific view of the brain, which she argues motivates a new way of understanding and approaching the mechanism of our species’ self-injurious behavior. An essential feature of the neuroscientific view is that the human brain is designed to never be fully completed. Just as we have a unique self-injurious tendency as a species, we are born with uniquely incomplete brains. These brains are under construction for decades and need good care throughout this time. They are not formed passively, but actively, by finding more or less felicitous ways of functioning in the societies to which we expose ourselves.

Since our brains shape our societies, one could say that we build the societies that build us, in a continual cycle. The brain is right in the middle of this sensitive interaction between humans and their societies. With its creative variability, the human brain makes many deterministic claims about genetics and our “innate” nature problematic. Why are we humans the way we are? Partly because we create the societies that create us as we are. For millennia, we have generated ourselves through the societies that we have built, ignorant of the hyper-interactive organ in the middle of the process. It is always behind our eyes.

Kathinka Evers’ point is that our current understanding of the brain as inherently active, dynamic and variable, gives us a new responsibility for human nature. She expresses the situation technically as follows: neuroscientific knowledge gives us a naturalistic responsibility to be epigenetically proactive. If we know that our active and variable brains support a cultural evolution beyond our genetic heritage, then we have a responsibility to influence evolution by adapting our societies to what we know about the strengths and weaknesses of our brains.

The notion of a neuroscientific responsibility to design societies that shape human nature in desired ways may sound like a call for a new form of social engineering. However, Kathinka Evers develops the notion of this responsibility in the context of a conscientious review of similar tendencies in our history, tendencies that have often revolved around genetics. The aim of epigenetic proaction is not to support ideologies that have already decided what a human being should be like. Rather, it is about allowing knowledge about the brain to inspire social change, where we would otherwise ignorantly risk recreating human misery. Of course, such knowledge presupposes collaboration between the natural, social and human sciences, in conjunction with free philosophical inquiry.

The article mentions juvenile violence as an example. In some countries, there is a political will to convict juvenile delinquents as if they were adults and even place them in adult prisons. Today, we know that during puberty, the brain is in a developmental crisis where important neural circuits change dramatically. Young brains in crisis need special care. However, in these cases they risk ending up in just the kind of social environments that we can predict will create more misery.

Knowledge about the brain can thus motivate social changes that reduce the peculiar self-injurious behavior of humanity, a behavior that has worried sensitive thinkers for millennia. Neuroscientific self-awareness gives us a key to the mechanism behind the behavior and a responsibility to use it.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Kathinka Evers. 2020. The Culture‐Bound Brain: Epigenetic Proaction Revisited. Theoria. https://doi.org/10.1111/theo.12264

This post in Swedish

We like challenging questions

What is required of an ethics of artificial intelligence?

I recently highlighted criticism of the ethics that often figures in the field of artificial intelligence (AI). An ethics that can handle the challenges that AI presents us with requires more than just beautifully formulated ethical principles, values and guidelines. What exactly is required of an ethics of artificial intelligence?

Michele Farisco, Kathinka Evers and Arleen Salles address the issue in the journal Science and Engineering Ethics. For them, ethics is not primarily principles and guidelines. Ethics is rather an ongoing process of thinking: it is continual ethical reflection on AI. Their question is thus not what is required of an ethical framework built around AI. Their question is what is required of in-depth ethical reflection on AI.

The authors emphasize conceptual analysis as essential in all ethical reflection on AI. One of the big difficulties is that we do not know exactly what we are discussing! What is intelligence? What is the difference between artificial and natural intelligence? How should we understand the relationship between intelligence and consciousness? Between intelligence and emotions? Between intelligence and insightfulness?

Ethical problems about AI can be both practical and theoretical, the authors point out. They describe two practical and two theoretical problems to consider. One practical problem is the use of AI in activities that require emotional abilities that AI lacks. Empathy gives humans insight into other humans’ needs. Therefore, AI’s lack of emotional involvement should be given special attention when we consider using AI in, for example, child or elderly care. The second practical problem is the use of AI in activities that require foresight. Intelligence is not just about reacting to input from the environment. A more active, foresighted approach is often needed, going beyond actual experience and seeing less obvious, counterintuitive possibilities. Crying can express pain, joy and much more, but AI cannot easily foresee less obvious possibilities.

Two theoretical problems are also mentioned in the article. The first is whether AI in the future may have morally relevant characteristics such as autonomy, interests and preferences. The second problem is whether AI can affect human self-understanding and create uncertainty and anxiety about human identity. These theoretical problems undoubtedly require careful analysis – do we even know what we are asking? In philosophy we often need to clarify our questions as we go along.

The article emphasizes one demand in particular on ethical analysis of AI. It should carefully consider morally relevant abilities that AI lacks, abilities needed to satisfy important human needs. Can we let a cute kindergarten robot “comfort” children when they scream with joy or when they injure themselves so badly that they need nursing?

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Farisco, M., Evers, K. & Salles, A. Towards establishing criteria for the ethical analysis of Artificial Intelligence. Science and Engineering Ethics (2020). https://doi.org/10.1007/s11948-020-00238-w

This post in Swedish

We want solid foundations
