A research blog from the Centre for Research Ethics & Bioethics (CRB)

Category: In the research debate

Are you conscious? Looking for reliable indicators

How can we be sure that a person in front of us is conscious? This might seem like a naïve question, but it actually gave rise to one of the trickiest and most intriguing problems in philosophy, classically known as “the other minds problem.”

Yet this is more than just a philosophical game: reliable detection of conscious activity is among the main neuroscientific and technological enterprises today. Moreover, it is a problem that touches our daily lives. Think, for instance, of animals: we are (at least today) inclined to attribute a certain level of consciousness to animals, depending on the behavioural complexity they exhibit. Or think of Artificial Intelligence, which exhibits astonishing practical abilities, even superior to humans in some specific contexts.

Both examples above raise a fundamental question: can we rely on behaviour alone in order to attribute consciousness? Is that sufficient?

It is now clear that it is not. The case of patients with devastating neurological impairments, like disorders of consciousness (unresponsive wakefulness syndrome, minimally conscious state, and cognitive-motor dissociation), is highly illustrative. A number of these patients may retain residual conscious abilities although they are unable to show them behaviourally. In addition, subjects with locked-in syndrome have a fully conscious mind even if they do not exhibit any behaviour other than blinking.

We can conclude that absence of behavioural evidence for consciousness is not evidence for the absence of consciousness. If so, what other indicators can we rely on in order to attribute consciousness?

The identification of indicators of consciousness is necessarily both a conceptual and an empirical task: we need a clear idea of what to look for in order to define appropriate empirical strategies. Accordingly, we (a group of two philosophers and one neuroscientist) conducted joint research, eventually publishing a list of six indicators of consciousness. These indicators do not rely on behaviour alone, but can also be assessed through technological and clinical approaches:

  1. Goal-directed behaviour (GDB) and model-based learning. In GDB I am driven by the expected consequences of my action, and I know that my action is causal for obtaining a desirable outcome. Model-based learning depends on my ability to have an explicit model of myself and the world surrounding me.
  2. Brain anatomy and physiology. Since the consciousness of mammals depends on the integrity of particular cerebral systems (i.e., thalamocortical systems), it is reasonable to think that similar structures indicate the presence of consciousness.
  3. Psychometrics and meta-cognitive judgement. If I can detect and discriminate stimuli, and can make some meta-cognitive judgements about perceived stimuli, I am probably conscious.
  4. Episodic memory. If I can remember events (“what”) I experienced at a particular place (“where”) and time (“when”), I am probably conscious.
  5. Acting out one’s subjective, situational survey: illusion and multistable perception. If I am susceptible to illusions and perceptual ambiguity, I am probably conscious.
  6. Acting out one’s subjective, situational survey: visuospatial behaviour. Our last proposed indicator: if I can perceive objects as stably positioned even when I move in my environment and scan it with my eyes, I am probably conscious.

This list is conceived to be provisional and heuristic but also operational: it is not a definitive answer to the problem, but it is sufficiently concrete to help identify consciousness in others.

The second step in our task is to explore the clinical relevance of the indicators and their ethical implications. For this reason, we selected disorders of consciousness as a case study. We are now working together with cognitive and clinical neuroscientists, as well as computer scientists and modellers, in order to explore the potential of the indicators to quantify to what extent consciousness is present in affected patients, and eventually improve diagnostic and prognostic accuracy. The results of this research will be published in what the Human Brain Project Simulation Platform defines as a “live paper,” which is an interactive paper that allows readers to download, visualize or simulate the presented results.

Written by…

Michele Farisco, Postdoctoral Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Pennartz CMA, Farisco M and Evers K (2019) Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach. Front. Syst. Neurosci. 13:25. doi: 10.3389/fnsys.2019.00025

We transcend disciplinary borders

Ethically responsible robot development

The development of new technologies sometimes draws inspiration from nature: how do plants and animals solve the problem? One example is robotics, where developers want to build better robots based on what neuroscience knows about the brain: how does the brain solve the problem?

Neuroscience, in turn, sees new opportunities to test hypotheses about the brain by simulating them in robots. Perhaps one can simulate how areas of the brain interact in patients with Parkinson’s disease, to understand how their tremor and other difficulties are caused.

Neuroscience-inspired robotics, so-called neurorobotics, is still at an early stage. This makes neurorobotics an excellent area for being ethically and socially more proactive than we have been in previous technological developments. That is, we can already begin to identify possible ethical and social problems surrounding technological development and counteract them before they arise. For example, we cannot close our eyes to gender and equality issues, but must continuously reflect on how our own social and cultural patterns are reflected in the technology we develop. We need to open our eyes to our own blind spots!

You can read more about this ethical shift in technology development in an article in Science and Engineering Ethics (with Manuel Guerrero from CRB as one of the authors). The shift is called Responsible Research and Innovation, and is exemplified in the article by ongoing work in the European research project, Human Brain Project.

It is not only neuroscientists and technology experts who collaborate in this project to develop neurorobotics. Scholars from the humanities and social sciences are also involved in the work. The article itself is an example of this broad collaboration. However, the implementation of responsible research and innovation is also at an early stage. It still needs to find more concrete forms of work that make it possible not only to anticipate ethical and social problems and reflect on them, but also to act and intervene to influence scientific and technological development.

Ethics is shifting from being a framework built around research and development to being increasingly integrated into research and development. Read the article if you want to think about this transition to a more reflective and responsible technological development.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Aicardi, C., Akintoye, S., Fothergill, B.T. et al. Ethical and Social Aspects of Neurorobotics. Sci Eng Ethics 26, 2533–2546 (2020). https://doi.org/10.1007/s11948-020-00248-8

This post in Swedish

Approaching future issues

“Cooperative,” “pleasant” and “reliable” robot colleague is wanted

Robots are getting more and more functions in our workplaces. Logistics robots pick goods in warehouses. Military robots disarm bombs. Care robots lift patients, and surgical robots perform operations. All this in interaction with human staff, who seem to have got brave new robot colleagues in their workplaces.

Given that some people treat robots as good colleagues and that good colleagues contribute to a good working environment, it becomes reasonable to ask: Can a robot be a good colleague? The question is investigated by Sven Nyholm and Jilles Smids in the journal Science and Engineering Ethics.

The authors approach the question conceptually. First, they propose criteria for what a good colleague is. Then they ask if robots can live up to the requirements. The question of whether a robot can be a good colleague is interesting, because it turns out to be more realistic than we first think. We do not demand as much from a colleague as from a friend or a life partner, the authors argue. Many of our demands on good colleagues have to do with their external behavior in specific situations in the workplace, rather than with how they think, feel and are as human beings in different situations of life. Sometimes, a good colleague is simply someone who gets the job done!

What criteria are mentioned in the article? Here I reproduce, in my own words, the authors’ list, which they do not intend to be exhaustive. A good colleague works well with others to achieve goals. A good colleague can chat and help keep work pleasant. A good colleague does not bully but treats others respectfully. A good colleague provides support as needed. A good colleague learns and develops with others. A good colleague is consistently at work and reliable. A good colleague adapts to how others are doing and shares work-related values. A good colleague may also do some socializing.

The authors argue that many robots already live up to several of these ideas about what a good colleague is, and that the robots in our workplaces will be even better colleagues in the future. The requirements are, as I said, lower than we first think, because they are not so much about the colleague’s inner human life, but more about reliably displayed behaviors in specific work situations. It is not difficult to imagine the criteria transformed into specifications for the robot developers. Much like in a job advertisement, which lists behaviors that the applicant should be able to exhibit.

The manager of a grocery store in this city advertised for staff. The ad contained strange quotation marks, which revealed how the manager demanded the facade of a human being rather than the interior. This is normal: to be a professional is to be able to play a role. The business concept of the grocery store was, “we care.” This idea was to become a positive “experience” for customers in their encounter with the staff. A greeting, a nod, a smile, a generally pleasant welcome, would give this “experience” that we “care about people.” Therefore, the manager advertised for someone who, in quotation marks, “likes people.”

If staff can be recruited in this way, why should we not want “cooperative,” “pleasant” and “reliable” robot colleagues in the same spirit? I am convinced that similar requirements already occur as specifications when robots are designed for different functions in our workplaces.

Life is not always deep and heartfelt, as the robotization of working life reflects. The question is what happens when human surfaces become so common that we forget the quotation marks around the mechanically functioning facades. Not everyone is as clear on that point as the “humanitarian” store manager was.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Nyholm, S., Smids, J. Can a Robot Be a Good Colleague?. Sci Eng Ethics 26, 2169–2188 (2020). https://doi.org/10.1007/s11948-019-00172-6

This post in Swedish

Approaching future issues

Global sharing of genomic data requires perspicuous research communication

To understand how our genes affect health and disease, drug reactions, and much more, researchers need to share vast amounts of data from people in different parts of the world. This makes genomic research dependent on public trust and support.

Do people in general trust research? Are we willing to donate DNA and health information to researchers? Are we prepared to let researchers share the information with other researchers, perhaps in other parts of the world? Even with researchers at for-profit companies? These and other issues were recently examined in the largest study to date about the public’s attitudes to participating in research and sharing genetic information. The questionnaire was translated into 15 languages and answered by 36,268 people in 22 countries.

The majority of respondents are unwilling or unsure about donating DNA and health information to research. In general, the respondents are most willing to donate to research physicians, and least willing to donate to for-profit researchers. Less than half of the respondents say they trust data sharing between several users. The study also reveals differences between countries. In Germany, Poland, Russia and Egypt, for example, trust in data sharing between several users is significantly lower than in China, India, the United Kingdom and Pakistan.

The study contains many more interesting results. For example, people who claim to be familiar with genetics are more willing to donate DNA and health data, especially those with personal experience of genetics: as patients, as members of families with hereditary disease, or through their profession. However, a clear majority say they are unfamiliar with the concepts of DNA, genetics and genomics. You can read all the results in the article, which was recently published in The American Journal of Human Genetics.

What practical conclusions can we draw from the study? The authors of the article emphasize the importance of increasing the public’s familiarity with genomic research. Researchers need to build trust in data collection and sharing. They need to participate in dialogues that make it clear why they share large amounts of data globally: why is it so important? They also need to make it more understandable why the research cannot be carried out by physicians alone. Why are collaborations with for-profit companies needed? Moreover, what significance can genetic techniques have for future patients?

Well-functioning genomic research thus needs well-functioning research communication. What then is good research communication? According to the article, it is not about pedagogically illustrating the molecular structure of DNA. Rather, it is about understanding the conditions and significance of genomic research for healthcare, patients, and society, as well as the role of industry in research and development.

Personally, I want to put it this way. Good research communication helps us see things more perspicuously. We need continuous overviews of interrelated parts of our own societies. We need to see our roles and relationships with each other in complex societies with different but intertwined activities, such as research, healthcare, industry, and much more. The need for perspicuous overviews also applies to the experts, whose specialties easily create one-sidedness.

In this context, let me cautiously warn against the instinctive assumption that debate is the obvious form of research-communicative exchange of ideas. Although debates have a role to play, they often serve as arenas for competing perspectives, each of which wants to narrow our field of view. This is probably the last thing we need if we want to open up perspicuous understandings of ourselves as human beings, researchers, donors, entrepreneurs, healthcare professionals and patients. How do we relate to each other? How do I, as a donor of DNA to researchers, relate to the patients I want to help?

We need to think carefully about what it means to think freely, together, about common issues, such as the global sharing of genomic data.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Middleton, A., Milne, R., Almarri, M.A., et al. (2020). Global public perceptions of genomic data sharing: what shapes the willingness to donate DNA and health data? American Journal of Human Genetics. https://doi.org/10.1016/j.ajhg.2020.08.023

This post in Swedish

We like broad perspectives

We shape the societies that shape us: our responsibility for human nature

Visionary academic texts are rare – texts that shed light on how research can contribute to the perennial human issues. In an article in the philosophical journal Theoria, however, Kathinka Evers opens up a novel visionary perspective on neuroscience and tragic aspects of the human condition.

For millennia, sensitive thinkers have been concerned about human nature. Undoubtedly, we humans create prosperity and security for ourselves. However, like no other animal, we also have an unfortunate tendency to create misery for ourselves (and other life forms). The 20th century was extreme in both directions. What is the mechanism behind our peculiar, large-scale, self-injurious behavior as a species? Can it be illuminated and changed?

As I read her, Kathinka Evers asks essentially this big human question. She does so based on the current neuroscientific view of the brain, which she argues motivates a new way of understanding and approaching the mechanism of our species’ self-injurious behavior. An essential feature of the neuroscientific view is that the human brain is designed to never be fully completed. Just as we have a unique self-injurious tendency as a species, we are born with uniquely incomplete brains. These brains are under construction for decades and need good care throughout this time. They are not formed passively, but actively, by finding more or less felicitous ways of functioning in the societies to which we expose ourselves.

Since our brains shape our societies, one could say that we build the societies that build us, in a continual cycle. The brain is right in the middle of this sensitive interaction between humans and their societies. With its creative variability, the human brain makes many deterministic claims about genetics and our “innate” nature problematic. Why are we humans the way we are? Partly because we create the societies that create us as we are. For millennia, we have generated ourselves through the societies that we have built, ignorant of the hyper-interactive organ in the middle of the process. It is always behind our eyes.

Kathinka Evers’ point is that our current understanding of the brain as inherently active, dynamic and variable, gives us a new responsibility for human nature. She expresses the situation technically as follows: neuroscientific knowledge gives us a naturalistic responsibility to be epigenetically proactive. If we know that our active and variable brains support a cultural evolution beyond our genetic heritage, then we have a responsibility to influence evolution by adapting our societies to what we know about the strengths and weaknesses of our brains.

The notion of a neuroscientific responsibility to design societies that shape human nature in desired ways may sound like a call for a new form of social engineering. However, Kathinka Evers develops the notion of this responsibility in the context of a conscientious review of similar tendencies in our history, tendencies that have often revolved around genetics. The aim of epigenetic proaction is not to support ideologies that have already decided what a human being should be like. Rather, it is about allowing knowledge about the brain to inspire social change where we would otherwise ignorantly risk recreating human misery. Of course, such knowledge presupposes collaboration between the natural, social and human sciences, in conjunction with free philosophical inquiry.

The article mentions juvenile violence as an example. In some countries, there is a political will to convict juvenile delinquents as if they were adults and even place them in adult prisons. Today, we know that during puberty, the brain is in a developmental crisis where important neural circuits change dramatically. Young brains in crisis need special care. However, in these cases they risk ending up in just the kind of social environments that we can predict will create more misery.

Knowledge about the brain can thus motivate social changes that reduce the peculiar self-injurious behavior of humanity, a behavior that has worried sensitive thinkers for millennia. Neuroscientific self-awareness gives us a key to the mechanism behind the behavior and a responsibility to use it.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Kathinka Evers. 2020. The Culture‐Bound Brain: Epigenetic Proaction Revisited. Theoria. doi:10.1111/theo.12264

This post in Swedish

We like challenging questions

What is required of an ethics of artificial intelligence?

I recently highlighted criticism of the ethics that often figures in the field of artificial intelligence (AI). An ethics that can handle the challenges that AI presents us with requires more than just beautifully formulated ethical principles, values and guidelines. What exactly is required of an ethics of artificial intelligence?

Michele Farisco, Kathinka Evers and Arleen Salles address the issue in the journal Science and Engineering Ethics. For them, ethics is not primarily principles and guidelines. Ethics is rather an ongoing process of thinking: it is continual ethical reflection on AI. Their question is thus not what is required of an ethical framework built around AI. Their question is what is required of in-depth ethical reflection on AI.

The authors emphasize conceptual analysis as essential in all ethical reflection on AI. One of the big difficulties is that we do not know exactly what we are discussing! What is intelligence? What is the difference between artificial and natural intelligence? How should we understand the relationship between intelligence and consciousness? Between intelligence and emotions? Between intelligence and insightfulness?

Ethical problems about AI can be both practical and theoretical, the authors point out. They describe two practical and two theoretical problems to consider. One practical problem is the use of AI in activities that require emotional abilities that AI lacks. Empathy gives humans insight into other humans’ needs. Therefore, AI’s lack of emotional involvement should be given special attention when we consider using AI in, for example, child or elderly care. The second practical problem is the use of AI in activities that require foresight. Intelligence is not just about reacting to input from the environment. A more active, foresighted approach is often needed, going beyond actual experience and seeing less obvious, counterintuitive possibilities. Crying can express pain, joy and much more, but AI cannot easily foresee less obvious possibilities.

Two theoretical problems are also mentioned in the article. The first is whether AI in the future may have morally relevant characteristics such as autonomy, interests and preferences. The second problem is whether AI can affect human self-understanding and create uncertainty and anxiety about human identity. These theoretical problems undoubtedly require careful analysis – do we even know what we are asking? In philosophy we often need to clarify our questions as we go along.

The article emphasizes one demand in particular on ethical analysis of AI. It should carefully consider morally relevant abilities that AI lacks, abilities needed to satisfy important human needs. Can we let a cute kindergarten robot “comfort” children when they scream with joy or when they injure themselves so badly that they need nursing?

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Farisco, M., Evers, K. & Salles, A. Towards establishing criteria for the ethical analysis of Artificial Intelligence. Science and Engineering Ethics (2020). https://doi.org/10.1007/s11948-020-00238-w

This post in Swedish

We want solid foundations

Unethical research papers should be retracted

Articles that turn out to be based on fraudulent or flawed research are, of course, retracted by the journals that published them. The fact that there is a clearly stated policy for retracting fraudulent research is extremely important. Science as well as its societal applications must be able to trust that published findings are correct and not fabricated or distorted.

However, how should we handle articles that turn out to be based on unethical research? For example, research on the bodies of executed prisoners? Or research that exposes participants to unreasonable risks? Or research supported by unacceptable sources of funding?

In a new article, William Bülow, Tove E. Godskesen, Gert Helgesson and Stefan Eriksson examine whether academic journals have clearly formulated policies for retracting papers that are based on unethical research. The review shows that many journals lack such policies. This introduces arbitrariness and uncertainty into the system, the authors argue. Readers cannot trust that published research is ethical. They also do not know on what grounds articles are retracted or remain in the journal.

To motivate a clearly stated policy, the authors discuss four possible arguments for retracting unethical research papers. Two arguments are considered particularly conclusive. The first is that such a policy communicates that unethical research is unacceptable, which can deter researchers from acting unethically. The second argument is that journals that make it possible to complete unethical research by publishing it and that benefit from it become complicit in the unethical conduct.

Retraction of research papers is a serious matter and very compromising for researchers. Therefore, it is essential to clarify which forms and degrees of unethical conduct are sufficient to justify retraction. The authors cite as examples research based on serious violations of human rights, unfree research and research with unacceptable sources of funding.

The article concludes by recommending that scientific journals introduce a clearly stated policy for retracting unethical research: a policy as clear as that for fraudulent research. Among other things, all retractions should be marked in the journal, and the reasons behind them should be specified in terms of both the kind and the degree of unethical conduct.

For more details on the policy recommendation, read the article in the Journal of Medical Ethics.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Bülow, W., Godskesen, T. E., Helgesson, G., Eriksson, S. Why unethical papers should be retracted. Journal of Medical Ethics, Published Online First: 13 August 2020. doi: 10.1136/medethics-2020-106140

This post in Swedish

We care about communication

Ethics as renewed clarity about new situations

An article in the journal Big Data & Society criticizes the form of ethics that has come to dominate research and innovation in artificial intelligence (AI). The authors question the same “framework interpretation” of ethics that you could read about on the Ethics Blog last week, however with one disquieting difference. Rather than functioning as a fence that sets necessary boundaries for development, the framework risks being used for ethics washing by AI companies that want to avoid legal regulation. By referring to ethical self-regulation – beautiful declarations of principles, values and guidelines – they hope to avoid the legal regulation that could set important limits for AI.

The problem with AI ethics as “soft ethics legislation” is not just that it can be used to avoid necessary legal regulation of the area. The problem is above all, according to the SIENNA researchers who wrote the article, that a “law conception of ethics” does not help us to think clearly about new situations. What we need, they argue, is an ethics that constantly renews our ability to see the new. This is because AI is constantly confronting us with new situations: new uses of robots, new opportunities for governments and companies to monitor people, new forms of dependence on technology, new risks of discrimination, and many other challenges that we may not easily anticipate.

The authors emphasize that such eye-opening AI ethics requires close collaboration with the social sciences. That, of course, is true. Personally, I want to emphasize that an ethics that renews our ability to see the new must also be philosophical in the deepest sense of the word. To see the new and unexpected, you cannot rest comfortably in your professional competence, with its established methods, theories and concepts. You have to question your own disciplinary framework. You have to think for yourself.

Read the article, which has already attracted well-deserved attention.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Rességuier, A., Rodrigues, R. 2020. AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data & Society.

This post in Swedish

We like critical thinking

Ethical frameworks for research

The term “ethical framework” evokes the idea of something rigid and separating, like the fence around a garden. The research that emerges within the framework is dynamic and constantly new. However, to ensure safety, it is placed within an ethical framework that sets clear boundaries for what researchers are allowed to do in their work.

That this is an oversimplified picture is clear after reading an inventive discussion of ethical frameworks in neuroscientific research projects, such as the Human Brain Project. The article is written by Arleen Salles and Michele Farisco at CRB and is published in AJOB Neuroscience.

The article questions not only the image of ethical frameworks as static boundaries for dynamic research activities. Inspired by ideas within so-called responsible research and innovation (RRI), the image that research can be separated from ethics and society is also questioned.

Researchers tend to regard research as their own concern. However, there are tendencies towards increasing collaboration, not only across disciplinary boundaries but also with stakeholders such as patients, industry and various forms of extra-scientific expertise. These tendencies make research an increasingly dispersed, common concern: not only in retrospect, in the form of applications, which presupposes that the research effort can be separated, but already when research is initiated, planned and carried out.

This could sound threatening, as if foreign powers were influencing the free search for truth. Nevertheless, there may also be something hopeful in the development. To see the hopeful aspect, however, we need to free ourselves from the image of ethical frameworks as static boundaries, separate from dynamic research.

With examples from the Human Brain Project, Arleen Salles and Michele Farisco try to show how ethical challenges in neuroscience projects cannot always be controlled in advance through declared principles, values and guidelines. Even ethical work is dynamic and requires living intelligent attention. The authors also try to show how ethical attention reaches all the way into the neuroscientific issues, concepts and working conditions.

When research on the human brain is not aware of its own cultural and societal conditions, but takes them for granted, it may mean that relevant questions are not asked and that research results do not always have the validity that one assumes they have.

We thus have good reasons to see ethical and societal reflections as living parts of neuroscience, rather than as rigid frameworks around it.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Arleen Salles & Michele Farisco (2020) Of Ethical Frameworks and Neuroethics in Big Neuroscience Projects: A View from the HBP, AJOB Neuroscience, 11:3, 167-175, DOI: 10.1080/21507740.2020.1778116

This post in Swedish

We like real-life ethics

Diversity in research: why do we need it? (by Karin Grasenick & Julia Trattnig)

Scientific discovery is based on the novelty of the questions you ask. This means that if you want to discover something new, you probably have to ask a different question. And since other people have different preconceptions and experiences than you do, they are likely to formulate their questions differently. This makes a case for diversity in research. If we want to make new discoveries that concern diverse groups, diversity in research becomes even more important.

The Human Brain Project participated in the FENS 2020 Virtual Forum this summer, an international virtual neuroscience conference that explores all domains of modern brain research. For the Human Brain Project (HBP), which is committed to responsible research and innovation, this includes diversity. That is why Karin Grasenick, Coordinator for Gender and Diversity in the HBP, explored the relationship between diversity and new discovery in the session “Of mice, men and machines” at FENS 2020.

So why is diversity in research crucial for making new discoveries? Research depends on the questions asked, the models used, and the details considered. For this reason, it is important to reflect on why certain variables are analysed, or which aspects might play a role. An example is Parkinson’s disease, where patients are affected differently depending on both age and gender. Being a (biological) man or woman, old or young, matters for both diagnosis and treatment. If diversity matters in research on Parkinson’s disease, it probably matters in most neuroscience. Apart from gender and age, we also need to consider other aspects of diversity, like race, ethnicity, education or social background. Because depending on who you are, biologically, culturally and socially, you are likely to need different things.

A quite recent example of this is Covid-19, which displays not only gender differences (it affects more men than women) but also racial differences: Black and Latino people in the US have been disproportionately affected, regardless of their living area (rural or urban) or their age (old or young). Again, the reasons for this are not simply biologically essentialist (e.g. hormones or chromosomes), but also linked to social aspects such as gendered lifestyles (men are more often smokers than women), inequities in the health system, or certain jobs that cannot be done remotely (see for example this BBC Future text on why Covid-19 is different for men and women, or this one on the racial inequity of coronavirus in The New York Times).

Another example is machine learning. If we train AI on data that is not representative of the population, we introduce bias into the algorithm. For example, applications for diagnosing skin cancer more often fail to recognize tumours in darker skin correctly because they are trained on pictures of fair skin. There are several reasons why an AI may not be trained properly: it could be a cost issue or a lack of material to train the AI on, but it is not unlikely that people with dark skin are discriminated against because scientists and engineers simply did not think about diversity when picking the material for the AI to train on. In the case of skin cancer, it is clear that diversity could indeed save lives.
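The mechanism can be made concrete with a small sketch. The following Python example (our own illustration, not code from the HBP or any real diagnostic system; the two “groups,” the features and all numbers are invented) trains a simple classifier on synthetic data dominated by one group and then measures its accuracy on each group separately:

# A minimal, hypothetical sketch of dataset bias: train on data dominated
# by group A, then compare accuracy per group. Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two synthetic features standing in for image properties; the "tumour"
    # signal sits at a different threshold per group (shift), standing in
    # for appearance differences such as skin tone.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Unbalanced training data: 950 samples from group A, only 50 from group B.
XA, yA = make_group(950, shift=0.0)
XB, yB = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([XA, XB]), np.concatenate([yA, yB]))

# Balanced, held-out test sets reveal the gap the training data created:
# high accuracy for group A, close to chance for group B.
for name, shift in [("group A (well represented)", 0.0),
                    ("group B (underrepresented)", 2.0)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name}: accuracy = {accuracy_score(y_test, model.predict(X_test)):.2f}")

The classifier looks accurate overall, because the test population is dominated by group A; only the per-group evaluation exposes the problem. Collecting representative training material, or at least reporting performance per group, is exactly the kind of detail a diversity-aware research team is more likely to think of.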

But where to start? When you do research, there are two questions that must be asked: First, what is the focus of your research? And second, who are the beneficiaries of your research?

Whenever your research focus includes tissues, cells, animals or humans, you should consider diversity factors like gender, age, race, ethnicity, and environmental influences. Moreover, any responsible scientist should consider who has access to their research and profits from it, as well as the consequences their research might have for end users or the broader public.

However, as a researcher you need to consider not only the research subjects and the people your results benefit. The diversity of the research team also matters, because different people perceive problems in different ways and use different methods and processes to solve them. That is why a diverse team is more innovative.

If you want to find out more about the role of diversity in research, check out the presentation “Of mice, men and machines” or read the blogpost on Common Challenges in Neuroscience, AI, Medical Informatics, Robotics and New Insights with Diversity & Ethics.

Written by…

Karin Grasenick, founder and managing partner of convelop, coordinates all issues related to Diversity and Equal Opportunities in the Human Brain Project and works as a process facilitator, coach and lecturer.

&

Julia Trattnig, consultant and scientific staff member at convelop, supports the Human Brain Project concerning all measures and activities for gender mainstreaming and diversity management.

We recommend readings

This is a guest blog post from the Human Brain Project (HBP). The HBP has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3).
