A blog from the Centre for Research Ethics & Bioethics (CRB)

Author: josepinefernow

Antimicrobial resistance: bringing the AMR community together

According to the WHO, antibiotic resistance is one of the biggest threats to global health, food security and development. Most of the disease burden is in the global south, but drug-resistant infections can affect anyone, in any part of the world. Bacteria are always evolving, and antibiotic resistance is a natural process that develops through mutations. We can slow down the process by using antibiotics responsibly, but to save lives, we urgently need new antibiotics to fight the resistant bacteria that already threaten our health today.

There is a dilemma here: developing new antibiotics is a high-risk business with very low return on investment, and big pharma is leaving the antibiotics field for precisely this reason. Responsible use of antibiotics means saving new drugs for the most severe cases. Several initiatives are filling the gap this creates. One example is the Innovative Medicines Initiative AMR Accelerator programme, where 9 projects work together to fill the pipeline with new antibiotics and to develop tools and infrastructures that can support antibiotic development.

Antimicrobial resistance (AMR) to antibiotics and other anti-infectives is a community problem. Managing it requires a community that comes together to find solutions and to develop research infrastructures. For example, assessing the effectiveness of new antibiotics requires standardised, high-quality infection models that can be made available to the projects, companies and research groups that are developing new antibacterial treatments. Recently, the AMR Accelerator COMBINE project announced a collaboration with some of the big players in the field: CARB-X, CAIRD, iiCON and Pharmacology Discovery Services. This kind of collaboration allows key actors to come together and share both expertise and data. The COMBINE project is developing a standardised protocol for an in vivo pneumonia model. The protocol will become available to the scientific community, along with a bank of clinically relevant reference strains of Gram-negative bacteria and a framework, based on mathematical modelling approaches, to bridge the gap between preclinical data and clinical outcomes.

The benefit of a standardised model is harmonisation: ideally, data on how effective a new antibiotic candidate is should be the same regardless of which lab performed the experiments. The point of the collaboration is to improve the quality of the COMBINE pneumonia model. But who are the partners and what will they do? CARB-X (Combating Antibiotic-Resistant Bacteria Biopharmaceutical Accelerator) is a global non-profit partnership that supports early-stage antibacterial research and development. They will help validate the pneumonia model. CAIRD (Center for Anti-Infective Research and Development) works to advance anti-infective pharmacology. They are providing a benchmark through back-translation of clinical data. iiCON has a mission to accelerate and support the discovery and development of innovative new anti-infectives, diagnostics, and preventative products. They are supporting the mathematical modelling to ensure optimal dose selection. And finally, Pharmacology Discovery Services, a contract research organisation (CRO) working with preclinical antibacterial development, will supply efficacy data.

At the centre of this is the COMBINE project, which has a coordinating role in the AMR Accelerator: a cluster of public-private partnership projects funded by the Innovative Medicines Initiative (IMI). The AMR Accelerator brings together academia, the pharmaceutical industry, patient organisations, non-profits and small and medium-sized companies. The aim is to develop a robust pipeline of antibiotics, and standardised tools that can be used by others in this community, to help in the fight against antimicrobial resistance.

In parallel, the effort to slow down antibiotic resistance continues. For example, Uppsala University coordinates the COMBINE project, and in 2016, the University founded the Uppsala Antibiotic Center, a multidisciplinary centre for research, education, innovation and awareness. The centre runs the AMR Studio podcast, showcasing some of the multidisciplinary research on antimicrobial resistance around the world. The University is also coordinating the ENABLE-2 antibacterial drug discovery platform funded by the Swedish Research Council, with an open call to support programmes in the early stages of discovery and development of new antibiotics.

Our own efforts at the Centre for Research Ethics & Bioethics are more focused on how we as individuals can help slow down the development of antibiotic resistance, and on assessing how the framing of antibiotic treatments affects the preferences patients express.

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects

Do you want to know more?

EurekAlert! News release: Collaboration to improve the quality of in vivo antibiotics testing, 14 November 2023. https://www.eurekalert.org/news-releases/1007971

Ancillotti M, Nihlén Fahlquist J, Eriksson S, Individual moral responsibility for antibiotic resistance, Bioethics, 2022;36(1):3-9. https://doi.org/10.1111/bioe.12958

Smith IP, Ancillotti M, de Bekker-Grob EW, Veldwijk J. Does It Matter How You Ask? Assessing the Impact of Failure or Effectiveness Framing on Preferences for Antibiotic Treatments in a Discrete Choice Experiment. Patient Prefer Adherence. 2022;16:2921-2936. https://doi.org/10.2147/PPA.S365624

A shorter version of this post in Swedish

Approaching future issues

Taking care of the legacy: curating responsible research and innovation practice

Responsible research and innovation, or RRI as it is often called in EU project language, is both scholarship and practice. Over the last decade, the Human Brain Project has used structured and strategic approaches to embed responsible research and innovation practices across the project. The efforts to curate the legacy of this work include the development of an online Ethics & Society toolkit. But how does that work? And what does a toolkit need in order to ensure it has a role to play?

A recent paper by Lise Bitsch and Bernd Stahl in Frontiers in Research Metrics and Analytics explores whether this kind of toolkit can help embed the legacy of RRI activities in a large research project. According to them, a toolkit has the potential to play an important role in preserving RRI legacy. But they also point out that this potential can only be realised if organisational structures and funding are in place to make sure that the legacy is retained. Because like all resources, it needs to be maintained, shared, used and curated if it is to play a role in the long term.

Even though this particular toolkit is designed to integrate insights and practices of responsible research and innovation in the Human Brain Project, there are lessons to be learned for other efforts to ensure the acceptability, desirability and sustainability of the processes and outcomes of research and innovation activities. The Human Brain Project is a ten-year European Flagship project that has gone through several phases. Bernd Stahl is the ethics director of the Human Brain Project, and Lise Bitsch has led the project’s responsible research and innovation work stream for the past three years. And there is a lot to be learned: for projects that are considering developing similar tools, they describe the process of designing and developing the toolkit.

But there are parts of the RRI legacy that cannot fit in a toolkit. The impact of the ethical, social and reflective work in the Human Brain Project is visible in governance structures, in how the project manages and handles data, and in its publications and communications. The authors are part of those structures.

In addition to the Ethics & Society toolkit, the work has been published in journals, shared on the Ethics Dialogues blog (where a first version of this post was published) and via the HBP Society Twitter handle, offering more opportunities to engage and discuss in the EBRAINS community Ethics & Society space. The capacity-building efforts carried out for the project and the EBRAINS research infrastructure have been developed into an online ethics & society training resource, and the work on gender and diversity has resulted in a toolkit for equality, diversity and inclusion in project themes and teams.

Read the paper by Bernd Carsten Stahl and Lise Bitsch: Building a responsible innovation toolkit as project legacy.

(A first version of this post was originally published on the Ethics Dialogues blog, March 13, 2023)

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects

Bernd Carsten Stahl and Lise Bitsch: Building a responsible innovation toolkit as project legacy, Frontiers in Research Metrics and Analytics, 13 March 2023, Sec. Research Policy and Strategic Management, Volume 8 – 2023, https://doi.org/10.3389/frma.2023.1112106

Part of international collaborations

Science, science communication and language

All communication requires a shared language, and fruitful discussions rely on conceptual clarity and common terms. Different definitions and divergent nomenclatures are a challenge for science: across disciplines, between professions, and when engaging with different publics. The audience for science communication is diverse. Research questions and results need to be shared within the field, between fields, with policymakers and with publics. To be effective, the language, style and channel should be adapted to the audiences’ needs, values and expectations.

This is not just true of public-facing communication. A recent discussion in Neuron addresses the semantics of “sentience” in scientific communication, starting from an article by Brett J. Kagan et al. on how in vitro neurons learn and exhibit sentience when embodied in a simulated game world. The article was published in December 2022 and received a lot of attention: both positive media coverage and a mix of positive and negative reactions from the scientific community. In a response, Fuat Balci et al. express concerns about the article’s key claim: that the authors demonstrated that cortical neurons can (in vitro) self-organise and display intelligent and sentient behaviour in a simulated game world. Balci et al. are (among other things) critical of the use of terms and concepts that they claim misrepresent the findings. They also claim that Kagan et al. are overselling the translational and societal relevance of their findings, in essence creating hype around their own research. This raises a broader point about scientific communication: media tend to relay information from abstracts and statements about the significance of the research, and the scientists themselves amplify these statements in interviews. Balci et al. argue that overselling results affects how we evaluate scientific credibility and reliability.

Why does this happen? Balci et al. point to a 2021 paper by Jevin D. West and Carl T. Bergstrom on misinformation in and about science, suggesting that hype, hyperbole (using exaggeration as a figure of speech or rhetorical device) and publication bias may have to do with the demands of different productivity metrics. According to West and Bergstrom, exaggeration in popular scientific writing does not just misinform the public: it also misleads researchers, in turn leading to citation misdirection and citation bias. A related problem is predatory publishing, which has the potential to mislead those of us without the means to detect untrustworthy publishers. And to top it off, echo chambers and filter bubbles select and deselect information, amplifying the messages they think you want to hear.

The discussion in Neuron has continued with a response by Brett J. Kagan et al., in a letter about scientific communication and the semantics of sentience. They start by stating that the use of language to describe specific phenomena is a contentious aspect of scientific discourse, and that whether scientific communication is effective depends on the context where the language is used. In this case, they argue, the term “sentience” has a technical meaning in line with recent literature in theoretical biology and the free energy principle, where biotic self-organisation is defined as either active inference or sentient behaviour.

They make an interesting point that takes us back to the beginning of this post, namely the challenges of multidisciplinary work. Advancing research in cross-disciplinary collaboration is often challenging in the beginning because of difficulties integrating across fields. But if the different nomenclatures and approaches are recognized as an opportunity to improve and innovate, there can be benefits.

Recently, another letter, by Karen S. Rommelfanger, Khara M. Ramos and Arleen Salles, added a layer of reflection on the conceptual conundrums for neuroscience. In their own field of neuroethics, calls for clear language and concepts in scientific practice and communication are nothing new. They have all argued that conceptual clarity can improve science, enhance our understanding and lead to a more nuanced and productive discussion about the ethical issues. In the letter, the authors raise an important point about science and society. If we really believe that scientific terminology can retain its technically defined meaning when we transfer words to contexts permeated by a variety of cultural assumptions and colloquial uses of those same terms, we run the risk of trivialising the social and ethical impact that the choice of scientific terminology can have. They ask whether it is responsible of scientists to consider peers their only (relevant) audience, or whether conceptual clarity in science might often require public engagement and a multidisciplinary conversation.

One could also suggest that the choice to use terms like “sentience” and “intelligence” as technical characterisations of how cortical neurons function in a simulated in vitro game world could be considered questionable also from the point of view of scientific development. If we agree that neuroscience can shed light on sentience and intelligence, we also have to admit that, as of yet, we do not know exactly how it will illuminate these capacities. And perhaps that means it is too early to bind very specific technical meaning to terms that have both colloquial and cultural meaning, and which neuroscience can illuminate in as yet unknown ways?

You may wonder why an ethics blog writer dares to express views on scientific terminology. The point I am trying to make is that we all use language, but we also produce language. Every day. Together. In almost everything we do. This means that words like sentience and intelligence belong to us all. We have a shared responsibility for how we use them. The decision to give these common words technical meaning has consequences for how people will understand neuroscience when the words find their way back out of the technical context. But there can also be consequences for science when the words find their way in, as in the case under discussion. Because the boundaries between science and society may not be as clearly distinguishable as one might think.

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects

This post in Swedish

We care about communication

AI narratives from the Global North

The way we develop, adopt, regulate and accept artificial intelligence is embedded in our societies and cultures. Our narratives about intelligent machines take on a flavour of the art, literature and imaginations of the people who live today, and of those that came before us. But some of us are missing from the stories that are told about thinking machines. A recent paper about forgotten African AI narratives and the future of AI in Africa shines a light on some of the missing narratives.

In the paper, Damian Eke and George Ogoh point to the fact that how artificial intelligence is developed, adopted, regulated and accepted is hugely influenced by socio-cultural, ethical, political, media and historical narratives. But most of the stories we tell about intelligent machines are imagined and conceptualised in the Global North. This raises the question: is that a problem? And if so, in what way? When machine narratives put the emphasis on technology neutrality, it becomes a problem that goes beyond AI.

What happens when Global North narratives set the agenda for research and innovation also in the Global South, and what happens more specifically to the agenda for artificial intelligence? The impact is difficult to quantify. But when historical, philosophical, socio-cultural and political narratives from Africa are missing, we need to understand why, and what that might imply. Damian Eke and George Ogoh provide a list of reasons why this is important. One is concern about the state of STEM education (science, technology, engineering and mathematics) in many African countries. Another is the well-documented issue of epistemic injustice: unfair discrimination against people because of prejudices about their knowledge. The dominance of Global North narratives could lead to devaluing the expertise of Africans in the tech community. This brings us to the point of the argument: African socio-cultural, ethical and political contexts and narratives are absent from the global debate about responsible AI.

The paper makes the case for including African AI narratives not only in the research and development of artificial intelligence, but also in the ethics and governance of technology more broadly. Such inclusion would help counter epistemic injustice. If we fail to include narratives from the Global South in the AI discourse, the development can never be truly global. Moreover, excluding African AI narratives limits our understanding of how different cultures in Africa conceptualise AI, and we miss an important perspective on how people across the world perceive the risks and benefits of machine learning and AI-powered technology. Nor will we understand the many ways in which stories, art, literature and imaginations globally shape those perceptions.

If we want to develop an “AI for good”, it needs to be good for Africa and other parts of the Global South. According to Damian Eke and George Ogoh, it is possible to create a more meaningful and responsible narrative about AI. That requires that we identify and promote people-centred narratives. And anchor AI ethics for Africa in African ethical principles, like ubuntu. But the key for African countries to participate in the AI landscape is a greater focus on STEM education and research. The authors end their paper with a call to improve the diversity of voices in the global discourse about AI. Culturally sensitive and inclusive AI applications would benefit us all, for epistemic injustice is not just a geographical problem. Our view of whose knowledge has value is powered by a broad variety of forms of prejudice.

Damian Eke and George Ogoh are both actively contributing to the Human Brain Project’s work on responsible research and innovation. The Human Brain Project is a European Flagship project providing in-depth understanding of the complex structure and function of the human brain, using interdisciplinary approaches.

Do you want to learn more? Read the article here: Forgotten African AI Narratives and the future of AI in Africa.

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects

Eke D, Ogoh G, Forgotten African AI Narratives and the future of AI in Africa, International Review of Information Ethics, 2022;31(08).

We want to be just

Human enhancement: Time for ethical guidance!

Perhaps you also dream about being more than you are: faster, better, bolder, stronger, smarter, and maybe more attractive? Until recently, technology to improve and enhance our abilities was mostly science fiction, but today we can augment our bodies and minds in ways that challenge our notions of normal and abnormal, blurring the lines between treatments and enhancements. Very few scientists and companies that develop medicines, prosthetics, and implants would say that they are in the human enhancement business. But the technologies they develop still manage to move from one domain to another. Our bodies allow for physical and cosmetic alterations. And there are attempts to make us live longer. Our minds can also be enhanced in several ways: our feelings and thoughts, perhaps also our morals, could be improved, or corrupted.

We recognise this tension from familiar debates about more common uses of enhancements: doping in sports, or students using ADHD medicines to study for exams. But there are other examples of technologies that can be used to enhance abilities. In the military context, altering our morals or using cybernetic implants could give us ‘super soldiers’. Using neuroprostheses to replace or improve memory that was damaged by neurological disease would be considered a treatment. But what happens when the same technology is repurposed to improve memory or another cognitive function in the healthy?

There have been calls for regulation and ethical guidance, but because very few of the researchers and engineers who develop these technologies would call themselves enhancers, the efforts have not been very successful. Perhaps now is a good time to develop guidelines? But what is the best approach? Should we draft a set of self-contained general ethical guidelines, or is the field so disparate that it requires field- or domain-specific guidance?

The SIENNA project (Stakeholder-Informed Ethics for New technologies with high socio-ecoNomic and human rights impAct) has been tasked with developing this kind of ethical guidance for Human Enhancement, Human Genetics, and Artificial Intelligence and Robotics: three very different technological domains. Not surprisingly, given how difficult the field is to delineate, human enhancement has proved by far the most challenging. For almost three years, the SIENNA project has mapped the field, analysed the ethical implications and legal requirements, surveyed how research ethics committees address the ethical issues, and proposed ways to improve existing regulation. We have received input from stakeholders, experts, and publics. Industry representatives, academics, policymakers and ethicists have participated in workshops and reviewed documents. Focus groups in five countries and surveys of 11,000 people in 11 countries in Europe, Africa, Asia, and the Americas have also provided insight into the public’s attitudes to using different technologies to enhance abilities or performance. This resulted in an ethical framework, outlining several options for how to approach the process of translating it into practical ethical guidance.

The framework for human enhancement is built on three case studies that can bring some clarity to what is at stake in a very diverse field: antidepressants, dementia treatment, and genetics. These case studies have shed some light on the kinds of issues that are likely to appear, and on the difficulties involved in the complex task of developing ethical guidelines for human enhancement technologies.

Many of these technologies, their applications, and their enhancement potentials are in their infancy. So perhaps this is the right time to promote ways for research ethics committees to inform researchers about the ethical challenges associated with human enhancement, and to encourage researchers to reflect on the potential enhancement impacts of their work in ethics self-assessments.

And perhaps it is time for ethical guidance for human enhancement after all? At least there is now an opportunity for you and others to give input in a public consultation in mid-January 2021! If you want to give input on SIENNA’s proposals for human enhancement, human genomics, artificial intelligence, and robotics, visit the website to sign up for news: www.sienna-project.eu.

The public consultation will launch on January 11, the deadline to submit a response is January 25, 2021. 

Josepine Fernow

Written by…

Josepine Fernow, Coordinator at the Centre for Research Ethics & Bioethics (CRB), and communications leader for the SIENNA project.


This post in Swedish

Diversity in research: why do we need it? (by Karin Grasenick & Julia Trattnig)

Scientific discovery is based on the novelty of the questions you ask. This means that if you want to discover something new, you probably have to ask a different question. And since different people have preconceptions and experiences that differ from yours, they are likely to formulate their questions differently. This makes a case for diversity in research. If we want to make new discoveries that concern diverse groups, diversity in research becomes even more important.

The Human Brain Project participated in the FENS 2020 Virtual Forum this summer, an international virtual neuroscience conference that explores all domains of modern brain research. For the Human Brain Project (HBP), committed to responsible research and innovation, this includes diversity. That is why Karin Grasenick, Coordinator for Gender and Diversity in the HBP, explored the relationship between diversity and new discovery in the session “Of mice, men and machines” at FENS 2020.

So why is diversity in research crucial for making new discoveries? Research depends on the questions asked, the models used, and the details considered. For this reason, it is important to reflect on why certain variables are analysed, or which aspects might play a role. An example is Parkinson’s disease, where patients are affected differently depending on both age and gender. Being a (biological) man or woman, old or young, is important for both diagnosis and treatment. If diversity matters in research on Parkinson’s disease, it probably matters in most neuroscience. Apart from gender and age, we also need to consider other aspects of diversity, like race, ethnicity, education or social background. Because depending on who you are, biologically, culturally and socially, you are likely to need different things.

A recent example of this is Covid-19, which displays not only gender differences (it affects more men than women) but also racial differences: Black and Latino people in the US have been disproportionately affected, regardless of where they live (rural or urban) or their age (old or young). Again, the reasons for this are not simply biologically essentialist (e.g. hormones or chromosomes), but also linked to social aspects such as gendered lifestyles (men are more often smokers than women), inequities in the health system, or certain jobs that cannot be done remotely (see for example this BBC Future text on why Covid-19 is different for men and women, or this one on the racial inequity of coronavirus in The New York Times).

Another example is machine learning. If we train AI on data that is not representative of the population, we introduce bias into the algorithm. For example, applications that diagnose skin cancer more often fail to recognise tumours in darker skin correctly because they are trained on pictures of fair skin. There are several reasons why an AI might not be trained properly: it could be a cost issue, or a lack of material to train the AI on. But it is not unlikely that people with dark skin are discriminated against simply because scientists and engineers did not think about diversity when picking the material for the AI to train on. In the case of skin cancer, it is clear that diversity could indeed save lives.
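To make the mechanism concrete, here is a deliberately simplified sketch with made-up numbers (not real clinical data, and far simpler than any real diagnostic model): a naive threshold “classifier” is fitted on a dataset dominated by one group, and its accuracy drops for the under-represented group, whose feature values are distributed differently.

```python
# Hypothetical illustration of training-data bias: all numbers are invented.
# Each sample is (feature_value, label), where label 1 means "malignant".

def fit_threshold(samples):
    """Fit a one-feature classifier: the midpoint between the mean
    benign and mean malignant feature value in the training data."""
    benign = [x for x, label in samples if label == 0]
    malignant = [x for x, label in samples if label == 1]
    return (sum(benign) / len(benign) + sum(malignant) / len(malignant)) / 2

def accuracy(threshold, samples):
    """Fraction of samples the threshold rule classifies correctly."""
    correct = sum((x > threshold) == bool(label) for x, label in samples)
    return correct / len(samples)

# The feature is distributed differently in the two groups (think: lesion
# contrast measured on fair vs dark skin). Group A dominates the data.
group_a = [(0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1)] * 50  # 200 samples
group_b = [(0.5, 0), (0.6, 0), (0.9, 1), (1.0, 1)] * 2   # only 8 samples

# The fitted threshold is pulled almost entirely towards group A.
threshold = fit_threshold(group_a + group_b)

print(accuracy(threshold, group_a))  # → 1.0  (perfect for the majority group)
print(accuracy(threshold, group_b))  # → 0.75 (benign cases misread as malignant)
```

The fix in this toy setting is the same as in the real one: collect representative data for both groups (or at least evaluate accuracy per group), rather than reporting a single overall accuracy that the majority group dominates.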

But where to start? When you do research, there are two questions that must be asked: First, what is the focus of your research? And second, who are the beneficiaries of your research?

Whenever your research focus includes tissues, cells, animals or humans, you should consider diversity factors like gender, age, race, ethnicity, and environmental influences. Moreover, any responsible scientist should consider who has access to their research and profits from it, as well as the consequences their research might have for end users or the broader public.

However, as a researcher you need to consider not only the research subjects and the people your results benefit. The diversity of the research team also matters, because different people perceive problems in different ways and use different methods and processes to solve them. This is why a diverse team is more innovative.

If you want to find out more about the role of diversity in research, check out the presentation “Of mice, men and machines” or read the blogpost on Common Challenges in Neuroscience, AI, Medical Informatics, Robotics and New Insights with Diversity & Ethics.

Written by…

Karin Grasenick, founder and managing partner of convelop, coordinates all issues related to Diversity and Equal Opportunities in the Human Brain Project and works as a process facilitator, coach and lecturer.

&

Julia Trattnig, consultant and scientific staff member at convelop, supports the Human Brain Project concerning all measures and activities for gender mainstreaming and diversity management.

We recommend readings

This is a guest blog post from the Human Brain Project (HBP). The HBP has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3).


How can we set future ethical standards for ICT, Big Data, AI and robotics?

Do you use Google Maps to navigate in a new city? Ask Siri, Alexa or OK Google to play your favourite song? To help you find something on Amazon? To read a text message from a friend while you are driving your car? Perhaps your car is fitted with a semi-autonomous adaptive cruise control system… If any software or machine is going to perform in any autonomous way, it needs to collect data: about you, where you are going, what songs you like, your shopping habits, who your friends are and what you talk about. This raises the question: are we willing to give up part of our privacy and personal liberty to enjoy the benefits technology offers?

It is difficult to predict the consequences of developing and using new technology. Policymakers struggle to assess the ethical, legal and human rights impacts of using different kinds of IT systems in research, in industry and in our homes. Good policy should be helpful for everyone who holds a stake. We might want it to protect ethical values and human rights, make research and development possible, allow technology transfer from academia to industry, make sure both large and smaller companies can develop their business, and ensure social acceptance of technological development.

The European Union is serious about developing policy on the basis of sound research, rigorous empirical data and wide stakeholder consultation. In recent years, the Horizon 2020 programme has invested €10 million in three projects looking at the ethics and human rights implications of emerging digital technologies: PANELFIT, SHERPA and SIENNA.

The first project, PANELFIT (short for Participatory Approaches to a New Ethical and Legal Framework for ICT), will develop guidelines on the ethical and legal issues of ICT research and innovation. The second, SHERPA (short for Shaping the ethical dimensions of Smart Information Systems (SIS) – A European Perspective), will develop tools to identify and address the ethical dimensions of smart information systems (SIS), the combination of artificial intelligence (AI) and big data analytics. The third, SIENNA (short for Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact), will develop research ethics protocols, professional ethical codes, and better ethical and legal frameworks for AI and robotics, human enhancement technologies, and human genomics.


All three projects involve experts, publics and stakeholders in different ways to co-create outputs. They also support the European Union's vision of Responsible Research and Innovation (RRI). SIENNA, SHERPA and PANELFIT recently published an editorial in the Orbit Journal, inviting stakeholders and publics to engage with the projects and contribute to the work.

Want to read more? Rowena Rodrigues and Anaïs Resseguier have written about some of the issues raised by the use of artificial intelligence on Ethics Dialogues (The underdog in the AI and ethical debate: human autonomy), and you can find out more about the SIENNA project in a previous post on the Ethics Blog (Ethics, human rights and responsible innovation).

Want to know more about the collaboration between SIENNA, SHERPA and PANELFIT? Read the editorial in Orbit (Setting future ethical standards for ICT, Big Data, AI and robotics: The contribution of three European Projects), or watch a video from our joint webinar on May 20, 2019 on YouTube (SIENNA, SHERPA, PANELFIT: Setting future ethical standards for ICT, Big Data, SIS, AI & Robotics).

Want to know how SIENNA views the ethical impacts of AI and robotics? Download the infographic (pdf) and read our state-of-the-art review for AI & robotics (deliverable report).


Josepine Fernow

This post in Swedish

We want solid foundations - the Ethics Blog

 

Ethics, human rights and responsible innovation

It is difficult to predict the consequences of developing and using new technologies. We interact with smart devices and intelligent software on an almost daily basis. Some of us use prosthetics and implants to go about our business, and most of us will likely live to see self-driving cars. In the meantime, Swedish research shows that petting robot cats looks promising in the care of patients with dementia. Genetic tests are cheaper than ever, and available to both patients and consumers. If you spit in a tube and mail it to a US company, they will tell you where your ancestors are from. Who knows? You could be part sub-Saharan African and part Scandinavian at the same time, and (likely) still be you.

Technologies, new and old, have both ethical and human rights impacts. Today, we are closer to scenarios we only pictured in science fiction a few decades ago. Technology develops fast, and it is difficult to predict what is on the horizon. The legislation, regulation and ethical guidance we have today was developed for a different future. Policymakers struggle to assess the ethical, legal and human rights impacts of new and emerging technologies. These frameworks are challenged when a country like Saudi Arabia, criticised for not giving equal rights to women, offers a robot honorary citizenship. This autumn marks the start of a research initiative that will look at some of these questions. A group of researchers from Europe, Asia, Africa and the Americas are joining forces to help improve the ethical and legal frameworks we have today.

The SIENNA project (short for Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact) will deliver proposals for professional ethics codes, guidelines for research ethics committees and better regulation in three areas: human genetics and genomics, human enhancement, and artificial intelligence & robotics. The proposals will build on input from stakeholders, experts and citizens. SIENNA will also look at some of the more philosophical questions these technologies raise: Where do we draw the line between health and illness, normality and abnormality? Can we expect intelligent software to be moral? Do we accept giving up some of our privacy to screen our genome for genetic disorders? And if giving up some of our personal liberty is the price we have to pay to interact with machines, are we willing to pay it?

The project is coordinated by the University of Twente. Uppsala University's Centre for Research Ethics & Bioethics contributes expertise on the ethical, legal and social issues of genetics and genomics, and experience of communicating European research. Visit the SIENNA website at www.sienna-project.eu to find out more about the project and our partners!

Josepine Fernow

The SIENNA project – Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact – has received just under €4 million for a 3.5-year project under the European Union's H2020 research and innovation programme, grant agreement No 741716.

Disclaimer: This text and its contents reflect only SIENNA's view. The Commission is not responsible for any use that may be made of the information it contains.

SIENNA project

This post in Swedish

Approaching future issues - the Ethics Blog

Research data, health cyberspace and direct-to-consumer genetic testing

We live in a global society, which means that several actors regulate both research and services directed at consumers. It is time again for our newsletter on current issues in biobank ethics and law. This time, Biobank Perspectives lets you read about the legal aspects of direct-to-consumer genetic testing. Santa Slokenberga writes about her doctoral dissertation in law from Uppsala University and how the Council of Europe and the EU interact with each other and with the legal systems of the member states. She believes direct-to-consumer genetic testing can be seen as a "test" of the European legal orders, showing us that there is a need for formal cooperation and convergence, as seemingly small matters can lead to large consequences.

We also follow up on a previous report on the General Data Protection Regulation from a Swedish perspective with more information about the Swedish Research Data Inquiry. We are also happy to announce that a group of researchers from the University of Oxford, University of Iceland, University of Oslo and the Centre for Research Ethics & Bioethics at Uppsala University has received a Nordforsk grant to find solutions for governance of the "health cyberspace" that is emerging from assembling and using existing data for new purposes. To read more, download a pdf of the latest issue (4:2016), or visit the Biobank Perspectives site for more ethical and legal perspectives on biobank and registry research.

Josepine Fernow

This post in Swedish

Approaching future issues - the Ethics Blog

More biobank perspectives

If you did not get your fill during Europe Biobank Week in Vienna, we give you more biobank-related news in the latest issue of Biobank Perspectives, our newsletter on current issues in biobank ethics and law.

This time, Moa Kindström Dahlin describes what BBMRI-ERIC's new federated Helpdesk for ELSI issues can offer. We also invite you to discuss public-private partnerships in research at a workshop in Uppsala on 7-8 November.

The legislative process on data protection in the EU might be over for now, but there is still activity in government offices. Anna-Sara Lind gives you her view on the consequences for Sweden. We are also happy to announce that the guidelines for informed consent in collaborative rare disease research have received the IRDiRC Recognized Resources label.

You can read the newsletter on our website, or download a pdf version.

Josepine Fernow & Anna-Sara Lind

This post in Swedish

We recommend readings - the Ethics Blog

 
