How can we set future ethical standards for ICT, Big Data, AI and robotics?

July 11, 2019

Do you use Google Maps to navigate in a new city? Ask Siri, Alexa or OK Google to play your favourite song, to help you find something on Amazon, or to read a text message from a friend while you are driving your car? Perhaps your car is fitted with a semi-autonomous adaptive cruise control system… If any software or machine is going to perform in any autonomous way, it needs to collect data: about you, where you are going, what songs you like, your shopping habits, who your friends are and what you talk about. This raises the question: are we willing to give up part of our privacy and personal liberty to enjoy the benefits technology offers?

It is difficult to predict the consequences of developing and using new technology. Policymakers struggle to assess the ethical, legal and human rights impacts of using different kinds of IT systems, in research, in industry and in our homes. Good policy should be helpful for everyone who holds a stake. We might want it to protect ethical values and human rights, make research and development possible, allow technology transfer from academia to industry, make sure that both large and small companies can develop their business, and ensure that there is social acceptance for technological development.

The European Union is serious about developing policy on the basis of sound research, rigorous empirical data and wide stakeholder consultation. In recent years, the Horizon 2020 programme has invested €10 million in three projects looking at the ethics and human rights implications of emerging digital technologies: PANELFIT, SHERPA and SIENNA.

The first project, PANELFIT (short for Participatory Approaches to a New Ethical and Legal Framework for ICT), will develop guidelines on the ethical and legal issues of ICT research and innovation. The second, SHERPA (short for Shaping the ethical dimensions of Smart Information Systems (SIS) – A European Perspective), will develop tools to identify and address the ethical dimensions of smart information systems (SIS), that is, the combination of artificial intelligence (AI) and big data analytics. The third, SIENNA (short for Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact), will develop research ethics protocols, professional ethical codes, and better ethical and legal frameworks for AI and robotics, human enhancement technologies, and human genomics.


All three projects involve experts, publics and stakeholders to co-create outputs, in different ways. They also support the European Union’s vision of Responsible Research and Innovation (RRI). SIENNA, SHERPA and PANELFIT recently published an editorial in the Orbit Journal, inviting stakeholders and publics to engage with the projects and contribute to the work.

Want to read more? Rowena Rodrigues and Anaïs Resseguier have written about some of the issues raised by the use of artificial intelligence on Ethics Dialogues (The underdog in the AI and ethical debate: human autonomy), and you can find out more about the SIENNA project in a previous post on the Ethics Blog (Ethics, human rights and responsible innovation).

Want to know more about the collaboration between SIENNA, SHERPA and PANELFIT? Read the editorial in Orbit (Setting future ethical standards for ICT, Big Data, AI and robotics: The contribution of three European Projects), or watch a video from our joint webinar on May 20, 2019 on YouTube (SIENNA, SHERPA, PANELFIT: Setting future ethical standards for ICT, Big Data, SIS, AI & Robotics).

Want to know how SIENNA views the ethical impacts of AI and robotics? Download infographic (pdf) and read our state-of-the-art review for AI & robotics (deliverable report).


Josepine Fernow

This post in Swedish

We want solid foundations - the Ethics Blog

 


Contemplative conversations

November 19, 2018

When we face new, sensitive and worrying issues, there is an instinctive reaction: this must be debated! But is debate always the right way, if we want to take human concerns seriously?

That some are worried about new research and technology is a fact. That others are not worried is also a fact. Suppose these people handle their differences by debating with each other. What happens?

What happens is that they leave the actual world, which varies as much as people are different, and end up in a universal world of rational reasons. Those who worry must argue for their concerns: All sensible people should feel worried! Those who are not worried must provide weighty counter-arguments: No sensible person should feel worried!

Debate thus creates an either/or conflict from what was only a difference. Polarization increases the fear, which amplifies the desire to be absolutely right. Everyone wants to own the uniquely compelling reason that everyone should obey. But since we are different, the debate becomes a vertiginous hall of mirrors. It multiplies exaggerated world images in which we lose ourselves and each other.

The worry itself, as trembling human fact, is forgotten. The only thing that engages us is the weighty reason for, or against, being worried. The only thing that interests us is what everyone should feel. Is that taking human concerns seriously? Is it taking ourselves seriously?

If a child is worried, we do not ask the child to argue for its worries, and we do not comfort the child by refuting them. We take care of the child; we take care of its worries, as compassionate parents.

I play with the idea that we and our societies would be in better shape if we more often avoided the absolute world of reasons. Through its universality, it appears, of course, like a utopia of peace and unity among rational beings. In fact, it often creates polarization and perplexes us with its exaggerated images of the world. Arguing for the right cause in debate is perhaps not always as noble as we take it to be.

We are, more often than we think, like children. That is, we are human. Therefore, we need, more often than we think, to take care of ourselves. As compassionate parents. That is another instinct, which could characterize conversations about sensitive issues.

We need to take care of ourselves. But how? What is the alternative to debate? For want of better words: contemplative conversations. Or, if you want: considerate conversations. Rather than polarizing, such an open spirit welcomes us all, with our actual differences.

Perhaps that is how we become adults with regard to the task of living well with each other. By tenderly taking care of ourselves as children.

Pär Segerdahl

This post in Swedish

We challenge habits of thought - the Ethics Blog

