A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: law

Human rights and legal issues related to artificial intelligence

How do we take responsibility for a technology that is used almost everywhere? As we develop more and more uses of artificial intelligence (AI), it becomes increasingly difficult to get an overview of how this technology can affect people and human rights.

Although AI legislation is already being developed in several areas, Rowena Rodrigues argues that we need a panoramic overview of the widespread challenges. What does the situation look like? Where can human rights be threatened? How are the threats handled? Where do we need to make greater efforts? In an article in the Journal of Responsible Technology, she offers such an overview, which she then discusses through the concept of vulnerability.

The article identifies ten problem areas. One problem is that AI makes decisions based on algorithms whose decision processes are not completely transparent. Why did I not get the job, the loan or the benefit? Hard to know when computer programs deliver the decisions as if they were oracles! Other problems concern security and liability, for example when automatic decision-making is used in cars, medical diagnosis, weapons, or when governments monitor citizens. Further problem areas involve risks of discrimination or invasion of privacy when AI collects and uses large amounts of data to make decisions that affect individuals and groups. In the article you can read about more problem areas.

For each of the ten challenges, Rowena Rodrigues identifies solutions that are currently in place, as well as the challenges that remain to be addressed. Human rights are then discussed. Rowena Rodrigues argues that international human rights treaties, although they do not mention AI, are relevant to most of the issues she has identified. She emphasises the importance of safeguarding human rights from a vulnerability perspective. Through such a perspective, we see more clearly where and how AI can challenge human rights. We see more clearly how we can reduce negative effects, develop resilience in vulnerable communities, and tackle the root causes of the various forms of vulnerability.

Rowena Rodrigues is linked to the SIENNA project, which ends this month. Read her article on the challenges of a technology that is used almost everywhere: Legal and human rights issues of AI: Gaps, challenges and vulnerabilities.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Rowena Rodrigues. 2020. Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology 4. https://doi.org/10.1016/j.jrt.2020.100005

This post in Swedish

We recommend readings

Learning from international attempts to legislate psychosurgery

So-called psychosurgery, in which psychiatric disorders are treated by neurosurgery, for example by cutting connections in the brain, may have a somewhat tarnished reputation after the insensitive use of lobotomy in the 20th century to treat anxiety and depression. Nevertheless, neurosurgery for psychiatric disorders can help some patients, and the field is developing rapidly. It probably needs updated regulation, but what are the challenges?

The issue is examined from an international perspective in an article in Frontiers in Human Neuroscience. Neurosurgery for psychiatric disorders does not have to involve destroying brain tissue or cutting connections. In so-called deep brain stimulation, for example, electrical pulses are sent to certain areas of the brain. The method has been shown to relieve movement disorders in patients with Parkinson’s disease. This unexpected possibility illustrates one of the challenges. How do we delimit which treatments the regulation should cover in an area with rapid scientific and technical development?

The article charts legislation on neurosurgery for psychiatric disorders from around the world. The purpose is to find strengths and weaknesses in the various legislations. The survey aims to identify reasonable ways of dealing with the challenges in the future, while achieving greater international harmonisation. The challenges are, as I said, several, but regarding the challenge of delimiting which treatments are to be covered, the legislation in Scotland is mentioned as an example. It does not provide an exhaustive list of treatments covered by the regulation, but states that treatments other than those listed may also be covered.

If you are interested in law and want a more detailed picture of the questions that need to be answered for a good regulation of the field, read the article: International Legal Approaches to Neurosurgery for Psychiatric Disorders.

Pär Segerdahl


Chandler JA, Cabrera LY, Doshi P, Fecteau S, Fins JJ, Guinjoan S, Hamani C, Herrera-Ferrá K, Honey CM, Illes J, Kopell BH, Lipsman N, McDonald PJ, Mayberg HS, Nadler R, Nuttin B, Oliveira-Maia AJ, Rangel C, Ribeiro R, Salles A and Wu H (2021) International Legal Approaches to Neurosurgery for Psychiatric Disorders. Front. Hum. Neurosci. 14:588458. doi: 10.3389/fnhum.2020.588458

This post in Swedish

Thinking about law

How can we set future ethical standards for ICT, Big Data, AI and robotics?

Do you use Google Maps to navigate in a new city? Ask Siri, Alexa or OK Google to play your favourite song? To help you find something on Amazon? To read a text message from a friend while you are driving your car? Perhaps your car is fitted with a semi-autonomous adaptive cruise control system… If any software or machine is going to perform in any autonomous way, it needs to collect data. About you, where you are going, what songs you like, your shopping habits, who your friends are and what you talk about. This raises the question: are we willing to give up part of our privacy and personal liberty to enjoy the benefits technology offers?

It is difficult to predict the consequences of developing and using new technology. Policymakers struggle to assess the ethical, legal and human rights impacts of using different kinds of IT systems in research, in industry and in our homes. Good policy should be helpful for everyone who holds a stake. We might want it to protect ethical values and human rights, make research and development possible, allow technology transfer from academia to industry, make sure both large and smaller companies can develop their business, and make sure that there is social acceptance for technological development.

The European Union is serious about developing policy on the basis of sound research, rigorous empirical data and wide stakeholder consultation. In recent years, the Horizon 2020 programme has invested €10 million in three projects looking at the ethics and human rights implications of emerging digital technologies: PANELFIT, SHERPA and SIENNA.

The first project, PANELFIT (short for Participatory Approaches to a New Ethical and Legal Framework for ICT), will develop guidelines on the ethical and legal issues of ICT research and innovation. The second, SHERPA (short for Shaping the ethical dimensions of Smart Information Systems (SIS) – A European Perspective), will develop tools to identify and address the ethical dimensions of smart information systems (SIS), the combination of artificial intelligence (AI) and big data analytics. The third, SIENNA (short for Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact), will develop research ethics protocols, professional ethical codes, and better ethical and legal frameworks for AI and robotics, human enhancement technologies, and human genomics.


All three projects involve experts, publics and stakeholders in different ways to co-create outputs. They also support the European Union’s vision of Responsible Research and Innovation (RRI). SIENNA, SHERPA and PANELFIT recently published an editorial in the Orbit Journal, inviting stakeholders and publics to engage with the projects and contribute to the work.

Want to read more? Rowena Rodrigues and Anaïs Resseguier have written about some of the issues raised by the use of artificial intelligence on Ethics Dialogues (The underdog in the AI and ethical debate: human autonomy), and you can find out more about the SIENNA project in a previous post on the Ethics Blog (Ethics, human rights and responsible innovation).

Want to know more about the collaboration between SIENNA, SHERPA and PANELFIT? Read the editorial in Orbit (Setting future ethical standards for ICT, Big Data, AI and robotics: The contribution of three European Projects), or watch a video from our joint webinar on May 20, 2019 on YouTube (SIENNA, SHERPA, PANELFIT: Setting future ethical standards for ICT, Big Data, SIS, AI & Robotics).

Want to know how SIENNA views the ethical impacts of AI and robotics? Download infographic (pdf) and read our state-of-the-art review for AI & robotics (deliverable report).


Josepine Fernow

This post in Swedish

We want solid foundations - the Ethics Blog


Fourth issue of our newsletter about biobanks

Now you can read the fourth newsletter this year from CRB and BBMRI.se about ethical and legal issues in biobanking:

The newsletter contains three news items:

  1. Moa Kindström Dahlin describes the work on ethical and legal issues in the European platform for biobanking, BBMRI-ERIC, and reflects on what law is.
  2. Josepine Fernow features two PhD projects on research participants’ and patients’ preferences and perceptions of risk information.
  3. Anna-Sara Lind discusses the ruling of the European Court of Justice against the Safe Harbour agreement with the United States.

(Link to PDF version of the newsletter)

And finally, a link to the December issue of the newsletter from BBMRI.se:

Merry Christmas and a Happy New Year!

Pär Segerdahl

We recommend readings - the Ethics Blog

All you need is law? The ethics of legal scholarship (By Moa Kindström Dahlin)

Working as a lawyer in a multidisciplinary centre for research ethics and bioethics, as I do, often brings up questions regarding the relationship between law and ethics. What kind of ethical competence do academic lawyers need, and what kind of ethical challenges do we face? I will try to address some aspects of these challenges.

First, I must confess: I am a believer, a believer in law.

That does not mean that I automatically like all regulations; it is just that I cannot see a better way to run the world than through a common system of legal norms. Believing in law means that I accept living in a different universe. I know that non-lawyers cannot always see my universe, but I see it clearly, and I believe in it. You will have to trust me – and all other lawyers: through training and education, we see this parallel universe and believe in it.

I do not always like what I see, but I do accept that it exists.

I think that understanding a lawyer’s understanding of what law is, is a necessary precondition for going deeper into the understanding of what I here refer to as the ethics of legal scholarship. So, what is law? This question has a thousand answers, stemming from different philosophical theories, but I choose to put it like this:

Law is an idea as well as a practical reality and a practice.

As a reality, law is the sum of all regulation, locally (e.g. Sweden), regionally (e.g. Europe) and internationally: the statutes, the preparatory works, court decisions, the academic legal literature, the general legal principles and other legal sources in which we find the answers to questions such as “Is it legal to do this or that?” or “Might I be responsible for this specific act in some way?”

The practice of law has to do with the application of general legal knowledge (whatever that means) to a specific case, and this application always involves interpretation. This means that law is contextual. The result of its application differs depending on situation, time and place.

Law as an idea is the illusion that there are legal answers out there somewhere, ready to be discovered, described and applied. Lawyers live in a universe where this illusion is accepted, although every lawyer knows that this is oversimplified. There is rarely an obvious answer to a posed question, and there are often several different interpretations that can be made.

The legal universe is a universe of planets and orbits: different legal sources and jurisdictions, different legal traditions and ideas on how to interpret legal sources. There are numerous legal theories, perspectives and ideologies: legal positivism, critical legal studies, law and economics and therapeutic jurisprudence to name a few. The way we, the lawyers, choose to look at the law – the lens of our telescope if you like – affects how we perceive and decipher what we see.

Law is sometimes described as codified ethics. The legal system of a state often provides structures and systems for new technologies and medical progress. Therefore, law plays an important role when analyzing a state’s political system or the organization of its welfare system.

Law, in short, is a significant piece of a puzzle in the world as we know it.

This means that the idea of law as something concrete, something we can discover and describe, creates our perception of reality. Yet we must be aware of the fact that the law itself is intangible, and that answers to legal questions might differ depending on who (which lawyer) is making the analysis and which lens is being used.

Sometimes the answer is clear and precise, but many times the answer is vague and blurry. When the law seems unclear, it is up to us, the lawyers, to heal it.

We cannot accept “legal gaps”.

The very idea that law is a system that provides all the answers means that we must try to find all the answers within the system. If we cannot find them, we have to create them. Therefore, proposing and creating legal answers is one of the tasks for legal scholars. With this task comes great power. If a lawyer states that something is a description of what law is, such a description may be used as an argument for a political development in that direction.

Therefore, descriptions of what law is and what is legal within a field – especially if the regulation in the field is new or under revision – must always be nuanced and clearly motivated. If a statement as to what law is emanates from certain starting points, this should be made explicit in order to make the reasoning transparent.

This is what I would like to call the ethics of legal scholarship.

It is worth repeating: Research within legal scholarship always requires thoughtfulness. We, the scholars, have to be careful and ethically aware all the time. Our answers and statements about the law are always normative, never just descriptive. Every time an academic lawyer answers a question, the answer or statement might itself become a legal source and be referred to as a part of the law.

Law is constantly reconstructing itself and is, to some extent, self-sufficient. But if law is law, does that mean that all you need is law?

Moa Kindström Dahlin

Thinking about law - the Ethics Blog