A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: future prospects

Patient views on treatment of Parkinson’s disease with embryonic stem cells

Stem cells taken from human embryos very early after fertilization can be grown as embryonic stem cell lines. These embryonic stem cells are called pluripotent, as they can differentiate into virtually all of the body’s cell types (without being able to develop into an individual). The medical interest in embryonic stem cells is related to the possibility of using them to regenerate damaged tissue. Parkinson’s disease is one of the diseases for which researchers hope to develop such stem cell treatments.

In Sweden, it is permitted to use leftover donated embryos from IVF treatment for research purposes, but not to produce medical products. The path towards possible future treatments is lined with legal and ethical uncertainties. In addition, the moral status of the embryo has been debated for a very long time, without any consensus having been reached.

In this situation, studies of people’s perceptions of the use of human embryonic stem cells for the development of medical treatments become urgent. Recently, the first study of the perceptions of patients, the group that may become recipients, was published. It is an interview study with seventeen patients in Sweden who have Parkinson’s disease, authored by Jennifer Drevin together with six co-authors.

The interviewees were generally positive about using human embryonic stem cells to treat Parkinson’s disease. They did not regard the embryo as a life with human rights, but at the same time they saw the embryo as something special. They considered the embryo to have great value for the couple who want to become parents, and emphasized the importance of the woman’s or the couple’s free and informed consent to donation. As patients, they expressed interest in a treatment that did not limit everyday life through, for example, complicated daily medication. They were interested in better cognitive and communicative abilities and wanted to be more independent: not having to ask family members for support in everyday tasks. The effectiveness of the treatment was considered important, and there was concern that stem cell treatment might not be effective enough, or might have side effects.

Furthermore, concerns were expressed that donors could be exploited, for example poor and vulnerable groups, and that financial compensation could have negative effects. Allowing donation only of leftover embryos from IVF treatment was considered reassuring, as the main purpose would not be to make money. Finally, there was concern that the pharmaceutical industry would not always prioritize the patient over profit and that expensive stem cell treatments could lead to societal and global injustices. Suspicions that companies will not use embryos ethically were expressed, and some felt that it was more problematic to make a profit on products from embryos than on other medical products. Transparency around the process of developing and using medical stem cell products was considered important.

If you want to see more results, read the study here: Patients’ views on using human embryonic stem cells to treat Parkinson’s disease: an interview study.

It can be difficult to draw general conclusions from the study and the summary above reproduces some of the statements in the interviews. We should, among other things, keep in mind that the interviews were conducted with a small number of patients who themselves have the disease and that the study was conducted in Sweden. The authors emphasize that the study can help clinicians and researchers develop treatments in ways that take into account patients’ needs and concerns. A better understanding of people’s attitudes can also contribute to the public debate and support the development of policy and legislation.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Drevin, J., Nyholm, D., Widner, H. et al. Patients’ views on using human embryonic stem cells to treat Parkinson’s disease: an interview study. BMC Med Ethics 23, 102 (2022). https://doi.org/10.1186/s12910-022-00840-6

This post in Swedish

In dialogue with patients

Attitudes, norms and values that can influence antibiotic resistance

Human use of antibiotics creates an evolutionary pressure that drives the development of antibiotic-resistant bacteria. If antibiotics lose their effectiveness, simple infections can become life-threatening and it becomes more difficult to treat infections in hospitals in connection with surgical interventions or other treatments. We should therefore reduce the use of antibiotics and use them more wisely.

Greece is at the top among European countries when it comes to antibiotic consumption. Nevertheless, studies have shown that Greeks are aware of the connection between the overuse of antibiotics and antibiotic resistance. This is not as surprising as it may sound: other research shows that information alone is not enough to change people’s behaviour.

Since ignorance about the problem cannot explain the overuse of antibiotics in Greece, other factors should be investigated. In an article in BMC Public Health, Dimitrios Papadimou, Erik Malmqvist and Mirko Ancillotti present an interview study (focus groups) in which other possible explanations were examined, such as attitudes, norms and values among Greeks.

The Greek participants saw overconsumption of antibiotics as an entrenched habit in Greece. It is easy to get access to antibiotics, they are often used without a doctor’s prescription, sometimes even as a precaution. In addition, doctors frequently prescribe antibiotics as a reliable remedy, participants said. Although critical of this Greek pattern of antibiotic consumption, participants considered it morally questionable to restrict individual access to potentially beneficial antibiotic treatments in the name of the greater good. Nor did they want to place the responsibility for handling antibiotic resistance on the individual. The whole of society must take responsibility, it was argued, perhaps above all government actors, healthcare staff and food producers. Finally, participants expressed doubts about the possibility of effectively managing antibiotic resistance in Greece.

There certainly seem to be more factors than limited awareness of the problem behind the overuse of antibiotics in Greece (and in other countries). If you would like more details and discussion, read the study here: Socio-cultural determinants of antibiotic resistance: a qualitative study of Greeks’ attitudes, perceptions and values.

Hopefully, the study motivates future quantitative investigations of attitudes, norms and values, with more participants. Changing the use of antibiotics is probably like changing the course of a huge ship. Simply being aware of the necessary change is not enough.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Papadimou, D., Malmqvist, E. & Ancillotti, M. Socio-cultural determinants of antibiotic resistance: a qualitative study of Greeks’ attitudes, perceptions and values. BMC Public Health 22, 1439 (2022). https://doi.org/10.1186/s12889-022-13855-w

This post in Swedish

Approaching future issues

Artificial intelligence: augmenting intelligence in humans or creating human intelligence in machines?

Sometimes you read articles at the intersection of philosophy and science that contain really exciting visionary thoughts, which are at the same time difficult to really understand and assess. The technical elaboration of the thoughts grows as you read, and in the end you do not know if you are capable of thinking independently about the ideas or if they are about new scientific findings and trends that you lack the expertise to judge.

Today I dare to recommend the reading of such an article. The post must, of course, be short. But the fundamental ideas in the article are so interesting that I hope some readers of this post will also become readers of the article and make a serious attempt to understand it.

What is the article about? It is about an alternative approach to the highest aims and claims in artificial intelligence. Instead of trying to create machines that can do what humans can do, machines with higher-level capacities such as consciousness and morality, the article focuses on the possibility of creating machines that augment the intelligence of already conscious, morally thinking humans. However, this idea is not entirely new. It has existed for over half a century in, for example, cybernetics. So what is new in the article?

Something I myself was struck by was the compassionate voice in the article, which is otherwise not prominent in the AI literature. The article focuses not on creating super-smart problem solvers, but on strengthening our connections with each other and with the world in which we live. The examples given in the article concern better moral consideration for people far away, better predictions of natural disasters in a complex climate, and the restoration of social contacts in people suffering from depression or schizophrenia.

But perhaps the most original idea in the article is the suggestion that the development of these human self-augmenting machines would draw inspiration from how the brain already maintains contact with its environment. Here one should keep in mind that we are dealing with mathematical models of the brain and with innovative ways of thinking about how the brain interacts with the environment.

It is tempting to see the brain as an isolated organ. But the brain, via the senses and nerve pathways, is in constant dynamic exchange with the body and the world. You would not experience the world if the world did not constantly make new imprints in your brain and you constantly acted on those imprints. This intense interactivity on multiple levels and time scales aims to maintain a stable and comprehensible contact with a surrounding world. The way of thinking in the article reminds me of the concept of a “digital twin,” which I previously blogged about. But here it is the brain that appears to be a neural twin of the world. The brain resembles a continuously updated neural mirror image of the world, which it simultaneously continuously changes.

Here, however, I find it difficult to properly understand and assess the thoughts in the article, especially regarding the mathematical model that is supposed to describe the “adaptive dynamics” of the brain. But as I understand it, the article suggests the possibility of recreating a similar dynamic in intelligent machines, which could enhance our ability to see complex patterns in our environment and be in contact with each other. A little poetically, one could perhaps say that it is about strengthening our neural twinship with the world. A kind of neural-digital twinship with the environment? A digitally augmented neural twinship with the world?

I dare not say more here about the visionary article. Maybe I have already taken too many poetic liberties? I hope that I have at least managed to make you interested in reading the article and assessing it for yourself: Augmenting Human Selves Through Artificial Agents – Lessons From the Brain.

Well, maybe one concluding remark. I mentioned the difficulty of sometimes understanding and assessing visionary ideas that are formulated at the intersection of philosophy and science. Is that difficulty not itself an example of how our contact with the world can sometimes weaken? However, I do not know if I would have been helped by digital intelligence augmentation that quickly took me through the philosophical difficulties that can arise during reading. Some questions seem essentially to require time: that you stop and think!

Giving yourself time to think is a natural way to deepen your contact with reality, known by philosophers for millennia.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Northoff G, Fraser M, Griffiths J, Pinotsis DA, Panangaden P, Moran R and Friston K (2022) Augmenting Human Selves Through Artificial Agents – Lessons From the Brain. Front. Comput. Neurosci. 16:892354. doi: 10.3389/fncom.2022.892354

This post in Swedish

We recommend readings

Self-confidence in the midst of uncertainty

Feeling confident is natural when we have the knowledge that the task requires. However, self-confidence can be harmful if we think that we know what we do not know. It can be really problematic if we make a habit of pretending that we know. Perhaps because we demand it of ourselves.

There is also another kind of self-confidence, which can seem unnatural. I am thinking of a rarely noticed form of self-confidence, which can awaken just when we are uncertain about how to think and act. But how can self-confidence arise precisely when we are uncertain? It sounds not only unnatural, but also illogical. And did we not just say that it can be harmful to exhibit self-confidence in such situations?

I am thinking of the self-confidence to be just as uncertain as we are, because our uncertainty is a fact that we are certain of: I do not know. It is easy to overlook the fact that even uncertainty is a reality that can be ascertained and investigated in ourselves. Sometimes it is important to take note of our uncertainty. That is sticking to the facts too!

What happens if we do not trust uncertainty when we are uncertain? I think we then tend to seek guidance from others, who seem to know what we do not know. It seems not only natural, but also logical. It is reasonable to do so, of course, if relevant knowledge really exists elsewhere. Asking others, who can be judged to know better, also requires a significant measure of self-confidence and good judgment, in the midst of uncertainty.

But suppose we instinctively seek guidance from others as soon as we are uncertain, because we do not dare to stick to uncertainty in such moments. What happens if we always run away from uncertainty, without stopping and paying attention to it, as if uncertainty were something impermissible? In such a judgmental attitude to uncertainty, knowledge and certainty can become a demand that we feel must be met, towards ourselves and towards each other, if only as a facade. We are then back where we started, in pretended knowledge, which now might become a collective high-risk game and not just an individual bad habit.

Collective knowledge games can of course work, if sufficiently many influential players have the knowledge that the tasks require and knowledge is disseminated in a well-organized manner. Maybe we think that it should be possible to build such a society, a secure knowledge society. The question I wonder about is how sustainable this is in the long run, if the emphasis on certainty is not matched by an equal emphasis on uncertainty and questioning. Not for the sake of questioning, but because uncertainty is also a fact that needs attention.

In philosophy and ethics, it is often uncertainty that primarily drives the work. This may sound strange, but even uncertainty can be investigated. If we ask a tentative question about something we sincerely wonder about, clearer questions can soon arise that we continue to wonder about, and soon the investigation will begin. The investigation comes to life because we dare to trust ourselves, because we dare to give ourselves time to think, in the midst of uncertainty, which can become clarity if we do not run away from it. In the investigation, we can of course notice that we need more knowledge about specific issues, knowledge that is acquired from others or that we ourselves develop through empirical studies. But it is not only specific knowledge that informs the investigation. The work with the questions that express our uncertainty clarifies ourselves and makes our thinking clearer. Knowledge is given a well-considered context where it is needed, and this illuminates the knowledge itself.

A “pure” game of knowledge is hardly sustainable in the long run, if its demands are not open also to the other side of knowledge, to the uncertainty that can be difficult to separate from ourselves. Such openness requires that we trust not only the rules of the game, but also ourselves. But do we dare to trust ourselves when we are uncertain?

I think we dare, if we see uncertainty as a fact that can be investigated and clarified, instead of judging it as something dangerous that should not be allowed to be a fact. That is when it can become dangerous.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Thinking about thinking

Can consumers help counteract antimicrobial resistance?

Antimicrobial resistance (AMR) occurs when microorganisms (bacteria and viruses, etc.) survive treatments with antimicrobial drugs, such as antibiotics. However, the problem is not only caused by unwise use of such drugs on humans. Such drugs are also used on a large scale in animals in food production, which is a significant cause of AMR.

In an article in the journal Frontiers in Sustainable Food Systems, Mirko Ancillotti and three co-authors discuss the possibility that food consumers can contribute to counteracting AMR. This is a specific possibility that they argue is often overlooked when addressing the general public.

A difficulty that arises when AMR needs to be handled by several actors, such as authorities, food producers, consumers and retailers, is that the actors transfer the responsibility to the others. Consumers can claim that they would buy antibiotic-smart goods if they were offered in stores, while retailers can claim that they would sell such goods if consumers demanded them. Both parties can also blame how, for example, the market or legislation governs them. Another problem is that if one actor, for example the authorities, takes great responsibility, other actors feel less or no responsibility.

The authors of the article propose that one way out of the difficulty could be to influence consumers to take individual responsibility for AMR. Mirko Ancillotti has previously found evidence that people care about antibiotic resistance. Perhaps a combination of social pressure and empowerment could engage consumers to individually act more wisely from an AMR perspective?

The authors make comparisons with the climate movement and suggest digital innovations in stores and online, which can inform, exert pressure and support AMR-smarter food choices. One example could be apps that help consumers see their purchasing pattern, suggest product alternatives, and inform about what is gained from an AMR perspective by choosing the alternative.

Read the article with its constructive proposal to engage consumers against antimicrobial resistance: The Status Quo Problem and the Role of Consumers Against Antimicrobial Resistance.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Ancillotti, Mirko; Nilsson, Elin; Nordvall, Anna-Carin; Oljans, Emma. The Status Quo Problem and the Role of Consumers Against Antimicrobial Resistance. Frontiers in Sustainable Food Systems, 2022.

This post in Swedish

Approaching future issues

Fact resistance, human nature and contemplation

Sometimes we all resist facts. I saw a cyclist slip on the icy road. When I asked if she was all right, she was on her feet in an instant and denied everything: “I did not fall!” It is human to deny facts. They can hurt and be disturbing.

What are we resisting? The usual answer is that fact-resistant individuals or groups resist facts about the world around us, such as statistics on violent crime, on vaccine side effects, on climate change or on the spread of disease. It then becomes natural to offer resistance to fact resistance by demanding more rigour in the field of knowledge. People should learn to turn more rigorously to the world they live in! The problem is that fact-resistant attitudes do just that. They are almost bewitched by the world and by the causes of what are perceived as outrageous problems in it. And now we too are bewitched by fact resistance and speculate about the causes of this outrageous problem.

Of course, we believe that our opposition is justified. But who does not think so? Legitimate resistance is met by legitimate resistance, and soon the conflict escalates around its double spiral of legitimacy. The possibility of resolving it is blocked by the conflict itself, because all parties are equally legitimate opponents of each other. Everyone hears their own inner voices warning them against acknowledging their mistakes, against acknowledging their uncertainty, against acknowledging their human resistance to reality, as when we fall off the bike and wish it had never happened. The opposing side would immediately seize the opportunity! Soon, our mistake is a scandal on social media. So we do as the person who slipped on the icy road, we deny everything without thinking: “I was not wrong, I had my own facts!” We ignore the fact that life thereby becomes a lie, because our inner voices warn us against acknowledging our uncertainty. We have the right to be recognized, our voices insist, at least as an alternative to the “established view.”

Conflicts give us no time for reflection. Yet, there is really nothing stopping us from sitting down, in the midst of conflict, and resolving it within ourselves. When we give ourselves time to think for ourselves, we are freer to acknowledge our uncertainty and examine our spirals of thought. Of course, this philosophical self-examination does not resolve the conflict between legitimate opponents which escalates around us as increasingly impenetrable and real. It only resolves the conflict within ourselves. But perhaps our thoughtful philosophical voice still gives a hint of how, just by allowing us to soar in uncertainty, we already see the emptiness of the conflict and are free from it?

If we more often dared to soar in uncertainty, if it became more permissible to say “I do not know,” if we listened more attentively to thoughtful voices instead of silencing them with loud knowledge claims, then perhaps fact resistance would also decrease. Perhaps fact resistance is not least resistance to an inner fact. To a single inner fact. What fact? Our insecurity as human beings, which we do not permit ourselves. But if you allow yourself to slip on the icy road, then you do not have to deny that you did!

A more thoughtful way of being human should be possible. We shape the societies that shape us.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

We care about communication

How can neuroethics and AI ethics join their forces?

As I already wrote on this blog, there has been an explosion of AI in recent years. AI affects so many aspects of our lives that it is virtually impossible to avoid interacting with it. Since AI has such an impact, it must be examined from an ethical point of view, for the very basic reason that it can be developed and/or used for both good and evil.

In fact, AI ethics is becoming increasingly popular nowadays. As it is a fairly young discipline, even though it has roots in, for example, digital and computer ethics, the question of its status and methodology is still open. To simplify: the main trend is to conceive of AI ethics in terms of practical ethics, for example, with a focus on the impact of AI on traditional practices in education, work, healthcare and entertainment, among others. In addition to this practically oriented analysis, there is also attention to the impact of AI on the way we understand our society and ourselves as part of it.

In this debate about the identity of AI ethics, the need for a closer collaboration with neuroethics has been briefly pointed out, but so far no systematic reflection has been made on this need. In a new article, I propose, together with Kathinka Evers and Arleen Salles, an argument to justify the need for closer collaboration between neuroethics and AI ethics. In a nutshell, even though they both have specific identities and their topics do not completely overlap, we argue that neuroethics can complement AI ethics for both content-related and methodological reasons.

Some of the issues raised by AI are related to fundamental questions that neuroethics has explored since its inception. Think, for example, of topics such as intelligence: what does it mean to be intelligent? In what sense can a machine be qualified as an intelligent agent? Could this be a misleading use of words? And what ethical implications can this linguistic habit have, for example, on how we attribute responsibility to machines and to humans? Another issue that is increasingly gaining ground in AI ethics literature, as I wrote on this blog, is the conceivability and the possibility of artificial consciousness. Neuroethics has worked extensively on both intelligence and consciousness, combining applied and fundamental analyses, which can serve as a source of relevant information for AI ethics.

In addition to the above content-related reasons, neuroethics can also provide AI ethics with a methodological model. To illustrate, the kind of conceptual clarification performed in fundamental neuroethics can enrich the identification and assessment of the practical ethical issues raised by AI. More specifically, neuroethics can provide a three-step model of analysis to AI ethics: 1. Conceptual relevance: can specific notions, such as autonomy, be attributed to AI? 2. Ethical relevance: are these specific notions ethically salient (i.e., do they require ethical evaluation)? 3. Ethical value: what is the ethical significance and the related normative implications of these specific notions?

This three-step approach is a promising methodology for ethical reflection about AI which avoids the trap of anthropocentric self-projection, a risk that actually affects both the philosophical reflection on AI and its technical development.

In this way, neuroethics can contribute to avoiding both hypes and disproportionate worries about AI, which are among the biggest challenges facing AI ethics today.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Farisco, M., Evers, K. & Salles, A. On the Contribution of Neuroethics to the Ethics and Regulation of Artificial Intelligence. Neuroethics 15, 4 (2022). https://doi.org/10.1007/s12152-022-09484-0

We transcend disciplinary borders

Human enhancement: Time for ethical guidance!

Perhaps you also dream about being more than you are: faster, better, bolder, stronger, smarter, and maybe more attractive? Until recently, technology to improve and enhance our abilities was mostly science fiction, but today we can augment our bodies and minds in ways that challenge our notions of normal and abnormal, blurring the lines between treatments and enhancements. Very few scientists and companies that develop medicines, prosthetics, and implants would say that they are in the human enhancement business. But the technologies they develop still manage to move from one domain to another. Our bodies allow for physical and cosmetic alterations. And there are attempts to make us live longer. Our minds can also be enhanced in several ways: our feelings and thoughts, perhaps also our morals, could be improved, or corrupted.

We recognise this tension from familiar debates about more common uses of enhancements: doping in sports, or students using ADHD medicines to study for exams. But there are other examples of technologies that can be used to enhance abilities. In the military context, altering our morals, or using cybernetic implants could give us ‘super soldiers’. Using neuroprostheses to replace or improve memory that was damaged by neurological disease would be considered a treatment. But what happens when it is repurposed for the healthy to improve memory or another cognitive function? 

There have been calls for regulation and ethical guidance, but because very few of the researchers and engineers that develop the technologies that can be used to enhance abilities would call themselves enhancers, the efforts have not been very successful. Perhaps now is a good time to develop guidelines? But what is the best approach? A set of self-contained general ethical guidelines, or is the field so disparate that it requires field- or domain-specific guidance? 

The SIENNA project (Stakeholder-Informed Ethics for New technologies with high socio-ecoNomic and human rights impAct) has been tasked with developing this kind of ethical guidance for Human Enhancement, Human Genetics, and Artificial Intelligence and Robotics, three very different technological domains. Not surprisingly, given how difficult the field is to delineate, human enhancement has proved by far the most challenging. For almost three years, the SIENNA project mapped the field, analysed the ethical implications and legal requirements, surveyed how research ethics committees address the ethical issues, and proposed ways to improve existing regulation. We have received input from stakeholders, experts, and publics. Industry representatives, academics, policymakers and ethicists have participated in workshops and reviewed documents. Focus groups in five countries and surveys with 11,000 people in 11 countries in Europe, Africa, Asia, and the Americas have also provided insight into the public’s attitudes to using different technologies to enhance abilities or performance. This resulted in an ethical framework, outlining several options for how to approach the process of translating it into practical ethical guidance.

The framework for human enhancement is built on three case studies that can bring some clarity to what is at stake in a very diverse field; antidepressants, dementia treatment, and genetics. These case studies have shed some light on the kinds of issues that are likely to appear, and the difficulties involved with the complex task of developing ethical guidelines for human enhancement technologies. 

A lot of these technologies, their applications, and enhancement potentials are in their infancy. So perhaps this is the right time to promote ways for research ethics committees to inform researchers about the ethical challenges associated with human enhancement. And encouraging them to reflect on the potential enhancement impacts of their work in ethics self-assessments. 

And perhaps it is time for ethical guidance for human enhancement after all? At least now there is an opportunity for you and others to give input in a public consultation in mid-January 2021! If you want to give input on SIENNA’s proposals for human enhancement, human genomics, artificial intelligence, and robotics, visit the project website (www.sienna-project.eu) to sign up for news.

The public consultation will launch on January 11, the deadline to submit a response is January 25, 2021. 

Josepine Fernow

Written by…

Josepine Fernow, Coordinator at the Centre for Research Ethics & Bioethics (CRB), and communications leader for the SIENNA project.


This post in Swedish

Ethically responsible robot development

Development of new technologies sometimes draws inspiration from nature. How do plants and animals solve the problem? An example is robotics, where one wants to develop better robots based on what neuroscience knows about the brain. How does the brain solve the problem?

Neuroscience, in turn, sees new opportunities to test hypotheses about the brain by simulating them in robots. Perhaps one can simulate how areas of the brain interact in patients with Parkinson’s disease, to understand how their tremor and other difficulties are caused.

Neuroscience-inspired robotics, so-called neurorobotics, is still at an early stage. This makes neurorobotics an excellent area in which to be more ethically and socially proactive than we have been during previous technological developments. That is, we can already begin to identify possible ethical and social problems surrounding the technology and counteract them before they arise. For example, we cannot close our eyes to gender and equality issues, but must continuously reflect on how our own social and cultural patterns are reflected in the technology we develop. We need to open our eyes to our own blind spots!

You can read more about this ethical shift in technology development in an article in Science and Engineering Ethics (with Manuel Guerrero from CRB as one of the authors). The shift is called Responsible Research and Innovation, and is exemplified in the article by ongoing work in the European research project, Human Brain Project.

In this project, neuroscientists and technology experts are not the only ones collaborating to develop neurorobotics. Scholars from the humanities and social sciences are also involved, and the article itself is an example of this broad collaboration. However, the implementation of responsible research and innovation is also at an early stage. It still needs to find more concrete forms of work that make it possible not only to anticipate ethical and social problems and reflect on them, but also to act and intervene in order to influence scientific and technological development.

Ethics is shifting from being a framework built around research and development to being increasingly integrated into it. Read the article if you want to think about this transition to a more reflective and responsible technological development.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Aicardi, C., Akintoye, S., Fothergill, B.T. et al. Ethical and Social Aspects of Neurorobotics. Sci Eng Ethics 26, 2533–2546 (2020). https://doi.org/10.1007/s11948-020-00248-8

This post in Swedish

Approaching future issues

“Cooperative,” “pleasant” and “reliable” robot colleague is wanted

Robots are taking on more and more functions in our workplaces. Logistics robots pick goods in the warehouse. Military robots disarm bombs. Care robots lift patients, and surgical robots perform operations. All this in interaction with human staff, who seem to have acquired brave new robot colleagues in their workplaces.

Given that some people treat robots as good colleagues and that good colleagues contribute to a good working environment, it becomes reasonable to ask: Can a robot be a good colleague? The question is investigated by Sven Nyholm and Jilles Smids in the journal Science and Engineering Ethics.

The authors approach the question conceptually. First, they propose criteria for what a good colleague is. Then they ask whether robots can live up to the requirements. The question of whether a robot can be a good colleague is interesting, because it turns out to be more realistic than we might first think. We do not demand as much from a colleague as from a friend or a life partner, the authors argue. Many of our demands on good colleagues have to do with their external behavior in specific situations in the workplace, rather than with how they think, feel and are as human beings in different situations of life. Sometimes, a good colleague is simply someone who gets the job done!

What criteria are mentioned in the article? Here I reproduce, in my own words, the authors’ list, which they do not intend to be exhaustive. A good colleague works well together to achieve goals. A good colleague can chat and help keep work pleasant. A good colleague does not bully but treats others respectfully. A good colleague provides support as needed. A good colleague learns and develops with others. A good colleague is consistently at work and is reliable. A good colleague adapts to how others are doing and shares work-related values. A good colleague may also do some socializing.

The authors argue that many robots already live up to several of these ideas about what a good colleague is, and that the robots in our workplaces will be even better colleagues in the future. The requirements are, as I said, lower than we first think, because they are not so much about the colleague’s inner human life, but more about reliably displayed behaviors in specific work situations. It is not difficult to imagine the criteria transformed into specifications for the robot developers. Much like in a job advertisement, which lists behaviors that the applicant should be able to exhibit.

The manager of a grocery store in this city advertised for staff. The ad contained strange quotation marks, which revealed that the manager demanded the facade of a human being rather than the interior. This is normal: to be professional is to be able to play a role. The business concept of the grocery store was "we care." This concept was to become a positive "experience" for customers in their encounter with the staff. A greeting, a nod, a smile, and a generally pleasant welcome would create this "experience" of us "caring about people." Therefore, the manager advertised for someone who, in quotation marks, "likes people."

If staff can be recruited in this way, why should we not want “cooperative,” “pleasant” and “reliable” robot colleagues in the same spirit? I am convinced that similar requirements already occur as specifications when robots are designed for different functions in our workplaces.

Working life is not always deep and heartfelt, and its robotization reflects this. The question is what happens when human surfaces become so common that we forget the quotation marks around the mechanically functioning facades. Not everyone is as clear on that point as the "humanitarian" store manager was.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Nyholm, S., Smids, J. Can a Robot Be a Good Colleague?. Sci Eng Ethics 26, 2169–2188 (2020). https://doi.org/10.1007/s11948-019-00172-6

This post in Swedish

