A blog from the Centre for Research Ethics & Bioethics (CRB)

Category: In the research debate

Using artificial intelligence with academic integrity

AI tools can both transform and produce content such as texts, images and music, and they are increasingly available as online services. One example is ChatGPT, a tool you can ask questions and get well-informed, logically reasoned answers from, answers that the tool can revise if you point out errors and ambiguities. You can interact with the tool almost as if you were conversing with a human.
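To make this kind of interaction concrete, here is a minimal sketch of such a two-turn exchange over an API, where the answer is revised after the user points out a possible error. It assumes the openai Python package (v1 client) and an API key; the model name and the questions are illustrative and not taken from the post.

```python
# Minimal sketch of a two-turn exchange with a conversational AI tool.
# Assumes the openai package (v1 client) and OPENAI_API_KEY set in the
# environment; the model name and questions are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "user", "content":
             "Can students use AI tools with academic integrity?"}]
first = client.chat.completions.create(model="gpt-4", messages=messages)
answer = first.choices[0].message.content
print("Answer:", answer)

# The model keeps no memory between calls, so to let it revise its answer
# we send the whole conversation back, including our objection.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content":
     "Part of that seems inaccurate. Please check and correct it."},
]
second = client.chat.completions.create(model="gpt-4", messages=messages)
print("Revised:", second.choices[0].message.content)
```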

Such a tool can of course be very useful. It can help you solve problems and find relevant information. I venture to guess that the response from the tool can also stimulate creativity and open the mind to unexpected possibilities, just as conversations with people tend to do. However, like all technology, these tools can also be abused, and students have already used ChatGPT to complete their assignments.

The challenge in education and research is thus to learn to use these AI tools with academic integrity. Using AI tools is not automatically cheating. Seven participants in the European Network for Academic Integrity (ENAI), including Sonja Bjelobaba at CRB, write about the challenge in an editorial in the International Journal for Educational Integrity. Above all, the authors summarize tentative recommendations from ENAI on the ethical use of AI in academia.

An overarching aim of the recommendations is to integrate guidance on AI with other related recommendations on academic integrity. Thus, all persons, sources and tools that have influenced ideas or generated content must be clearly acknowledged – including the use of AI tools. Appropriate use of tools that affect the form of the text (such as proofreading tools, spelling checkers and thesauruses) is generally acceptable. Furthermore, an AI tool cannot be listed as a co-author in a publication, as the tool cannot take responsibility for the content.

The recommendations also emphasize the importance of educational efforts on the ethical use of AI tools. Read the recommendations in their entirety here: ENAI Recommendations on the ethical use of Artificial Intelligence in Education.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Foltynek, T., Bjelobaba, S., Glendinning, I. et al. ENAI Recommendations on the ethical use of Artificial Intelligence in Education. International Journal for Educational Integrity 19, 12 (2023). https://doi.org/10.1007/s40979-023-00133-4

This post in Swedish

We care about education

Encourage children to take responsibility for others?

Academics sometimes write visionary texts that highlight great human challenges. I blogged about one such philosophically visionary article a few years ago: an article in which Kathinka Evers discussed the interaction between society and the brain. In the article, she developed the idea that we have a “proactive” responsibility to adapt our societies to what we know about the brain’s strengths and weaknesses. Above all, she emphasized that the knowledge we have today about the changeability of the brain gives us a proactive responsibility for our own human nature, as this nature is shaped and reshaped in interaction with the societies we build.

Today I want to recommend a visionary philosophical article by Jessica Nihlén Fahlquist, an article that I think has points of contact with Kathinka Evers’ paper. Here, too, the article highlights our responsibility for major human challenges, such as climate and, above all, public health. Here, too, human changeability is emphasized, not least during childhood. Here, too, it is argued that we have a responsibility to be proactive (although the term is not used). But where Kathinka Evers starts from neuroscience, Jessica Nihlén Fahlquist starts from virtue ethics and from social sciences that see children as social actors.

Jessica Nihlén Fahlquist points out that we live in more complex societies and face greater global challenges than ever before in human history. But humans are also complex and can, under favorable circumstances, develop great capacities for taking responsibility. Virtue ethics has this focus on the human being and on personal character traits that can be cultivated and developed to varying degrees. Virtue ethics is sometimes criticized for not being sufficiently action-guiding. But it is hard to imagine that we can deal with major human challenges through action-guiding rules and regulations alone. Rules are never as complex as human beings. Action-guiding rules assume that the challenges are already under some sort of control and thus no longer as uncertain. Faced with complex challenges with great uncertainties, we may have to learn to trust the human being. Do we dare to trust ourselves, when we are so often the ones who created the problems?

Jessica Nihlén Fahlquist reasons in a way that brings to mind Kathinka Evers’ idea of a proactive responsibility for our societies and our human nature. Nihlén Fahlquist suggests, if I understand her correctly, that we already have a responsibility to create environments that support the development of human character traits that in the future can help us meet the challenges. We already have a responsibility to support greater abilities to take responsibility in the future, one could say.

Nihlén Fahlquist focuses on public health challenges, and her reasoning is based on the pandemic and the issue of vaccination of children. Parents have a right and a duty to protect their children from risks. But parents can reasonably also be considered obliged not to be overprotective, and to consider the child’s development of agency and values. The virus that spread during the pandemic generally did not cause severe symptoms in children. Vaccinating a child therefore does not significantly protect the child’s own health; it would be done with others in mind. Studies show that children may be capable of reasoning in terms of such responsibility for others. Children who participate in medical research can, for example, answer that they participate partly to help others. Do we dare to encourage capable children to take responsibility for public health by letting them reason about their own vaccination? Is it even the case that we should support children in cultivating such responsibility as a virtue?

Nihlén Fahlquist does not claim that children themselves have this responsibility to get vaccinated out of solidarity with others. But if some children prove to be able to reason in such a morally complex way about their own vaccination, one could say that these children’s sense of responsibility is something unexpected and admirable, something that we cannot demand from a child. If we encourage and support the unexpected and admirable in children, it can eventually become an expected responsibility in adults, suggests Jessica Nihlén Fahlquist. Virtue ethics makes it meaningful to think in terms of such possibilities, where humans can change and their virtues can grow. Do we dare to believe in such possibilities in ourselves? If you do not expect the unexpected, you will not discover it, said a visionary Greek philosopher named Heraclitus.

Jessica Nihlén Fahlquist’s article is multifaceted and innovative. In this post, I have only emphasized one of her lines of thought, which I hope has made you curious about an urgent academic text: Taking risks to protect others – pediatric vaccination and moral responsibility.

In summary, Jessica Nihlén Fahlquist argues that vaccination should be regarded as an opportunity for children to develop their sense of responsibility and that parents, schools, healthcare professionals and public health authorities should include children in debates about ethical public health issues.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Jessica Nihlén Fahlquist, Taking Risks to Protect Others – Pediatric Vaccination and Moral Responsibility, Public Health Ethics, 2023, phad005, https://doi.org/10.1093/phe/phad005

This post in Swedish

Approaching future issues

When ordinary words get scientific uses

A few weeks ago, Josepine Fernow wrote an urgent blog post about science and language. She linked to a research debate about conceptual challenges for neuroscience, challenges that arise when ordinary words get specialized uses in science as technically defined terms.

In the case under debate, the word “sentience” had been imported into the scientific study of the brain. A research group reported that they were able to determine that in vitro neurons from humans and mice have learning abilities and that they exhibit “sentience” in a simulated game world. Of course, it caused quite a stir that some neurons grown in a laboratory could exhibit sentience! But the research team did not mean what attracted attention. They meant something very technical that only a specialist in the field can understand. The surprising thing about the finding was therefore the choice of words.

When the startling choice of words was questioned by other researchers, the research team defended themselves by saying that they defined the term “sentience” strictly scientifically, so that everyone should have understood what they meant, at least the colleagues in the field. Well, not all people are specialists in the relevant field. Thus the discovery – whatever it was that was discovered – caused a stir among people as if it were a discovery of sentience in neurons grown in a laboratory.

The research group’s attitude towards their own technical language is similar to an attitude I encountered long ago in a famous theorist of language, Noam Chomsky. This is what Chomsky said about the scientific study of the nature of language: “every serious approach to the study of language departs from the common-sense usage, replacing it by some technical concept.” Chomsky is of course right that linguistics defines its own technical concepts of language. But one can sense a certain hubris in the statement, because it sounds as if only a linguistic theorist could understand “language” in a way that is worthy of serious attention. This is untenable, because it raises the question of what a technical concept of language is. In what sense is a technical concept a concept of language? Is it a technical concept of language in the common sense? Or is it a technical concept of language in the same inaccessible sense? In the latter case, the serious study of language seems to degenerate into a navel-gazing that does not access language.

For a technical concept of language to be a concept of language, our ordinary notions must be taken into account. Otherwise, the technical concept ceases to be a concept of language.

This is perhaps something to consider in neuroscience as well, namely to the extent that one wants to shed light on phenomena such as consciousness and sentience. Of course, neuroscience will define its own technical concepts of these phenomena, as in the debated case. But if the technical concepts are to function as concepts of consciousness and sentience, then one cannot neglect our ordinary uses of words.

Science is very serious and important. But if the special significance of science goes to our heads, then our attitude risks undermining the great importance of science for humanity. Here you can read the views of three neuroethicists on these important linguistic issues: Conceptual conundrums for neuroscience.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Minding our language

Taking care of the legacy: curating responsible research and innovation practice

Responsible research and innovation, or RRI as it is often called in EU-project language, is both scholarship and practice. Over the last decade, the Human Brain Project has used structured and strategic approaches to embed responsible research and innovation practices across the project. The efforts to curate the legacy of this work include the development of an online Ethics & Society toolkit. But how does that work? And what does a toolkit need in order to ensure it has a role to play?

A recent paper by Lise Bitsch and Bernd Stahl in Frontiers in Research Metrics and Analytics explores whether this kind of toolkit can help embed the legacy of RRI activities in a large research project. According to them, a toolkit has the potential to play an important role in preserving RRI legacy. But they also point out that this potential can only be realised if organisational structures and funding are in place to make sure that the legacy is retained. Because, like all resources, it needs to be maintained, shared, used and curated to play a role in the long term.

Even though this particular toolkit is designed to integrate insights and practices of responsible research and innovation in the Human Brain Project, there are lessons to be learned for other efforts to ensure acceptability, desirability and sustainability of processes and outcomes of research and innovation activities. The Human Brain Project is a ten-year European Flagship project that has gone through several phases. Bernd Stahl is the ethics director of the Human Brain Project, and Lise Bitsch has led the project’s responsible research and innovation work stream for the past three years. And there is a lot to be learned: for projects that are considering developing similar tools, they describe the process of designing and developing the toolkit.

But there are parts of the RRI legacy that cannot fit in a toolkit. The impact of the ethical, social and reflective work in the Human Brain Project is visible in governance structures, in how the project manages and handles data, and in its publications and communications. The authors are part of those structures.

In addition to the Ethics & Society toolkit, the work has been published in journals and shared on the Ethics Dialogues blog (where a first version of this post was published) and the HBP Society Twitter handle, offering more opportunities to engage and discuss in the EBRAINS community Ethics & Society space. The capacity-building efforts carried out for the project and the EBRAINS research infrastructure have been developed into an online ethics & society training resource, and the work with gender and diversity has resulted in a toolkit for equality, diversity and inclusion in project themes and teams.

Read the paper by Bernd Carsten Stahl and Lise Bitsch: Building a responsible innovation toolkit as project legacy.

(A first version of this post was originally published on the Ethics Dialogues blog, March 13, 2023)

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects.

Bernd Carsten Stahl and Lise Bitsch: Building a responsible innovation toolkit as project legacy, Frontiers in Research Metrics and Analytics, 13 March 2023, Sec. Research Policy and Strategic Management, Volume 8 – 2023, https://doi.org/10.3389/frma.2023.1112106

Part of international collaborations

Does the severity of an illness qualify the moral motivation to act?

I have to admit that I had a little trouble cracking the code of the article that I will now try to summarize briefly. I hope that the title I have chosen is not already a misunderstanding. Moral philosophy is not easy, but the subject of the article is urgent, so I still want to try.

Illness is generally perceived as something bad, as an evil. If we are to speak in terms of value, we can say that illness has negative value. Individual cases of illness usually create a moral motivation in us to mitigate the ill person’s negative condition. How strong this motivation is depends on several factors, and the severity of the disease is one of them. The motivation to act typically increases with the severity of the disease.

This of course comes as no surprise. The motivation to alleviate a person’s cold is not very strong because a cold is not a severe condition. A runny nose is nothing to complain about. But in the face of more severe conditions such as blood poisoning, diabetes and cancer, the moral drive to act increases. “This condition is very severe,” we say, and feel that it is very important to act.

So what is the problem that motivates the article? If I am interpreting the authors correctly, the problem is that it is not so easy to convert this obvious use of language into a rule to follow. I recently bought a kettle that came with this warning: “Do not fill the kettle with an excessive amount of water.” The warning is, in a way, self-evident. Of course, you should not fill the kettle with an excessive amount of water! The motivation to pour should have stopped before the water level got excessively high. Even though the language is perfectly obvious, the rule is not as obvious, because when is the water level excessively high? When should we stop pouring?

The problem with the word “severity” is similar, or at least that is my interpretation. “Severity” is an obvious linguistic tool when we discuss illness and the need to do something about it. But at the same time, it is difficult to define the term as a description of when conditions are (more or less) severe and when it is (more or less) justified to do something about them. Some philosophers have therefore criticized the use of “severity” in discussions about, for example, priority setting in healthcare. The situation would become somewhat paradoxical, since an obviously relevant concept would be excluded because it is unclear how it can be transformed into a description that can be followed as if it were a simple rule.

If I understand the article correctly, the authors want to defend the concept of severity by showing that severity qualifies our moral motivation to act when someone is ill. They do this by describing six other concepts that it is more generally accepted should qualify how morally important it is to do something about a condition, including the concepts of need and lack of well-being. None of the six concepts coincides completely with the concept of severity, but when we try to assess how they affect the need to act, we will often simultaneously assess the severity. And when we assess the severity of an illness, we will often at the same time assess how the illness affects well-being, for example.

The authors’ conclusion is that the concept of severity is a morally relevant concept that should be considered in future discussions, as severity qualifies the moral motivation to act. However, I may have misunderstood the reasoning, so if you want to be on the safe side, you can read the article here: Severity as a moral qualifier of malady.

I want to end the post with a personal side note: I am inclined to say that the philosophical difficulty in defining the concept of severity (when we talk about disease) is similar to the difficulty in defining the concept of excess (when we talk about water levels). What makes these concepts so useful is their great pliability. It is difficult to say what “severe disease” or “excessively high water level” is, because it depends on so much. Pliable words like these are like tracking dogs that sensitively move through the terrain in all possible relevant directions. But if we try to reconstruct the tracking dog’s sensitivity in general intellectual terms, without access to the dog’s sense of smell, experiences and instincts, we run into great difficulties.

Should these philosophical difficulties motivate us to get rid of the dog? Of course not! Just as we learn incredibly much from following a tracking dog, we learn incredibly much from following the words “severe disease,” even if the journey is arduous. This underlines the authors’ conclusion: severity should be considered a morally significant concept that continues to deserve our attention.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Solberg, C.T., Barra, M., Sandman, L. et al. Severity as a moral qualifier of malady. BMC Medical Ethics 24, 25 (2023). https://doi.org/10.1186/s12910-023-00903-2

This post in Swedish

We like challenging questions

Ethical challenges when children with cancer are recruited for research

Cancer is a common cause of death among children, but improved treatments have significantly increased survival, especially in high-income countries. A prerequisite for this development is research.

When we think of a hospital, we think mainly of the care given to patients there. But care and research are largely developed together in the hospitals: treatments given in the hospitals are tested in research carried out in the hospitals. This overlap of care and research in the same setting creates ethical challenges, not least because it can be difficult to see and maintain the differences when, as I said, the activities overlap.

In an interview study, Kajsa Norbäck, PhD student at CRB, investigates Swedish healthcare professionals’ perceptions and experiences of ethical challenges when children with cancer are recruited for research in the hospitals where they are patients. Research is needed for future childhood cancer care, but what are the challenges when approaching children with cancer and their parents with the question of research participation?

The interview material is rich and difficult to summarize in a blog post, but I want to highlight a few findings that particularly impressed me. I recommend those interested to take the time to read the entire article in peace and quiet. Interview studies provide a living, direct contact with reality from the perspective of the interviewees. Kajsa Norbäck writes that interview studies give us informative examples of ethical challenges. Such examples are needed to give ethical reflection concreteness and grounding in reality.

The interviewed healthcare professionals particularly emphasized the importance of establishing a trusting relationship with the family. Only when you have such a relationship does it make sense to discuss possible research participation. Personally, I cannot help but interpret it as meaning that the care relationship with patient and family must be established first. It is within the framework of the care relationship that possible research participation can be discussed in a trusting manner. But trust can also be a dilemma, the interviews show. The interviewees stated that many families had so much trust in healthcare and research that it could feel too easy and predictable to get consent for research participation. They also had the impression that parents could sometimes give consent to research out of fear of not having done everything they could to save the child, as if research was a last chance to get effective care.

The challenge of managing the overlap of care and research also extends to the professional role of the physician. Physicians have a care responsibility, but since the care they can offer rests on research, they also feel a research responsibility: a responsibility to recruit research participants from among their patients. This dual responsibility can naturally create conflicts of interest, of which the interviewees give informative examples.

In the middle of this force field of challenges we have the child, who may have difficulty making themselves heard, perhaps because many of us have difficulty being listeners. Here is what one of the interviewees says: “We often talk about informing and I think that’s a strange word. I think the greatest competence is to listen.” There is a lot to listen to in Kajsa Norbäck’s interview study as well, more than I can reproduce in a blog post. Read her article here: Ethical concerns when recruiting children with cancer for research: Swedish healthcare professionals’ perceptions and experiences.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Norbäck, K., Höglund, A.T., Godskesen, T. and Frygner-Holm, S. Ethical concerns when recruiting children with cancer for research: Swedish healthcare professionals’ perceptions and experiences. BMC Medical Ethics 24, 23 (2023). https://doi.org/10.1186/s12910-023-00901-4

This post in Swedish

Ethics needs empirical input

Science, science communication and language

All communications require a shared language, and fruitful discussions rely on conceptual clarity and common terms. Different definitions and divergent nomenclatures are a challenge for science: across different disciplines, between professions and when engaging with different publics. The audience for science communications is diverse. Research questions and results need to be shared within the field, between fields, with policy makers and publics. To be effective, the language, style and channel should be adapted to the audiences’ needs, values and expectations.

This is not just true of public-facing communications. A recent discussion in Neuron addresses the semantics of “sentience” in scientific communication, starting from an article by Brett J. Kagan et al. on how in vitro neurons learn and exhibit sentience when embodied in a simulated game world. The article was published in December 2022 and received a lot of attention: both positive media coverage and a mix of positive and negative reactions from the scientific community. In a response, Fuat Balci et al. express concerns about the key claim in the article: that the authors demonstrated that cortical neurons are able to self-organise in vitro and display intelligent and sentient behaviour in a simulated game-world. Balci et al. are (among other things) critical of the use of terms and concepts that they claim misrepresent the findings. They also claim that Kagan et al. are overselling the translational and societal relevance of their findings, in essence creating hype around their own research. They raise a discussion about the importance of scientific communication: the media tend to relay information from abstracts and statements about the significance of the research, and the scientists themselves amplify these statements in interviews. They claim that overselling results has an impact on how we evaluate scientific credibility and reliability.

Why does this happen? Balci et al. point to a 2021 paper by Jevin D. West and Carl T. Bergstrom on misinformation in and about science, suggesting that hype, hyperbole (using exaggeration as a figure of speech or rhetorical device) and publication bias might have to do with demands on different productivity metrics. According to West and Bergstrom, exaggeration in popular scientific writing is not just misinforming the public: it also misleads researchers, in turn leading to citation misdirection and citation bias. A related problem is predatory publishing, which has the potential to mislead those of us without the means to detect untrustworthy publishers. And to top it off, echo chambers and filter bubbles help select and deselect information and amplify the messages they think you want to hear.

The discussion in Neuron has continued with a response by Brett J. Kagan et al., in a letter about scientific communication and the semantics of sentience. They start by stating that the use of language to describe specific phenomena is a contentious aspect of scientific discourse, and that whether scientific communication is effective or not depends on the context where the language is used. In this case, they argue, the term “sentience” has a technical meaning in line with recent literature in theoretical biology and the free energy principle, where biotic self-organisation is defined as either active inference or sentient behaviour.

They make an interesting point that takes us back to the beginning of this post, namely the challenges of multidisciplinary work. Advancing research in cross-disciplinary collaboration is often challenging in the beginning because of difficulties integrating across fields. But if the different nomenclatures and approaches are recognized as an opportunity to improve and innovate, there can be benefits.

Recently, another letter, by Karen S. Rommelfanger, Khara M. Ramos and Arleen Salles, added a layer of reflection on the conceptual conundrums for neuroscience. In their own field of neuroethics, calls for clear language and concepts in scientific practice and communication are nothing new. They have all argued that conceptual clarity can improve science, enhance our understanding and lead to a more nuanced and productive discussion about the ethical issues. In the letter, the authors raise an important point about science and society. If we really believe that scientific terminology can retain its technically defined meaning when we transfer words to contexts permeated by a variety of cultural assumptions and colloquial uses of those same terms, we run the risk of trivialising the social and ethical impact that the choice of scientific terminology can have. They ask whether it is responsible of scientists to consider peers their only (relevant) audience, or whether conceptual clarity in science might often require public engagement and a multidisciplinary conversation.

One could also suggest that the choice of terms like “sentience” and “intelligence” as a technical characterisation of how cortical neurons function in a simulated in-vitro game-world is questionable from the point of view of scientific development as well. If we agree that neuroscience can shed light on sentience and intelligence, we also have to admit that, as of yet, we do not know exactly how it will illuminate these capacities. And perhaps that means it is too early to bind very specific technical meaning to terms that have both colloquial and cultural meaning, and which neuroscience can illuminate in as yet unknown ways?

You may wonder why an ethics blog writer dares to express views on scientific terminology. The point I am trying to make is that we all use language, but we also produce language. Every day. Together. In almost everything we do. This means that words like sentience and intelligence belong to us all. We have a shared responsibility for how we use them. The decision to give these common words technical meaning has consequences for how people will understand neuroscience when the words find their way back out of the technical context. But there can also be consequences for science when the words find their way in, as in the case under discussion. Because the boundaries between science and society might not be as clearly distinguishable as one might think.

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects.

This post in Swedish

We care about communication

Digital biomarkers to test new drugs for mental health

Somewhat simplified, we usually understand biomarkers as substances in the body that can be detected, for example, through blood or urine tests, and that indicate a biological state, such as cancer or diabetes. Biomarkers can be used to make a diagnosis, predict disease risks and monitor an ongoing treatment.

Nowadays, people also talk about digital biomarkers. To get an idea of what this is all about, think of the smartphone applications that can record movement patterns, heart rate and more. The new digital biomarkers are measurable physiological or behavioural data that are collected in a similar way, with measuring equipment that is usually wearable or placed in the body. These data can be followed in real time to monitor the patient’s health status and recovery, without the need for repeated hospital visits. However, the question of how these digital data can be understood as biomarkers does not seem completely clear.
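To make the idea concrete, here is a minimal sketch of what such real-time monitoring could look like in code: a rolling summary of simulated wearable heart-rate samples that could be followed over time. All names, data and thresholds are invented for illustration; nothing here comes from the articles discussed below.

```python
# Hypothetical sketch of a digital biomarker: a rolling summary of
# wearable heart-rate samples that could be followed in real time.
# All data and thresholds are invented for illustration.
from collections import deque
from statistics import mean
from typing import Optional

class RestingHeartRateMarker:
    """Keeps a rolling window of samples and flags unusual averages."""

    def __init__(self, window_size: int = 60, alert_above: float = 100.0):
        self.samples = deque(maxlen=window_size)  # oldest samples drop out
        self.alert_above = alert_above

    def add_sample(self, bpm: float) -> None:
        self.samples.append(bpm)

    def current_value(self) -> Optional[float]:
        return mean(self.samples) if self.samples else None

    def needs_attention(self) -> bool:
        value = self.current_value()
        return value is not None and value > self.alert_above

# Simulated stream from a wearable device
marker = RestingHeartRateMarker(window_size=5)
for bpm in [72, 75, 71, 104, 110, 108, 112, 115]:
    marker.add_sample(bpm)
    print(f"sample={bpm:>3}  rolling avg={marker.current_value():.1f}  "
          f"alert={marker.needs_attention()}")
```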

Some concurrently published articles in the journal Frontiers in Psychiatry discuss the possibility of using digital biomarkers to test the safety and efficacy of new drugs in mental health. For this to work, these new ways of collecting data and monitoring changes in real time must of course also work safely and effectively. They must moreover satisfy ethical and legal demands on data protection and oversight. The articles discuss these and other challenges. One article, for example, discusses how we should understand “bio” when we go from traditional biomarkers to digital ones. Another paper presents results from an attempt to use a digital biomarker to predict cognitive function.

In the editorial introducing the articles, Deborah Mascalzoni, among others, emphasizes that the use of digital biomarkers still lacks a satisfactorily regulated context and that issues of data protection and risks of discrimination when data of this kind are collected must be addressed. You can find the editorial here: Digital biomarkers in testing the safety and efficacy of new drugs in mental health: A collaborative effort of patients, clinicians, researchers, and regulators. There you will also find a link to all the articles.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Johanna Maria Catharina Blom, Cristina Benatti, Deborah Mascalzoni, Fabio Tascedda and Luca Pani. Editorial: Digital biomarkers in testing the safety and efficacy of new drugs in mental health: A collaborative effort of patients, clinicians, researchers, and regulators. Frontiers in Psychiatry, 2023. https://doi.org/10.3389/fpsyt.2022.1107037

This post in Swedish

We recommend readings

A new project will explore the prospect of artificial awareness

The neuroethics group at CRB has just started its work as part of a new European research project on artificial awareness. The project is called “Counterfactual Assessment and Valuation for Awareness Architecture” (CAVAA) and is funded for a duration of four years. The consortium is composed of 10 institutions, coordinated by Radboud University in the Netherlands.

The goal of CAVAA is “to realize a theory of awareness instantiated as an integrated computational architecture…, to explain awareness in biological systems and engineer it in technological ones.” Several specific objectives derive from this general goal. First, CAVAA has a robust theoretical component: it relies on a strong theoretical framework. Conceptual reflection on awareness, including its definition and the identification of features that allow its attribution to either biological organisms or artificial systems, is an explicit task of the project. Second, CAVAA is interested in exploring the connection between awareness in biological organisms and its possible replication in artificial systems. The project thus gives much attention to the connection between neuroscience and AI. Third, against this background, CAVAA aims at replicating awareness in artificial settings. Importantly, the project also has a clear ethical responsibility, more specifically to anticipate the potential societal and ethical impact of aware artificial systems.

There are several reasons why a scientific project with a strong engineering and computer science component also has philosophers on board. We are asked to contribute to developing a strong and consistent theoretical account of awareness, including the conceptual conceivability and the technical feasibility of its artificial replication. This is not straightforward, not only because there are many content-related challenges, but also because there are logical traps to avoid. For instance, we should avoid the temptation to validate an empirical statement on the basis of our own theory: this would possibly be tautological or circular.

In addition to this theoretical contribution, we will also collaborate in identifying indicators of awareness and benchmarks for validating the cognitive architecture that will be developed. Finally, we will collaborate in the ethical analysis concerning potential future scenarios related to artificial awareness, such as the possibility of developing artificial moral agents or the need to extend moral rights also to artificial systems.

In the end, there are several potential contributions that philosophy can provide to the scientific attempt to replicate biological awareness in artificial systems. Part of this possible collaboration is the fundamental and provocative question: why should we try to develop artificial awareness at all? What is the expected benefit, should we succeed? This is definitely an open question, with possible arguments both for and against attempting such a grand accomplishment.

There is also another question of equal importance, which may justify the effort to identify the necessary and sufficient conditions for artificial systems to become aware, and how to recognize them as such. What if we inadvertently create (or worse: have already created) forms of artificial awareness, but do not recognize this and treat them as if they were unaware? Such scenarios also confront us with serious ethical issues. So, regardless of our background beliefs about artificial awareness, it is worth investing in thinking about it.

Stay tuned to hear more from CAVAA!

Written by…

Michele Farisco, Postdoctoral Researcher at the Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Part of international collaborations

Longer hospital stays can worsen self-injurious behaviour

Can a hospital stay make the disease worse? It sounds paradoxical, but of course it can occur as a result of, for example, misdiagnosis, negligence or overtreatment. When it comes to psychiatric illnesses and ailments, which are often sensitive to the interaction with the environment, it can be difficult to see how the situation at the hospital affects the illness. Therefore, it is important to be attentive.

A new study by Antoinette Lundahl, carried out together with Gert Helgesson and Niklas Juth, draws attention to the problem in the care of patients who self-harm. They conducted a survey among healthcare staff at psychiatric clinics in Stockholm. The respondents answered questions about their experiences of hospital stays longer than a week with this patient group. A majority of the respondents believed that it had detrimental effects on self-injurious behaviour if the patients stayed longer than a week in their ward. They also considered that the patients often stayed too long in the ward and that the reasons for the extended length of stay were in several cases non-medical.

How are we to understand this? How might hospitalization increase the risk of the very behaviour it is meant to treat? In the discussion section of the article, various possible explanations are suggested, for example conflicts on the ward, or that patients spread self-injurious behaviours to each other. Another possible explanation is that the hospital stay is used by the patient to transfer responsibility for handling painful feelings and thoughts to others. Such avoidance strategies only have a short-term effect and increase the pain in the long term. The self-injurious behaviour can also be reinforced as a way to get more care and attention: a kind of “care addiction” develops in the patient, you could say.

How should we understand the extended hospital stays? The respondents mentioned several non-medical reasons, such as uncertainty about the patient’s housing situation, or that patients who look fragile or are assertive influence the staff to extend the length of care. Another assumed reason for extended care times was doctors’ fear of being held responsible for suicide or attempted suicide after discharge, a fear which, paradoxically, could increase that risk.

Read Antoinette Lundahl’s article here: Hospital staff at most psychiatric clinics in Stockholm experience that patients who self-harm have too long hospital stays, with ensuing detrimental effects.

There you can also read more about the respondents’ suggestions for improvements, such as giving patients clear care plans with fixed discharge dates, short treatment times (a few days), and information about what is expected of them during the hospital stay. Better collaboration with outpatient care was also recommended, as well as more non-medical treatments in inpatient care.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Antoinette Lundahl, Gert Helgesson & Niklas Juth (2022) Hospital staff at most psychiatric clinics in Stockholm experience that patients who self-harm have too long hospital stays, with ensuing detrimental effects, Nordic Journal of Psychiatry, 76:4, 287-294, DOI: 10.1080/08039488.2021.1965213

This post in Swedish

We have a clinical perspective
