A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: Artificial Intelligence

Debate on responsibility and academic authorship

Who can be listed as an author of a research paper? There seems to be some confusion about the so-called Vancouver rules for academic authorship, which serve as publication ethics guidelines primarily in medicine and the natural sciences (but sometimes also in the humanities and social sciences). According to these rules, an academic author must have contributed intellectually to the study, participated in the writing process, and approved the final version of the paper. The deepest confusion, however, seems to concern the fourth rule, which requires that an academic author take responsibility for the accuracy and integrity of the published research. The confusion is not lessened by the fact that artificial intelligences such as ChatGPT have started to be used in the research and writing process. Researchers sometimes ask the AI to generate objections to their reasoning, which can of course make a significant contribution to the research process. The AI can also generate text that contributes to the process of writing the article. Should such an AI count as a co-author?

No, says the Committee on Publication Ethics (COPE), with reference to the last requirement of the Vancouver rules: an AI cannot be an author of an academic publication, because it cannot take responsibility for the published research. The committee’s dismissal of AI authorship has sparked a small but instructive debate in the Journal of Medical Ethics. The first to write was Neil Levy, who argued that responsibility (for entire studies) is not a reasonable requirement for academic authorship, and that an AI could already count as an author (if the requirement is dropped). This prompted a response from Gert Helgesson and William Bülow, who argued that responsibility (realistically interpreted) is a reasonable requirement, and that an AI cannot be counted as an author, as it cannot take responsibility.

What is this debate about? What does the rule that gave rise to it say? It states that, to be considered an author of a scientific article, you must agree to be accountable for all aspects of the work. You must ensure that questions about the accuracy and integrity of the published research are satisfactorily investigated and resolved. In short, an academic author must be able to answer for the work. According to Neil Levy, this requirement is too strong. In medicine and the natural sciences, it is often the case that almost none of the researchers listed as co-authors can answer for the entire published study. The collaborations can be huge and the researchers are specialists in their own narrow fields. They lack the overview and competence to assess and answer for the study in its entirety. In many cases, not even the first author can do this, says Neil Levy. If we do not want to make it almost impossible to be listed as an author in many scientific disciplines, responsibility must be abolished as a requirement for authorship, he argues. We then have to accept that AI can already be counted as a co-author of many scientific studies, if the AI made a significant intellectual contribution to the research.

However, Neil Levy opens up a third possibility. The responsibility criterion could be reinterpreted so that it can be fulfilled by the researchers who are usually listed as authors today. What is the alternative interpretation? To be listed as an author, a researcher who has made a significant intellectual contribution to a research article must accept responsibility for their “local” contribution to the study, not for the study as a whole. On this interpretation, an AI cannot count as an academic author, because it cannot answer or be held responsible even for its “local” contribution to the study.

According to Gert Helgesson and William Bülow, this third possibility is the obviously correct interpretation of the fourth Vancouver rule. The reasonable interpretation, they argue, is that anyone listed as an author of an academic publication has a responsibility to facilitate an investigation if irregularities or mistakes are suspected in the study, not only after the study is published but throughout the research process. However, no one can be held responsible for an entire study, sometimes not even the first author. You can only be held responsible for your own contribution, for the part of the study that you have insight into and competence to judge. If you suspect irregularities in other parts of the study, however, you still have a responsibility as an academic author to call attention to this, and to act so that the suspicions are investigated if they cannot be immediately dismissed.

The confusion about the fourth criterion of academic authorship is natural: the criterion is actually not that easy to understand and should therefore be specified. The debate in the Journal of Medical Ethics provides an instructive picture of how differently the criterion can be interpreted, and it can thus motivate proposals on how the criterion should be specified. You can read Neil Levy’s article here: Responsibility is not required for authorship. The response from Gert Helgesson and William Bülow can be found here: Responsibility is an adequate requirement for authorship: a reply to Levy.

Personally, I want to ask whether an AI, which cannot take responsibility for research work, can be said to make significant intellectual contributions to scientific studies. In academia, we are expected to be open to criticism from others, and not least from ourselves. We are expected to be able to critically assess our ideas, theories, and methods: to judge whether objections are valid and then defend ourselves or change our minds. This is an important part of doctoral education and the research seminar. We cannot, therefore, be said to contribute intellectually to research, I suppose, if we lack the ability to self-critically assess the accuracy of our contributions. ChatGPT can thus hardly be said to make significant intellectual contributions to research, I am inclined to say. Not even when it generates self-critical or self-defending text on the basis of statistical calculations over huge language databases. It is the researchers who judge whether the generated text provides good reasons either to change their minds or to defend their position. If this is right, it would be a misunderstanding to acknowledge the contribution of ChatGPT in a research paper, as is usually done with research colleagues who contributed intellectually to the study without meeting the other requirements for academic authorship. Rather, the authors of the study should indicate how ChatGPT was used as a tool in the study, similar to how they describe the use of other tools and methods. How should this be done? In the debate, it is argued that this also needs to be specified.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Levy N. Responsibility is not required for authorship. Journal of Medical Ethics. Published Online First: 15 May 2024. doi: 10.1136/jme-2024-109912

Helgesson G, Bülow W. Responsibility is an adequate requirement for authorship: a reply to Levy. Journal of Medical Ethics. Published Online First: 04 July 2024. doi: 10.1136/jme-2024-110245

This post in Swedish

We participate in debates

Why should we try to build conscious AI?

In a recent post on this blog I summarized the main points of a pre-print where I analyzed the prospect of artificial consciousness from an evolutionary perspective. I took the brain and its architecture as a benchmark for addressing the technical feasibility and conceptual plausibility of engineering consciousness in artificial intelligence systems. The pre-print has been accepted and it is now available as a peer-reviewed article online.

In this post I want to focus on one particular point that I analyzed in the paper, and which I think is not always adequately accounted for in the debate about AI consciousness: what are the benefits of pursuing artificial consciousness in the first place, for science and for society at large? Why should we attempt to engineer subjective experience in AI systems? What can we realistically expect from such an endeavour?

There are several possible answers to these questions. At the epistemological level (with reference to what we can know), it is possible that developing artificial systems that replicate some features of our conscious experience could enable us to better understand biological consciousness, through similarities as well as through differences. At the technical level (with reference to what we can do), it is possible that the development of artificial consciousness would be a game-changer in AI, for instance giving AI the capacity for intentionality and theory of mind, and for anticipating the consequences not only of human decisions, but also of its own “actions.” At the societal and ethical level (with reference to our co-existence with others and to what is good and bad for us), especially the latter capabilities (intentionality, theory of mind, and anticipation) could arguably help AI to better inform humans about potential negative impacts of its functioning and use on society, and to help avoid them while favouring positive impacts. Of course, on the negative side, as shown by human history, both intentionality and theory of mind may be used by the AI for negative purposes, for instance for favouring the AI’s own interests or the interests of the limited groups that control it. Human intentionality has not always favoured out-group individuals or species, or indeed the planet as a whole. This point connects to one of the most debated issues in AI ethics, the so-called AI alignment problem: how can we be sure that AI systems conform to human values? How can we make AI aligned with our own interests? And whose values and interests should we take as reference? Cultural diversity is an important and challenging factor to take into account in these reflections.

I think there is also a question that precedes that of AI value alignment: can AI really have values? In other words, is the capacity for evaluation that possibly drives the elaboration of values in AI the same as in humans? And is AI capable of evaluating its own values, including its ethical values, a reflective process that drives the self-critical elaboration of values in humans and makes us evaluative subjects? In fact, the capacity for evaluation (which may be defined as sensitivity to reward signals and the ability to discriminate between good and bad things in the world on the basis of specific needs, motivations, and goals) is a defining feature of biological organisms, and specifically of the brain. AI may be programmed to discriminate between what humans consider to be good and bad things in the world, and it is also conceivable that AI will become less dependent on humans in applying this distinction. However, this does not entail that it “evaluates” in the sense that it autonomously performs an evaluation and subjectively experiences its evaluation.

It is possible that an AI system may approximate the diversity of cognitive processes that the brain has access to, for instance the processing of various sensory modalities, while remaining unable to incorporate the values attributed to the processed information and to its representation, as the human brain can do. In other words, to date AI remains devoid of any experiential content, and for this reason it differs from the human brain in its inability to attribute experiential value to information. This is the fundamental reason why present AI systems lack subjective experience. If we want to refer to needs (which are a prerequisite for the capacity for evaluation), current AI appears limited to epistemic needs, without access to, for example, moral and aesthetic needs. Therefore, the values that AI has so far been able to develop or be sensitive to are limited to the epistemic level, while morality and aesthetics are beyond our present technological capabilities. I do not deny that overcoming this limitation may be a matter of further technological progress, but for the time being we should carefully consider this limitation in our reflections about whether it is wise to strive for conscious AI systems. If the form of consciousness that we can realistically aspire to engineer today is limited to the cognitive dimension, without any sensibility to ethical deliberation and aesthetic appreciation, I am afraid that the risk of misusing or exploiting it for selfish purposes is quite high.

One could object that an AI system limited to epistemic values is not really conscious (at least not in a fully human sense). However, the fact remains that its capacity to interact with the world in pursuit of the goals it has been programmed to achieve would be greatly enhanced if it had this cognitive form of consciousness. This increases our responsibility to consider, hypothetically, whether conscious AI, even if limited and much more rudimentary than human consciousness, may be for the better or for the worse.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Michele Farisco, Kathinka Evers, Jean-Pierre Changeux. Is artificial consciousness achievable? Lessons from the human brain. Neural Networks, Volume 180, 2024. https://doi.org/10.1016/j.neunet.2024.106714

We like challenging questions

Artificial consciousness and the need for epistemic humility

As I wrote in previous posts on this blog, the discussion about the possibility of engineering an artificial form of consciousness is growing along with the impressive advances of artificial intelligence (AI). Indeed, there are many questions arising from the prospect of an artificial consciousness, including its conceivability and its possible ethical implications. We deal with these kinds of questions as part of an EU multidisciplinary project, which aims to advance towards the development of artificial awareness.

Here I want to describe the kind of approach to the issue of artificial consciousness that I am inclined to consider the most promising. In a nutshell, the research strategy I propose for clarifying the empirical and theoretical issues of the feasibility and conceivability of artificial consciousness consists in starting from the form of consciousness we are familiar with (biological consciousness) and from its correlation with the organ that science has revealed to be crucial for it (the brain).

In a recent paper, available as a pre-print, I analysed the possibility of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relationship to consciousness as a benchmark. In other words, to avoid vague and abstract speculations about artificial consciousness, I believe it is necessary to consider the correlation between brain and consciousness that resulted from biological evolution, and use this correlation as a reference model for the technical attempts to engineer consciousness.

In fact, there are several structural and functional features of the human brain that appear to be key for reaching human-like complex conscious experience, which current AI is still limited in emulating or accounting for. Among these are:

  • massive biochemical and neuronal diversity
  • long period of epigenetic development, that is, changes in the brain’s connections, driven by interaction with the external environment, that eventually alter the number of neurons and their connections in the brain network
  • embodied sensorimotor experience of the world
  • spontaneous brain activity, that is, an intrinsic ability to act which is independent of external stimulation
  • autopoiesis, that is, the capacity to constantly reproduce and maintain itself
  • emotion-based reward systems
  • clear distinction between conscious and non-conscious representations, and the consequent unitary and specific properties of conscious representations
  • semantic competence of the brain, expressed in the capacity for understanding
  • the principle of degeneracy, which means that the same neuronal networks may support different functions, leading to plasticity and creativity.

These are just some of the brain features that arguably play a key role for biological consciousness and that may inspire current research on artificial consciousness.

Note that I am not claiming that the way consciousness arises from the brain is in principle the only possible way for consciousness to exist: this would amount to a form of biological chauvinism or anthropocentric narcissism. In fact, current AI is limited in its ability to emulate human consciousness. The reasons for these limitations are both intrinsic, that is, dependent on the structure and architecture of AI, and extrinsic, that is, dependent on the current stage of scientific and technological knowledge. Nevertheless, these limitations do not logically exclude that AI may achieve alternative forms of consciousness that are qualitatively different from human consciousness, and that these artificial forms of consciousness may be either more or less sophisticated, depending on the perspectives from which they are assessed.

In other words, we cannot exclude in advance that artificial systems are capable of achieving alien forms of consciousness, so different from ours that it may not even be appropriate to continue to call them consciousness, unless we clearly specify what is common and what is different in artificial and human consciousness. The problem is that we are limited in our language as well as in our thinking and imagination. We cannot avoid relying on what is within our epistemic horizon, but we should also avoid the fallacy of hasty generalization. Therefore, we should combine the need to start from the evolutionary correlation between brain and consciousness as a benchmark for artificial consciousness with the need to remain humble and acknowledge the possibility that artificial consciousness may be of its own kind, beyond our view.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Approaching future issues

Objects that behave humanly

Many forms of artificial intelligence could be considered objects that behave humanly. However, it does not take much for us humans to personify non-living objects. We get angry at the car that does not start or the weather that does not let us have a picnic, as if they were against us. Children spontaneously personify simple toys and can describe the relationship between geometric shapes as, “the small circle is trying to escape from the big triangle.”

We are increasingly encountering artificial intelligence designed to give a human impression, for example in the form of chatbots for customer service when shopping online. Such AI can even be equipped with personal traits, a persona that becomes an important part of the customer experience. The chatbot can suggest even more products for you and effectively generate additional sales based on the data collected about you. No wonder the interest in developing human-like AI is huge. Part of it has to do with user-friendliness, of course, but at the same time, an AI that you find personally attractive will grab your attention. You might even like the chatbot or feel it would be impolite to turn it off. During the time that the chatbot has your attention, you are exposed to increasingly customized advertising and receive more and more package offers.

You can read about this and much more in an article about human relationships with AI designed to give a human impression: Human/AI relationships: challenges, downsides, and impacts on human/human relationships. The authors discuss a large number of examples of such AI, ranging from the chatbots above to care robots, AI that offers psychotherapy, and AI that people chat with to combat loneliness. The opportunities are great, but so are the challenges and possible drawbacks, which the article highlights.

Perhaps particularly interesting is the insight into how effectively AI can create confusion by exposing us to objects equipped with human response patterns. Our natural tendency to anthropomorphize non-human things meets high-tech efforts to produce objects that are engineered to behave humanly. Here it is no longer about imaginatively projecting social relations onto non-human objects, as in the geometric example above. In interaction with AI objects, we react to subtle social cues that the objects are equipped with. We may even feel a moral responsibility for such AI and grieve when companies terminate or modify it.

The authors urge caution so that we do not overinterpret AI objects as persons. At the same time, they warn of the risk that, by avoiding empathic responses, we become less sensitive to real people in need. Truly confusing!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Zimmerman, A., Janhonen, J. & Beer, E. Human/AI relationships: challenges, downsides, and impacts on human/human relationships. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00348-8

This post in Swedish

We recommend readings

A strategy for a balanced discussion of conscious AI

Science and technology advance so rapidly that it is hard to keep up with them. This is true not only for the general public, but also for the scientists themselves and for scholars from fields like ethics and regulation, who find it increasingly difficult to predict what will come next. Today AI is among the most advanced scientific endeavors, raising both significant expectations and more or less exaggerated worries. This is mainly due to the fact that AI is a concept so emotionally, socially, and politically charged as to make a balanced evaluation very difficult. It is even more so when capacities and features that are considered almost uniquely human, or at least shared with a limited number of other animals, are attributed to AI. This is the case with consciousness.

Recently, there has been a lively debate about the possibility of developing conscious AI. What are the reasons for this great interest? I think it has to do with the mentioned rapid advances in science and technology, as well as new intersections between different disciplines. Specifically, I think that three factors play an important role: the significant advancement in understanding the cerebral bases of conscious perception, the impressive achievements of AI technologies, and the increasing interaction between neuroscience and AI. The latter factor, in particular, resulted in so-called brain-inspired AI, a form of AI that is explicitly modeled on our brains.

This growing interest in conscious AI cannot ignore certain risks of theoretical, practical, and ethical relevance. Theoretically, there is no shared, overarching theory or definition of consciousness. Discussions about what consciousness is, what the criteria for a good scientific theory should be, and how to compare the various proposed theories of consciousness are still open and difficult to resolve.

Practically, the challenge is how to identify conscious systems. In other words, what indicators reliably show whether a system, either biological or artificial, is conscious?

Finally, at the ethical level several issues arise. Here the discussion is very lively, with some calling for an international moratorium on all attempts to build artificial consciousness. This extreme position is motivated by the need to avoid any form of suffering, including possibly undetectable artificial forms of suffering. Others question the very reason for working towards conscious AI: why should we open another, likely riskier box when society cannot really handle the impact of AI, as illustrated by Large Language Models? For instance, chatbots like ChatGPT show an impressive capacity to interact with humans through natural language, which creates a strong feeling that these AI systems have features like consciousness, intentionality, and agency, among others. This attribution of human qualities to AI eventually impacts the way we think about it, including how much weight and value we give to the answers that these chatbots provide.

The two arguments above illustrate possible ethical concerns that can be raised against the development of conscious artificial systems. Yet are the concerns justified? In a recent chapter, I propose a change in the underlying approach to the issue of artificial consciousness. This is to avoid the risk of vague and not sufficiently multidimensional analyses. My point is that consciousness is not a unified, abstract entity, but rather like a prism, which includes different dimensions that could possibly have different levels. Based on a multidimensional view of consciousness, in a previous paper I contributed a list of indicators that are relevant also for identifying consciousness in artificial systems. In principle, it is possible that AI can manifest some dimensions of consciousness (for instance, those related to sophisticated cognitive tasks) while lacking others (for instance, those related to emotional or social tasks). In this way, the indicators provide not only a practical tool for identifying conscious systems, but also an ethical tool to make the discussion on possible conscious AI more balanced and realistic. The question whether some AI is conscious or not cannot be considered a yes/no question: there are several nuances that make the answer more complex.
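As a purely conceptual sketch of what this multidimensional view means in practice, one could represent the assessment of a system as a graded profile across dimensions rather than as a single boolean verdict. The dimension names and scores below are invented for illustration; they are not the indicators proposed in the chapter.

```python
# Conceptual sketch only: consciousness assessed as a graded, multidimensional
# profile rather than a yes/no property. Dimensions and scores are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ConsciousnessProfile:
    # Each dimension gets a level between 0.0 (no evidence) and 1.0 (fully manifested).
    dimensions: dict = field(default_factory=dict)

    def summary(self) -> str:
        return ", ".join(f"{name}: {level:.1f}" for name, level in self.dimensions.items())

# A hypothetical system that scores high on a cognitive dimension but low on
# emotional and social ones; no single threshold turns this into "conscious: yes/no".
profile = ConsciousnessProfile({"cognitive": 0.7, "emotional": 0.1, "social": 0.2})
print(profile.summary())
```

The point of such a representation is simply that the ethically relevant question becomes which dimensions a system manifests, and to what degree, rather than whether it crosses a single line.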

Indeed, the indicators mentioned above are affected by a number of limitations, including the fact that they are developed for humans and animals, not specifically for AI. For this reason, research is still ongoing on how to adapt these indicators or possibly develop new indicators specific for AI. If you want to read more, you can find my chapter here: The ethical implications of indicators of consciousness in artificial systems.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Michele Farisco. The ethical implications of indicators of consciousness in artificial systems. Developments in Neuroethics and Bioethics. Available online 1 March 2024. https://doi.org/10.1016/bs.dnb.2024.02.009

We want solid foundations

Women on AI-assisted mammography

The use of AI tools in healthcare has become a recurring theme on this blog. So far, the posts have mainly been about mobile and online apps for use by patients and the general public. Today, the theme is more advanced AI tools which are used professionally by healthcare staff.

Within the Swedish breast cancer screening program, radiologists interpret large amounts of X-ray images to detect breast cancer at an early stage. The workload is heavy, and most of the time the images show no signs of cancer or pre-cancers. Today, AI tools are being tested that could improve mammography in several ways. AI could be used as an assisting resource for the radiologists to detect additional tumors. It could also be used as an independent reader of images to relieve radiologists, as well as to support assessments of which patients should receive care most urgently.

For AI-assisted mammography to work, it is not only the technology that needs to be developed. Researchers also need to investigate how women think about AI-assisted mammography: how do they perceive AI-assisted breast cancer screening? Four researchers, including Jennifer Viberg Johansson and Åsa Grauman at CRB, interviewed sixteen women who underwent mammography at a Swedish hospital where an AI tool was tested as a third reviewer of the X-ray images, alongside the two radiologists.

Several of the interviewees emphasized that AI is only a tool: AI cannot replace the doctor, because humans have abilities beyond image recognition, such as intuition, empathy and holistic thinking. Another finding was that some of the interviewees had a greater tolerance for human error than for failure of the AI tool, which was considered unacceptable. Some argued that if the AI tool makes a mistake, the mistake will be repeated systematically, while human errors are occasional. Some believed that the responsibility when the technology fails lies with the humans and not with the technology.

Personally, I cannot help but speculate that the sharp distinction between human error, which is easier to reconcile with, and unacceptably failing technology is connected to the fact that we can say of humans who fail: “After all, the radiologists surely did their best.” On the other hand, we hardly say about failing AI: “After all, the technology surely did its best.” Technology is not subject to these forms of conciliatory consideration.

The authors themselves emphasize that the participants in the study saw AI as a valuable tool in mammography, but held that the tool cannot replace humans in the process. The authors also emphasize that the interviewees preferred that the AI tool identify possible tumors with high sensitivity, even if this leads to many false positive results and thus to unnecessary worry and fear. In order for patients to understand AI-assisted healthcare, effective communication efforts are required, the authors conclude.
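To make the trade-off concrete: in a screening population where cancer is rare, even a reader with high sensitivity and fairly high specificity will flag many more healthy women than actual cancers. The sketch below is a minimal illustration of this arithmetic; the prevalence, sensitivity and specificity figures are invented for the example and are not taken from the study or from any real AI tool.

```python
# Minimal sketch (hypothetical numbers): why a highly sensitive screening reader
# still produces many false positives when the disease is rare.

def screening_outcomes(n_screened, prevalence, sensitivity, specificity):
    """Expected true and false positives in one screening round."""
    n_with_cancer = n_screened * prevalence
    n_healthy = n_screened - n_with_cancer
    true_positives = sensitivity * n_with_cancer      # cancers correctly flagged
    false_positives = (1 - specificity) * n_healthy   # healthy women recalled
    return true_positives, false_positives

# Assumed figures for illustration only: 10,000 women screened, 0.5% prevalence,
# 95% sensitivity, 90% specificity.
tp, fp = screening_outcomes(10_000, 0.005, 0.95, 0.90)
print(f"Expected cancers detected: {tp:.0f}")   # about 48
print(f"Expected false positives:  {fp:.0f}")   # about 995
```

On these assumed numbers, roughly twenty healthy women would be recalled for every cancer found, which is the kind of trade-off between sensitivity and unnecessary worry that the interviewees were weighing.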

It is difficult to summarize the rich material from interview studies. For more results, read the study here: Women’s perceptions and attitudes towards the use of AI in mammography in Sweden: a qualitative interview study.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Viberg Johansson J, Dembrower K, Strand F, et al. Women’s perceptions and attitudes towards the use of AI in mammography in Sweden: a qualitative interview study. BMJ Open 2024;14:e084014. doi: 10.1136/bmjopen-2024-084014

This post in Swedish

Approaching future issues

Using artificial intelligence with academic integrity

AI tools can both transform and produce content such as texts, images and music. The tools are also increasingly available as online services. One example is the ChatGPT tool, which you can ask questions and get well-informed, logically reasoned answers from, answers that the tool can correct if you point out errors and ambiguities. You can interact with the tool almost as if you were conversing with a human.

Such a tool can of course be very useful. It can help you solve problems and find relevant information. I venture to guess that the response from the tool can also stimulate creativity and open the mind to unexpected possibilities, just as conversations with people tend to do. However, like all technology, these tools can also be abused, and students have already used ChatGPT to complete their assignments.

The challenge in education and research is thus to learn to use these AI tools with academic integrity. Using AI tools is not automatically cheating. Seven participants in the European Network for Academic Integrity (ENAI), including Sonja Bjelobaba at CRB, write about the challenge in an editorial in the International Journal for Educational Integrity. Above all, the authors summarize tentative recommendations from ENAI on the ethical use of AI in academia.

An overarching aim of the recommendations is to integrate guidance on AI with other related recommendations on academic integrity. Thus, all persons, sources and tools that have influenced ideas or generated content must be clearly acknowledged, including the use of AI tools. Appropriate use of tools that affect the form of the text (such as proofreading tools, spelling checkers and a thesaurus) is generally acceptable. Furthermore, an AI tool cannot be listed as a co-author in a publication, as the tool cannot take responsibility for the content.

The recommendations also emphasize the importance of educational efforts on the ethical use of AI tools. Read the recommendations in their entirety here: ENAI Recommendations on the ethical use of Artificial Intelligence in Education.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Foltynek, T., Bjelobaba, S., Glendinning, I. et al. ENAI Recommendations on the ethical use of Artificial Intelligence in Education. International Journal for Educational Integrity 19, 12 (2023). https://doi.org/10.1007/s40979-023-00133-4

This post in Swedish

We care about education

A new project will explore the prospect of artificial awareness

The neuroethics group at CRB has just started its work as part of a new European research project about artificial awareness. The project is called “Counterfactual Assessment and Valuation for Awareness Architecture” (CAVAA) and is funded for a duration of four years. The consortium is composed of 10 institutions, coordinated by Radboud University in the Netherlands.

The goal of CAVAA is “to realize a theory of awareness instantiated as an integrated computational architecture…, to explain awareness in biological systems and engineer it in technological ones.” Several specific objectives derive from this general goal. First, CAVAA has a robust theoretical component: it relies on a strong theoretical framework. Conceptual reflection on awareness, including its definition and the identification of features that allow its attribution to either biological organisms or artificial systems, is an explicit task of the project. Second, CAVAA is interested in exploring the connection between awareness in biological organisms and its possible replication in artificial systems. The project thus gives much attention to the connection between neuroscience and AI. Third, against this background, CAVAA aims at replicating awareness in artificial settings. Importantly, the project also has a clear ethical responsibility, more specifically to anticipate the potential societal and ethical impact of aware artificial systems.

There are several reasons why a scientific project with a strong engineering and computer science component also has philosophers on board. We are asked to contribute to developing a strong and consistent theoretical account of awareness, including the conceptual conceivability and the technical feasibility of its artificial replication. This is not straightforward, not only because there are many content-related challenges, but also because there are logical traps to avoid. For instance, we should avoid the temptation to validate an empirical statement on the basis of our own theory: this would possibly be tautological or circular.

In addition to this theoretical contribution, we will also collaborate in identifying indicators of awareness and benchmarks for validating the cognitive architecture that will be developed. Finally, we will collaborate in the ethical analysis of potential future scenarios related to artificial awareness, such as the possibility of developing artificial moral agents or the need to extend moral rights to artificial systems.

In the end, there are several potential contributions that philosophy can provide to the scientific attempt to replicate biological awareness in artificial systems. Part of this possible collaboration is the fundamental and provocative question: why should we try to develop artificial awareness at all? What is the expected benefit, should we succeed? This is definitely an open question, with possible arguments for and against attempting such a grand accomplishment.

There is also another question of equal importance, which may justify the effort to identify the necessary and sufficient conditions for artificial systems to become aware, and how to recognize them as such. What if we inadvertently create (or worse, have already created) forms of artificial awareness, but do not recognize this and treat them as if they were unaware? Such scenarios also confront us with serious ethical issues. So, regardless of our background beliefs about artificial awareness, it is worth investing in thinking about it.

Stay tuned to hear more from CAVAA!

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

Part of international collaborations

AI narratives from the Global North

The way we develop, adopt, regulate and accept artificial intelligence is embedded in our societies and cultures. Our narratives about intelligent machines take on a flavour of the art, literature and imaginations of the people who live today, and of those who came before us. But some of us are missing from the stories that are told about thinking machines. A recent paper about forgotten African AI narratives and the future of AI in Africa shines a light on some of the missing narratives.

In the paper, Damian Eke and George Ogoh point to the fact that how artificial intelligence is developed, adopted, regulated and accepted is hugely influenced by socio-cultural, ethical, political, media and historical narratives. But most of the stories we tell about intelligent machines are imagined and conceptualised in the Global North. The paper asks whether this is a problem, and if so, in what way. When machine narratives put the emphasis on technology neutrality, that becomes a problem that goes beyond AI.

What happens when Global North narratives set the agenda for research and innovation also in the Global South, and what happens more specifically to the agenda for artificial intelligence? The impact is difficult to quantify. But when historical, philosophical, socio-cultural and political narratives from Africa are missing, we need to understand why and what it might imply. Damian Eke and George Ogoh provide a list of reasons why this is important. One is concern about the state of STEM education (science, technology, engineering and mathematics) in many African countries. Another is the well-documented issue of epistemic injustice: unfair discrimination against people because of prejudices about their knowledge. The dominance of Global North narratives could lead to devaluing the expertise of Africans in the tech community. This brings us to the point of the argument, which is that African socio-cultural, ethical and political contexts and narratives are absent from the global debate about responsible AI.

The paper makes the case for including African AI narratives not only in the research and development of artificial intelligence, but also in the ethics and governance of technology more broadly. Such inclusion would help counter epistemic injustice. If we fail to include narratives from the South in the AI discourse, the development can never be truly global. Moreover, excluding African AI narratives will limit our understanding of how different cultures in Africa conceptualise AI, and we will miss an important perspective on how people across the world perceive the risks and benefits of machine learning and AI-powered technology. Nor will we understand the many ways in which stories, art, literature and imaginations globally shape those perceptions.

If we want to develop an “AI for good”, it needs to be good for Africa and other parts of the Global South. According to Damian Eke and George Ogoh, it is possible to create a more meaningful and responsible narrative about AI. That requires that we identify and promote people-centred narratives, and that we anchor AI ethics for Africa in African ethical principles, like ubuntu. But the key for African countries to participate in the AI landscape is a greater focus on STEM education and research. The authors end their paper with a call to improve the diversity of voices in the global discourse about AI. Culturally sensitive and inclusive AI applications would benefit us all, for epistemic injustice is not just a geographical problem. Our view of whose knowledge has value is powered by a broad variety of forms of prejudice.

Damian Eke and George Ogoh are both actively contributing to the Human Brain Project’s work on responsible research and innovation. The Human Brain Project is a European Flagship project providing in-depth understanding of the complex structure and function of the human brain, using interdisciplinary approaches.

Do you want to learn more? Read the article here: Forgotten African AI Narratives and the future of AI in Africa.

Josepine Fernow

Written by…

Josepine Fernow, science communications project manager and coordinator at the Centre for Research Ethics & Bioethics, develops communications strategy for European research projects.

Eke D, Ogoh G. Forgotten African AI Narratives and the future of AI in Africa. International Review of Information Ethics. 2022;31(08).

We want to be just

Artificial intelligence: augmenting intelligence in humans or creating human intelligence in machines?

Sometimes you read articles at the intersection of philosophy and science that contain really exciting visionary thoughts, which are at the same time difficult to fully understand and assess. The technical elaboration of the thoughts grows as you read, and in the end you do not know whether you are capable of thinking independently about the ideas, or whether they concern new scientific findings and trends that you lack the expertise to judge.

Today I dare to recommend the reading of such an article. This post must, of course, be short. But the fundamental ideas in the article are so interesting that I hope some readers of this post will also become readers of the article and make a serious attempt to understand it.

What is the article about? It is about an alternative approach to the highest aims and claims in artificial intelligence. Instead of trying to create machines that can do what humans can do, machines with higher-level capacities such as consciousness and morality, the article focuses on the possibility of creating machines that augment the intelligence of already conscious, morally thinking humans. However, this idea is not entirely new. It has existed for over half a century in, for example, cybernetics. So what is new in the article?

Something I myself was struck by was the compassionate voice in the article, which is otherwise not prominent in the AI literature. The article focuses not on creating super-smart problem solvers, but on strengthening our connections with each other and with the world in which we live. The examples given in the article are about better moral considerations for people far away, better predictions of natural disasters in a complex climate, and about restoring social contacts in people suffering from depression or schizophrenia.

But perhaps the most original idea in the article is the suggestion that the development of these machines for augmenting human selves would draw inspiration from how the brain already maintains contact with its environment. Here one should keep in mind that we are dealing with mathematical models of the brain and with innovative ways of thinking about how the brain interacts with the environment.

It is tempting to see the brain as an isolated organ. But the brain, via the senses and nerve pathways, is in constant dynamic exchange with the body and the world. You would not experience the world if the world did not constantly make new imprints in your brain and you did not constantly act on those imprints. This intense interactivity, on multiple levels and time scales, aims to maintain a stable and comprehensible contact with the surrounding world. The way of thinking in the article reminds me of the concept of a “digital twin,” which I previously blogged about. But here it is the brain that appears to be a neural twin of the world. The brain resembles a continuously updated neural mirror image of the world, which it simultaneously continuously changes.

Here, however, I find it difficult to properly understand and assess the thoughts in the article, especially regarding the mathematical model that is supposed to describe the “adaptive dynamics” of the brain. But as I understand it, the article suggests the possibility of recreating a similar dynamic in intelligent machines, which could enhance our ability to see complex patterns in our environment and be in contact with each other. A little poetically, one could perhaps say that it is about strengthening our neural twinship with the world. A kind of neural-digital twinship with the environment? A digitally augmented neural twinship with the world?

I dare not say more here about the visionary article. Maybe I have already taken too many poetic liberties? I hope that I have at least managed to make you interested in reading the article and assessing it for yourself: Augmenting Human Selves Through Artificial Agents – Lessons From the Brain.

Well, maybe one concluding remark. I mentioned the difficulty of sometimes understanding and assessing visionary ideas that are formulated at the intersection of philosophy and science. Is not that difficulty itself an example of how our contact with the world can sometimes weaken? However, I do not know if I would have been helped by a digital intelligence augmentation that quickly took me through the philosophical difficulties that can arise during reading. Some questions seem essentially to require time: that you stop and think!

Giving yourself time to think is a natural way to deepen your contact with reality, known to philosophers for millennia.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Northoff G, Fraser M, Griffiths J, Pinotsis DA, Panangaden P, Moran R and Friston K (2022) Augmenting Human Selves Through Artificial Agents – Lessons From the Brain. Front. Comput. Neurosci. 16:892354. doi: 10.3389/fncom.2022.892354

This post in Swedish

We recommend readings
