A blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: AI

Conceivability and feasibility of artificial consciousness

Can artificial consciousness be engineered? Is the endeavor even conceivable? In a number of previous posts, I have explored the possibility of developing AI consciousness from different perspectives, including ethical analysis, a comparative analysis of artificial and biological consciousness, and a reflection on the fundamental motivation behind the development of AI consciousness.

Together with Kathinka Evers from CRB, and with other colleagues from the CAVAA project, I recently published a new paper which aims to clarify the first preparatory steps that would need to be taken on the path to AI consciousness: Preliminaries to artificial consciousness: A multidimensional heuristic approach. These first requirements are above all logical and conceptual. We must understand and clarify the concepts that motivate the endeavor. In fact, the growing discussion about AI consciousness often lacks consistency and clarity, which risks creating confusion about what is logically possible, conceptually plausible, and technically feasible.

As a possible remedy to these risks, we propose an examination of the different meanings attributed to the term “consciousness,” as the concept has many meanings and is potentially ambiguous. For instance, we propose a basic distinction between the cognitive and the experiential dimensions of consciousness: awareness can be understood as the ability to process information, store it in memory, and possibly retrieve it if relevant to the execution of specific tasks, while phenomenal consciousness can be understood as subjective experience (“what it is like to be” in a particular state, such as being in pain).

This distinction between cognitive and experiential dimensions is just one illustration of how the multidimensional nature of consciousness is clarified in our model, and how the model can support a more balanced and realistic discussion of the replication of consciousness in AI systems. In our multidisciplinary article, we try to elaborate a model that serves both as a theoretical tool for clarifying key concepts and as an empirical guide for developing testable hypotheses. Developing concepts and models that can be tested empirically is crucial for bridging philosophy and science, eventually making philosophy more informed by empirical data and improving the conceptual architecture of science.

In the article we also illustrate how our multidimensional model of consciousness can be tested empirically, focusing on awareness as a case study. As we see it, awareness has two fundamental capacities: the capacity to select relevant information from the environment, and the capacity to intentionally use this information to achieve specific goals. Basically, in order to be considered aware, a system’s information processing should be more sophisticated than simple input-output processing. For example, the system needs to evaluate the relevance of information on the basis of subjective priors, such as needs and expectations. Furthermore, this processing should be combined with a capacity to model or virtualize the world, in order to predict more distant future states. To truly be markers of awareness, these capacities for modelling and virtualization should be combined with an ability to use them intentionally for goal-directed behavior.
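To make this functional description a little more concrete, here is a deliberately simple sketch in Python (not taken from our article, and not an implementation of awareness) of an agent that filters its input by relevance to its own need, uses a minimal internal model to predict the outcome of acting, and then chooses the option that best serves its goal. All names and numbers are illustrative assumptions.

```python
# Toy sketch of the three "markers" discussed above: relevance selection based
# on a subjective prior (a need), a minimal world model (prediction), and
# goal-directed use of the selected information. Purely illustrative.

from dataclasses import dataclass

@dataclass
class Observation:
    label: str
    value: float  # how much acting on this stimulus would change the agent's state

class ToyAwareAgent:
    def __init__(self, need: str, goal_level: float):
        self.need = need              # subjective prior: what the agent currently needs
        self.goal_level = goal_level  # the internal level it tries to reach
        self.state = 0.0              # current internal level (e.g. "energy")

    def select_relevant(self, observations):
        """Filter the input stream by relevance to the agent's own need,
        rather than reacting to every stimulus (beyond simple input-output)."""
        return [o for o in observations if o.label == self.need]

    def predict(self, action_value: float) -> float:
        """A minimal 'world model': predict the future internal state
        if the agent were to act on a given observation."""
        return self.state + action_value

    def act(self, observations):
        """Use the selected information intentionally: pick the option whose
        predicted outcome brings the agent closest to its goal."""
        relevant = self.select_relevant(observations)
        if not relevant:
            return None
        best = min(relevant, key=lambda o: abs(self.goal_level - self.predict(o.value)))
        self.state = self.predict(best.value)
        return best

agent = ToyAwareAgent(need="food", goal_level=1.0)
stream = [Observation("noise", 0.9), Observation("food", 0.4), Observation("food", 1.5)]
chosen = agent.act(stream)
print(chosen, agent.state)  # picks the food item whose predicted outcome is closest to the goal
```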

There are already some technical applications that exhibit capacities like these. For instance, researchers from the CAVAA project have developed a robot system which is able to adapt and correct its functioning and to learn “on the fly.” These capacities make the system able to dynamically and autonomously adapt its behavior to external circumstances to achieve its goals. This illustrates how awareness as a dimension of consciousness can already be engineered and reproduced.

Is this sufficient to conclude that AI consciousness is a fact? Yes and no. The full spectrum of consciousness has not yet been engineered, and perhaps its complete reproduction is neither conceivable nor feasible. In fact, the phenomenal dimension of consciousness appears to be a stumbling block for “full” AI consciousness, among other things because subjective experience arises from the capacity of biological subjects to evaluate the world, that is, to assign specific values to it on the basis of subjective needs. These needs are not just cognitive needs, as in the case of awareness, but are emotionally charged and have a more comprehensive impact on the subjective state. Nevertheless, we cannot rule out this possibility a priori, and the fundamental question of whether there can be a “ghost in the machine” remains open for further investigation.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

K. Evers, M. Farisco, R. Chatila, B.D. Earp, I.T. Freire, F. Hamker, E. Nemeth, P.F.M.J. Verschure, M. Khamassi, Preliminaries to artificial consciousness: A multidimensional heuristic approach, Physics of Life Reviews, Volume 52, 2025, Pages 180-193, ISSN 1571-0645, https://doi.org/10.1016/j.plrev.2025.01.002

We like challenging questions

How can we set future ethical standards for ICT, Big Data, AI and robotics?

Do you use Google Maps to navigate in a new city? Ask Siri, Alexa or OK Google to play your favourite song? To help you find something on Amazon? To read a text message from a friend while you are driving your car? Perhaps your car is fitted with a semi-autonomous adaptive cruise control system… If any software or machine is going to perform in any autonomous way, it needs to collect data: about you, where you are going, what songs you like, your shopping habits, who your friends are and what you talk about. This raises the question: are we willing to give up part of our privacy and personal liberty to enjoy the benefits technology offers?

It is difficult to predict the consequences of developing and using new technology. Policymakers struggle to assess the ethical, legal and human rights impacts of using different kinds of IT systems, in research, in industry and in our homes. Good policy should be helpful for everyone who holds a stake. We might want it to protect ethical values and human rights, make research and development possible, allow technology transfer from academia to industry, make sure both large and smaller companies can develop their business, and ensure that there is social acceptance for technological development.

The European Union is serious about developing policy on the basis of sound research, rigorous empirical data and wide stakeholder consultation. In recent years, the Horizon 2020 programme has invested €10 million in three projects looking at the ethics and human rights implications of emerging digital technologies: PANELFIT, SHERPA and SIENNA.

The first project, PANELFIT (short for Participatory Approaches to a New Ethical and Legal Framework for ICT), will develop guidelines on the ethical and legal issues of ICT research and innovation. The second, SHERPA (Shaping the ethical dimensions of Smart Information Systems (SIS) – A European Perspective), will develop tools to identify and address the ethical dimensions of smart information systems, that is, the combination of artificial intelligence (AI) and big data analytics. The third, SIENNA (Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact), will develop research ethics protocols, professional ethical codes, and better ethical and legal frameworks for AI and robotics, human enhancement technologies, and human genomics.


All three projects involve experts, publics and stakeholders in co-creating their outputs, each in different ways. They also support the European Union’s vision of Responsible Research and Innovation (RRI). SIENNA, SHERPA and PANELFIT recently published an editorial in the Orbit Journal, inviting stakeholders and publics to engage with the projects and contribute to the work.

Want to read more? Rowena Rodrigues and Anaïs Resseguier have written about some of the issues raised by the use of artificial intelligence on Ethics Dialogues (The underdog in the AI and ethical debate: human autonomy), and you can find out more about the SIENNA project in a previous post on the Ethics Blog (Ethics, human rights and responsible innovation).

Want to know more about the collaboration between SIENNA, SHERPA and PANELFIT? Read the editorial in Orbit (Setting future ethical standards for ICT, Big Data, AI and robotics: The contribution of three European Projects), or watch a video from our joint webinar on May 20, 2019 on YouTube (SIENNA, SHERPA, PANELFIT: Setting future ethical standards for ICT, Big Data, SIS, AI & Robotics).

Want to know how SIENNA views the ethical impacts of AI and robotics? Download infographic (pdf) and read our state-of-the-art review for AI & robotics (deliverable report).


Josepine Fernow

This post in Swedish

We want solid foundations - the Ethics Blog

 

Can a robot learn to speak?

There are self-modifying computer programs that “learn” from success and failure. Chess-playing computers, for example, become better through repeated games against humans.
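As a rough illustration of what “learning from success and failure” can amount to in a program, here is a toy sketch in Python. It is not how chess engines actually work; the strategy names and numbers are invented for illustration. The program simply keeps a score for each option and comes to prefer the ones that have won before.

```python
# A minimal, illustrative sketch of a program that modifies its own behaviour
# on the basis of success and failure: it keeps a score per strategy and
# prefers strategies that have tended to win. Purely hypothetical example.

import random

scores = {"opening_a": 0, "opening_b": 0}  # hypothetical strategies

def choose():
    # Prefer the strategy with the best record so far; explore occasionally.
    if random.random() < 0.1:
        return random.choice(list(scores))
    return max(scores, key=scores.get)

def update(strategy, won):
    # "Learning" here is nothing more than adjusting a number after each game.
    scores[strategy] += 1 if won else -1

for game in range(100):
    s = choose()
    won = random.random() < (0.6 if s == "opening_a" else 0.4)  # pretend game outcome
    update(s, won)

print(scores)  # the strategy that tends to win accumulates the higher score
```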

Could a similar robot also learn to speak? If the robot gets the same input as a child gets when it learns to speak, should it not be possible in principle?

Notice how the question zigzags between child and machine. We say that the robot learns. We say that the child gets input. We speak of the robot as if it were a child. We speak of the child as if it were a robot. Finally, we take this linguistic zigzagging seriously as a fascinating question, perhaps even a great research task.

An AI expert and prospective father who dreamed of this great research task took the following ambitious measures. He equipped his whole house with cameras and microphones, to document all parent-child interactions during the child’s first years. Why? He wanted to know exactly what kind of linguistic input a child gets when it learns to speak. At a later stage, he might be able to give a self-modifying robot the same input and test if it also learns to speak.

How did the project turn out? The personal experience of raising the child led the AI expert to question the whole project of teaching a robot to speak. How could a personal experience lead to the questioning of a seemingly serious scientific project?

Here, I could start babbling about how amiably social children are compared to cold machines. How they learn in close relationships with their parents. How they curiously and joyfully take the initiative, rather than calculatingly await input.

The problem is that such babbling on my part would make it seem as if the AI expert simply was wrong about robots and children. That he did not know the facts, but is now better informed. It is not that simple. For the idea behind the project presupposed unnoticed linguistic zigzagging. Already in asking the question, the boundaries between robots and children are blurred. Already in the question, we have half answered it!

We cannot be content with responding to the question in the headline with a simple, “No, it cannot.” We must reject the question as nonsense. Deceitful zigzagging creates the illusion that we are dealing with a serious question, worthy of scientific study.

This does not exclude, however, that computational linguistics increasingly uses self-modifying programs, and with great success. But that is another question.

Pär Segerdahl

Beard, Alex. How babies learn – and why robots can’t compete. The Guardian, 3 April 2018

This post in Swedish

We like critical thinking: www.ethicsblog.crb.uu.se