A blog from the Centre for Research Ethics & Bioethics (CRB)

Author: Pär Segerdahl

Does the brain make room for free will?

The question of whether we have free will has been debated throughout the ages and everywhere in the world. Can we influence our future or is it predetermined? If everything is predetermined and we lack free will, why should we act responsibly and by what right do we hold each other accountable?

There have been different ideas about what predetermines the future and excludes free will. People have talked about fate and about the gods. Today, we are more inclined to imagine that it is a matter of necessary causal relationships in the universe. It seems that the strict determinism of the material world must preclude the free will that we humans perceive ourselves to have. If we really had free will, we think, then nature would have to give us a space of our own to decide in: a causal gap where nature does not determine everything according to its laws, but allows us to act according to our will. But this seems to contradict our scientific world view.

In an article in the journal Intellectica, Kathinka Evers at CRB examines the plausibility of this choice between two extreme positions: either strict determinism that excludes free will, or free will that excludes determinism.

Kathinka Evers approaches the problem from a neuroscientific perspective. This particular perspective has historically tended to support one of the positions: strict determinism that excludes free will. How can the brain make room for free will, if our decisions are the result of electrochemical processes and of evolutionarily developed programs? Is it not right there, in the brain, that our free will is thwarted by material processes that give us no space to act?

Some authors who have written about free will from a neuroscientific perspective have at times explained away freedom as the brain’s “user illusion”: a necessary illusion, a fictional construct. Some have argued that since social groups function best when we as individuals assume ourselves to be responsible actors, we must, after all, keep this old illusion alive. Free will is a fiction that works and is needed in society!

This attitude is unsound, says Kathinka Evers. We cannot build our societies on assumptions that contradict our best knowledge. It would be absurd to hold people responsible for actions that they in fact have no ability to influence. At the same time, she agrees that the notion of free will is socially important. But if we are to retain the notion, it must be consistent with our knowledge of the brain.

One of the main points of the article is that our knowledge of the brain could actually provide some room for free will. The brain could function beyond the opposition between indeterminism and strict determinism, some neuroscientific theories suggest. This does not mean that there would be uncaused neural events. Rather, a determinism is proposed where the relationship between cause and effect is variable and contingent, not invariable and necessary, as we commonly assume. As far as I understand, the point is that the brain has been shown to function far more independently, actively and flexibly than the image of it as a kind of programmed machine suggests. Different incoming nerve signals can stabilize different neural patterns of connections in the brain, which support the same behavioural ability. And the same incoming nerve signal can stabilize different patterns of connections in the brain that result in the same behavioural ability. Despite great variation in how individuals’ neural patterns of connections are stabilized, the same common abilities are supported. This model of the brain is thus deterministic, while being characterized by variability. It describes a kind of kaleidoscopically variable causality in the brain between incoming signals and resulting behaviours and abilities.

Kathinka Evers thus hypothetically suggests that this variability in the brain, if real, could provide empirical evidence that free will is compatible with determinism.

Read the philosophically exciting article here: Variable determinism in social applications: translating science to society

Although Kathinka Evers suggests that a certain amount of free will could be compatible with what we know about the brain, she emphasizes that neuroscience gives us increasingly detailed knowledge about how we are conditioned by inherited programs, for example during adolescence, as well as by our conditions and experiences in childhood. We should, after all, be cautiously restrained in praising and blaming each other, she concludes, referring to the Stoic Epictetus, one of the philosophers who thought about free will and who rather emphasized freedom from the notion of a free will.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Evers, K. Variable Determinism in Social Applications: Translating Science to Society. In Monier, C. & Khamassi, M. (Eds.), Liberty and Cognition, Intellectica 75, pp. 73-89 (2021).

This post in Swedish

We like challenging questions

Artificial intelligence: augmenting intelligence in humans or creating human intelligence in machines?

Sometimes you read articles at the intersection of philosophy and science that contain genuinely exciting visionary ideas, which are at the same time difficult to fully understand and assess. The technical elaboration of the thoughts grows as you read, and in the end you do not know whether you are capable of thinking independently about the ideas, or whether they concern new scientific findings and trends that you lack the expertise to judge.

Today I dare to recommend reading such an article. This post must, of course, be short. But the fundamental ideas in the article are so interesting that I hope some readers of this post will also become readers of the article and make a serious attempt to understand it.

What is the article about? It is about an alternative approach to the highest aims and claims in artificial intelligence. Instead of trying to create machines that can do what humans can do, machines with higher-level capacities such as consciousness and morality, the article focuses on the possibility of creating machines that augment the intelligence of already conscious, morally thinking humans. However, this idea is not entirely new. It has existed for over half a century in, for example, cybernetics. So what is new in the article?

Something I myself was struck by was the compassionate voice in the article, which is otherwise not prominent in the AI literature. The article focuses not on creating super-smart problem solvers, but on strengthening our connections with each other and with the world in which we live. The examples that are given in the article are about better moral considerations for people far away, better predictions of natural disasters in a complex climate, and about restoring social contacts in people suffering from depression or schizophrenia.

But perhaps the most original idea in the article is the suggestion that the development of these machines for augmenting human selves should draw inspiration from how the brain already maintains contact with its environment. Here one should keep in mind that we are dealing with mathematical models of the brain and with innovative ways of thinking about how the brain interacts with the environment.

It is tempting to see the brain as an isolated organ. But the brain, via the senses and nerve pathways, is in constant dynamic exchange with the body and the world. You would not experience the world if the world did not constantly make new imprints in your brain and you constantly acted on those imprints. This intense interactivity on multiple levels and time scales aims to maintain a stable and comprehensible contact with a surrounding world. The way of thinking in the article reminds me of the concept of a “digital twin,” which I previously blogged about. But here it is the brain that appears to be a neural twin of the world. The brain resembles a continuously updated neural mirror image of the world, which it simultaneously continuously changes.

Here, however, I find it difficult to properly understand and assess the thoughts in the article, especially regarding the mathematical model that is supposed to describe the “adaptive dynamics” of the brain. But as I understand it, the article suggests the possibility of recreating a similar dynamic in intelligent machines, which could enhance our ability to see complex patterns in our environment and be in contact with each other. A little poetically, one could perhaps say that it is about strengthening our neural twinship with the world. A kind of neural-digital twinship with the environment? A digitally augmented neural twinship with the world?

I dare not say more here about the visionary article. Maybe I have already taken too many poetic liberties? I hope that I have at least managed to make you interested enough to read the article and assess it for yourself: Augmenting Human Selves Through Artificial Agents – Lessons From the Brain.

Well, maybe one concluding remark. I mentioned the difficulty of sometimes understanding and assessing visionary ideas that are formulated at the intersection of philosophy and science. Is that difficulty not itself an example of how our contact with the world can sometimes weaken? However, I do not know if I would have been helped by digital intelligence augmentation that quickly took me through the philosophical difficulties that can arise during reading. Some questions seem essentially to require time: that you stop and think!

Giving yourself time to think is a natural way to deepen your contact with reality, as philosophers have known for millennia.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Northoff, G., Fraser, M., Griffiths, J., Pinotsis, D.A., Panangaden, P., Moran, R., Friston, K. Augmenting Human Selves Through Artificial Agents – Lessons From the Brain. Frontiers in Computational Neuroscience 16:892354 (2022). https://doi.org/10.3389/fncom.2022.892354

This post in Swedish

We recommend readings

Dignity in a nursing home when the body fails

The proportion of elderly people in the population is increasing and the tendency is to provide care for the elderly at home as long as possible. Nursing homes are therefore usually inhabited by the very weakest, with several concurrent illnesses and often in need of palliative care.

Living a dignified life in old age naturally becomes more difficult when the body and mind fail and you become increasingly dependent on others. As a nursing home resident, it is easy to feel unworthy and a nuisance. And as a member of staff, in stressful situations you may sometimes thoughtlessly treat the elderly in an undignified manner.

Preserving the dignity of the elderly is an important responsibility of nursing homes. But what does reality look like for the residents? How does the care provider take responsibility for dignified care? And is it reasonable to regard the residents as passive recipients of dignified care? Isn’t such a view in itself undignified?

These questions suggest that we need to look more closely at the reality of the elderly in a nursing home. Bodil Holmberg has done this together with Tove Godskesen, in a study published in the journal BMC Geriatrics. Participatory observations and interviews with residents and staff at a nursing home in Sweden provided rich material to analyse and reflect on.

As expected, it was found that the major threat to the residents’ dignity was precisely the body failing at an ever faster rate. This created fear of becoming increasingly dependent on others, as well as feelings of anguish, loneliness and meaninglessness. However, it was also found that the elderly themselves had a repertoire of ways to deal with their situation. Their self-knowledge enabled them to distinguish between what they could still do and what they had to accept. In addition, aging itself gave rise to new challenges to engage with. One of the residents proudly described a technique for picking up a dropped grabber tool: sliding deeper into the wheelchair to reach the floor. Teaching new staff how to carry out intricate medical procedures also gave rise to pride.

As aging challenges a dignified life, older people thus develop self-knowledge and a whole repertoire of ways to maintain a dignified life. This is an essential observation that the authors make. It shows the importance of not considering nursing home residents as passive recipients of dignified care. If I understand the authors correctly, they suggest that we could instead think in terms of assisting older people when their bodies fail: assisting them in their own attempts to lead dignified lives.

Participatory observations and interviews can help us see reality more clearly. The method can clarify both the expected and the unexpected. Read the pertinent article here: Dignity in bodily care at the end of life in a nursing home: an ethnographic study

The authors also found examples of undignified treatment of the residents. In another article, also from this year, they discuss barriers and facilitators of ethical encounters at the end of life in a nursing home. Reference to the latter article can be found below.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Holmberg, B., Godskesen, T. Dignity in bodily care at the end of life in a nursing home: an ethnographic study. BMC Geriatr 22, 593 (2022). https://doi.org/10.1186/s12877-022-03244-8

Holmberg, B., Godskesen, T. Barriers to and facilitators of ethical encounters at the end of life in a nursing home: an ethnographic study. BMC Palliat Care 21, 134 (2022). https://doi.org/10.1186/s12904-022-01024-0

This post in Swedish

Ethics needs empirical input

Self-confidence in the midst of uncertainty

Feeling confident is natural when we have the knowledge that the task requires. However, self-confidence can be harmful if we think that we know what we do not know. It can be really problematic if we make a habit of pretending that we know. Perhaps because we demand it of ourselves.

There is also another kind of self-confidence, which can seem unnatural. I am thinking of a rarely noticed form of self-confidence, which can awaken just when we are uncertain about how to think and act. But how can self-confidence arise precisely when we are uncertain? It sounds not only unnatural, but also illogical. And did we not just say that self-confidence can be harmful in such situations?

I am thinking of the self-confidence to be just as uncertain as we are, because our uncertainty is a fact that we are certain of: I do not know. It is easy to overlook the fact that even uncertainty is a reality that can be ascertained and investigated in ourselves. Sometimes it is important to take note of our uncertainty. That is sticking to the facts too!

What happens if we do not trust uncertainty when we are uncertain? I think we then tend to seek guidance from others, who seem to know what we do not know. It seems not only natural, but also logical. It is reasonable to do so, of course, if relevant knowledge really exists elsewhere. Asking others, who can be judged to know better, also requires a significant measure of self-confidence and good judgment, in the midst of uncertainty.

But suppose we instinctively seek guidance from others as soon as we are uncertain, because we do not dare to stick to uncertainty in such moments. What happens if we always run away from uncertainty, without stopping and paying attention to it, as if uncertainty were something impermissible? In such a judgmental attitude to uncertainty, knowledge and certainty can become a demand that we feel must be met, towards ourselves and towards each other, if only as a facade. We are then back where we started, in pretended knowledge, which now might become a collective high-risk game and not just an individual bad habit.

Collective knowledge games can of course work, if sufficiently many influential players have the knowledge that the tasks require and knowledge is disseminated in a well-organized manner. Maybe we think that it should be possible to build such a society, a secure knowledge society. The question I wonder about is how sustainable this is in the long run, if the emphasis on certainty is not accompanied by an equal emphasis on uncertainty and questioning. Not for the sake of questioning itself, but because uncertainty is also a fact that needs attention.

In philosophy and ethics, it is often uncertainty that primarily drives the work. This may sound strange, but even uncertainty can be investigated. If we ask a tentative question about something we sincerely wonder about, clearer questions can soon arise that we continue to wonder about, and soon the investigation will begin. The investigation comes to life because we dare to trust ourselves, because we dare to give ourselves time to think, in the midst of uncertainty, which can become clarity if we do not run away from it. In the investigation, we can of course notice that we need more knowledge about specific issues, knowledge that is acquired from others or that we ourselves develop through empirical studies. But it is not only specific knowledge that informs the investigation. The work with the questions that express our uncertainty clarifies ourselves and makes our thinking clearer. Knowledge is given a well-considered context where it is needed, a context that illuminates the knowledge itself.

A “pure” game of knowledge is hardly sustainable in the long run, if its demands are not also open to the other side of knowledge: the uncertainty that can be difficult to separate from ourselves. Such openness requires that we trust not only the rules of the game, but also ourselves. But do we dare to trust ourselves when we are uncertain?

I think we dare, if we see uncertainty as a fact that can be investigated and clarified, instead of judging it as something dangerous that should not be allowed to be a fact. That is when it can become dangerous.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Thinking about thinking

Safeguards when biobank research complies with the General Data Protection Regulation

The General Data Protection Regulation (GDPR) entails a tightening of EU data protection rules. These rules do not only apply to the processing of personal data by companies. They apply in general, including to scientific research, where in many cases they could seriously restrict the research. However, the GDPR allows for several derogations and exemptions for research that would otherwise probably be made impossible or considerably more difficult.

Such derogations are allowed only if appropriate safeguards, which are in accordance with the regulation, are in place. But what safeguards may be required? Article 89 of the regulation mentions technical and organizational measures to ensure compliance with the principle of data minimization: personal data shall be adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed. Otherwise, Article 89 does not specify what safeguards are required, or what it means that the safeguards must be in accordance with the GDPR.

Biobank and genetic research require large amounts of biological samples and health-related data. Personal data may need to be stored for a long time and reused by new research groups for new research purposes. This would not be possible if the regulation did not grant an exemption from the rule that personal data may not be stored longer than necessary and for purposes not specified at data collection. But the question remains, what safeguards may be required to grant exemption?

The issue is raised by Ciara Staunton and three co-authors in an article in Frontiers in Genetics. The article begins by discussing the regulation and how to interpret the requirement that the safeguards should be “in accordance with the GDPR.” Then six possible safeguards are proposed for biobank and genetic research. The proposal is based on a thorough review of a number of documents that regulate health research.

Here, I merely want to recommend reading to anyone working on the issue of appropriate safeguards in biobank and genetic research. Therefore, I mention only briefly that the proposed safeguards concern (1) consent, (2) independent review and oversight, (3) accountable processes, (4) clear and transparent policies and processes, (5) security, and (6) training and education.

If you want to know more about the proposed safeguards, you will find the article here: Appropriate Safeguards and Article 89 of the GDPR: Considerations for Biobank, Databank and Genetic Research.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Staunton, C., Slokenberga, S., Parziale, A., Mascalzoni, D. Appropriate Safeguards and Article 89 of the GDPR: Considerations for Biobank, Databank and Genetic Research. Frontiers in Genetics, 18 February 2022. https://doi.org/10.3389/fgene.2022.719317

This post in Swedish

We recommend readings

Using surplus embryos to treat Parkinson’s disease: perceptions among the Swedish public

The use of human embryos in stem cell research can create moral unease, as embryos are usually destroyed when researchers extract stem cells from them. If one considers the embryo a potential life, this can be perceived as extinguishing an opportunity for human life.

At the same time, stem cell research aims to support human life through the development of treatments for diseases that today lack effective treatment. Moreover, not everyone sees the embryo as a potential life. When stem cell research is regulated, policymakers can therefore benefit from current knowledge about the public’s attitudes to this research.

Åsa Grauman and Jennifer Drevin recently published an interview study of perceptions among the Swedish public about the use of donated embryos for the treatment of Parkinson’s disease. The focus in the interviews on a specific disease is interesting, as it emphasizes the human horizon of stem cell research. This can nuance the issues and invite more diverse reasoning.

The interviewees were generally positive about using donated surplus embryos from IVF treatment to develop stem cell treatment for Parkinson’s disease. This also applied to participants who saw the embryo as a potential life. However, this positive attitude presupposed a number of conditions. The participants emphasized, among other things, that informed consent must be obtained from both partners in the couple, and that the researchers must show respect and sensitivity in their work with embryos. The latter requirement was also made by participants who did not see the embryo as a potential life. They emphasized that people have different values and that researchers and the pharmaceutical industry should take note of this.

Many participants also considered that the use of embryos in research on Parkinson’s disease is justified because the surplus embryos would otherwise be discarded without benefit. Several also expressed a priority order, where surplus embryos should primarily be donated to other couples, secondarily to drug development, and lastly discarded.

If you want to see more results, read the study: Perceptions on using surplus embryos for the treatment of Parkinson’s disease among the Swedish population: a qualitative study.

I would like to mention that the complexity of the questions was also evident in the fact that one and the same person could voice different perceptions in different parts of the interview, switching back and forth between different perspectives. This is not a defect, I would say, but a form of wisdom that is essential when difficult ethical issues are discussed.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Grauman, Å., Drevin, J. Perceptions on using surplus embryos for the treatment of Parkinson’s disease among the Swedish population: a qualitative study. BMC Med Ethics 23, 15 (2022). https://doi.org/10.1186/s12910-022-00759-y

This post in Swedish

Ethics needs empirical input

Can consumers help counteract antimicrobial resistance?

Antimicrobial resistance (AMR) occurs when microorganisms (bacteria, viruses and so on) survive treatment with antimicrobial drugs, such as antibiotics. However, the problem is not only caused by unwise use of such drugs in humans. These drugs are also used on a large scale on animals in food production, which is a significant cause of AMR.

In an article in the journal Frontiers in Sustainable Food Systems, Mirko Ancillotti and three co-authors discuss the possibility that food consumers can contribute to counteracting AMR. This is a specific possibility that they argue is often overlooked when addressing the general public.

A difficulty that arises when AMR needs to be handled by several actors, such as authorities, food producers, consumers and retailers, is that each actor shifts the responsibility onto the others. Consumers can claim that they would buy antibiotic-smart goods if they were offered in stores, while retailers can claim that they would sell such goods if consumers demanded them. Both parties can also blame how, for example, the market or legislation governs them. Another problem is that if one actor, for example the authorities, takes great responsibility, other actors feel less or no responsibility.

The authors of the article propose that one way out of the difficulty could be to influence consumers to take individual responsibility for AMR. Mirko Ancillotti has previously found evidence that people care about antibiotic resistance. Perhaps a combination of social pressure and empowerment could engage consumers to individually act more wisely from an AMR perspective?

The authors make comparisons with the climate movement and suggest digital innovations in stores and online, which can inform, exert pressure and support AMR-smarter food choices. One example could be apps that help consumers see their purchasing pattern, suggest product alternatives, and inform about what is gained from an AMR perspective by choosing the alternative.

Read the article with its constructive proposal to engage consumers against antimicrobial resistance: The Status Quo Problem and the Role of Consumers Against Antimicrobial Resistance.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Ancillotti, M., Nilsson, E., Nordvall, A.-C., Oljans, E. The Status Quo Problem and the Role of Consumers Against Antimicrobial Resistance. Frontiers in Sustainable Food Systems (2022).

This post in Swedish

Approaching future issues

Fact resistance, human nature and contemplation

Sometimes we all resist facts. I saw a cyclist slip on the icy road. When I asked if she was all right, she was on her feet in an instant and denied everything: “I did not fall!” It is human to deny facts. They can hurt and be disturbing.

What are we resisting? The usual answer is that fact-resistant individuals or groups resist facts about the world around us, such as statistics on violent crime, on vaccine side effects, on climate change or on the spread of disease. It then becomes natural to offer resistance to fact resistance by demanding more rigour in the field of knowledge. People should learn to turn more rigorously to the world they live in! The problem is that fact-resistant attitudes do just that. They are almost bewitched by the world and by the causes of what are perceived as outrageous problems in it. And now we too are bewitched by fact resistance and speculate about the causes of this outrageous problem.

Of course, we believe that our opposition is justified. But who does not think so? Legitimate resistance is met by legitimate resistance, and soon the conflict escalates around its double spiral of legitimacy. The possibility of resolving it is blocked by the conflict itself, because all parties are equally legitimate opponents of each other. Everyone hears their own inner voices warning them against acknowledging their mistakes, against acknowledging their uncertainty, against acknowledging their human resistance to reality, as when we fall off the bike and wish it had never happened. The opposing side would immediately seize the opportunity! Soon, our mistake is a scandal on social media. So we do as the cyclist who slipped on the icy road did: we deny everything without thinking: “I was not wrong, I had my own facts!” We ignore the fact that life thereby becomes a lie, because our inner voices warn us against acknowledging our uncertainty. We have the right to be recognized, our voices insist, at least as an alternative to the “established view.”

Conflicts give us no time for reflection. Yet, there is really nothing stopping us from sitting down, in the midst of conflict, and resolving it within ourselves. When we give ourselves time to think for ourselves, we are freer to acknowledge our uncertainty and examine our spirals of thought. Of course, this philosophical self-examination does not resolve the conflict between legitimate opponents which escalates around us as increasingly impenetrable and real. It only resolves the conflict within ourselves. But perhaps our thoughtful philosophical voice still gives a hint of how, just by allowing us to soar in uncertainty, we already see the emptiness of the conflict and are free from it?

If we more often dared to soar in uncertainty, if it became more permissible to say “I do not know,” if we listened more attentively to thoughtful voices instead of silencing them with loud knowledge claims, then perhaps fact resistance would also decrease. Perhaps fact resistance is not least resistance to an inner fact. To a single inner fact. What fact? Our insecurity as human beings, which we do not permit ourselves. But if you allow yourself to slip on the icy road, then you do not have to deny that you did!

A more thoughtful way of being human should be possible. We shape the societies that shape us.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

We care about communication

Illness prevention needs to be adapted to people’s illness perceptions

Several factors increase the risk of cardiovascular disease. Many of these we can influence ourselves through changes in lifestyle or preventive drug treatment. But people’s attitudes to prevention vary with their perceptions of cardiovascular disease. Health communication to support preventive measures therefore needs to take into account people’s illness perceptions.

Åsa Grauman and three colleagues conducted an online survey with 423 randomly selected Swedes aged 40 to 70 years. Participants were asked to answer questions about themselves and about how they view cardiovascular disease. They then participated in an experiment designed to capture how they weighted their preferences regarding health check results.

The results showed a wide variety of perceptions about cardiovascular disease. Women more often cited stress as their most important risk factor, while men more often cited overweight and obesity. An interesting result is that people who stated that they smoked, had hypertension, were overweight or lived sedentary lives tended to downplay that factor as less risky for cardiovascular disease. On the other hand, people who stated that they experienced stress tended to emphasize stress as a high risk factor for cardiovascular disease. People who reported family history as a personal risk factor showed a greater reluctance to participate in health examinations.

Regarding preferences about health check results, it was found that the participants preferred to have their results presented in everyday words and with an overall assessment (rather than, for example, in numbers). They also preferred to get the results in a letter (rather than by logging in to a website) that included lifestyle recommendations, and they preferred a 30-minute consultation (over no consultation, or only 15 minutes).

It is important to reach out with the message that the risk of cardiovascular disease can be affected by lifestyle changes, and that health checks can identify risk factors in people who are otherwise asymptomatic. Especially people with a family history of cardiovascular disease, who in the study were more reluctant to undergo health examinations, may need to be aware of this.

To reach out with the message, it needs to be adapted to how people perceive cardiovascular disease, and we need to become better at supporting correct perceptions, the authors conclude. I have mentioned only a small selection of results from the study. If you want to see the richness of results, read the article: Public perceptions of myocardial infarction: Do illness perceptions predict preferences for health check results.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Åsa Grauman, Jennifer Viberg Johansson, Marie Falahee, Jorien Veldwijk. Public perceptions of myocardial infarction: Do illness perceptions predict preferences for health check results. Preventive Medicine Reports 26 (2022). https://doi.org/10.1016/j.pmedr.2021.101683

This post in Swedish

Exploring preferences

Images of good and evil artificial intelligence

As Michele Farisco has pointed out on this blog, artificial intelligence (AI) often serves as a projection screen for our self-images as human beings. Sometimes also as a projection screen for our images of good and evil, as you will soon see.

In AI and robotics, autonomy is often sought in the sense that the artificial intelligence should be able to perform its tasks optimally without human guidance. Like a self-driving car, which safely takes you to your destination without you having to steer, accelerate or brake. Another form of autonomy that is often sought is that artificial intelligence should be self-learning and thus be able to improve itself and become more powerful without human guidance.

Philosophers have discussed whether AI can be autonomous even in another sense, which is associated with human reason. According to this picture, we, as autonomous human beings, can examine our final goals in life and revise them if we judge that new knowledge about the world warrants it. Some philosophers believe that AI cannot do this, because the final goal, or utility function, would make it irrational to change the goal. The goal is fixed. The idea of such stubbornly goal-oriented AI can evoke worrying images of evil AI running amok among us. But the idea can also evoke reassuring images of good AI that reliably supports us.

Worried philosophers have imagined an AI that has the ultimate goal of making ordinary paper clips. This AI is assumed to be self-improving. It therefore becomes increasingly intelligent and powerful in pursuit of its goal of manufacturing paper clips. When the raw materials run out, it learns new ways to turn the earth’s resources into paper clips, and when humans try to prevent it from destroying the planet, it learns to destroy humanity. When the planet is used up, it travels into space and turns the universe into paper clips.

Philosophers who issue warnings about “evil” super-intelligent AI also express hopes for “good” super-intelligent AI. Suppose we could give self-improving AI the goal of serving humanity. Without getting tired, it would develop increasingly intelligent and powerful ways of serving us, until the end of time. Unlike the god of religion, this artificial superintelligence would hear our prayers and take ever-smarter action to help us. It would probably sooner or later learn to prevent earthquakes and our climate problems would soon be gone. No theodicy in the world could undermine our faith in this artificial god, whose power to protect us from evil is ever-increasing. Of course, it is unclear how the goal of serving humanity can be defined. But given the opportunity to finally secure the future of humanity, some hopeful philosophers believe that the development of human-friendly self-improving AI should be one of the most essential tasks of our time.

I read all this in a well-written article by Wolfhart Totschnig, who questions the rigid goal orientation associated with autonomous AI in the scenarios above. His most important point is that rigidly goal-oriented AI, which runs amok in the universe or saves humanity from every predicament, is not even conceivable. Outside its domain, the goal loses its meaning. The goal of a self-driving car to safely take the user to the destination has no meaning outside the domain of road traffic. Domain-specific AI can therefore not be generalized to the world as a whole, because the utility function loses its meaning outside the domain, long before the universe is turned into paper clips or the future of humanity is secured by an artificially good god.

This is, of course, an important philosophical point about goals and meaning, about specific domains and the world as a whole. The critique helps us to assess the risks and opportunities of future AI more realistically, without being bewitched by our images. At the same time, I get the impression that Totschnig continues to use AI as a projection screen for human self-images. He argues that future AI may well revise its ultimate goals as it develops a general understanding of the world. The weakness of the above scenarios is that they project today’s domain-specific AI, not the general intelligence of humans. We then fail to see the possibility of a genuinely human-like AI that self-critically reconsiders its final goals when new knowledge about the world makes it necessary. Truly human-equivalent AI would have full autonomy.

Projecting human self-images on future AI is not just a tendency, as far as I can judge, but a norm that governs the discussion. According to this norm, the wrong image is projected in the scenarios above. An image of today’s machines, not of our general human intelligence. Projecting the right self-image on future AI thus appears as an overall goal. Is the goal meaningful or should it be reconsidered self-critically?

These are difficult issues and my impression of the philosophical discussion may be wrong. If you want to judge for yourself, read the article: Fully autonomous AI.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Totschnig, W. Fully Autonomous AI. Sci Eng Ethics 26, 2473–2485 (2020). https://doi.org/10.1007/s11948-020-00243-z

This post in Swedish

We like critical thinking
