A blog from the Centre for Research Ethics & Bioethics (CRB)

Category: In the media

Herb Terrace on the chimpanzee Nim – do you see the contradiction?

Have you seen small children make repeated attempts to squeeze a square object through a round hole (a plastic toy for the little ones)? You get puzzled: Do they not see that it is impossible? The object and the hole have different shapes!

Sometimes adults are just as puzzling. Our intellect does not always fit reality. Yet, we force our thoughts onto reality, even when they have different shapes. Maybe we are extra stubborn precisely when it is not possible. This post is about such a case.

Herb Terrace is known as the psychologist who proved that apes cannot learn language. He himself tried to teach sign language to the chimpanzee Nim, but failed according to his own judgement. When Terrace took a closer look at the videotapes, where Nim interacted with his human sign-language teachers, he saw how Nim merely imitated the teachers’ signs, to get his reward.

I recently read a blog post by Terrace in which he repeats the claim that his research demonstrates that apes cannot learn language. The strange thing is that he also severely criticizes his own research. He writes that he used the wrong method with Nim, namely, that of giving him rewards when the teacher judged that he made the right signs. The reasoning becomes even more puzzling when Terrace writes that not even a human child could learn language with such a method.

To me, this is as puzzling as a child’s insistence on squeezing a square object through a round hole. If Terrace used the wrong method, which would not work even on a human child, then how can he conclude that Project Nim demonstrates that apes cannot learn language? Nevertheless, he insists on reasoning that way, without feeling that he contradicts himself. Nor does anyone who reads him seem to experience any contradiction. Why?

Perhaps because most of us think that humans cannot teach animals anything at all, unless we train them with rewards. Therefore, since Nim did not learn language with this training method, apes cannot learn language. Better methods do not work on animals, we think. If Terrace failed, then everyone must fail, we think.

However, one researcher actually did try a better method in ape language research. She used an approach to young apes that works with human children. She stopped training the apes via a system of rewards. She lived with the apes, as a parent with her children. And it succeeded!

Terrace almost never mentions the name of the successful ape language researcher. After all, she used a method that is impossible with animals: she did not train them. Therefore, she cannot have succeeded, we think.

I can tell you that the name of the successful researcher is Sue Savage-Rumbaugh. To see a round reality beyond square thinking, we need to rethink our thought patterns. If you want to read a book that attempts such rethinking about apes, humans and language, I recommend a philosophical self-critique that I wrote with Savage-Rumbaugh and her colleague William Fields.

To philosophize is to learn to stop imposing our insane thoughts on reality. Then we finally see reality as it is.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Segerdahl, P., Fields, W. & Savage-Rumbaugh, S. 2005. Kanzi’s Primal Language. The Cultural Initiation of Primates into Language. Palgrave Macmillan.

Understanding enculturated apes

This post in Swedish

Communicating thought-provoking research in our common language

After having been the editor of the Ethics Blog for eight years, I would like to describe the research communication that usually occurs on this blog.

The Ethics Blog wants to avoid the popular scientific style that sometimes occurs in the media, which reports research results in the form, “We have traditionally believed that…, but a recent scientific study shows that…” This is partly because the Ethics Blog is run by a research center in ethics, CRB. Although ethics may involve empirical studies (for example, interviews and surveys), it is not least a matter of thinking. If you, as an ethicist, want to develop new recommendations on informed consent, you must think clearly and thoroughly. However, no matter how rigorously you think, you can never say, “We have traditionally believed that it is ethically important to inform patients about…, but recent philosophical thoughts show that we should avoid doing that.”

Thinking does not provide the authority that empirical research gives. As an ethicist or a philosopher, I cannot report my conclusions as if they were research results. Nor can I invoke “recent thoughts” as evidence. Thoughts give no evidence. Ethicists therefore present their entire thinking on different issues to the critical gaze of readers. They present their conclusions as open suggestions to the reader: “Here is how I honestly think about this issue, can you see it that way too?”

The Ethics Blog therefore avoids merely disseminating research results. Of course, it informs about new findings, but it emphasizes their thought-provoking aspects. It chooses to reflect on what is worth thinking about in the research. This allows research communication to work more on equal terms with the reader, since the author and the reader meet in thinking about aspects that make both wonder. Moreover, since each post tries to stand on its own, without invoking intellectual authority (“the ethicists’ most recent thoughts show that…”), the reader can easily question the blogger’s attempts to think independently.

In short: By communicating research in a philosophical spirit, science can meet people on more equal terms than when they are informed about “recent scientific findings.” By focusing on the thought-provoking aspects of the research, research communication can avoid a patronizing attitude to the reader. At least that is the ambition of the Ethics Blog.

Another aspect of the research communication at CRB, also beyond the Ethics Blog, is that we want to use our ordinary language as far as possible. Achieving a simple style of writing, however, is not easy! Why are we making this effort, which is almost doomed to fail when it comes to communicating academic research? Why do Anna Holm, Josepine Fernow and I try to communicate research without using strange words?

Of course, we have reflected on our use of language. Not only do we want to reach many different groups: the public, patients and their relatives, healthcare staff, policy makers, researchers, geneticists and more. We also want these groups to understand each other a little better. Our common language accommodates more human agreement than we usually believe.

Moreover, ethics research often highlights the difficulties that different groups have in understanding each other. It can be about patients’ difficulties in understanding genetic risk information, or about geneticists’ difficulties in understanding how patients think about genetic risk. It may be about cancer patients’ difficulties in understanding what it means to participate in clinical trials, or about cancer researchers’ difficulties in understanding how patients think.

If ethics identifies our human difficulties in understanding each other as important ethical problems, then research communication will have a particular responsibility for clarifying things. Otherwise, research communication risks creating more communication difficulties, in addition to those identified by ethics! Ethics itself would become a communication problem. We therefore want to write as clearly and simply as we can, to reach the groups that, according to the ethicists, often fail to reach each other.

We hope that our communication on thought-provoking aspects of ethics research stimulates readers to think for themselves about ethical issues. Everyone can wonder. Non-understanding is actually a source of wisdom, if we dare to admit it.

Pär Segerdahl

This post in Swedish

We care about communication - the Ethics Blog

Why should we care about the environment and climate change?

To most of us, it is self-evident that we, as human beings and societies, should care about the environment and climate change. Greta Thunberg has, in a remarkable way, spurred political interest and engagement in climate change. Her efforts have affected our thoughts and emotions concerning environmental policy. However, when we dig deeper into the philosophical debate, there are different ideas on why we should care about the environment. That is, even though we agree on the need to care, there are various arguments as to why and how we should do that.

First, some scholars argue that we should care about nature because we need it and what we get from it. Nature is crucial to us, for example, because it provides us with water and food as well as air to breathe. Without nature and a good climate, we simply cannot live on planet Earth. Unless we make a substantial effort, our lifestyle will lead to flooding, unmanageable migration and many other enormous challenges. Furthermore, it will affect poorer people and poorer regions the most, making it a crucial issue of justice.

Second, some philosophers argue that it is wrong to base our concern for nature and the environment on the needs of, and effects on, human beings. The anthropocentric assumptions are wrong, they argue. Even without human beings, nature has a value. Its value is intrinsic and not merely instrumental. Proponents of this view often claim that animals have value, and possibly even rights, that should be protected. They disagree on whether it is individual animals, species or even ecosystems that should be protected.

Environmental philosophy consists of many different theoretical schools, and the notions they defend underlie societal debate, explicitly or merely implicitly. Some notions are based on consequentialist ethics and others on deontological ethics. In addition to these two schools of thought, virtue ethics has become influential in the philosophical debate.

Environmental Virtue Ethics holds that it is inadequate to focus on consequences, duties and rights. Furthermore, it is inadequate to focus on rules and legislation. Our respect and reverence for nature are based on the virtues we ought to develop as human beings. In addition, society should encourage such virtues. Virtue ethics focuses on the character traits, the dispositions to act, and the attitudes and emotions that are relevant to a certain area, in this case the environment. It is a richer, more complex theory than the other two mentioned. Even though virtues were first discussed during Antiquity, and the concept might seem obsolete, they are highly relevant in our time. Through reflection, experience and role models, we can all develop virtues crucial to environmental protection and sustainability. The idea is not only that society needs virtuous people, but that human beings blossom as individuals when they develop these virtues. Virtue ethicists also argue that it is wrong to see nature as a commodity belonging to us. Instead, it is argued, we are part of nature and have a special relationship with it. This relationship should be the focus of the debate.

Whereas Environmental Virtue Ethics focuses on ethical virtues, that is, how we should relate to nature through our development into virtuous individuals, a related school of thought focuses on the aesthetic value of nature. It points out that nature has not only an ethical value, but also an aesthetic value in virtue of its beauty. We should spend time in nature in order to fully appreciate its aesthetic value.

All of the mentioned schools of thought agree that we should care about the environment and climate. They also hold that sustainability is an important national and global goal. Interestingly, what is beneficial from a sustainability perspective is not necessarily beneficial with respect to climate change. For instance, nuclear energy could be considered good for the climate due to its low emissions, but it is doubtful that it is good for sustainability considering the problems of nuclear waste.

Finally, it is important to include the discussion of moral responsibility. If we agree that it is crucial to save the environment, then the question arises who should take responsibility for realizing this goal. One could argue that individuals bear a personal responsibility to, for example, reduce consumption and use sustainable transportation. However, one could also argue that the greatest share of responsibility should be taken by political institutions, primarily states. In addition, a great share of responsibility might be ascribed to private actors and industries.

We could also ask whether, and to what extent, responsibility is about blame for past events, for example, the western world having caused excessive carbon emissions in the past. Alternatively, we could focus on what needs to be done now, regardless of causation and blame. According to this line of thinking, the most important question to ask is who has the resources and capacity to make the necessary changes. The questions of responsibility could be conceptualized as questions of individual versus collective responsibility and backward-looking versus forward-looking responsibility.

As we can see, there are many philosophically interesting aspects and discussions concerning the question why we should care about the environment. Hopefully, these discussions can contribute to making the challenges more comprehensible and manageable. Ideally, they can assist the tremendous work done by Greta Thunberg and others like her, so that it leads to agreement on what needs to be done by individuals, nations and the world.

Jessica Nihlén Fahlquist

Nihlén Fahlquist, J. 2018. Moral Responsibility and Risk in Modern Society – Examples from emerging technologies, public health and environment. London: Routledge, Earthscan Risk in Society series.

Van de Poel, I., Nihlén Fahlquist, J., Doorn, N., Zwart, S., Royakkers, L. & di Lima, T. 2011. The problem of many hands: climate change as an example. Science and Engineering Ethics.

Nihlén Fahlquist, J. 2009. Moral responsibility for environmental problems – individual or institutional? Journal of Agricultural and Environmental Ethics, 22(2), pp. 109-124.

This post in Swedish

Approaching future issues - the Ethics Blog

Learning from the difficulties

In popular scientific literature, research can sometimes appear deceptively simple: “In the past, people believed that … But when researchers looked more closely, they found that …” It may seem as if researchers need not do much more than visit archives or laboratories. There, they take a closer look at things and discover amazing results.

There is nothing wrong with this popular scientific prose. It is exciting to read about new research results. However, the prose often hides the difficulties of the research work, the orientation towards questions and problems. As I said, there is nothing wrong with this. Readers of popular science rarely need to know how physicists or sociologists struggle daily to formulate their questions and delve into the problems. Readers are more interested in new findings about our fascinating world.

However, there are academic fields where the questions affect us all more directly, and where the questions are at the center of the research process from beginning to end. Two examples are philosophy and ethics. Here, identifying the difficult questions can be the important thing. Today, for example, genetics is developing rapidly. That means it affects more people; it affects us all. Genetic tests can now be purchased on the internet and more and more patients may be genetically tested in healthcare to individualize their treatment.

Identifying ethical issues around this development, delving into the problems, becoming aware of the difficulties, can be the main element of ethics research. Such difficulty-oriented work can make us better prepared, so that we can act more wisely.

In addition, ethical problems often arise in the meeting between living human beings and new technological opportunities. Identifying these human issues may require that the language that philosophy and ethics use is less specialized, that it speaks to all of us, whether we are experts or not. Therefore, many of the posts on the Ethics Blog attempt to speak directly to the human being in all of us.

It may seem strange that research that delves into questions can help us act wisely. Do we not rather become paralyzed by all the questions and problems? Do we not need clear ethical guidelines in order to act wisely?

Well, sometimes we need guidelines. But they must not be exaggerated. Think about how much better you function when you do something for the second time (when you become a parent for the second time, for example). Why do we function better the second time? Is it because the second time we are following clear guidelines?

We grow through being challenged by difficulties. Philosophy and ethics delve into the difficulties for this very reason. To help us to grow, mature, become wiser. Individually and together, as a society. I do not know anyone who matured as a human being through reading guidelines.

Pär Segerdahl

This post in Swedish

We like challenging questions - the ethics blog

The human being is not only a category

We often use words as categories, as names of classes of things or individuals in the world. Humans and animals. Englishmen and Germans. Capitalists and Communists. Christians and Muslims. I want to highlight a difficulty we may encounter if we try to handle the problem of human violence from such an outward-looking perspective.

Something that easily happens is that we start looking for the ideal subcategory of humans, whom we cannot accuse of any violence. If we only found a truly peaceful group of humans, somewhere in the world, we could generalize it to all humanity. We could create an evidence-based humanity, finally living peacefully. We could wipe out the problem of violence! However, where do we find the nonviolent humans who, on scientific grounds, could guide the rest of humanity to peace?

One problem here is that if we find some peaceful humans, perhaps on the British Isles, or in the Himalayas, then we must convert all other humans on the surface of this planet to the peaceful category. That does not sound promising! On the contrary, it sounds like a recipe for war.

Already the search for evidence seems violent, since it will repeat not just one, but all accusations of violence that were ever directed at groups of people. After all, there are:

  • violent Christians
  • violent Muslims
  • violent Capitalists
  • violent Anti-Capitalists
  • violent Germans
  • violent Englishmen

Moreover, there are violent trombonists. We also know that there are violent democrats, as well as violent anti-democrats. Lately we have been surprised to learn that even Buddhists can persecute humans and burn down temples and mosques. How about that! Even Buddhists are violent. The project to create an evidence-based, peaceful humanity seems hopeless.

However, let us turn this around. After all, we are all humans:

  • Christians are humans
  • Muslims are humans
  • Capitalists are humans
  • Anti-Capitalists are humans
  • Germans are humans
  • Englishmen are humans

Trombonists are humans, as are democrats, anti-democrats and Buddhists. We are all humans. Does it not sound hopeful when we acknowledge the fact that we are all humans? It certainly does sound full of promise. But why?

Is it perhaps because we stop opposing humans and instead speak more grandiosely about the human as one big universal category? I do not think so. After all, the problem was, from the beginning, that there are:

  • violent humans

It is not difficult to distrust the human as a universal category. Would it not be best if the human simply disappeared from this overburdened planet? Is it not horrible that we are all these humans, intruding on nature? In fact, there are those who propose that we should transgress the human category and become post-human. As though the solution were an unborn category.

No, the hope we felt emerged, I think, precisely because we stopped talking about human beings as a category. Notice the expression “we humans.” What does it mean to talk about us humans? I think it means that we no longer speak of the human as a category in the world, not even grandiosely as a universal category. Rather, the human is, more intimately, “all of us,” “you and me,” “each one of us.”

When we talk about the human from within, we do not accuse the human, as a worldly category, of being violent. Rather, we see the violence in ourselves. I see it in me; you see it in you. We see the violence in each one of us; we see it in all of us. The responsibility thereby naturally becomes our own human responsibility. That, I believe, is where the hope we felt emanated from. It came from the internal perspective on the human. This nearness to ourselves made acknowledging that we are all humans sound full of promise.

I stop here. I just wanted to remind you of the fact that the human being is not only a worldly category with which to calculate and experiment. The category of the human can make us blind to ourselves as intimately alive, and thereby to the violence in us and to our responsibility for it.

I just hope this reminder did not trigger further violence: “What!? Are you suggesting that the problem lies in me? How impudent! Please, don’t include me in your pathetic we.”

Pär Segerdahl

This post in Swedish

We challenge habits of thought : the Ethics Blog

Sharing a blog post on consciousness

Michele Farisco at CRB has written an interesting post for the BMC blog on medicine. He says that “whereas ethical analyses of disorders of consciousness traditionally focus on residual awareness, there may be a case to be made for the ethical relevance of the retained unawareness.”

Interested to read more? Here is a link to the post: On consciousness and the unconscious.

Pär Segerdahl

We recommend readings - the Ethics Blog

Driverless car ethics

Self-driving robot cars are controlled by computer programs containing huge numbers of traffic rules. But in traffic, not everything happens smoothly according to the rules. Suddenly a child runs out on the road. Two people try to help a cyclist who has collapsed on the road. A motorist tries to make a U-turn on a road that is too narrow and gets stuck, blocking the traffic.

Assuming that the robots’ programs are able to categorize traffic situations through image information from the cars’ cameras, the programs must select the appropriate driving behavior for the robot cars. Should the cars override important traffic rules by, for example, steering onto the sidewalk?

It is more complicated than that. Suppose that an adult is standing on the sidewalk. Should the adult’s life be compromised to save the child? Or to save the cyclist and the two helpful persons?

The designers of self-driving cars have a difficult task. They must program the cars’ choice of driving behavior in ethically complex situations that we call unexpected, but which the engineers have to anticipate far in advance. They must determine, already at the factory, how the car model will behave in future “unexpected” traffic situations. Maybe ten years later. (I assume the software is not updated, but even updated software anticipates what we normally see as unexpected events.)
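
To make the point concrete, here is a minimal sketch, purely for illustration, of what such factory-set decision logic might look like; the situation categories, the rule table and the function name are invented here, they are not taken from any actual manufacturer’s software:

    # Hypothetical sketch: driving behavior fixed in advance for categorized situations.
    # The categories and rules are invented for illustration only.
    RULES = {
        "child_on_road": "emergency_brake",
        "cyclist_blocking_lane": "steer_onto_sidewalk_if_clear",
        "stuck_motorist_ahead": "stop_and_wait",
    }

    def select_behavior(situation: str, sidewalk_clear: bool) -> str:
        """Return the behavior decided at the factory, possibly years in advance."""
        behavior = RULES.get(situation, "follow_traffic_rules")
        # Even the "ethical" exception is just another predetermined branch:
        if behavior == "steer_onto_sidewalk_if_clear" and not sidewalk_clear:
            behavior = "emergency_brake"
        return behavior

    print(select_behavior("child_on_road", sidewalk_clear=False))  # emergency_brake

Whatever the car does in the “unexpected” situation is, in this picture, already sitting in such a table, decided long before the situation occurs.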

On a societal level, attempts are now being made to agree on ethical guidelines for how future robot cars should behave in tragic traffic situations where it may not be possible to completely avoid injuries or fatalities. A commission initiated by the German Ministry for Transportation, for example, suggests that passengers of robot cars should never be sacrificed to save a larger number of lives in the traffic situation.

Who, by the way, would buy a robot car that is programmed to sacrifice one’s life? Who would choose such a driverless taxi? Yet, as drivers we may be prepared to sacrifice ourselves in unexpected traffic situations. Some researchers decided to investigate the matter. You can read about their study in ScienceDaily, or read the research article in Frontiers in Behavioral Neuroscience.

The researchers used Virtual Reality (VR) technology to expose subjects to ethically difficult traffic situations. Thereafter, they studied the subjects’ choice of traffic behavior. The researchers found that the subjects were surprisingly willing to sacrifice themselves to save others. But they also took into consideration the age of potential victims and were prepared to steer onto the sidewalk to minimize the number of traffic victims. This is contrary to norms that we hold important in society, such as the idea that age discrimination should not occur and that the lives of innocent people should be protected.

In short, humans are inclined to drive their cars politically incorrectly!

Why was the study done? As far as I understand, because the current discussion about ethical guidelines does not take into account empirical data on how living drivers are inclined to drive their cars in ethically difficult traffic situations. The robot cars will make ethical decisions that can make the owners of the cars dissatisfied with their cars; morally dissatisfied!

The researchers do not advocate that driverless cars should respond to ethically complex traffic situations as living people do. However, the discussion about driverless car ethics should take into account data on how living people are inclined to drive their cars in traffic situations where it may not be possible to avoid accidents.

Let me complement the empirical study with some philosophical reflections. What strikes me when I read about driverless car ethics is that “the unexpected” disappears as a living reality. A living driver who tries to handle a sudden traffic situation manages what very obviously is happening right now. The driverless car, on the other hand, takes decisions that tick automatically, as predetermined as any other decision, like stopping at a red light. Driverless car ethics is just additional software that the robot car is equipped with at the factory (or when updating the software).

What are the consequences?

A living driver who suddenly ends up in a difficult traffic situation is confronted – as I said – with what is happening right now. The driver may have to bear responsibility for his actions in this intense moment during the rest of his life. Even if the driver rationally sacrifices one life to save ten, the driver will bear the burden of this one death; dream about it, think about it. And if the driver makes a stupid decision that takes more lives than it saves, it may still be possible to reconcile with it, because the situation was so unexpected.

This does not apply, however, to the robot car that was programmed at the factory according to guidelines from the National Road Administration. We might want to say that the robot car was preprogrammed to sacrifice our sister’s life, when she stood innocently on the sidewalk. Had the car been driven by a living person, we would have been angry with the driver. But after some time, we might be able to start reconciling with the driver’s behavior. Because it was such an unexpected situation. And the driver is suffering from his actions.

However, if it had been a driverless car that worked perfectly according to the manufacturer’s programs and the authorities’ recommendations, then we might see it as a scandal that the car was preprogrammed to steer onto the sidewalk, where our sister stood.

One argument for driverless cars is that, by minimizing the human factor, they can reduce the number of traffic accidents. Perhaps they can. But maybe we are less accepting of how they are programmed to save lives in ethically difficult situations. Not only are they preprogrammed so that “the unexpected” disappears as a reality. They do not bear the responsibility that living people are forced to bear, even for their rational decisions.

Well, we will probably find ways to implement and accept the use of driverless cars. But another question still concerns me. If the present moment disappears as a living reality in the ethics software of driverless cars, has it not already disappeared in the ethics that prescribes right and wrong for us living people?

Pär Segerdahl

This post in Swedish

We like real-life ethics : www.ethicsblog.crb.uu.se

Can a robot learn to speak?

There are self-modifying computer programs that “learn” from success and failure. Chess-playing computers, for example, become better through repeated games against humans.

Could a similar robot also learn to speak? If the robot gets the same input as a child gets when it learns to speak, should it not be possible in principle?

Notice how the question zigzags between child and machine. We say that the robot learns. We say that the child gets input. We speak of the robot as if it were a child. We speak of the child as if it were a robot. Finally, we take this linguistic zigzagging seriously as a fascinating question, perhaps even a great research task.

An AI expert and prospective father who dreamed of this great research task took the following ambitious measures. He equipped his whole house with cameras and microphones, to document all parent-child interactions during the child’s first years. Why? He wanted to know exactly what kind of linguistic input a child gets when it learns to speak. At a later stage, he might be able to give a self-modifying robot the same input and test if it also learns to speak.

How did the project turn out? The personal experience of raising the child led the AI expert to question the whole project of teaching a robot to speak. How could a personal experience lead to the questioning of a seemingly serious scientific project?

Here, I could start babbling about how amiably social children are compared to cold machines. How they learn in close relationships with their parents. How they curiously and joyfully take the initiative, rather than calculatingly await input.

The problem is that such babbling on my part would make it seem as if the AI expert was simply wrong about robots and children. That he did not know the facts, but is now better informed. It is not that simple. For the idea behind the project presupposed unnoticed linguistic zigzagging. Already in asking the question, the boundaries between robots and children are blurred. Already in the question, we have half answered it!

We cannot be content with responding to the question in the headline with a simple, “No, it cannot.” We must reject the question as nonsense. Deceitful zigzagging creates the illusion that we are dealing with a serious question, worthy of scientific study.

This does not exclude, however, that computational linguistics increasingly uses self-modifying programs, and with great success. But that is another question.

Pär Segerdahl

Beard, Alex. How babies learn – and why robots can’t compete. The Guardian, 3 April 2018

This post in Swedish

We like critical thinking : www.ethicsblog.crb.uu.se

Read this interview with Kathinka Evers!

Through philosophical analysis and development of concepts, Uppsala University contributes significantly to the European Flagship, the Human Brain Project. New ways of thinking about the brain and about consciousness are suggested, which take us beyond oppositions between consciousness and unconsciousness, and between consciousness and matter.

Do you want to know more? Read the fascinating interview with Kathinka Evers: A continuum of consciousness: The Intrinsic Consciousness Theory

Kathinka Evers at CRB in Uppsala leads the work on neuroethics and neurophilosophy in the Human Brain Project.

Pär Segerdahl

We recommend readings - the Ethics Blog

Prepare for robot nonsense

As computers and robots take over tasks that so far only humans could carry out, such as driving a car, we are likely to experience increasingly insidious uses of language by the technology’s intellectual clergy.

The idea of intelligent computers and conscious robots is for some reason terribly fascinating. We see ourselves as intelligent and conscious beings. Imagine if robots, too, could be intelligent and aware! In fact, we have already seen them (almost): on the movie screen. Soon we may see them in reality too!

Imagine that artifacts that we always considered dead and mechanical one day acquired the enigmatic character of life! Imagine that we created intelligent life! Do we have enough exclamation marks for such a miracle?

The idea of intelligent life in supercomputers often comes with the idea of a test that can determine whether a supercomputer is intelligent. It is as if I wanted to make the idea of perpetual motion machines credible by talking about a perpetuum mobile test, invented by a super-smart mathematician in the 17th century. The question whether something is a perpetuum mobile is determinable and therefore worth considering! Soon they may function as engines in our intelligent, robot-driven cars!

There is a famous idea of an intelligence test for computers, invented by the British mathematician Alan Turing. The test can allegedly determine whether a machine “has what we have”: intelligence. How does the test work? Roughly, it is about whether you can distinguish a computer from a human – or cannot do it.

But distinguishing a computer from a human being surely is no great matter! Oh, I forgot to mention that there is a smoke screen in the test. You neither see, hear, feel, taste nor smell anything! In principle, you send written questions into the thick smoke. Out of the smoke come written responses. But who wrote/generated the answers? Human or computer? If you cannot distinguish the computer-generated answers from human answers – well, then you had better take cover, because an intelligent supercomputer hides behind the smoke screen!
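
For readers who want the smoke screen spelled out, here is a minimal sketch of the text-only exchange; the canned respondents and the judging step are my own illustration of the setup described above, not Turing’s own formulation:

    # Hypothetical sketch of the text-only exchange behind the "smoke screen".
    # Only written strings pass through; the respondents are invented for illustration.
    import random

    def human_respondent(question: str) -> str:
        return "Hmm, let me think about that."

    def machine_respondent(question: str) -> str:
        return "Hmm, let me think about that."  # canned, yet indistinguishable on paper

    def imitation_game(questions, judge):
        label, respond = random.choice(
            [("human", human_respondent), ("computer", machine_respondent)]
        )
        transcript = [(question, respond(question)) for question in questions]  # no faces, no voices
        return judge(transcript) == label  # a wrong guess counts in the machine's favor

    # A judge who always guesses "human" will be right only about half the time:
    print(imitation_game(["What is it like to smell coffee?"], judge=lambda t: "human"))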

The test is thus adapted to the computer, which cannot have intelligent facial expressions or look perplexed, and cannot groan, “Oh no, what a stupid question!” The test is adapted to an engineer’s concept of intelligent handling of written symbol sequences. The fact that the test subject is a poor human being who cannot always say who/what “generated” the written answers hides this conceptual fact.

These insidious linguistic shifts are unusually obvious in an article I encountered through a rather smart search engine. The article asks if machines can be aware. And it responds: Yes, and a new Turing test can prove it.

The article begins with celebrating our amazing consciousness as “the ineffable and enigmatic inner life of the mind.” Consciousness is then exemplified by the whirl of thought and sensation that blossoms within us when we finally meet a loved one again, hear an exquisite violin solo, or relish an incredible meal.

After this ecstatic celebration of consciousness, the concept begins to be adapted to computer engineering so that finally it is merely a concept of information processing. The authors “show” that consciousness does not require interaction with the environment. Neither does it require memories. Consciousness does not require any emotions like anger, fear or joy. It does not require attention, self-reflection, language or ability to act in the world.

What then remains of consciousness, which the authors initially made seem so amazing to possess? The answer in the article is that consciousness has to do with “the amount of integrated information that an organism, or a machine, can generate.”

The concept of consciousness is gradually adapted to what was to be proven. Finally, it becomes a feature that unsurprisingly can characterize a computer. Once we have swallowed the adaptation, the idea is that we, at the grand finale of the article, should once again marvel, and be amazed that a machine can have this “mysterious inner life” that we have, consciousness: “Oh, what an exquisite violin solo, not to mention the snails, how lovely to meet again like this!”

The new Turing test that the authors imagine is, as far as I understand, a kind of picture recognition test: Can a computer identify the content of a picture as “a robbery”? A conscious computer should be able to identify pictorial content as well as a human being can. I guess the idea is that the task requires very, very much integrated information. No simple rule of thumb, man + gun + building + terrified customer = robbery, will do the trick. It has to be such an enormous amount of integrated information that the computer simply “gets it” and understands that it is a robbery (and not a five-year-old playing with a toy gun).
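
Just to make the contrast visible, here is a minimal sketch of the kind of simple rule of thumb that, on this view, will not do the trick; the labels and the checklist are invented here for illustration, they are not taken from the article:

    # Hypothetical sketch of the naive rule of thumb dismissed above.
    # Assume some image classifier has already produced a set of labels for the picture.
    def looks_like_robbery(labels: set) -> bool:
        """Crude checklist: man + gun + building + terrified customer = robbery."""
        return {"man", "gun", "building", "terrified_customer"}.issubset(labels)

    print(looks_like_robbery({"man", "gun", "building", "terrified_customer"}))  # True
    print(looks_like_robbery({"five_year_old", "toy_gun", "kitchen"}))           # False

The imagined conscious computer is supposed to need no such checklist: it should simply “get it” from the enormous amount of integrated information.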

Believing in the test thus presupposes that we have swallowed the adapted concept of consciousness and are ecstatically amazed by super-large amounts of integrated information as “the ineffable and enigmatic inner life of the mind.”

These kinds of insidious linguistic shifts will attract us even more deeply as robotics develops. Imagine an android with facial expressions and a voice that can express intelligence or groan at stupid questions. Then surely we are dealing with an intelligent and conscious machine!

Or just another deceitful smoke screen; a walking, interactive movie screen?

Pär Segerdahl

This post in Swedish

The temptation of rhetoric - the ethics blog
