A blog from the Centre for Research Ethics & Bioethics (CRB)

Month: February 2018

Prepare for robot nonsense

Pär Segerdahl

As computers and robots take over tasks that so far only humans could carry out, such as driving a car, we are likely to experience increasingly insidious uses of language by the technology’s intellectual clergy.

The idea of intelligent computers and conscious robots is for some reason terribly fascinating. We see ourselves as intelligent and conscious beings. Imagine if robots, too, could be intelligent and aware! In fact, we have already seen them (almost): on the movie screen. Soon we may see them in reality too!

Imagine that artifacts that we always considered dead and mechanical one day acquired the enigmatic character of life! Imagine that we created intelligent life! Do we have enough exclamation marks for such a miracle?

The idea of intelligent life in supercomputers often comes with the idea of a test that can determine whether a supercomputer is intelligent. It is as if I wanted to make the idea of perpetual motion machines credible by talking about a perpetuum mobile test, invented by a super-smart mathematician in the 17th century. The question of whether something is a perpetuum mobile is determinable and therefore worth considering! Soon they may function as engines in our intelligent, robot-driven cars!

There is a famous idea of an intelligence test for computers, invented by the British mathematician Alan Turing. The test can allegedly determine whether a machine “has what we have”: intelligence. How does the test work? Roughly, it turns on whether you can distinguish a computer from a human – or fail to.

But distinguishing a computer from a human being surely is no great matter! Oh, I forgot to mention that there is a smoke screen in the test. You neither see, hear, feel, taste nor smell anything! In principle, you send written questions into the thick smoke. Out of the smoke come written responses. But who wrote/generated the answers? Human or computer? If you cannot distinguish the computer-generated answers from the human answers – well, then you had better watch out, because an intelligent supercomputer is hiding behind the smoke screen!
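The "smoke screen" setup described above can be sketched in a few lines of code. This is only an illustrative toy, not Turing's actual protocol: the respondent labels, questions, and canned answers are all hypothetical, invented for this example. The point it makes is the same as the paragraph's: the judge sees nothing but written text, so identical answers are, by construction, indistinguishable.

```python
# Hypothetical sketch of the test's "smoke screen": the judge receives
# only written answers and never learns who or what produced them.
HUMAN_ANSWERS = {"What is 2 + 2?": "Four, obviously."}
MACHINE_ANSWERS = {"What is 2 + 2?": "Four, obviously."}

def respond(question: str, respondent: str) -> str:
    """Return a written answer; the label `respondent` stays behind the smoke."""
    answers = HUMAN_ANSWERS if respondent == "human" else MACHINE_ANSWERS
    return answers.get(question, "I would rather not say.")

def judge_can_distinguish(question: str) -> bool:
    """True only if the written answers themselves differ."""
    return respond(question, "human") != respond(question, "machine")

print(judge_can_distinguish("What is 2 + 2?"))  # False: text alone reveals nothing
```

Note that everything the judge could go on – voice, face, hesitation – has been stripped away by design, which is precisely the adaptation to written symbol sequences discussed next.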

The test is thus adapted to the computer, which cannot have intelligent facial expressions or look perplexed, and cannot groan, “Oh no, what a stupid question!” The test is adapted to an engineer’s concept of intelligent handling of written symbol sequences. The fact that the test subject is a poor human being who cannot always say who/what “generated” the written answers hides this conceptual fact.

These insidious linguistic shifts are unusually obvious in an article I encountered through a rather smart search engine. The article asks if machines can be aware. And it responds: Yes, and a new Turing test can prove it.

The article begins by celebrating our amazing consciousness as “the ineffable and enigmatic inner life of the mind.” Consciousness is then exemplified by the whirl of thought and sensation that blossoms within us when we finally meet a loved one again, hear an exquisite violin solo, or relish an incredible meal.

After this ecstatic celebration of consciousness, the concept begins to be adapted to computer engineering so that finally it is merely a concept of information processing. The authors “show” that consciousness does not require interaction with the environment. Neither does it require memories. Consciousness does not require any emotions like anger, fear or joy. It does not require attention, self-reflection, language or ability to act in the world.

What then remains of consciousness, which the authors initially made seem so amazing to possess? The answer in the article is that consciousness has to do with “the amount of integrated information that an organism, or a machine, can generate.”

The concept of consciousness is gradually adapted to what was to be proven. Finally, it becomes a feature that unsurprisingly can characterize a computer. Once we have swallowed the adaptation, the idea is that we, at the grand finale of the article, should once again marvel that a machine can have this “mysterious inner life” that we have, consciousness: “Oh, what an exquisite violin solo, not to mention the snails, how lovely to meet again like this!”

The new Turing test that the authors imagine is, as far as I understand, a kind of picture recognition test: Can a computer identify the content of a picture as “a robbery”? A conscious computer should be able to identify pictorial content as well as a human being can. I guess the idea is that the task requires a very, very large amount of integrated information. No simple rule of thumb, man + gun + building + terrified customer = robbery, will do the trick. It has to be such an enormous amount of integrated information that the computer simply “gets it” and understands that it is a robbery (and not a five-year-old who plays with a toy gun).
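The "simple rule of thumb" dismissed above can be written down literally, which makes its brittleness easy to see. This is a deliberately naive sketch with made-up scene labels, not anyone's actual vision system: the rule fires whenever its fixed checklist of labels is present, so everything hangs on some upstream detector supplying exactly the right labels.

```python
def robbery_by_rule_of_thumb(scene: set) -> bool:
    """Naive checklist: man + gun + building + terrified customer = robbery."""
    return {"man", "gun", "building", "terrified customer"} <= scene

# The rule fires on the stereotypical robbery scene...
print(robbery_by_rule_of_thumb({"man", "gun", "building", "terrified customer"}))

# ...and stays silent on the child at play, but only because the labels
# happen to differ. A detector that reports a toy gun as "gun" and a
# five-year-old as "man" would make the rule misfire.
print(robbery_by_rule_of_thumb({"five-year-old", "toy gun", "building"}))
```

The author's point stands independently of the code: whether such checklists, however much "integrated information" they draw on, ever amount to the computer "getting it" is exactly what is in dispute.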

Believing in the test thus assumes that we have swallowed the adapted concept of consciousness and stand ecstatically amazed by super-large amounts of integrated information as “the ineffable and enigmatic inner life of the mind.”

These kinds of insidious linguistic shifts will attract us even more deeply as robotics develops. Imagine an android with facial expressions and a voice that can express intelligence or groan at stupid questions. Then surely we are dealing with an intelligent and conscious machine!

Or is it just another deceitful smoke screen: a walking, interactive movie screen?

Pär Segerdahl

This post in Swedish

The temptation of rhetoric - the Ethics Blog

Inequalities in healthcare – from denial to greater awareness

Pär Segerdahl

Swedish law prescribes healthcare on equal terms for the whole population. Complying with this law is more difficult than one might believe, since discrimination tends to happen unknowingly, under our own radar.

Telephone nursing has been thought to increase equality in healthcare, because it is so easily accessible. However, research has demonstrated inequalities in telephone counseling. Callers are not treated equally.

Given the role of unawareness in the drama, this is not surprising. Despite the best intentions, treating people equally is very difficult in practice. What can we do about it?

If unawareness is a factor and discrimination largely happens unintentionally, I do not think we can conclude that it must be the result of a “bad system.” Even if discrimination arises unintentionally, it is humans who discriminate. Humans are not just their awareness, but also their unawareness.

In an article in the International Journal for Equity in Health, Anna T. Höglund (and four co-authors) investigates awareness of discrimination in healthcare, especially in telephone nursing. Swedish telephone nurses responded to a questionnaire about discrimination and equal treatment. The nurses’ answers could then be analyzed in terms of four concepts: denial, defense, openness and awareness.

Denial: Some nurses denied discrimination. Defense: Some acknowledged that care was not always given on equal terms, but said that measures were taken and that the problem was under control. Openness: Some of the nurses found the problem important and wished they could learn more about care on equal terms. Awareness: Some clearly saw how discrimination could occur and gave examples of strategies they used to avoid complex discriminatory patterns of which they were aware.

Rather than explaining unintended discrimination as the result of a “bad system,” these four concepts provide us with tools that can help us handle the problem more responsibly.

Anna T. Höglund proposes two complementary ways of viewing the four concepts. You can see them as positions along a line of development where a person can mature and move from denial or defense, through openness, towards the ultimate goal, awareness. But you can also imagine a person moving back and forth between positions, depending on the circumstances.

One recognizes oneself in these positions; unfortunately, not least in denial and defense. The conceptual model developed in the article increases awareness of discrimination as largely a matter of our awareness and unawareness.

The authors add a fifth concept to the model: Action. If I understand them, they do not mean by “action” correcting a “bad system,” thereby controlling the problem. On the contrary, that would appear very much like expressing the defensive position above. (This indicates how much unawareness there is in many bureaucratic attempts to “control” societal problems through “systems,” to which one later refers: “We have taken appropriate measures, the problem is under control!”)

No, we need to continuously work on the problem; continually address ourselves and our patterns of acting. The conceptual model developed in the article gives us some tools.

Pär Segerdahl

Höglund, A.T., Carlsson, M., Holmström, I.K., Lännerström, L. and Kaminsky, E. 2018. From denial to awareness: a conceptual model for obtaining equity in healthcare. International Journal for Equity in Health 17. DOI 10.1186/s12939-018-0723-2

This post in Swedish

We want to be just - the Ethics Blog