Intellectual habits prevent self-examination

March 21, 2018

Pär Segerdahl

The intellect is worldly-minded and extrovert. It is busy with the facts of the world. Even when it turns inwards, towards our own consciousness, of which it is a part, the intellect interprets consciousness as another object in the world.

The intellect can never become aware of itself. It can only expand towards something other than itself.

The Chinese philosopher Confucius gave a wonderful image of a self-examining person: “When the archer misses the center of the target, he turns around and seeks the cause of his failure within himself.”

The intellect is like an archer who cannot turn around. If the intellect were to examine itself, it would interpret itself as another target in the world to hit with its pointed arrows! The intellect is incapable of wisdom and knows nothing about self-knowledge. The intellect can only shoot projectiles at the world; it can only expand and conquer.

I am writing philosophy. That means I always turn around to seek the cause of our failures within ourselves. I rarely shoot arrows, and certainly not at external targets.

At the same time, this inner work meets obstacles in academic habits and ideals, which are largely intellectual and aim at the facts of the world. For example, I cannot examine our ways of thinking without citing literature showing that these ways of thinking actually occur in the world (in authors x, y, and z, for example).

Such referencing transforms ways of thinking into worldly targets at which I am supposed to shoot. But I wanted to turn around and seek the cause of our failures within ourselves!

What do we truly need today? Something else than just more facts! We need to learn the art of turning around. We need to learn to seek the cause of our failures within ourselves. The persistent shooting of projectiles at the world has become humanity’s most common disease – virtually the human condition.

Do you think that the intellect can shoot itself out of the crises that its own trigger-happiness causes? Do you think it can expand out of the problems that its own expansions produce?

If Elon Musk takes us to Mars, surely he will solve all our problems!

Pär Segerdahl

This post in Swedish

The Ethics Blog - Thinking about thinking


Prepare for robot nonsense

February 26, 2018

Pär Segerdahl

As computers and robots take over tasks that so far only humans could carry out, such as driving a car, we are likely to experience increasingly insidious uses of language by the technology’s intellectual clergy.

The idea of intelligent computers and conscious robots is for some reason terribly fascinating. We see ourselves as intelligent and conscious beings. Imagine if robots, too, could be intelligent and aware! In fact, we have already seen them (almost): on the movie screen. Soon we may see them in reality too!

Imagine that artifacts that we always considered dead and mechanical one day acquired the enigmatic character of life! Imagine that we created intelligent life! Do we have enough exclamation marks for such a miracle?

The idea of intelligent life in supercomputers often comes with the idea of a test that can determine whether a supercomputer is intelligent. It is as if I wanted to make the idea of perpetual motion machines credible by talking about a perpetuum mobile test, invented by a super-smart mathematician in the 17th century. The question whether something is a perpetuum mobile is determinable and therefore worth considering! Soon they may function as engines in our intelligent, robot-driven cars!

There is a famous idea of an intelligence test for computers, invented by the British mathematician Alan Turing. The test allegedly can determine whether a machine “has what we have”: intelligence. How does the test work? Roughly, it is about whether you can distinguish a computer from a human – or cannot.

But distinguishing a computer from a human being surely is no great matter! Oh, I forgot to mention that there is a smoke screen in the test. You neither see, hear, feel, taste nor smell anything! In principle, you send written questions into the thick smoke. Out of the smoke come written responses. But who wrote/generated the answers? Human or computer? If you cannot distinguish the computer-generated answers from human answers – well, then you had better take cover, because an intelligent supercomputer hides behind the smoke screen!
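To make the setup concrete, the question-and-answer game behind the smoke screen can be sketched in a few lines of code. Everything here is a hypothetical stub of my own – the responder functions and their canned replies – meant only to show that the judge’s entire evidence is text:

```python
import random

# A toy sketch of the "smoke screen": the judge sends written questions
# into the smoke and receives written answers back, with no face, voice,
# or smell attached. Both responders are hypothetical stubs.

def human_answer(question: str) -> str:
    return "Honestly, I would have to think about that."

def machine_answer(question: str) -> str:
    return "Honestly, I would have to think about that."

def imitation_game(question: str) -> str:
    """Pick a hidden responder at random and return only its text."""
    responder = random.choice([human_answer, machine_answer])
    return responder(question)

# The judge's entire evidence base is strings like this one:
answer = imitation_game("What do you think of exquisite violin solos?")
print(answer)
```

If the two stubs always produce indistinguishable strings, no amount of questioning lets the judge tell them apart – and that inability is all the test measures.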

The test is thus adapted to the computer, which cannot have intelligent facial expressions or look perplexed, and cannot groan, “Oh no, what a stupid question!” The test is adapted to an engineer’s concept of intelligent handling of written symbol sequences. That the test subject is a poor human being who cannot always say who/what “generated” the written answers hides this conceptual adaptation.

These insidious linguistic shifts are unusually obvious in an article I encountered through a rather smart search engine. The article asks if machines can be aware. And it responds: Yes, and a new Turing test can prove it.

The article begins by celebrating our amazing consciousness as “the ineffable and enigmatic inner life of the mind.” Consciousness is then exemplified by the whirl of thought and sensation that blossoms within us when we finally meet a loved one again, hear an exquisite violin solo, or relish an incredible meal.

After this ecstatic celebration of consciousness, the concept begins to be adapted to computer engineering so that finally it is merely a concept of information processing. The authors “show” that consciousness does not require interaction with the environment. Neither does it require memories. Consciousness does not require any emotions like anger, fear or joy. It does not require attention, self-reflection, language or ability to act in the world.

What then remains of consciousness, which the authors initially made seem so amazing to possess? The answer in the article is that consciousness has to do with “the amount of integrated information that an organism, or a machine, can generate.”

The concept of consciousness is gradually adapted to what was to be proven. Finally, it becomes a feature that, unsurprisingly, can characterize a computer. Once we have swallowed the adaptation, the idea is that we, at the grand finale of the article, should once again marvel, and be amazed that a machine can have this “mysterious inner life” that we have, consciousness: “Oh, what an exquisite violin solo, not to mention the snails, how lovely to meet again like this!”

The new Turing test that the authors imagine is, as far as I understand, a kind of picture recognition test: Can a computer identify the content of a picture as “a robbery”? A conscious computer should be able to identify pictorial content as well as a human being can. I guess the idea is that the task requires a very, very large amount of integrated information. No simple rule of thumb, man + gun + building + terrified customer = robbery, will do the trick. It has to be such an enormous amount of integrated information that the computer simply “gets it” and understands that it is a robbery (and not a five-year-old playing with a toy gun).
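The dismissed “simple rule of thumb” can be pictured as a toy checklist classifier. The labels and function names below are my own hypothetical illustration, not anything proposed in the article:

```python
# A naive "rule of thumb" robbery detector: a fixed checklist of scene
# labels. This is the shallow pattern matching that, on the authors'
# view, no amount of tweaking could turn into understanding.

ROBBERY_CUES = {"man", "gun", "building", "terrified customer"}

def looks_like_robbery(scene_labels: set) -> bool:
    """Fire whenever every checklist cue appears among the scene labels."""
    return ROBBERY_CUES.issubset(scene_labels)

bank_robbery = {"man", "gun", "building", "terrified customer"}
# A child's game can produce the same shallow labels, because the
# checklist cannot tell a toy gun from a real one:
childs_game = {"man", "gun", "building", "terrified customer", "toy"}

print(looks_like_robbery(bank_robbery))  # True
print(looks_like_robbery(childs_game))   # also True - the rule misfires
```

The checklist fires on both scenes; whatever “simply getting it” amounts to, it is not this.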

Believing in the test thus assumes that we have swallowed the adapted concept of consciousness and are ecstatically amazed by super-large amounts of integrated information as “the ineffable and enigmatic inner life of the mind.”

These kinds of insidious linguistic shifts will attract us even more deeply as robotics develops. Imagine an android with facial expressions and a voice that can express intelligence or groan at stupid questions. Then surely, we are dealing with an intelligent and conscious machine!

Or just another deceitful smoke screen; a walking, interactive movie screen?

Pär Segerdahl

This post in Swedish

The temptation of rhetoric - the ethics blog


New concept of consciousness challenges language

January 31, 2018

Pär Segerdahl

A few weeks ago, I recommended an exciting article by Michele Farisco. Now I wish to recommend another article, where Farisco (together with Steven Laureys and Kathinka Evers) argues even more thoroughly for a new concept of consciousness.

The article in Mind & Matter is complex and I doubt that I can do it justice. I have to start out from my own experience. For when Farisco challenges the opposition between consciousness and the unconscious, it resembles something I have written about: the opposition between human and animal.

Oppositions that work perfectly in everyday language often become inapplicable for scientific purposes. In everyday life, the opposition between human and animal is unproblematic. If a child tells us that it saw an animal, we know it was not a human the child saw. For the biologist, however, the idea of the human as non-animal would be absurd. Although it is perfectly in order in everyday language, biology must reject the opposition between human and animal. It hides continuities between us and the other animals.

Farisco says (if I understand him) something similar about neuroscience. Although the opposition between consciousness and the unconscious works in everyday language, it becomes problematic in neuroscience. It hides continuities in the brain’s way of functioning. Neuroscience should therefore view consciousness and the unconscious as continuous forms of the same basic phenomenon in living brains.

If biology talks about the human as one of the animal species, how does Farisco suggest that neuroscience should talk about consciousness? Here we face greater linguistic challenges than when biology considers humans to be animals.

Farisco’s proposal is to widen the notion of consciousness to also include what we usually call the unconscious (much like the biologist widens the concept of animals). Farisco thus suggests, roughly, that the brain is conscious as long as it is alive, even in deep sleep or in coma. Note, however, that he uses the word in a new meaning! He does not claim what he appears to be claiming!

The brain works continually, whether we are conscious or not (in the ordinary sense). Most neural processes are unconscious and a prerequisite for consciousness (in the ordinary sense). Farisco suggests that we use the word consciousness for all these processes in living brains. The two states we usually oppose – consciousness and the unconscious – are thus forms of the same basic phenomenon, namely, consciousness in Farisco’s widened sense.

Farisco supports the widened concept of consciousness by citing neuroscientific evidence that I have to leave aside in this post. All I wish to do here is point out that Farisco’s concept of consciousness probably is as logical in neuroscience as the concept of the human as animal is in biology.

Do not let the linguistic challenges prevent you from seeing the logic of Farisco’s proposal!

Pär Segerdahl

Farisco, M., Laureys, S. and Evers, K. 2017. The intrinsic activity of the brain and its relation to levels and disorders of consciousness. Mind and Matter 15: 197-219.

This post in Swedish

We recommend readings - the Ethics Blog


Not knowing why

January 17, 2018

Pär Segerdahl

Often we do not know why we think as we do. We may like a drawing, but we cannot say why we think it is good. We may find it unpleasant that researchers study human embryos in Petri dishes and then discard them, but we cannot say why.

Personally, I find not knowing why interesting and I do not mind spending ages without being able to state a single sensible reason. There is something fruitful in it, something secretly promising. But it can also drive people crazy. The strange thing is that you easily satisfy them by giving any idiotic reason, as long as it superficially sounds like “saying why.” It satisfies the intellect, which cannot understand how anyone can think something without a reason. It reminds me of a complaint about the neighbor’s dog: it often barks without reasonable grounds.

I would not be suited to participate in a TV debate program. The strange thing is that in such debates people really do behave like barking dogs, precisely by always giving reasons: “Your opinion is idiotic, because woof-woof, woof-woof!” – Debating is most likely overrated… but why do I think so?

Immediately satisfying the demands of the intellect seems unwise. Apart from committing us to opinions that must be defended, which makes it difficult to change our minds, we are forced to give our thoughts premature form. They are prevented from deepening and surprising us.

A Chinese philosopher said, “To pretend to know when you do not know is a disease.” But the intellect forces us to pretend to know. The intellect goes insane if we do not exhibit this insanity.

Acknowledging that you do not know, and then giving yourself time, that is wisdom.

Pär Segerdahl

This post in Swedish

We challenge habits of thought : the Ethics Blog


The unconscious deserves moral attention

January 10, 2018

Pär Segerdahl

Last autumn, Michele Farisco wrote one of the most read posts on The Ethics Blog. The post was later republished by BioEdge.

Today, I want to recommend a recent article where Farisco develops his thinking – read it in the journal, Philosophy, Ethics, and Humanities in Medicine.

The article will certainly receive at least as much attention as the blog post did. Together with Kathinka Evers, Farisco develops a way of thinking about the unconscious that at first seems controversial, but which after careful consideration becomes increasingly credible. That combination is hard to beat.

What is it about? It is about patients with serious brain injuries, perhaps after a traffic accident. Ethical discussions about these patients usually focus on residual consciousness. We think that there is an absolute difference between consciousness and unconsciousness. Only a conscious person can experience well-being. Only a conscious person can have interests. Therefore, a patient with residual consciousness deserves completely different care than an unconscious patient: different attention to pain relief, peace and quiet, and stimulation. – Why create a warm and stimulating environment if the patient is completely unaware of it?

In the article, Farisco challenges the absolute difference between consciousness and unconsciousness. He describes neuroscientific evidence that indicates two often-overlooked connections between conscious and unconscious brain processes. The first is that the unconscious (at least partly) has the abilities that are considered ethically relevant when residual consciousness is discussed. The second is that conscious and unconscious brain processes are mutually dependent: they shape each other. Even unconsciously, the brain reacts uniquely to the voices of family members.

Farisco does not claim that this proves we have an obligation to treat unconscious patients as conscious. However, the unconscious deserves moral attention. Perhaps we should strive to assess retained unconscious abilities as well. In some cases, we should perhaps play the music the patient loved before the accident.

Pär Segerdahl

Farisco, M. and Evers, K. 2017. The ethical relevance of the unconscious. Philosophy, Ethics, and Humanities in Medicine. DOI: 10.1186/s13010-017-0053-9.

This post in Swedish

We recommend readings - the Ethics Blog


Big questions do not have small answers

December 20, 2017

Pär Segerdahl

Some questions strike us as “bigger” than others. What does it mean to live, to be, rather than not to be? When does life begin and when does it end? What is a human being? Does life have a meaning, or do we endow it with mere façades of meaning?

We do not expect definitive answers to these questions, except as a joke. They are wonderings that accompany us and occasionally confront us. We may then notice that we have an attitude to them. Perhaps a different attitude today than ten years ago. The attitude is not a definitive answer, not a doctrine about reality that dry investigations could support or falsify.

Bioethics sometimes comes close to these big questions, namely, when scientists study what we can associate with the mystery of living, being, existing. An example is embryonic stem cell research, where scientists harvest stem cells from human embryos. Even proponents of such research may sense that there is something sensitive about the embryo. I would not exist, we would not live, you would not be, unless once upon a time there was an embryo…

The embryo is thus easily associated with the big questions of life. This implies that bioethics has to handle them. How does it approach them?

Usually by seeking specific answers to the questions. Like super-smart lawyers who finally get the hang of these age-old, obscure issues and straighten them out for us.

Do you know, for example, when a human being begins to exist? Two bioethicists combined biological facts with philosophical analysis to provide a definitive answer: A human being begins to exist sixteen days after fertilization.

Incorrect, other bioethicists objected. They too combined biological facts with philosophical analysis, but provided another definitive answer: A human being begins to exist already with fertilization. The only exception is twins. They begin to exist later, but much earlier than sixteen days after fertilization.

The bioethicists I am talking about are proud of their intellectual capacity to provide specific answers to such a big question about human existence. However, if big questions do not have small answers, except as a joke, do they not deliver the answer at the cost of losing the question?

The question I am currently working on is how bioethics can avoid losing the questions that strike us as “bigger” than other questions.

Pär Segerdahl

Smith, B. & Brogaard, B. 2003. Sixteen days. Journal of Medicine and Philosophy 28: 45-78.

Damschen, G., Gómez-Lobo, A. & Schönecker, D. 2006. Sixteen days? A reply to B. Smith and B. Brogaard on the beginning of human individuals. Journal of Medicine and Philosophy 31: 165-175.

This post in Swedish

We think about bioethics : www.ethicsblog.crb.uu.se


Ethics, human rights and responsible innovation

October 31, 2017

Josepine Fernow

It is difficult to predict the consequences of developing and using new technologies. We interact with smart devices and intelligent software on an almost daily basis. Some of us use prosthetics and implants to go about our business, and most of us will likely live to see self-driving cars. In the meantime, Swedish research shows that petting robot cats looks promising in the care of patients with dementia. Genetic tests are cheaper than ever, and available to both patients and consumers. If you spit in a tube and mail it to a US company, they will tell you where your ancestors are from. Who knows? You could be part sub-Saharan African and part Scandinavian at the same time, and (likely) still be you.

Technologies, new and old, have both ethical and human rights impact. Today, we are closer to scenarios we only pictured in science fiction a few decades ago. Technology develops fast, and it is difficult to predict what is on the horizon. The legislation, regulation and ethical guidance we have today were developed for a different future. Policy makers struggle to assess the ethical, legal and human rights impact of new and emerging technologies. These frameworks are challenged when a country like Saudi Arabia, criticized for not giving equal rights to women, offers a robot honorary citizenship. This autumn marks the start of a research initiative that will look at some of these questions. A group of researchers from Europe, Asia, Africa and the Americas join forces to help improve the ethical and legal frameworks we have today.

The SIENNA project (short for Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact) will deliver proposals for professional ethics codes, guidelines for research ethics committees and better regulation in three areas: human genetics and genomics, human enhancement, and artificial intelligence & robotics. The proposals will build on input from stakeholders, experts and citizens. SIENNA will also look at some of the more philosophical questions these technologies raise: Where do we draw the line between health and illness, normality and abnormality? Can we expect intelligent software to be moral? Do we accept giving up some of our privacy to screen our genome for genetic disorders? And if giving up some of our personal liberty is the price we have to pay to interact with machines, are we willing to pay it?

The project is co-ordinated by the University of Twente. Uppsala University’s Centre for Research Ethics & Bioethics contributes expertise on the ethical, legal and social issues of genetics and genomics, and experience of communicating European research. Visit the SIENNA website at www.sienna-project.eu to find out more about the project and our partners!

Josepine Fernow

The SIENNA project – Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact – has received just under €4 million for a 3.5-year project under the European Union’s H2020 research and innovation programme, grant agreement No 741716.

Disclaimer: This text and its contents reflect only SIENNA’s view. The Commission is not responsible for any use that may be made of the information it contains.

SIENNA project

This post in Swedish

Approaching future issues - the Ethics Blog

