Reality surpasses our concepts

March 20, 2019

After thinking for some time about donation of human eggs and embryos to stem cell research, I want to express myself as in the headline. Reality surpasses our concepts of it. This is not as strange as it sounds. For, if our concepts already reflected reality, then no one would need to do research, or to think. Just talking would be sufficient. An endless flood of words could replace all sincere aspirations to understand life and the world.

So what is it about donation to research that makes me want to express myself as in the headline? Everyone knows that blood donation is a gift to patients. This makes blood donation humanly understandable. People want to help fellow human beings in need, even strangers. But what about donation of eggs and embryos to stem cell research? Conceptually, the donation does not go to patients in need, but to researchers. This makes it difficult to understand donation to research. Are we to assume that people feel sorry for researchers and that they therefore want to support them by donating to them? Why do donors support research?

Not only does the concept of “donation to research” make donation difficult to understand from a human point of view. The concept also makes donation appear suspiciously exploitative. The recipient of the donation is more powerful than the donor is. Moreover, if research results are commercialized, the recipient can make a profit on the work that the donation enables, without the donor receiving any share of it. So not only does literal faith in the concept of “donation to research” make the free will to donate difficult to understand. The donation also looks suspicious. Some argue that we should prevent an increasingly capitalized life science sector from exploiting self-sacrificing donors in this way.

Nevertheless, there are people who freely donate to research. Why? I guess it is often because they use research merely as an intermediary, to be able to give to patients. The patient is as important in donation to research as in blood donation, although the concept does not reflect this relationship. Let me give an unexpected example of intermediaries.

About one kilogram of bacteria lives in our intestinal tract. Without these bacteria, our bodies would not be able to absorb many of the nutrients in the food we eat. When we swallow the food, these bacteria are in a sense the first diners, and our bodies have to wait patiently until they have finished eating. Even if we know this, we rarely think that we are swallowing food in order to allow bacteria in the stomach to eat first. We eat without being aware of the work that these “intermediaries” in the stomach have to do, in order for the nutrients to become available to the body.

The concept of “eating” does not reflect this relationship between bacteria and us. This is not a shortcoming of the concept. On the contrary, it would be very unpleasant if the concept reflected the bacteria’s work in our guts. Who would then want to say, “Let us sit down and eat”? However, problems arise if we have too much literal faith in concepts. Our vocabulary will then begin to impose limitations on us. Our own language will shrink our otherwise open minds to mental caves, where the words cast shadows on the walls.

Researchers, then, can be seen as intermediaries between donors and patients. I hope I do not upset sensitive minds if I suggest that researchers are the bacteria that we need to make donated material available to future patients’ bodies. That is why people donate to research. They sense, more or less intuitively, that research functions as an intermediary. “Donation to research” is at heart a gift to patients.

It is even more complicated, however, for research alone cannot act as intermediary. The task is too great. For the donation to become a gift to patients, a capitalized life science sector is needed, and a healthcare system, and much else. Moreover, just as the beneficial function of bacteria in our stomachs requires a diet that regulates the balance between bacteria, this system of intermediaries, extending from donor to patient, needs regulation and monitoring, so that all the actors work harmoniously together. We cannot allow quacks to sell dangerous or ineffective drugs to the sick, and we cannot allow researchers to access donated material in any way they see fit.

Donation to research is a striking example of how reality surpasses our concepts. When we succeed in overcoming our literal faith in concepts – when we discover the way out of the cave and see the light – then donation to research finally becomes humanly understandable. The donor uses research to be able to give to patients. Moreover, donation to research ceases to appear as a suspicious transaction between unequal parties, since the donor uses the relatively powerful direct recipient to give to a more understandable recipient: the patient. Trying to counteract exploitation by paying the donor large sums, or by giving the donor a share of the profit, would tie the donor to the wrong recipient: the one emphasized in the concept.

As mentioned, the donor uses not only research to reach the patient, but a whole system of intermediaries, such as industry, healthcare and governmental control. This system of beneficial societal bacteria is therefore, to some extent, subordinate to the donor’s will to help patients. Or rather, the subordination is an aspect of the relationship, as is bacteria’s subordination to human eating. If we want to, we can always see the opposite aspect as well. Who really eats first and who last? Who really uses whom? The questions lack definitive answers, for the aspects change into one another.

With this post, I wanted to suggest the possibility of a bigger seeing, which we can learn to use wisely in our thinking when we discover how conceptually purified standpoints easily shrink our minds to mental caves.

Pär Segerdahl

This post in Swedish

We think about bioethics:

Genetic risk entails genetic responsibility

March 5, 2019

Intellectual optimists have seen genetic risk information as a human victory over nature. The information gives us power over our future health. What previously would have been our fate, genetics now transforms into matters of personal choice.

Reality, however, is not as rosy as in this dream of intellectual power over life. Where there is risk, there is responsibility, Silke Schicktanz writes in an article on genetic risk and responsibility. This is probably how people experience genetic risk information when they face it. Genetic risk gives us new forms of responsibility, rather than liberating us from nature.

Silke Schicktanz describes how responsibility emerges in situations where genetic risk is investigated, communicated and managed. The analysis exceeds what I can reproduce in a short blog post. However, I can give the reader a sense of how genetic risk information entails a broad spectrum of responsibilities. Sometimes in the individual who receives the information. Sometimes in the professional who provides the information. Sometimes in the family affected by the information. The examples are versions of the cases discussed in the article:

Suppose you have become strangely forgetful. You do a genetic test to determine if you have a gene associated with Alzheimer’s disease. You have the gene! The test result immediately makes you responsible for yourself. What can you do to delay or alleviate the disease? What practical measures can be taken at home to help you live with the disease? You can also feel responsibility for your family. Have you passed the gene on to your children and grandchildren? Should you urge them to test themselves? What can they do to protect themselves? The professional who administered the test also becomes responsible. Should she tell you that the validity of the test is low? Maybe you should not have been burdened with such a worrying test result, when the validity is so low?

Suppose you have colorectal cancer. The surgeon invites you to participate in a research study in which a genetic test of the tumor cells will allow individualized treatment. Here, the surgeon becomes responsible for explaining research in personalized medicine, which is not easy. There is also the responsibility of not presenting your participation in the study as an optimization of your treatment. You yourself may feel a responsibility to participate in research, as patients have done in the past. They contributed to the care you receive today. Now you can contribute to the use of genetic information in future cancer care. Moreover, the surgeon may have a responsibility to counteract a possible misunderstanding of the genetic test. You can easily believe that the test says something about disease genes that you may have passed on, and that the information should be relevant to your children. However, the test concerns mutations in the cancer cells. The test provides information only about the tumor.

Suppose you have an unusual neurological disorder. A geneticist informs you that you have a gene sequence that may be the cause of the disease. Here we can easily imagine that you feel responsibility for your family and children. Your 14-year-old son has started to show symptoms, but your 16-year-old daughter is healthy. Should she do a genetic test? You discuss the matter with your ex-partner. You explain how you found the genetic information helpful: you worry less, you have started going on regular check-ups and you have taken preventive measures. Together, you decide to tell your daughter about your test results, so that she can decide for herself if she wants to test herself.

These three examples are sufficient to illustrate how genetic risk entails genetic responsibility. How wonderful it would have been if the information simply allowed us to triumph over nature, without this burdensome genetic responsibility! A pessimist could object that the responsibility becomes overpowering instead of empowering. We must surrender to the course of nature; we cannot control everything but must accept our fate.

Neither optimists nor pessimists tend to be realistic. The article by Silke Schicktanz can help us look more realistically at the responsibilities entailed by genetic risk information.

Pär Segerdahl

Schicktanz, S. 2018. Genetic risk and responsibility: reflections on a complex relationship. Journal of Risk Research 21(2): 236-258

This post in Swedish

We like real-life ethics:

Thesis on reproductive ethics

February 25, 2019

On Thursday, February 28, Amal Matar defends her thesis in the field of reproductive ethics.

As genetic tests become cheaper and more reliable, the potential use of genetic tests also expands. One use could be offering preconception genetic screening to entire populations. Prospective parents could find out if they are carriers of the same recessive autosomal genetic condition, and could plan future pregnancies. Carriers of such genetic conditions can be healthy, but if both parents have the same predisposition, the risk is 25 percent that their child will have the disease.
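The 25 percent figure is simple Mendelian arithmetic. As a minimal illustration (my own sketch, not taken from the thesis), one can enumerate the four equally likely allele combinations a child can inherit from two carrier parents:

```python
from itertools import product

# Each carrier parent has one healthy allele "A" and one
# recessive disease allele "a"; a child inherits one allele
# from each parent, each with probability 1/2.
parent = ["A", "a"]

# The four equally likely combinations (the Punnett square).
children = ["".join(sorted(pair)) for pair in product(parent, parent)]

# Only the "aa" combination develops the disease.
risk = children.count("aa") / len(children)
print(children, risk)  # ['AA', 'Aa', 'Aa', 'aa'] 0.25
```

The two “Aa” outcomes are healthy carriers, which is why the risk stays invisible until both prospective parents happen to carry the same allele.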

Preconception genetic screening is not implemented in Sweden. Would it be possible to do so in the future? What would the ethical and social implications be? Is it likely that preconception genetic screening will be implemented in Sweden? These are some of the questions that Amal Matar examines in her thesis.

Amal Matar’s interviews with Swedish healthcare professionals and policymaking experts indicate that preconception genetic screening will not be implemented in Sweden. The interviewees expressed the opinion that such screening would not satisfy any medical need, would threaten important values in Swedish society and in the healthcare system, and would require excessive resources.

Amal Matar defends her thesis in the Uppsala University Main Building (Biskopsgatan 3), room IV, on Thursday, February 28 at 13:00. You can find an earlier interview with Amal Matar here. If you want to read the thesis, there is a link below.

Pär Segerdahl

Matar, Amal. 2019. Considering a Baby? Responsible Screening for the Future. Uppsala: Acta Universitatis Upsaliensis

This post in Swedish

Approaching future issues - the Ethics Blog

Patients find misleading information on the internet

October 30, 2018

In phase 1 clinical studies of substances that might possibly be used to treat cancer in the future, cancer patients are recruited as research participants. These patients almost always have advanced cancer that no longer responds to the standard treatment.

It is unlikely that research participation would affect the cancer. The purpose of a phase 1 study is to determine a safe dosage range and to investigate side effects and other safety issues. This then makes it possible to proceed to investigating the effectiveness of the substance on specific forms of cancer, but with other research participants.

Given that patients often seek online information on clinical trials, Tove Godskesen, Josepine Fernow and Stefan Eriksson wanted to investigate the quality of the information that currently is available on the internet about phase 1 clinical cancer trials in Sweden, Denmark and Norway.

The results they report in the European Journal of Cancer Care are quite alarming. The most serious problem, as I understand it, is that the information conceals risks of serious side effects, and in various ways suggests possible positive treatment outcomes. This lack of accurate language is serious. We are dealing with severely ill patients who easily entertain unrealistic hopes for new treatment options.

To give a picture of the problem, I would like to give a few examples of typical phrases that Godskesen, Fernow and Eriksson found in the information on the internet, as well as their suggestions for more adequate wordings. Noticing the contrast between the linguistic usages is instructive.

One problem is that the information speaks of treatment, even though it is about research participation. Instead of writing “If you are interested in the treatment,” you could write “If you want to participate in the research.” Rather than writing “Patients will be treated with X,” you could write “Participants will be given X.”

The substance being tested is sometimes described as a medicine or therapy. Instead, you can write “You will get a substance called X.”

Another problem is that research participation is described as an advantage and opportunity for the cancer patient. Instead of writing “An advantage of study participation is that…,” one could write “The study might lead to better cancer treatments for future patients.” Rather than writing “This treatment could be an opportunity for you,” which is extremely misleading in phase 1 clinical cancer trials, one could more accurately say, “You can participate in this study.”

The authors also tested the readability of the texts they found on the internet. The Danish website had the best readability scores, followed by the Norwegian site. The Swedish website got the worst readability scores. The information was very brief and deemed to require a PhD to be understandable.
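The article does not say here which measure was used, but readability in the Scandinavian languages is commonly scored with the LIX index: average sentence length plus the percentage of long words. Whether the authors used LIX is my assumption; the sketch below only illustrates how such a score works, on a sample sentence of my own:

```python
import re

def lix(text: str) -> float:
    """LIX readability index: (words / sentences)
    + 100 * (long words / words), where a long word
    has more than six letters."""
    words = re.findall(r"[A-Za-zÅÄÖåäö]+", text)
    sentences = max(1, len(re.findall(r"[.!?:]+", text)))
    long_words = sum(1 for w in words if len(w) > 6)
    return len(words) / sentences + 100 * long_words / len(words)

sample = ("Participants will be given a substance called X. "
          "The study might lead to better cancer treatments "
          "for future patients.")
print(round(lix(sample), 1))  # → 30.6
```

On the usual LIX scale, scores below about 40 count as easy text and scores above about 50 as difficult; information that is “deemed to require a PhD” would land far above that.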

It is, of course, understandable that it is hard to speak intelligibly about such difficult things as cancer trials. Not only do the patients recruited as study participants hope for effective treatment. The whole point of the research is effective cancer treatment. This is the ultimate perspective of the research; the horizon towards which the gaze is turned.

The fact, however, is that this horizon is far removed, far away in the future, and is about other cancer patients than those who participate in phase 1 trials. Therefore, it is important not to let this perspective characterize information to patients in whom hope would be unrealistic.

Do not talk about treatments and opportunities. Just say “You can participate in this study.”

Pär Segerdahl

Godskesen TE, Fernow J, Eriksson S. Quality of online information about phase I clinical cancer trials in Sweden, Denmark and Norway. Eur J Cancer Care. 2018;e12937.

This post in Swedish

We have a clinical perspective:

Swedish policymakers on genetic screening before pregnancy

October 17, 2018

Some genetic diseases do not develop in the child unless both parents happen to have the same gene. Parents can be healthy and unaware that they have the same non-dominant disease gene. In these cases, the risk that their child develops the disease is 25 percent.

Preconception expanded carrier screening could be offered to entire populations, to make everyone who so wishes more informed about their genetic vulnerabilities and better equipped to plan their partner choice and pregnancies. In Sweden, such screening is currently not on the agenda, but the issue could be considered in the future.

In a new article in the Journal of Community Genetics, Amal Matar (PhD student at CRB) reports an interview study with Swedish policymakers: experts at the Swedish National Council on Medical Ethics, at the Swedish Agency for Health Technology Assessment and Assessment of Social Services, and at the National Board of Health and Welfare. Amal Matar wanted to investigate how these influential experts perceive ethical and social aspects of preconception expanded carrier screening, as a new health technology.

It is exciting to get insight into how Swedish policymakers reason about offering genetic screening before pregnancy. They consider alternative financing, prioritization and costs for healthcare. They discuss Sweden as part of the EU. They reflect on what services the healthcare system needs to offer people, depending on what the test results reveal about them. They talk about the need for more research and public engagement, as well as about long-term societal effects.

Questions about responsibility, both parental and societal, struck me as particularly interesting. If friends and relatives test themselves, it may seem irresponsible not to do so. Couples can then feel a social pressure to undergo the test, which makes their voluntariness illusory. The experts also saw problems in actively going out looking for disorders in people who are not sick. Society has a responsibility to help people when they are ill, but looking for disease risks in people without symptoms changes the whole evaluation of the risks and benefits of a health technology.

Amal Matar’s conclusion is that Swedish policymakers believe that preconception expanded carrier screening currently is not appropriate in the Swedish healthcare system. The reason commonly used in favor of screening, that it supports well-informed reproductive decision-making, was considered insufficient by the experts if the screening is financed through taxes. They also saw long-term threats to important values in Swedish healthcare.

Pär Segerdahl

Matar, A., Hansson, M.G. and Höglund, A.T. “A perfect society” – Swedish policymakers’ ethical and social views on preconception expanded carrier screening. Journal of Community Genetics, published online 26 September 2018.

This post in Swedish

Approaching future issues - the Ethics Blog

Driverless car ethics

June 20, 2018

Self-driving robot cars are controlled by computer programs containing huge numbers of traffic rules. But in traffic, not everything happens smoothly according to the rules. Suddenly a child runs out on the road. Two people try to help a cyclist who collapsed on the road. A motorist tries to make a U-turn on a too narrow road and gets stuck, blocking the traffic.

Assuming that the robots’ programs are able to categorize traffic situations through image information from the cars’ cameras, the programs must select the appropriate driving behavior for the robot cars. Should the cars override important traffic rules by, for example, steering onto the sidewalk?

It is more complicated than that. Suppose that an adult is standing on the sidewalk. Should the adult’s life be compromised to save the child? Or to save the cyclist and the two helpful persons?

The designers of self-driving cars have a difficult task. They must program the cars’ choice of driving behavior in ethically complex situations that we call unexpected, but that the engineers have to anticipate far in advance. They must determine, already at the factory, how the car model will behave in future “unexpected” traffic situations. Maybe ten years later. (I assume the software is not updated; but even updated software anticipates what we normally regard as unexpected events.)

At a societal level, attempts are now being made to agree on ethical guidelines for how future robot cars should behave in tragic traffic situations where it may not be possible to completely avoid injuries or fatal casualties. A commission initiated by the German Ministry for Transportation, for example, suggests that passengers of robot cars should never be sacrificed to save a larger number of lives in the traffic situation.

Who, by the way, would buy a robot car that is programmed to sacrifice one’s life? Who would choose such a driverless taxi? Yet, as drivers we may be prepared to sacrifice ourselves in unexpected traffic situations. Some researchers decided to investigate the matter. You can read about their study in ScienceDaily, or read the research article in Frontiers in Behavioral Neuroscience.

The researchers used Virtual Reality (VR) technology to expose subjects to ethically difficult traffic situations. Thereafter, they studied the subjects’ choice of traffic behavior. The researchers found that the subjects were surprisingly willing to sacrifice themselves to save others. But they also took into consideration the age of potential victims and were prepared to steer onto the sidewalk to minimize the number of traffic victims. This is contrary to norms that we hold important in society, such as the idea that age discrimination should not occur and that the lives of innocent people should be protected.

In short, humans are inclined to drive their cars politically incorrectly!

Why was the study done? As far as I understand, because the current discussion about ethical guidelines does not take into account empirical data on how living drivers are inclined to drive their cars in ethically difficult traffic situations. The robot cars will make ethical decisions that can make the owners of the cars dissatisfied with their cars; morally dissatisfied!

The researchers do not advocate that driverless cars should respond to ethically complex traffic situations as living people do. However, the discussion about driverless car ethics should take into account data on how living people are inclined to drive their cars in traffic situations where it may not be possible to avoid accidents.

Let me complement the empirical study with some philosophical reflections. What strikes me when I read about driverless car ethics is that “the unexpected” disappears as a living reality. A living driver who tries to handle a sudden traffic situation manages what very obviously is happening right now. The driverless car, on the other hand, takes decisions that tick automatically, as predetermined as any other decision, like stopping at a red light. Driverless car ethics is just additional software that the robot car is equipped with at the factory (or when updating the software).

What are the consequences?

A living driver who suddenly ends up in a difficult traffic situation is confronted – as I said – with what is happening right now. The driver may have to bear responsibility for his actions in this intense moment during the rest of his life. Even if the driver rationally sacrifices one life to save ten, the driver will bear the burden of this one death; dream about it, think about it. And if the driver makes a stupid decision that takes more lives than it saves, it may still be possible to reconcile with it, because the situation was so unexpected.

This does not apply, however, to the robot car that was programmed at the factory according to guidelines from the National Road Administration. We might want to say that the robot car was preprogrammed to sacrifice our sister’s life, when she stood innocently on the sidewalk. Had the car been driven by a living person, we would have been angry with the driver. But after some time, we might be able to start reconciling with the driver’s behavior. Because it was such an unexpected situation. And the driver is suffering from his actions.

However, if it had been a driverless car that worked perfectly according to the manufacturer’s programs and the authorities’ recommendations, then we might see it as a scandal that the car was preprogrammed to steer onto the sidewalk, where our sister stood.

One argument for driverless cars is that, by minimizing the human factor, they can reduce the number of traffic accidents. Perhaps they can. But maybe we are less accepting of how they are programmed to save lives in ethically difficult situations. Not only are they preprogrammed so that “the unexpected” disappears as a reality. They do not bear the responsibility that living people are forced to bear, even for their rational decisions.

Well, we will probably find ways to implement and accept the use of driverless cars. But another question still concerns me. If the present moment disappears as a living reality in the ethics software of driverless cars, has it not already disappeared in the ethics that prescribes right and wrong for us living people?

Pär Segerdahl

This post in Swedish

We like real-life ethics:

Prepare for robot nonsense

February 26, 2018

As computers and robots take over tasks that so far only humans could carry out, such as driving a car, we are likely to experience increasingly insidious uses of language by the technology’s intellectual clergy.

The idea of intelligent computers and conscious robots is for some reason terribly fascinating. We see ourselves as intelligent and conscious beings. Imagine if robots, too, could be intelligent and aware! In fact, we have already seen them (almost): on the movie screen. Soon we may see them in reality too!

Imagine that artifacts that we always considered dead and mechanical one day acquired the enigmatic character of life! Imagine that we created intelligent life! Do we have enough exclamation marks for such a miracle?

The idea of intelligent life in supercomputers often comes with the idea of a test that can determine if a supercomputer is intelligent. It is as if I wanted to make the idea of perpetual motion machines credible by talking about a perpetuum mobile test, invented by a super-smart mathematician in the 17th century. The question whether something is a perpetuum mobile is determinable and therefore worth considering! Soon they may function as engines in our intelligent, robot-driven cars!

There is a famous idea of an intelligence test for computers, invented by the British mathematician Alan Turing. The test can allegedly determine whether a machine “has what we have”: intelligence. How does the test work? Roughly, it is about whether you can distinguish a computer from a human – or cannot do it.

But distinguishing a computer from a human being surely is no great matter! Oh, I forgot to mention that there is a smoke screen in the test. You neither see, hear, feel, taste nor smell anything! In principle, you send written questions into the thick smoke. Out of the smoke comes written responses. But who wrote/generated the answers? Human or computer? If you cannot distinguish the computer-generated answers from human answers – well, then you had better take protection, because an intelligent supercomputer hides behind the smoke screen!

The test is thus adapted to the computer, which cannot have intelligent facial expressions or look perplexed, and cannot groan, “Oh no, what a stupid question!” The test is adapted to an engineer’s concept of intelligent handling of written symbol sequences. That the test subject is a poor human being, who cannot always tell who or what “generated” the written answers, hides this conceptual adaptation.

These insidious linguistic shifts are unusually obvious in an article I encountered through a rather smart search engine. The article asks if machines can be aware. And it responds: Yes, and a new Turing test can prove it.

The article begins with celebrating our amazing consciousness as “the ineffable and enigmatic inner life of the mind.” Consciousness is then exemplified by the whirl of thought and sensation that blossoms within us when we finally meet a loved one again, hear an exquisite violin solo, or relish an incredible meal.

After this ecstatic celebration of consciousness, the concept begins to be adapted to computer engineering so that finally it is merely a concept of information processing. The authors “show” that consciousness does not require interaction with the environment. Neither does it require memories. Consciousness does not require any emotions like anger, fear or joy. It does not require attention, self-reflection, language or ability to act in the world.

What then remains of consciousness, which the authors initially made it seem so amazing to possess? The answer in the article is that consciousness has to do with “the amount of integrated information that an organism, or a machine, can generate.”

The concept of consciousness is gradually adapted to what was to be proven. Finally, it becomes a feature that unsurprisingly can characterize a computer. Once we have swallowed the adaptation, the idea is that we, at the Grand Finale of the article, should once again marvel, and be amazed that a machine can have this “mysterious inner life” that we have, consciousness: “Oh, what an exquisite violin solo, not to mention the snails, how lovely to meet again like this!”

The new Turing test that the authors imagine is, as far as I understand, a kind of picture recognition test: Can a computer identify the content of a picture as “a robbery”? A conscious computer should be able to identify pictorial content as well as a human being can do it. I guess the idea is that the task requires very, very much integrated information. No simple rule of thumb, man + gun + building + terrified customer = robbery, will do the trick. It has to be such an enormous amount of integrated information that the computer simply “gets it” and understands that it is a robbery (and not a five-year-old who plays with a toy gun).

Believing in the test thus assumes that we swallowed the adapted concept of consciousness and are ecstatically amazed by super-large amounts of integrated information as: “the ineffable and enigmatic inner life of the mind.”

These kinds of insidious linguistic shifts will attract us even more deeply as robotics develops. Imagine an android with facial expressions and a voice that can express intelligence or groan at stupid questions. Then surely, we are dealing with an intelligent and conscious machine!

Or just another deceitful smoke screen; a walking, interactive movie screen?

Pär Segerdahl

This post in Swedish

The temptation of rhetoric - the Ethics Blog
