A research blog from the Centre for Research Ethics & Bioethics (CRB)

Tag: trust

The importance of letting things take their time

To be an ethicist and philosopher is to be an advocate for time: “Wait, we need time to think this through.” This idea of letting things take their time rarely gains traction in society. The impatience begins already in school, where the focus is often on calculating quickly and reciting as many words as possible in one minute, and it continues at the societal level.

A good example is technological development, which is moving faster than ever. Humans have always used more or less advanced and functional technology, always searching for better ways to solve problems. With the Industrial Revolution, things began to accelerate, and since then the pace has only increased. We got factories, car traffic, air travel, nuclear power, genetically modified crops, and prenatal diagnostics. We got typewriters, computers, and telephones. We got different ways to play and reproduce music. Now we have artificial intelligence (AI), which, it is often said, will revolutionize most parts of society.

The development and implementation of AI is progressing at an unparalleled speed. Various government authorities use AI, and healthcare allows AI tools to take on more and more tasks. Schools and universities wrestle with the question of how AI should be used by students, teachers, and researchers. Teachers have been left at a loss because AI established itself so quickly, and different teachers draw different boundaries for what counts as cheating, resulting in great uncertainty for students about what applies. People use AI for everything from planning their day to getting help with mental health issues. AI is used as a relationship expert, but also as the very object of romantic or friendship relationships. Today, there are AI systems that can call elderly and sick people to ask how they are feeling, whether they have taken their medication, and perhaps whether they have had any social contact recently.

As with all technology, there are advantages and disadvantages to AI, and it can be used in both good and bad ways. AI can be used to improve life for people and the environment, and it can help people and societies do things better and more easily. But like all technology, it can also cause harm, with negative consequences such as environmental damage, unemployment, and discrimination.

Researchers in the Netherlands have discussed the problems that arise with new technology in terms of “social experiments.” They argue that there is an important difference compared to the careful testing that, for example, new pharmaceuticals undergo before they are approved: new technologies are not tested in such a structured way before they are introduced into society.

The EU has introduced a basic legal framework for AI (the EU AI Act), which can be seen as an attempt to introduce the new technology in a way that is less experimental on people and societies: more “responsible” and “trustworthy” AI. The new law is criticized by some European tech companies, who claim that it means we will fall behind countries that have no regulations, such as the USA and China. Doing things in a thoughtful and ethically sound way is apparently considered less important than quickly getting the technology in place. Caution is instead seen as risky, which says something about the concept of risk currently driving a development so rapid that perhaps not even the technology can deliver what the market expects.

Just as with previous important technologies, we need to think things through beforehand. If AI is to help us without harmful consequences, development must be allowed to take its time. This is even more important with AI than with previous technologies, because AI has an unusually large potential to affect our lives. Ethical research points to several problems related to justice and trust. One problem is that we cannot explain why AI in, for example, healthcare reaches a certain conclusion about a specific individual. With previous technology, some human being – if not the user, then at least the developer – has always been able to explain the causality in the system. Can we trust a technology in healthcare that we cannot control or explain in essential ways?

There are technology optimists and technology pessimists. Some are enthusiastic about new technologies and believe they are the solution to all our problems. Others think the precautionary principle should apply to all new technology and do not want to accept any risks at all. Instead, we should seek the middle way. The middle way consists of letting things take their time to show their real possibilities, beyond the optimists’ and pessimists’ preconceived notions. Advocating an ethical approach is not about stopping development but about slowing down the process. We need time to reflect on where it might be appropriate to introduce AI and where we should refrain from using the technology. We should also consider how the AI we choose to use can be introduced in a good way, so that we have time to detect risks of injustice, discrimination, and reduced trust and can minimize them.

It is not easy and not popular to be the one who says, “Wait, we need to think this through.” Yet it is so important that we take the time. We must think ahead so that things do not go wrong when they could so easily have gone right. It might be worth considering what could happen if we learned in school that it is more important to do things right than to do them quickly.

Jessica Nihlén Fahlquist

Written by…

Jessica Nihlén Fahlquist, senior lecturer in biomedical ethics and associate professor in practical philosophy at the Centre for Research Ethics & Bioethics.

This post in Swedish

Approaching future issues

Responsibly planned research communication

Academic research is driven by the dissemination of results to peers at conferences and through publication in scientific journals. However, research results belong not only to the research community. They also belong to society. Therefore, results should reach not only your colleagues in the field or the specialists in adjacent fields. They should also reach outside the academy.

Who is out there? A homogeneous public? No, it is not that simple. Communicating research is not two activities: first communicating the science to peers and then telling the popular scientific story to the public. Outside the academy, we find engineers, entrepreneurs, politicians, government officials, teachers, students, research funders, taxpayers, healthcare professionals… We are all out there with our different experiences, functions and skills.

Research communication is therefore a strategically more complicated task than just “reaching the public.” Why do you want to communicate your results; why are they important? Who will find your results important? How do you want to communicate them? When is the best time to communicate? There is not just one task here. You have to think through what the task is in each particular case. For the task varies with the answers to these questions. Only when you can think strategically about the task can you communicate research responsibly.

Josepine Fernow is a skilled and experienced research communications officer at CRB. She works with communication in several research projects, including the Human Brain Project and STARBIOS2. In the latter project, about Responsible Research and Innovation (RRI), she contributes to a new book with arguments for responsibly planned research communication: Achieving impact: some arguments for designing a communications strategy.

Josepine Fernow’s contribution is, in my view, more than a convincing argument. It is an eye-opening text that helps researchers see more clearly their diverse relationships to society, and thereby their responsibilities. The academy is not a rock of knowledge in a sea of ignorant lay people. Society consists of experienced people who, because of what they know, can benefit from your research. It is easier to think strategically about research communication when you survey your relations to a diversified society that is already knowledgeable. Josepine Fernow’s argumentation helps and motivates you to do that.

Josepine Fernow also warns against exaggerating the significance of your results. Bioscience has the potential to give us effective treatments for serious diseases, new crops that meet specific demands, and much more. Since we are all potential beneficiaries of such research, as future patients and consumers, we may want to believe the overly wishful stories that some overly ambitious researchers want to tell. We then participate in a dangerous game of increasingly unrealistic hopes.

The name of this dangerous game is hype. Research hype can make it difficult for you to continue your research in the future, because of eroded trust. It can also make you prone to take unethical shortcuts. The “huge potential benefit” clouds your judgment as a responsible researcher.

In some research fields, it is extra difficult to avoid research hype, as exaggerated hopes seem inscribed in the very language of the field. An example is artificial intelligence (AI), where the use of psychological and neuroscientific vocabulary about machines can create the impression that one has already fulfilled the hopes. Anthropomorphic language can make it sound as if some machines already thought like humans and functioned like brains.

Responsible research communication is as important as it is difficult. Therefore, these tasks deserve our greatest attention. Read Josepine Fernow’s argumentation for carefully planned communication strategies. It will help you see your responsibility more clearly.

Finally, a reminder for those interested: the STARBIOS2 project organizes its final event via Zoom on Friday, May 29, 2020.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Fernow, J. (2019). Note #11: Achieving impact: Some arguments for designing a communications strategy. In A. Declich (Ed.), RRI implementation in bioscience organisations: Guidelines from the STARBIOS2 project (pp. 177-180). Uppsala University. ISBN: 978-91-506-2811-1

We care about communication

This post in Swedish

Proceed carefully with vaccine against covid-19

Pharmaceutical companies want to manufacture a vaccine against covid-19 quickly, with human testing and launch on the market as soon as possible. In a debate article, Jessica Nihlén Fahlquist at CRB warns of the risk of losing the larger risk perspective: “Tests on people and a potential premature mass vaccination entail risks. It is easy to forget about similar situations in the past,” she writes.

It may take time for side effects to appear. Unfortunately, it therefore also takes time to develop new, safe vaccines. We need to develop a vaccine, but even with new vaccines, caution is needed.

The article is in Swedish. If you want to Google translate: Proceed carefully with vaccine against covid-19

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

We have a clinical perspective

Fear of the unknown produces ghosts

What really can set off feverish thought activity is facing an unclear threat. We do not really see what it is, so we fill in the contours ourselves. At the seminar this week, we discussed what I think was such a case. A woman decided to get tested for a possible calcium deficiency. To her surprise, the doctor informed her that she suffered from a disease, osteoporosis, characterized by an increased risk of bone fractures.

She had already experienced the problem. A hug could hurt her ribs, and she had broken a shoulder when pushing the car. However, she felt no fear until she was informed that she suffered from a disease that meant an increased risk of bone fracture.

I do not mean she had no reason to be worried. However, her worries seem to have become nightmarish.

Presumably, she already understood that she had to be careful in some situations. However, she interpreted the “risk factor” she was informed about as an invisible threat. It is like a ghost, she says. She began to compare her body to a house whose foundation is dissolving, a house that might therefore collapse. She began to experience great danger in every activity.

Many who are diagnosed with osteoporosis do not get fractures. If you do get fractures, they need not be serious. However, the risk of fractures is greater in this group, and if you get a hip fracture, that is a big problem. The woman in the example, however, imagined her “risk factor” as a ghost that constantly haunted her.

I now wonder: Are ethical debates sometimes about similar ghost images? Most of us do not really know what, for example, embryo research is; it seems vaguely uncanny. When we hear about it, we fill in the contours: the embryo is a small human. Immediately, the research appears nightmarish and absolute limits must be drawn. Otherwise, we end up on a slippery slope where human life might degenerate, as the woman imagined her body might collapse.

I also wonder: If debates sometimes revolve around feverishly produced ghost images, how should we handle these ghosts? With information? But it was information that produced the ghosts. With persistent logical counterarguments? But the ghosts live in the feverish reasoning. Should we really continue to fill in the contours of these images, as if we were correcting bad sketches? Is that not taking ghosts too seriously? Is it not like trying to wake yourself up inside a dream?

Everything started with the unclear threat. The rest were dreamlike consequences. We probably need to reflect more cautiously on the original situation where we experienced the first vague threat. Why did we react as we did? We need to treat the problem in its more moderate beginning, before it has developed its nightmarish dimensions.

This is not to say that we have no reason to be concerned.

Pär Segerdahl

Reventlow, S., Hvas, A. C., Tulinius, C. 2001. “In really great danger.” The concept of risk in general practice. Scandinavian Journal of Primary Health Care 19: 71-75

This post in Swedish

We like real-life ethics : www.ethicsblog.crb.uu.se

Sliding down along the slippery slope

Debates on euthanasia, abortion or embryonic stem cell research frequently invoke slippery slope arguments. Here is an example of such reasoning:

Legalizing physician-assisted suicide (PAS) at the end of life pushes healthcare morality in a dangerous direction. Soon, PAS may be practiced even on people who are not at the end of life and who do not request it. Even if this does not happen, the general population’s trust in healthcare will erode. Therefore, PAS must be forbidden.

Reasoning about the future is important. We need to assess consequences of allowing new practices. However, how do we assess the future in a credible way?

In an article in Medicine, Health Care and Philosophy, Gert Helgesson, Niels Lynøe and Niklas Juth argue that many slippery slope arguments are not empirically substantiated but are based on value-impregnated factual assumptions. Anyone who considers PAS absolutely wrong sees it as a fatal step in a dangerous direction. It is therefore assumed that taking such a step will be followed by further steps in the same dangerous direction. If you choose the wrong path, you end up further and further in the wrong direction. It seems inevitable that a first step is followed by a second step…

The problem is that this prophesying rests on the original moral interpretation. Anyone who is not convinced of the fatality of a “first” step is not inclined to see it as a “first step” with an inherent tendency to lead to a “second step” and finally to disaster.

Thinking in terms of the slippery slope can sometimes feel as if you yourself were on a slippery slope: your thoughts slide toward the daunting precipice. Perhaps the article by Helgesson, Lynøe and Juth contains an analysis of this phenomenon. The slippery slope has become a vicious circle, where the prophesying of disastrous consequences is steered by the very moral interpretation that one defends with reference to the slippery slope.

Slippery slope arguments are not wrong in themselves. Sometimes development is on a slippery slope. However, this form of reasoning requires caution, for sometimes it is our thoughts that slide down along the slippery slope.

And that can have consequences.

Pär Segerdahl

Helgesson, G., Lynøe, N., Juth, N. 2017. Value-impregnated factual claims and slippery slope arguments. Medicine, Health Care and Philosophy 20: 147-150.

This post in Swedish

Approaching future issues - the Ethics Blog

Consent based on trust rather than information?

Consent to research participation has two dimensions. On the one hand, the researcher wants to do something with the participant: we don’t know what until the researcher tells us. To obtain consent, the researcher must provide information about what will be done, what the purpose is, and what the risks and benefits are, so that potential participants can decide whether or not to consent.

On the other hand, potential participants would hardly believe the information and consider consenting if they didn’t trust the researcher or the research institution. If trust is strong, they might consent even without considering the information. Presumably, this occurs often.

The fact that consent can be given based on trust has led to a discussion of trust-based consent as more or less a separate form of consent, next to informed consent. An article in the journal Bioethics, for example, argues that consent based on trust is not morally inferior to consent based on information. Consent based on trust supports autonomy, voluntariness, non-manipulation and non-exploitation as much as consent based on information does, the authors argue.

I think it is important to highlight trust as a dimension of consent to research participation. Consent based on trust need not be morally inferior to consent based on careful study of information.

However, I am puzzled by the tendency to speak of trust-based consent as almost a separate form of consent, next to informed consent. That researchers consider the ethical aspects of planned research and tell potential participants about them seems to be a concrete way of manifesting responsibility, respect and trustworthiness.

Carefully planning and going through the consent procedure is an ethical practice that can make us better humans: we change through what we do. It also opens the way for participants to say, “Thank you, I trust you, I don’t need to know more, I will participate.” Information and trust go hand in hand; there is a dynamic interplay between them.

I suspect that one background to the talk of trust-based consent as almost a separate form of consent is another tendency: the tendency to purify “information” as something purely cognitive and to idealize humans as rational decision makers. In addition, there is a tendency to regiment the information that “must” be provided.

This tendency to abstract and regulate “information” has turned informed consent into what is sometimes perceived as an empty, bureaucratic procedure. Nothing that makes us better humans, in other words!

It would be unfortunate if we established two one-dimensional forms of consent instead of seeing information and trust as two dimensions of consent to research.

Another article in Bioethics presents a concrete model of trust-based consent to biobank research. Happily, the model includes openly telling participants about biobank research. Among other things, one explains why one cannot specify which research projects will use the donated biological samples, since this lies in the future. Instead, one gives broad information about what kind of research the biobank supports, and one informs participants that they can limit the use of the material they donate if they want to. And one tells them about much more.

Information and trust seem here to go hand in hand.

Pär Segerdahl

Halmsted Kongsholm, N. C., Kappel, K. 2017. Is consent based on trust morally inferior to consent based on information? Bioethics. doi: 10.1111/bioe.12342

Sanchini, V. et al. 2016. A trust-based pact in research biobanks. From theory to practice. Bioethics 4: 260-271. doi: 10.1111/bioe.12184

This post in Swedish

We like real-life ethics : www.ethicsblog.crb.uu.se

The apparent academy

What can we believe in? The question acquires new urgency as the IT revolution makes it easier to spread information through channels that obey laws other than those that have hitherto characterized journalism and academic publishing.

The free flow of information online requires a critical stance. That critical stance, however, requires a certain division of labor. It requires access to reliable sources: knowledge institutions like the academy and probing institutions like journalism.

But what happens to the trustworthiness of these institutions if they drown in a sea of impressively designed websites? What if IT entrepreneurs start what appear to be academic journals, but publish manuscripts without serious peer review as long as the researchers pay for the service?

This false (or apparent) academy is already here. In fact, just as I write this, I receive by email an offer from one of these new actors. The email begins, “Hello Professor,” and then promises improbably quick review of manuscripts and friendly, responsive staff.

What can we do? Countermeasures are needed if what we call critical reflection and knowledge are to retain their meaning, rather than serve as masks for something utterly different.

One action was taken on The Ethics Blog. Stefan Eriksson and Gert Helgesson published a post where they tried to make researchers more aware of the false academy. Apart from discussing the phenomenon, they listed deceptive academic journals to which unsuspecting bioethicists may submit papers (deceived by appearances). They also listed journals that take academic publishing seriously. The lists will be updated annually.

In an article in Medicine, Health Care and Philosophy (published by Springer), Eriksson and Helgesson deepen their examination of the false academy. Several committed researchers have studied the phenomenon, and the article describes and discusses what we know about these questionable activities. It also proposes a list of characteristics of problematic journals, such as an unspecified editorial board, non-academic advertising on the website, and spamming researchers with offers to submit manuscripts (like the email I received).

Another worrying trend discussed in the article is that even some traditional publishers have begun to embrace some of the apparent academy’s practices (for they are profitable), such as publishing limited editions of very expensive anthologies (which libraries must buy), or issuing journals that appear to be peer-reviewed medical journals but are (secretly) sponsored by drug companies.

The article concludes with tentative suggestions for countermeasures, ranging from the formation of committees that keep track of these actors to stricter legislation and the development of software that quickly identifies questionable publications in researchers’ publication lists.

The Internet is not just a fast information channel, but also a place where digital appearance gains followers and becomes social reality.

Pär Segerdahl

Eriksson, S. & Helgesson, G. 2016. “The false academy: predatory publishing in science and bioethics.” Medicine, Health Care and Philosophy, DOI 10.1007/s11019-016-9740-3

This post in Swedish

Approaching future issues - the Ethics Blog

Trust, responsibility and the Volkswagen scandal

Volkswagen’s emissions cheating attracted a lot of attention this autumn. It has been suggested that the cheating will lead to a decrease in trust, not only in the company but also in the industry at large. That is probably true. But we need to reflect on the value of trust, what it is and why it is needed. Is trust a means or a result?

It would seem that trust has a strong instrumental value since it is usually discussed in business-related contexts. Volkswagen allegedly needs people’s trust to avoid losing money. If customers abandon the brand due to distrust, fewer cars will be sold.

This discussion potentially hides the real issue. Trust is not merely a means to create or maintain a brand name, or to make sure that money keeps coming in. Trust is the result of ethically responsible behaviour. The only companies that deserve our trust are the ones that behave responsibly. Trust, in this sense, is closely related to responsibility.

What, then, is responsibility? One important distinction is between backward-looking and forward-looking responsibility. In the Volkswagen case, we are now looking for the one who caused the problem, who is to blame and therefore responsible for what happened. But responsibility is not only about blame. It is also a matter of looking ahead, preventing wrongful actions in the future, and doing one’s utmost to make sure that the organisation of which one is a member behaves responsibly.

One problem in our time is that so many activities take place in such large contexts. Organisations are global and complex, and it is hard to pinpoint who is responsible for what. The individuals involved each do only a small part, like cogs in a wheel. When a gigantic actor like Volkswagen causes damage to health or the environment, it is almost impossible to know who caused what and who should have acted otherwise. To avoid this, we need individuals who take responsibility and feel responsible. We should not conceive of people as powerless cogs in a wheel. The only companies that deserve our trust are the ones in which individuals at all levels take responsibility.

What is most important now is not that the company regains trust. Instead, we should demand that the individuals at Volkswagen raise their ethical awareness and start acting responsibly towards people, society and the environment. If they do that, trust will eventually be a result of their responsible behaviour.

Jessica Nihlén Fahlquist

(This text was originally published in Swedish in the magazine Unionen, industri och teknik, December 2015.)

Further reading:

Nihlén Fahlquist, J. 2015. “Responsibility as a virtue and the problem of many hands.” In: Ibo van de Poel, Lambèr Royakkers, Sjoerd Zwart, Moral Responsibility in Innovation Networks. Routledge.

Nihlén Fahlquist J. 2006. “Responsibility ascriptions and Vision Zero,” Accident Analysis and Prevention 38, pp. 1113-1118.

Van de Poel, I. and Nihlén Fahlquist, J. 2012. “Risk and responsibility.” In: Sabine Roeser, Rafaela Hillerbrand, Martin Peterson, Per Sandin (Eds.), Handbook of Risk Theory. Springer, Dordrecht.

Nihlén Fahlquist J. 2009. “Moral responsibility for environmental problems – individual or institutional?” Journal of Agricultural and Environmental Ethics 22(2), pp. 109-124.

This post in Swedish

We challenge habits of thought : the Ethics Blog

Biobank news: ethics and law

The second issue of the newsletter from CRB and BBMRI.se is now available.

This April issue contains four interesting news items about:

  1. New international research cooperation on genetic risk information.
  2. The new Swedish law on registers for research on heredity, environment and health.
  3. The legislative process of developing a European data protection regulation.
  4. A new article on trust and ethical regulation.

You’ll also find a link to a two-page PDF version of the newsletter.

Pär Segerdahl

We recommend readings - the Ethics Blog

Research ethics as moral assurance system

Modern society seems to be driven by skepticism. Just as philosophers once systematically doubted the senses by enumerating optical and other illusions, our human ability to think for ourselves and take responsibility for our professional activities is now doubted by enumerating past scandals and cases of misconduct.

The logic is simple: Since human practices have a notorious tendency to slide into the ditch – just think of scandals x, y and z! – we must introduce assurance systems that guarantee that the practices remain safely on the road.

In such a spirit of systematic doubt, research ethics developed into what resembles a moral assurance system for research. With reference to past scandals and atrocities, an extra-legal regulatory system emerged with detailed steering documents (ethical guidelines), overseeing bodies (research ethics committees), and formal procedures (informed consent).

The system is meant to secure ethical trustworthiness.

The trustworthiness of the assurance system is questioned in a new article in Research Ethics, written by Linus Johansson together with Stefan Eriksson, Gert Helgesson and Mats G. Hansson.

Guidelines, review and consent aren’t questioned as such, however. (There are those who want to abolish the system altogether.) The problem is rather the institutionalized distrust that makes the system more and more formalized, like following a checklist in a mindless bureaucracy.

The logic of distrust demands a system that does not rely on the human abilities that are doubted. That would be self-contradictory. But thereby the system does not support human abilities to think for ourselves and take responsibility.

The logic demands a system in which humans become what they are feared to be.

The cold logic of distrust is what needs to be overcome. Can we abstain from demanding more detailed guidelines and more thorough control the next time we hear about a scandal?

The logic of skepticism is not easily overcome.

Pär Segerdahl

We challenge habits of thought : the Ethics Blog
