A research blog from the Centre for Research Ethics & Bioethics (CRB)

Category: In the research debate

Science and society: a changing framework and the role of RRI (by Daniele Mezzana)

The STARBIOS2 project has carried out its activities in a context of profound transformations affecting contemporary societies, and now we are all facing the Covid-19 pandemic. Science and society have always coevolved; they are interconnected entities, but their relationship is changing and has been for some time. This shift from modern to so-called postmodern society affects all social institutions in similar ways, whether their work is in politics, religion, family, state administration, or bioscience.

We can find a wide range of phenomena connected to this trend in the literature, for instance: globalization; weakening of previous social “structures” (rules, models of action, values and beliefs); more capacity and power of individuals to think and act more freely (thanks also to new communication technologies); exposure to risks of different kinds (climate change, weakening of welfare, etc.); great social and cultural diversification; and weakening of traditional boundaries and spheres of life, etc.

In this context, we are witnessing the diminishing authority and prestige of all political, religious, even scientific institutions, together with a decline in people’s trust towards these institutions. One example would be the anti-vaccination movement.

Meanwhile, scientific research is also undergoing profound transformations, experiencing a transition that has been examined in various ways and called various names. At the heart of this transformation is the relationship between research and the society it belongs to. We can observe a set of global trends in science.

Such trends include the increasing relationship between universities, governments and industries; the emergence of approaches aimed at “opening” science to society, such as citizen science; the diffusion of cooperative practices in scientific production; the increasing relevance of transdisciplinarity; the increasing expectation that scientific results have economic, social, and environmental impacts; the increasingly competitive access to public funds for research; the growing importance attached to quantitative evaluation systems based on publications, often with distorting effects and questionable results; and the emergence on the international economic and technological scene of actors such as India, China, Brazil, South Africa and others. These trends produce risks and opportunities for both science and society.

Critical concerns for science include career difficulties for young researchers and women in the scientific sector; the cost of publishing and the difficulty of publishing open access; and the protection of intellectual property rights.

Of course, these trends and issues manifest in different ways and intensities according to the different political, social and cultural contexts they exist in.

After the so-called “biological revolution”, and within the context of the “fourth industrial revolution” and “converging technologies” like genetics, robotics, info-digital technologies, neurosciences, nanotechnologies, biotechnologies, and artificial intelligence, the biosciences are at a crossroads in their relationship to society.

In this new context, more and more of the knowledge produced and the technological solutions developed require a deeper understanding of their status, limits, and ethical and social acceptability (take organoids, to name one example). Moreover, food security, the clean energy transition, climate change, and pandemics are all challenges where bioscience can play a crucial role, while new legal, ethical, and social questions arise that need to be dealt with.

These processes have been running for years, albeit in different ways, and national and international decision-makers have been paying attention. Various forms of governance have been developed and implemented over time, to re-establish and harmonize the relationship between scientific and technological research and the rest of society, including more general European strategies and approaches such as Smart Specialization, Open Innovation, Open Science and Responsible Research and Innovation as well as strategies related to specific social aspects of science (such as ethics or gender).

Adopting an approach such as RRI is not simply morally commendable, but indispensable for attempting a re-alignment between scientific research and the needs of society. Starting from the areas of the life of scientific communities that are most crucial to science-society relations (the 5+1 RRI keys: Science education, Gender equality, Public engagement, Ethics, Open access, and the cross-cutting sixth key: Governance) and taking the four RRI dimensions into account (anticipation, inclusiveness, responsiveness, and reflexivity) can provide useful guidance for how to activate and drive change in research organisations and research systems.

We elaborate and experiment, in search of the most effective and most relevant solutions. At the same time, there is a need to encourage mainstreaming of the most substantial solutions, to root them more deeply and sustainably in the complex fabric of scientific organisations and networks. This leads us to ask ourselves: in this context, how can we mainstream RRI and its application in the field of bioscience?

Based on what we know, and on experiences from the STARBIOS2 project, RRI and similar approaches need to be promoted and supported by specific policies and contextualised on at least four levels.

  • Organizational contextualization
    Where mainstreaming takes place through promoting a deeper embedding of RRI, or similar approaches, within individual research organizations such as universities, national institutes, private centres, etc.
  • Disciplinary or sectoral contextualization
    Where mainstreaming consists of adapting the responsible research and innovation approach to a specific discipline − for example, biotechnology − or to an entire “sector” in a broad sense, such as bioscience.
  • Geopolitical and cultural contextualization
    Where mainstreaming aims to identify forms of adapting, or rather reshaping, RRI or similar approaches to various geopolitical and cultural contexts, taking into account elements such as the features of national research systems, the economy, territorial dynamics, local philosophy and traditions, etc.
  • Historical contextualization
    Where RRI mainstreaming is related to the ability of science to respond to the challenges that history poses from time to time − of which the COVID-19 pandemic is only the latest, serious example − and to prevent them as much as possible.

During the course of the STARBIOS2 project, we have developed a set of guidelines and a sustainable model for RRI implementation in bioscience research institutions. Over the course of four years, six bioscience research institutions in Europe, and three outside Europe, worked together to achieve structural change towards RRI in their own institutions, with the goal of achieving responsible biosciences. We were looking forward to revealing and discussing our results in April, but with the Covid-19 outbreak, neither that event nor our Cape Town workshop was a possibility. Luckily, we have adapted and will now share our findings online, at our final event on 29 May. We hope to see you there.

For our final remark: as the Covid-19 pandemic challenges our societies and our political and economic systems, we recognise that scientists are also being challenged, by the coronavirus as well as by contextual challenges. The virus is testing their ability to play a key role for the public, to share information and to produce relevant knowledge. But when we go back to “normal”, the challenge of changing science-society relations will persist. We remain convinced that RRI and similar approaches will be a valuable contribution to addressing these challenges, now and in the future.

Daniele Mezzana

Written by…

Daniele Mezzana, a social researcher working in the STARBIOS2 project (Structural Transformation to Attain Responsible BIOSciences) as part of the coordination team at University of Rome – Tor Vergata.

This text is based on the Discussion Note for the STARBIOS2 final event on 29 May 2020. 


The STARBIOS2 project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 709517. The contents of this text and the view expressed are the sole responsibility of the author and under no circumstances can be regarded as reflecting the position of the European Union.

We recommend readings

We do not know if cancer patients receive better treatment by participating in clinical trials

How do we know? That is the recurring question in a scientific culture. Do we have support for what we claim or is it just an opinion? Is there evidence?

The development of new cancer treatments provides many examples of the recurring question. The pharmaceutical company would like to be able to claim that the new treatment is more effective than existing alternatives and that the dosages recommended give good effect without excessive side effects. However, first we must answer the question, How do we know?

It is not enough to ask the question just once. We must repeat the question for every aspect of the treatment. Any claim on efficacy, side effects and dosages must be supported by answers to the question. How do we arrive at these answers? How do we check that it is not mere opinions? Through clinical trials conducted with cancer patients who agree to be research subjects.

A new research ethical study shows, however, that an ethically sensitive claim is often repeated in cancer research, without first asking and answering the question “How do we know?” in a satisfying way. Which claim? It is the claim that cancer patients are better off as participants in clinical trials than as regular patients who receive standard treatment. The claim is ethically sensitive because it can motivate patients to participate in trials.

In a large interview study, the authors first investigated whether the claim occurs among physicians and nurses working with clinical trials. Then, through a systematic literature review, they examined whether there is scientific evidence supporting the claim. The startling answer to the questions is: Yes, the claim is common. No, the claim lacks support.

Patients recruited for clinical trials are thus at risk of being misled by the common but unfounded opinion that research participation means better treatment. Of course, it is conceivable that patients who participate in trials will at least get indirect positive effects through increased attention: better follow-up, more frequent sampling, closer contact with physicians and nurses. However, indirect positive effects on outcomes should have been visible in the literature study. Regarding subjective effects, the article points out that such effects will vary with the patients’ conditions and preferences. It is not always positive for a very sick patient to provide the many samples that research needs. In general, then, we cannot claim that research participation has indirect positive effects.

This is how the authors, including Tove Godskesen and Stefan Eriksson at CRB, reason in the clearly written article in BMC Cancer: Are cancer patients better off if they participate in clinical trials? A mixed methods study. Tove Godskesen was the leader of the study.

An ethically important conclusion drawn in the article is the following. If we suggest to patients who consent to participation in trials that research means better treatment, then they receive misleading information. Instead, altruistic research participation should be emphasized. By participating in studies, patients support new knowledge that can enable better cancer treatments for future patients.

The article examines a case where the question “How do we know?” has the answer, “We do not know, it is just an opinion.” Then at least we know that we do not know! How do we know? Through the studies presented in the article – read it!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Zandra Engelbak Nielsen, Stefan Eriksson, Laurine Bente Schram Harsløf, Suzanne Petri, Gert Helgesson, Margrete Mangset and Tove E. Godskesen. Are cancer patients better off if they participate in clinical trials? A mixed methods study. BMC Cancer 20, 401 (2020). https://doi.org/10.1186/s12885-020-06916-z

We have a clinical perspective

This post in Swedish

Inspiration for responsible research and innovation

Our attitude to science is changing. Can we still talk solemnly about it as a unified endeavor, or even speak of the sciences? It seems more apt to talk about research activities that produce useful and applicable knowledge.

Science has been dethroned, it seems. In the past, we revered it as free and independent search for the truth. We esteemed it as our tribunal of truth, as the last arbiter of truth. Today, we demand that it brings benefits and adapts to society. The change is full of tension because we still want to use scientific expertise as a higher intellectual authority. Should we bow to the experts or correct them if they do not deliver the “right knowledge” or the “desirable facts”?

Responsible Research and Innovation (RRI) is an attempt to manage this risky change, adapting science to new social requirements. As the name suggests, RRI is partly an expression of the same basic change in attitude. One could perhaps view RRI as the responsible dethroning of science.

Some mourn the dethroning, others rejoice. Here I just want to link RRI to the changed attitude to science. RRI handles a change that is basically affirmed. The ambiguous attitude to scientific expertise, mentioned above, shows how important it is that we take responsibility for people’s trust in what is now called research and innovation. For why should we listen to representatives of a sector with such an unholy designation?

RRI was introduced in European research within the Horizon 2020 programme. Several projects are specifically about implementing and studying RRI. Important aspects of RRI are gender equality, open access publishing, science education, research communication, public engagement and ethics. It is about adapting research and innovation to a society with new hopes and demands on what we proudly called science.

A new book describes experiences of implementing RRI in a number of bioscience organizations around the world. The book is written within the EU-project, STARBIOS2. In collaboration with partners in Europe, Africa and the Americas, this project planned and implemented several RRI initiatives and reflected on the work process. The purpose of STARBIOS2 has been to change organizations durably and structurally. The book aims to help readers formulate their own action plans and initiate structural changes in their organizations.

The cover describes the book as guidelines. However, you will not find formulated guidelines. What you will find, and which might be more helpful, is self-reflection on concrete examples of how to work with RRI action plans. You will find suggestions on how to emphasize responsibility in research and development. Thus, you can read about efforts to support gender equality, improve exchange with the public and with society, support open access publication, and improve ethics. Read and be inspired!

Finally, I would like to mention that the Ethics Blog, as well as our ethics activities here at CRB, could be regarded as examples of RRI. I plan to return later with a post on research communication.

The STARBIOS2 project is organising a virtual final event on 29 May! Have a look at the preliminary programme!

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Declich, Andrea. 2019. RRI implementation in bioscience organisations: Guidelines from the STARBIOS2 project.


Anthropomorphism in AI can limit scientific and technological development

Anthropomorphism almost seems inscribed in research on artificial intelligence (AI). Ever since the beginning of the field, machines have been portrayed in terms that normally describe human abilities, such as understanding and learning. The emphasis is on similarities between humans and machines, while differences are downplayed. Like when it is claimed that machines can perform the same psychological tasks that humans perform, such as making decisions and solving problems, with the supposedly insignificant difference that machines do it “automated.”

You can read more about this in an enlightening discussion of anthropomorphism in and around AI, written by Arleen Salles, Kathinka Evers and Michele Farisco, all at CRB and the Human Brain Project. The article is published in AJOB Neuroscience.

The article draws particular attention to so-called brain-inspired AI research, where technology development draws inspiration from what we know about the functioning of the brain. Here, close relationships are emphasized between AI and neuroscience: bonds that are considered to be decisive for developments in both fields of research. Neuroscience needs inspiration from AI research, it is claimed, just as AI research needs inspiration from brain research.

The article warns that this idea of a close relationship between the two fields presupposes an anthropomorphic interpretation of AI. In fact, brain-inspired AI multiplies the conceptual double exposures by projecting not only psychological but also neuroscientific concepts onto machines. AI researchers talk about artificial neurons, synapses and neural networks in computers, as if they incorporated artificial brain tissue into the machines.

An overlooked risk of anthropomorphism in AI, according to the authors, is that it can conceal essential characteristics of the technology that make it fundamentally different from human intelligence. In fact, anthropomorphism risks limiting scientific and technological development in AI, since it binds AI to the human brain as privileged source of inspiration. Anthropomorphism can also entice brain research to uncritically use AI as a model for how the brain works.

Of course, the authors do not deny that AI and neuroscience mutually support each other and should cooperate. However, in order for cooperation to work well, and not limit scientific and technological development, philosophical thinking is also needed. We need to clarify conceptual differences between humans and machines, brains and computers. We need to free ourselves from the tendency to exaggerate similarities, which can be more verbal than real. We also need to pay attention to deep-rooted differences between humans and machines, and learn from the differences.

Anthropomorphism in AI risks encouraging irresponsible research communication, the authors further write. This is because exaggerated hopes (hype) seem intrinsic to the anthropomorphic language. By talking about computers in psychological and neurological terms, it sounds as if these machines already essentially functioned as human brains. The authors speak of an anthropomorphic hype around neural network algorithms.

Philosophy can thus also contribute to responsible research communication about artificial intelligence. Such communication draws attention to exaggerated claims and hopes inscribed in the anthropomorphic language of the field. It counteracts the tendency to exaggerate similarities between humans and machines, which rarely go as deep as the projected words make them sound.

In short, differences can be as important and instructive as similarities. Not only in philosophy, but also in science, technology and responsible research communication.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Arleen Salles, Kathinka Evers & Michele Farisco (2020) Anthropomorphism in AI, AJOB Neuroscience, 11:2, 88-95, DOI: 10.1080/21507740.2020.1740350


What is a moral machine?

I recently read an article about so-called moral robots, which I found clarifying in many ways. The philosopher John-Stewart Gordon points out pitfalls that non-ethicists – robotics researchers and AI programmers – may fall into when they try to construct moral machines. Simply because they lack ethical expertise.

The first pitfall is rookie mistakes. One might naively identify ethics with certain famous bioethical principles, as if ethics could not be anything but so-called “principlism.” Or, it is believed that computer systems, through automated analysis of individual cases, can “learn” ethical principles and “become moral,” as if morality could be discovered experientially or empirically.

The second challenge has to do with the fact that the ethics experts themselves disagree about the “right” moral theory. There are several competing ethical theories (utilitarianism, deontology, virtue ethics and more). What moral template should programmers use when getting computers to solve moral problems and dilemmas that arise in different activities? (Consider self-driving cars in difficult traffic situations.)

The first pitfall can be addressed with more knowledge of ethics. How do we handle the second challenge? Should we allow programmers to choose moral theory as it suits them? Should we allow both utilitarian and deontological robot cars on our streets?

John-Stewart Gordon’s suggestion is that so-called machine ethics should focus on the similarities between different moral theories regarding what one should not do. Robots should be provided with a binding list of things that must be avoided as immoral. With this restriction, the robots then have leeway to use and balance the plurality of moral theories to solve moral problems in a variety of ways.

In conclusion, researchers and engineers in robotics and AI should consult the ethics experts so that they can avoid the rookie mistakes and understand the methodological problems that arise when not even the experts in the field can agree about the right moral theory.

All this seems both wise and clarifying in many ways. At the same time, I feel genuinely confused about the very idea of “moral machines” (although the article is not intended to discuss the idea, but focuses on ethical challenges for engineers). What does the idea mean? Not that I doubt that we can design artificial intelligence according to ethical requirements. We may not want robot cars to avoid collisions in city traffic by turning onto sidewalks where many people walk. In that sense, there may be ethical software, much like there are ethical funds. We could talk about moral and immoral robot cars as straightforwardly as we talk about ethical and unethical funds.

Still, as I mentioned, I feel uncertain. Why? I started by writing about “so-called” moral robots. I did so because I am not comfortable talking about moral machines, although I am open to suggestions about what it could mean. I think that what confuses me is that moral machines are largely mentioned without qualifying expressions, as if everyone ought to know what it should mean. Ethical experts disagree on the “right” moral theory. However, they seem to agree that moral theory determines what a moral decision is; much like grammar determines what a grammatical sentence is. With that faith in moral theory, one need not contemplate what a moral machine might be. It is simply a machine that makes decisions according to accepted moral theory. However, do machines make decisions in the same sense as humans do?

Maybe it is about emphasis. We talk about ethical funds without feeling dizzy because a stock fund is said to be ethical (“Can they be humorous too?”). There is no mythological emphasis in the talk of ethical funds. In the same way, we can talk about ethical robot cars without feeling dizzy as if we faced something supernatural. However, in the philosophical discussion of machine ethics, moral machines are sometimes mentioned in a mythological way, it seems to me. As if a centaur, a machine-human, will soon see the light of day. At the same time, we are not supposed to feel dizzy concerning these brave new centaurs, since the experts can spell out exactly what they are talking about. Having all the accepted templates in their hands, they do not need any qualifying expressions!

I suspect that also ethical expertise can be a philosophical pitfall when we intellectually approach so-called moral machines. The expert attitude can silence the confusing questions that we all need time to contemplate when honest doubts rebel against the claim to know.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Gordon, J. Building Moral Robots: Ethical Pitfalls and Challenges. Sci Eng Ethics 26, 141–157 (2020).


Herb Terrace about the chimpanzee Nim – do you see the contradiction?

Have you seen small children make repeated attempts to squeeze a square object through a round hole (plastic toy for the little ones)? You get puzzled: Do they not see that it is impossible? The object and the hole have different shapes!

Sometimes adults are just as puzzling. Our intellect does not always fit reality. Yet, we force our thoughts onto reality, even when they have different shapes. Maybe we are extra stubborn precisely when it is not possible. This post is about such a case.

Herb Terrace is known as the psychologist who proved that apes cannot learn language. He himself tried to teach sign language to the chimpanzee Nim, but failed according to his own judgement. When Terrace took a closer look at the videotapes, where Nim interacted with his human sign-language teachers, he saw how Nim merely imitated the teachers’ signs, to get his reward.

I recently read a blog post by Terrace where he not only repeats the claim that his research demonstrates that apes cannot learn language. The strange thing is that he also criticizes his own research severely. He writes that he used the wrong method with Nim, namely, that of giving him rewards when the teacher judged that he made the right signs. The reasoning becomes even more puzzling when Terrace writes that not even a human child could learn language with such a method.

To me, this is as puzzling as a child’s insistence on squeezing a square object through a round hole. If Terrace used the wrong method, which would not work even on a human child, then how can he conclude that Project Nim demonstrates that apes cannot learn language? Nevertheless, he insists on reasoning that way, without feeling that he contradicts himself. Nor does anyone who read him seem to experience any contradiction. Why?

Perhaps because most of us think that humans cannot teach animals anything at all, unless we train them with rewards. Therefore, since Nim did not learn language with this training method, apes cannot learn language. Better methods do not work on animals, we think. If Terrace failed, then everyone must fail, we think.

However, one researcher actually did try a better method in ape language research. She used an approach to young apes that works with human children. She stopped training the apes via a system of rewards. She lived with the apes, as a parent with her children. And she succeeded!

Terrace almost never mentions the name of the successful ape language researcher. After all, she used a method that is impossible with animals: she did not train them. Therefore, she cannot have succeeded, we think.

I can tell you that the name of the successful researcher is Sue Savage-Rumbaugh. To see a round reality beyond square thinking, we need to rethink our thought patterns. If you want to read a book that tries to do such rethinking about apes, humans and language, I recommend a philosophical self-critique that I wrote with Savage-Rumbaugh and her colleague William Fields.

To philosophize is to learn to stop imposing our insane thoughts on reality. Then we finally see reality as it is.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Segerdahl, P., Fields, W. & Savage-Rumbaugh, S. 2005. Kanzi’s Primal Language. The Cultural Initiation of Primates into Language. Palgrave Macmillan.

Understanding enculturated apes


What shall we eat? An ethical framework for food choices (By Anna T. Höglund)

To reflect ethically on what we eat has been part of Western culture for centuries. In pre-modern times, the focus was mainly on the consumption of food, although it varied whether the emphasis was on the amount of food one should eat (as in ancient Greece) or on what kind of food one was allowed to eat (as in the Old Testament).

Modern food ethics has instead focused on the production of food, emphasizing aspects of animal ethics and environmental ethics. In a new article, I take a broader perspective and discuss both the production and consumption of food and further incorporate the meal as an important part of my food ethics analysis.

I identify four affected parties in relation to the production and consumption of food, namely, animals, nature, producers and consumers. What ethical values can be at stake for these parties?

For animals, an important value is welfare; not being exposed to pain or stress, but provided opportunities for natural behavior. For nature, important values are low negative impact on the environment and sustainable climate. For producers, ethical values at stake concern fair salaries and safe working conditions. For consumers, finally, important values are access to healthy food and the right to autonomous food choices. Apart from that, food can also be seen as an important value in pursuit of a good life.

Evidently, several ethical values are at stake when it comes to the production and consumption of food. Furthermore, these values often conflict when food choices are to be made. In such situations, a thorough weighing of values must be performed in order to find out which value should be given priority over another.

A problem with today’s food debate is that we tend to concentrate on one value at a time, without putting it in the perspective of other aspects. The question of how our food choices affect the climate has gained a lot of interest, at the expense of almost all other aspects of food ethics.

Many have learned that beef production can affect the climate negatively, since grazing cattle give rise to high levels of methane. They therefore choose to avoid that kind of meat. On the other hand, grazing animals can contribute to biodiversity as they keep the landscape open, which is good for the environment. Breeding chickens produces low levels of methane, but here the challenges concern animal welfare, natural behavior and the use of chemicals in the production of bird feed.

To replace meat with vegetables can be good for your health, but imported fruits and vegetables can be produced using toxins if they are not organically farmed. Long transports can also affect the climate negatively.

For these reasons, it can be ethically problematic to choose food based on only one perspective. Ethics is not that simple. We need to develop our ability to identify what values are at stake when it comes to food, and find good reasons for why we choose one sort of food instead of another. In the article, I develop a more comprehensive food ethical outlook by combining four well-known ethical concepts, namely, duties, consequences, virtues and care.

Duties and consequences are often part of ethical arguments. However, by also including virtues and care in my reasoning, the meal and the sense of community it gives rise to appear as important ethical values. Unfortunately, the latter values are at risk today, when more and more people have their own individualized food preferences. During a meal, relations are developed, which the ethics of care emphasizes, but the meal is also an arena for developing virtues, such as solidarity, communication and respect.

It is hard to be an ethically aware consumer today, partly because there are so many aspects to take into account and partly because it is difficult to get reliable information on which to base our decisions. However, that does not mean it is pointless to reflect on what is good and right when faced with food ethical dilemmas.

If we think through our food choices thoroughly and avoid wasting food, we can do a lot to reach well-grounded food choices. Apart from that, we also need brave political decisions that can reduce factory farming, toxins, transports and emissions, and support small-scale and organic food production. Through such efforts, we might all feel a little more secure in the grocery shop, when we reflect on the question: What shall we eat?

Anna T. Höglund

Written by…

Anna T. Höglund, who is Associate Professor of Ethics at the Centre for Research Ethics & Bioethics and recently wrote a book on food ethics.

Höglund, Anna T. (2020) What shall we eat? An ethical framework for well-grounded food choices. Journal of Agricultural and Environmental Ethics. DOI: 10.1007/s10806-020-09821-4

We like real-life ethics

This post in Swedish

Exactly when does a human being actually come into existence?

The one who prepares the food may announce, “The food is ready now!” when the food is ready. However, when exactly is the food actually ready? When the kitchen timer rings? The potatoes are cooked then. Or when the saucepan is removed from the stove? The cooking ends then. Or when the saucepan is emptied of water? The potatoes are separated from the cooking medium then. Or when the potatoes are carried to the table? The food will be available to the guests around the table then. However, is the food actually available for eating before it is on the plate? Should not each guest say, “The food is ready now,” when the food is on the plate? However, if the food is too hot, is it actually ready? Should not someone around the table say when you no longer burn your tongue, “The food is ready now”?

Yes, exactly when is the food actually ready? You probably notice that the question is treacherous. The very asking, “exactly when, actually?” systematically makes every answer wrong, or not exactly right. The question is based on rejecting the answer. It is based on suggesting another, smarter way to answer. Which is not accepted, because an even smarter way to answer is suggested. And so on. Questions that systematically reject the answer are no ordinary questions. They can appear to be profound, because no ordinary human answer is accepted. They can appear to be at a high intellectual level, because the questioner seems to demand nothing less than the exact and actual truth. Such extremely curious questions are usually called metaphysical.

However, we hardly experience the question about exactly when the food actually is ready as important and deep. We see the trick. The question is like a stubborn teenager who just discovered how to quibble. However, sometimes these verbally treacherous questions can appear on the agenda and be perceived as important to answer. In bioethics, the question about the beginning of a human being has become such a question. Exactly when does a human being actually come into existence?

Why is this question asked in bioethics? The reason is, of course, that there are ethical and legal limits to what medical researchers are permitted to do with human beings. The question of what counts as a human being then acquires significance. When does a fertilized egg become a human? Immediately? After a number of days? The question will determine what researchers are permitted to do with human embryos. Can they harvest stem cells from embryos and destroy them? There is disagreement about this.

When people disagree, they want to convince each other by debating. The issue of the beginning of a human being has been debated for decades. The problem is that the question is just as treacherous as the question about exactly when the food actually is ready. In addition, the apparent depth and inquisitiveness of the question serves as intellectual allurement. We seem to be able to determine exactly who is actually right. The Holy Grail of all debates!

The crucial moment never comes. The Holy Grail is constantly proving to be an illusion, since the question systematically rejects every answer by proposing an even smarter answer, just like the question about food. The question of the beginning of a human being has now reached such levels of cleverness that it cannot be rendered in ordinary human words. Philosophers earn their living as intellectual advocates who give debating clients strategic advice on metaphysical loopholes that will allow them to avoid the opponent’s latest clever argument. Listen to such metaphysical advice to debaters who want to argue that a human being comes into existence exactly at conception and not a day later:

”Given the twinning argument, the conceptionist then faces a choice between perdurantist conceptionism and exdurantist conceptionism, and we argue that, apart from commitments beyond the metaphysics of embryology, they should prefer the latter over the former.”

Do you feel like reading more? If so, read the article and judge for yourself the depth and seriousness of the question. Personally, I wish for more mature ways to deal with bioethical conflicts than through metaphysical advice to stubborn debaters.


Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Efird, D., Holland, S. Stages of life: A new metaphysics of conceptionism. Bioethics. 2019; 33: 529–535. https://doi.org/10.1111/bioe.12556

We like challenging questions

This post in Swedish

When order creates disorder

Scientific publications often have more than one author. The order of the authors then becomes a sensitive issue for academics, since it counts: a good position in the author list counts as good scientific merit. The authorship order also determines the allocation of funding to the author’s university department: a good author position gives more money to the department.

The only problem is that there is no proper authorship order. Different research areas have their own traditions, which change over time. For example, as scientific articles are written jointly by more and more co-authors, the last positions are becoming increasingly important, as they are more visible than the cluster in the middle. Suddenly, you can feel proud to be the second to last among 20 authors.

However, does the expert who assesses your application believe that it is a merit that you are second to last in the author list? Does your university think that such a position should motivate more money to your department than a position in the middle?

When everyone wants to count on an order that does not really exist, it is understandable if administrative efforts are made to regulate authorship order. In an article in the journal Research Ethics, Gert Helgesson exemplifies how a Swedish university introduced its own new rules for the allocation of financial resources based on, among other things, position in the author list.

Gert Helgesson warns that such an administratively imposed order easily creates more disorder. Although it is only meant to regulate the allocation of funds, it can contribute to a local tradition concerning which author positions are considered desirable. The fragmentation increases rather than decreases.

To count or not to count, that is the question. It leads us right into this maze.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Gert Helgesson. “Authorship order and effects of changing bibliometrics practices.” Research Ethics. First Published January 21, 2020, https://doi.org/10.1177/1747016119898403

We recommend readings

This post in Swedish

Neuroethics as foundational

As neuroscience expands, the need for ethical reflection also expands. A new field has emerged, neuroethics, which celebrated its 15th anniversary last year. This was noted in the journal AJOB Neuroscience through an article about the area’s current and future challenges.

In one of the published comments, three researchers from the Human Brain Project and CRB emphasize the importance of basic conceptual analysis in neuroethics. The new field of neuroethics is more than just a kind of ethical mediator between neuroscience and society. Neuroethics can and should contribute to the conceptual self-understanding of neuroscience, according to Arleen Salles, Kathinka Evers and Michele Farisco. Without such self-understanding, the ethical challenges become unclear, sometimes even imaginary.

Foundational conceptual analysis can sound stiff. However, if I understand the authors, it is just the opposite. Conceptual analysis is needed to make concepts agile, when habitual thinking made them stiff. One example is the habitual thinking that facts about the brain can be connected with moral concepts, so that, for example, brain research can explain to us what it “really” means to be morally responsible for our actions. Such habitual thinking about the role of the brain in human life may suggest purely imaginary ethical concerns about the expansion of neuroscience.

Another example the authors give is the external perspective on consciousness in neuroscience. Neuroscience does not approach consciousness from a first-person perspective, but from a third-person perspective. Neuroscience may need to be reminded of this and similar conceptual limitations, to better understand the models it develops of the brain and human consciousness, and the conclusions that can be drawn from those models.

Conceptual neuroethics is needed to free concepts from intellectual deadlocks arising with the expansion of neuroscience. Thus, neuroethics can contribute to deepening the self-understanding of neuroscience as a science with both theoretical and practical dimensions. At least that is how I understand the spirit of the authors’ comment in AJOB Neuroscience.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Emerging Issues Task Force, International Neuroethics Society (2019) Neuroethics at 15: The Current and Future Environment for Neuroethics, AJOB Neuroscience, 10:3, 104-110, DOI: 10.1080/21507740.2019.1632958

Arleen Salles, Kathinka Evers & Michele Farisco (2019) The Need for a Conceptual Expansion of Neuroethics, AJOB Neuroscience, 10:3, 126-128, DOI: 10.1080/21507740.2019.1632972

We like ethics

This post in Swedish
