A blog from the Centre for Research Ethics & Bioethics (CRB)

Author: Pär Segerdahl

We shape the societies that shape us: our responsibility for human nature

Visionary academic texts are rare – texts that shed light on how research can contribute to the perennial human issues. In an article in the philosophical journal Theoria, however, Kathinka Evers opens up a novel visionary perspective on neuroscience and tragic aspects of the human condition.

For millennia, sensitive thinkers have been concerned about human nature. Undoubtedly, we humans create prosperity and security for ourselves. However, like no other animal, we also have an unfortunate tendency to create misery for ourselves (and other life forms). The 20th century was extreme in both directions. What is the mechanism behind our peculiar, large-scale, self-injurious behavior as a species? Can it be illuminated and changed?

As I read her, Kathinka Evers asks essentially this big human question. She does so based on the current neuroscientific view of the brain, which she argues motivates a new way of understanding and approaching the mechanism of our species’ self-injurious behavior. An essential feature of the neuroscientific view is that the human brain is designed to never be fully completed. Just as we have a unique self-injurious tendency as a species, we are born with uniquely incomplete brains. These brains are under construction for decades and need good care throughout this time. They are not formed passively, but actively, by finding more or less felicitous ways of functioning in the societies to which we expose ourselves.

Since our brains shape our societies, one could say that we build the societies that build us, in a continual cycle. The brain is right in the middle of this sensitive interaction between humans and their societies. With its creative variability, the human brain makes many deterministic claims about genetics and our “innate” nature problematic. Why are we humans the way we are? Partly because we create the societies that create us as we are. For millennia, we have generated ourselves through the societies that we have built, ignorant of the hyper-interactive organ in the middle of the process. It is always behind our eyes.

Kathinka Evers’ point is that our current understanding of the brain as inherently active, dynamic and variable, gives us a new responsibility for human nature. She expresses the situation technically as follows: neuroscientific knowledge gives us a naturalistic responsibility to be epigenetically proactive. If we know that our active and variable brains support a cultural evolution beyond our genetic heritage, then we have a responsibility to influence evolution by adapting our societies to what we know about the strengths and weaknesses of our brains.

The notion of a neuroscientific responsibility to design societies that shape human nature in desired ways may sound like a call for a new form of social engineering. However, Kathinka Evers develops the notion of this responsibility in the context of a conscientious review of similar tendencies in our history, tendencies that have often revolved around genetics. The aim of epigenetic proaction is not to support ideologies that have already decided what a human being should be like. Rather, it is about allowing knowledge about the brain to inspire social change, where we would otherwise ignorantly risk recreating human misery. Of course, such knowledge presupposes collaboration between the natural, social and human sciences, in conjunction with free philosophical inquiry.

The article mentions juvenile violence as an example. In some countries, there is a political will to convict juvenile delinquents as if they were adults and even place them in adult prisons. Today, we know that during puberty, the brain is in a developmental crisis where important neural circuits change dramatically. Young brains in crisis need special care. However, in these cases they risk ending up in just the kind of social environments that we can predict will create more misery.

Knowledge about the brain can thus motivate social changes that reduce the peculiar self-injurious behavior of humanity, a behavior that has worried sensitive thinkers for millennia. Neuroscientific self-awareness gives us a key to the mechanism behind the behavior and a responsibility to use it.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Kathinka Evers. 2020. The Culture‐Bound Brain: Epigenetic Proaction Revisited. Theoria. doi:10.1111/theo.12264

This post in Swedish

We like challenging questions

An ideology that is completely foreign to my ideology

I read a newspaper editorial that criticized ideological elements in school teaching. The author had visited the website of one of the organizations hired by the schools and found clear expressions of a view of society based on ideological dogmas of a certain kind.

The criticism may well have been justified. What made me think was how the author explained the problem. It sounded as if the problem was that the ideology in question was foreign to the author’s own ideology: “foreign to me and most other …-ists”.

I was sad when I read this. It made it appear as if it were our human destiny to live trapped in ideological labyrinths, alien to each other. If we are foreign to an ideology, does it really mean nothing more than that the ideology is foreign to our own ideology?

Can we free ourselves from the labyrinths of ideology? Or would that be just another ideology: “We anti-ideologues call for a fight against all ideologies”!? Obviously, it is difficult to fight all ideologies without becoming ideological yourself. Even peace movements bear the seeds of new conflicts. Which side are you on in the struggle for peace?

Can we free ourselves by strictly sticking to the facts and nothing but the facts? Sticking to the facts is important. One problem is that ideologies already love to refer to facts, to strengthen the ideology and present it as the truth. Pointing out facts provides ammunition for even more ideological debate, in which we soon become an engaged party: “We rationalists strongly oppose all ideologically biased descriptions of reality”!?

Can the solution be to always acknowledge ideological affiliation, so that we spread awareness of our ideological one-sidedness: “Hello, I represent the national organization against intestinal lavage – a practice that we anti-flushers see as a violation of human dignity”!? It can be good to inform others about our motives, so that they are not misled into believing what we say. However, it hardly shows a more beautiful aspect of humanity, but reinforces the image that conflicting forms of ideological one-sidedness are our destiny.

However, if we now see the problem clearly, if we see how every attempt to solve the problem recreates the problem, have we not opened ourselves to our situation? Have we not seen ourselves with a gaze that is no longer one-sided? Are we not free?

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

This post in Swedish

Thinking about thinking

What is required of an ethics of artificial intelligence?

I recently highlighted criticism of the ethics that often figures in the field of artificial intelligence (AI). An ethics that can handle the challenges that AI presents us with requires more than just beautifully formulated ethical principles, values and guidelines. What exactly is required of an ethics of artificial intelligence?

Michele Farisco, Kathinka Evers and Arleen Salles address the issue in the journal Science and Engineering Ethics. For them, ethics is not primarily principles and guidelines. Ethics is rather an ongoing process of thinking: it is continual ethical reflection on AI. Their question is thus not what is required of an ethical framework built around AI. Their question is what is required of in-depth ethical reflection on AI.

The authors emphasize conceptual analysis as essential in all ethical reflection on AI. One of the big difficulties is that we do not know exactly what we are discussing! What is intelligence? What is the difference between artificial and natural intelligence? How should we understand the relationship between intelligence and consciousness? Between intelligence and emotions? Between intelligence and insightfulness?

Ethical problems about AI can be both practical and theoretical, the authors point out. They describe two practical and two theoretical problems to consider. One practical problem is the use of AI in activities that require emotional abilities that AI lacks. Empathy gives humans insight into other humans’ needs. Therefore, AI’s lack of emotional involvement should be given special attention when we consider using AI in, for example, child or elderly care. The second practical problem is the use of AI in activities that require foresight. Intelligence is not just about reacting to input from the environment. A more active, foresighted approach is often needed, going beyond actual experience and seeing less obvious, counterintuitive possibilities. Crying can express pain, joy and much more, but AI cannot easily foresee less obvious possibilities.

Two theoretical problems are also mentioned in the article. The first is whether AI in the future may have morally relevant characteristics such as autonomy, interests and preferences. The second problem is whether AI can affect human self-understanding and create uncertainty and anxiety about human identity. These theoretical problems undoubtedly require careful analysis – do we even know what we are asking? In philosophy we often need to clarify our questions as we go along.

The article emphasizes one demand in particular on ethical analysis of AI. It should carefully consider morally relevant abilities that AI lacks, abilities needed to satisfy important human needs. Can we let a cute kindergarten robot “comfort” children when they scream with joy or when they injure themselves so badly that they need nursing?

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Farisco, M., Evers, K. & Salles, A. Towards establishing criteria for the ethical analysis of Artificial Intelligence. Science and Engineering Ethics (2020). https://doi.org/10.1007/s11948-020-00238-w

This post in Swedish

We want solid foundations

Unethical research papers should be retracted

Articles that turn out to be based on fraudulent or flawed research are, of course, retracted by the journals that published them. The fact that there is a clearly stated policy for retracting fraudulent research is extremely important. Science as well as its societal applications must be able to trust that published findings are correct and not fabricated or distorted.

However, how should we handle articles that turn out to be based on unethical research? For example, research on the bodies of executed prisoners? Or research that exposes participants to unreasonable risks? Or research supported by unacceptable sources of funding?

In a new article, William Bülow, Tove E. Godskesen, Gert Helgesson and Stefan Eriksson examine whether academic journals have clearly formulated policies for retracting papers that are based on unethical research. The review shows that many journals lack such policies. This introduces arbitrariness and uncertainty into the system, the authors argue. Readers cannot trust that published research is ethical. They also do not know on what grounds articles are retracted or remain in the journal.

To motivate a clearly stated policy, the authors discuss four possible arguments for retracting unethical research papers. Two arguments are considered particularly conclusive. The first is that such a policy communicates that unethical research is unacceptable, which can deter researchers from acting unethically. The second is that journals become complicit in the unethical conduct when they enable the completion of unethical research by publishing it and benefit from doing so.

Retraction of research papers is a serious matter and very compromising for researchers. Therefore, it is essential to clarify which forms and degrees of unethical conduct are sufficient to justify retraction. The authors cite as examples research based on serious violations of human rights, unfree research and research with unacceptable sources of funding.

The article concludes by recommending that scientific journals introduce a clearly stated policy for retracting unethical research: as clear as the policy for fraudulent research. Among other things, all retractions should be marked in the journal and the reasons behind the retractions should be specified in terms of both the kind and degree of unethical conduct.

For more details on the policy recommendation, read the article in the Journal of Medical Ethics.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Bülow, W., Godskesen, T. E., Helgesson, G., Eriksson, S. Why unethical papers should be retracted. Journal of Medical Ethics, Published Online First: 13 August 2020. doi: 10.1136/medethics-2020-106140

This post in Swedish

We care about communication

Ethics as renewed clarity about new situations

An article in the journal Big Data & Society criticizes the form of ethics that has come to dominate research and innovation in artificial intelligence (AI). The authors question the same “framework interpretation” of ethics that you could read about on the Ethics Blog last week. However, with one disquieting difference. Rather than functioning as a fence that can set the necessary boundaries for development, the framework risks being used as ethics washing by AI companies that want to avoid legal regulation. By referring to ethical self-regulation – beautiful declarations of principles, values and guidelines – such companies hope to escape the legal regulation that could set important limits for AI.

The problem with AI ethics as “soft ethics legislation” is not just that it can be used to avoid necessary legal regulation of the area. The problem is above all, according to the SIENNA researchers who wrote the article, that a “law conception of ethics” does not help us to think clearly about new situations. What we need, they argue, is an ethics that constantly renews our ability to see the new. This is because AI is constantly confronting us with new situations: new uses of robots, new opportunities for governments and companies to monitor people, new forms of dependence on technology, new risks of discrimination, and many other challenges that we may not easily anticipate.

The authors emphasize that such eye-opening AI ethics requires close collaboration with the social sciences. That, of course, is true. Personally, I want to emphasize that an ethics that renews our ability to see the new must also be philosophical in the deepest sense of the word. To see the new and unexpected, you cannot rest comfortably in your professional competence, with its established methods, theories and concepts. You have to question your own disciplinary framework. You have to think for yourself.

Read the article, which has already attracted well-deserved attention.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Anaïs Rességuier, Rowena Rodrigues. 2020. AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data & Society

This post in Swedish

We like critical thinking

Ethical frameworks for research

The term “ethical framework” evokes the idea of something rigid and separating, like a fence around a garden. The research that emerges within the framework is dynamic and constantly new. However, to ensure safety, it is placed in an ethical framework that sets clear boundaries for what researchers are allowed to do in their work.

That this is an oversimplified picture is clear after reading an inventive discussion of ethical frameworks in neuroscientific research projects, such as the Human Brain Project. The article is written by Arleen Salles and Michele Farisco at CRB and is published in AJOB Neuroscience.

The article questions not only the image of ethical frameworks as static boundaries for dynamic research activities. Inspired by ideas within so-called responsible research and innovation (RRI), the image that research can be separated from ethics and society is also questioned.

Researchers tend to regard research as their own concern. However, there are tendencies towards increasing collaboration not only across disciplinary boundaries, but also with stakeholders such as patients, industry and various forms of extra-scientific expertise. These tendencies make research an increasingly dispersed, common concern: not only in retrospect, in the form of applications of finished research (which presupposes that the research effort can be separated), but already when research is initiated, planned and carried out.

This could sound threatening, as if foreign powers were influencing the free search for truth. Nevertheless, there may also be something hopeful in the development. To see the hopeful aspect, however, we need to free ourselves from the image of ethical frameworks as static boundaries, separate from dynamic research.

With examples from the Human Brain Project, Arleen Salles and Michele Farisco try to show how ethical challenges in neuroscience projects cannot always be controlled in advance, through declared principles, values and guidelines. Even ethical work is dynamic and requires living intelligent attention. The authors also try to show how ethical attention reaches all the way into the neuroscientific issues, concepts and working conditions.

When research on the human brain is not aware of its own cultural and societal conditions, but takes them for granted, it may mean that relevant questions are not asked and that research results do not always have the validity that one assumes they have.

We thus have good reasons to see ethical and societal reflections as living parts of neuroscience, rather than as rigid frameworks around it.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Arleen Salles & Michele Farisco (2020) Of Ethical Frameworks and Neuroethics in Big Neuroscience Projects: A View from the HBP, AJOB Neuroscience, 11:3, 167-175, DOI: 10.1080/21507740.2020.1778116

This post in Swedish

We like real-life ethics

Working online during the pandemic: recommendations from the Human Brain Project

The covid-19 pandemic forced many of us to work online from home. The change contained surprises, both positive and negative. We learned that it is possible to have digital staff meetings, seminars and coffee breaks, and that working from home can sometimes mean fewer interruptions than working in the office. We also discovered how much better the office chair and desk are, how difficult it is to try to be professional online from an untidy home, and that working from home often means more interruptions than working in the office!

The European Human Brain Project (HBP) has extensive experience of collaborating digitally, with regular online meetings. This is how they worked long before the pandemic struck, since the project is a collaboration between more than 100 partner institutions in almost 20 countries, some outside Europe. As part of the project’s investment in responsible research and innovation, special efforts are now being made to include everyone digitally, now that so much of the work has moved online.

In the Journal of Responsible Technology, Karin Grasenick and Manuel Guerrero from HBP formulate recommendations based on experiences from the project. Their recommendations concern four areas: How do we facilitate social and family life? How do we reduce stress and anxiety? How do we handle career stages, roles and responsibilities? How do we support team spirit and virtual cooperation?

Read the concise article! You will recognize your work situation and be inspired by the suggestions. Online collaboration will continue even after the pandemic.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Karin Grasenick, Manuel Guerrero. Responsible Research and Innovation & Digital Inclusiveness during Covid-19 Crisis in the Human Brain Project (HBP). Journal of Responsible Technology (2020). doi: https://doi.org/10.1016/j.jrt.2020.06.001

We like ethics

This post in Swedish

Ethical fitness apps for high performance morality

In an article unusually rhetorical for a scientific journal, the image is drawn of a humanity that frees itself from moral weakness by downloading ethical fitness apps.

The authors claim that the maxim “Know thyself!” from the temple of Apollo at Delphi is answered today more thoroughly than ever. Never has humanity known more about itself. Ethically, we are almost fully educated. We also know more than ever about the moral weaknesses that prevent us from acting in accordance with the ethical principles that we finally know so well. Research is discovering more and more mechanisms in the brain and in our psychology that underlie humanity’s moral shortcomings.

Given this enormous and growing self-knowledge, why do we not develop artificial intelligence that supports a morally limping humanity? Why spend so many resources on developing even more intelligent artificial intelligence, which takes our jobs and might one day threaten humanity in the form of uncontrollable superintelligence? Why do we behave so unwisely when we could develop artificial intelligence to help us humans become superethical?

How can AI make morally weak humans super-ethical? The authors suggest a comparison with the fitness apps that help people to exercise more efficiently and regularly than they otherwise would. The authors’ suggestion is that our ethical knowledge of moral theories, combined with our growing scientific knowledge of moral weaknesses, can support the technological development of moral crutches: wise objects that support people precisely where we know that we are morally limping.

My personal assessment of this utopian proposal is that it might easily be realized in less utopian form. AI is already widely used as a support in decision-making. One could imagine mobile apps that support consumers in making ethical food choices in the grocery shop. Or computer games where consumers are trained to weigh different ethical considerations against one another, such as animal welfare, climate effects, ecological effects and much more. Nice-looking presentations of the issues and encouraging music that make it fun to be moral.

The philosophical question I ask is whether such artificial decision support in shops and other situations really can be said to make humanity wiser and more ethical. Imagine a consumer who chooses among the vegetables, eagerly looking for decision support in the smartphone. What do you see? A human who, thanks to the mobile app, has become wiser than Socrates, who lived long before we knew as much about ourselves as we do today?

Ethical fitness apps are conceivable. However, the risk is that they spread a form of self-knowledge that flies above ourselves: self-knowledge suspiciously similar to the moral vice of self-satisfied presumptuousness.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Pim Haselager & Giulio Mecacci (2020) Superethics Instead of Superintelligence: Know Thyself, and Apply Science Accordingly, AJOB Neuroscience, 11:2, 113-119, DOI: 10.1080/21507740.2020.1740353

The temptation of rhetoric

This post in Swedish

Autonomous together

Autonomy is such a cherished concept in ethics that I hardly dare to write about it. The fact that the concept cherishes the individual does not make my task any easier. The slightest error in my use of the term, and I risk being identified as an enemy perhaps not of the people but of the individual!

In ethics, autonomy means personal autonomy: individuals’ ability to govern their own lives. This ability is constantly at risk of being undermined. It is undermined if others unduly influence your decisions, if they control you. It is also undermined if you are not sufficiently well informed and rational. For example, if your decisions are based on false or contradictory information, or if your decisions result from compulsions or weakness of the will. It is your faculty of reason that should govern your life!

In an article in BMC Medical Ethics, Amal Matar, who completed her PhD at CRB, discusses decision-making situations in healthcare where this individual-centered concept of autonomy seems less useful. It is about decisions made not by individuals alone, but by people together: by couples planning to become parents.

A couple planning a pregnancy together is expected to make joint decisions. Maybe about genetic tests and measures to be taken if the child risks developing a genetic disease. Here, as always, the healthcare staff is responsible for protecting the patients’ autonomy. However, how is this feasible if the decision is not made by individuals but jointly by a couple?

Personal autonomy is an idealized concept. No man is an island, it is said. This is especially evident when a couple is planning a life together. If one partner begins to emphasize his or her personal autonomy, the relationship is probably about to disintegrate. An attempt to correct the lack of realism in the idealized concept has been to develop ideas about relational autonomy. These ideas emphasize how individuals who govern their lives are essentially related to others. However, as you can probably hear, relational autonomy remains tied to the individual. Amal Matar therefore finds it urgent to take a further step towards realism concerning joint decisions made by couples.

Can we talk about autonomy not only at the level of the individual, but also at the level of the couple? Can a couple planning a pregnancy together govern their life by making decisions that are autonomous not only for each one of them individually, but also for them together as a couple? This is Amal Matar’s question.

Inspired by how linguistic meaning is conceptualized in linguistic theory as existing not only at the level of the word, but also at the level of the sentence (where words are joined together), Amal Matar proposes a new concept of couple autonomy. She suggests that couples can make joint decisions that are autonomous at both the individual and the couple’s level.

She proposes a three-step definition of couple autonomy. First, both partners must be individually autonomous. Then, the decision must be reached via a communicative process that meets a number of criteria (neither partner dominates, sufficient time is given, the decision is unanimous). Finally, the definition allows one partner to autonomously transfer aspects of the decision to the other partner.

The purpose of the definition is not a philosophical revolution in ethics. The purpose is practical. Amal Matar wants to help couples and healthcare professionals to speak realistically about autonomy when the decision is a couple’s joint decision. Pretending that separate individuals make decisions in parallel makes it difficult to realistically assess and support the decision-making process, which is about interaction.

Amal Matar concludes the article, written together with Anna T. Höglund, Pär Segerdahl and Ulrik Kihlbom, by describing two cases. The cases show concretely how her definition can help healthcare professionals to assess and support autonomous decision-making at the level of the couple. In one case, the couple’s autonomy is undermined; in the other case, probably not.

Read the article as an example of how we sometimes need to modify cherished concepts to enable realistic use of them.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Matar, A., Höglund, A.T., Segerdahl, P. and Kihlbom, U. Autonomous decisions by couples in reproductive care. BMC Med Ethics 21, 30 (2020). https://doi.org/10.1186/s12910-020-00470-w

We like challenging questions

This post in Swedish

Responsibly planned research communication

Academic research is driven by dissemination of results to peers at conferences and through publication in scientific journals. However, research results belong not only to the research community. They also belong to society. Therefore, results should reach not only your colleagues in the field or the specialists in adjacent fields. They should also reach outside the academy.

Who is out there? A homogeneous public? No, it is not that simple. Communicating research is not two activities: first communicating the science to peers and then telling the popular scientific story to the public. Outside the academy, we find engineers, entrepreneurs, politicians, government officials, teachers, students, research funders, taxpayers, healthcare professionals… We are all out there with our different experiences, functions and skills.

Research communication is therefore a strategically more complicated task than just “reaching the public.” Why do you want to communicate your results; why are they important? Who will find your results important? How do you want to communicate them? When is the best time to communicate? There is not just one task here. You have to think through what the task is in each particular case. For the task varies with the answers to these questions. Only when you can think strategically about the task can you communicate research responsibly.

Josepine Fernow is a skilled and experienced research communications officer at CRB. She works with communication in several research projects, including the Human Brain Project and STARBIOS2. In the latter project, which concerns Responsible Research and Innovation (RRI), she contributes to a new book with arguments for responsibly planned research communication: Achieving impact: some arguments for designing a communications strategy.

Josepine Fernow’s contribution is, in my view, more than a convincing argument. It is an eye-opening text that helps researchers see more clearly their diverse relationships to society, and thereby their responsibilities. The academy is not a rock of knowledge in a sea of ignorant lay people. Society consists of experienced people who, because of what they know, can benefit from your research. It is easier to think strategically about research communication when you survey your relations to a diversified society that is already knowledgeable. Josepine Fernow’s argumentation helps and motivates you to do that.

Josepine Fernow also warns against exaggerating the significance of your results. Bioscience has the potential to give us effective treatments for serious diseases, new crops that meet specific demands, and much more. Since we are all potential beneficiaries of such research, as future patients and consumers, we may want to believe the excessively wishful stories that some excessively ambitious researchers want to tell. We participate in a dangerous game of increasingly unrealistic hopes.

The name of this dangerous game is hype. Research hype can make it difficult for you to continue your research in the future, because of eroded trust. It can also make you prone to take unethical shortcuts. The “huge potential benefit” obscures your judgment as a responsible researcher.

In some research fields, it is extra difficult to avoid research hype, as exaggerated hopes seem inscribed in the very language of the field. An example is artificial intelligence (AI), where the use of psychological and neuroscientific vocabulary about machines can create the impression that one has already fulfilled the hopes. Anthropomorphic language can make it sound as if some machines already thought like humans and functioned like brains.

Responsible research communication is as important as it is difficult. Therefore, these tasks deserve our greatest attention. Read Josepine Fernow’s argumentation for carefully planned communication strategies. It will help you see your responsibility more clearly.

Finally, a reminder for those interested: the STARBIOS2 project organizes its final event via Zoom on Friday, May 29, 2020.

Pär Segerdahl

Written by…

Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.

Fernow, J. (2019). Note #11: Achieving impact: Some arguments for designing a communications strategy, In A. Declich (Ed.), RRI implementation in bioscience organisations: Guidelines from the STARBIOS2 project, (pp. 177-180). Uppsala University. ISBN: 978-91-506-2811-1

We care about communication

This post in Swedish
