A blog from the Centre for Research Ethics & Bioethics (CRB)

Year: 2018

What does the order of authors mean?

How should we interpret the sequence of author names in academic publications? Does it inform us about how much each author contributed to the publication?

After reading an article on the topic by Gert Helgesson and Stefan Eriksson, I realize that authorship order is a very disorderly matter. The first and last positions are often counted as the most important. But not always. To my surprise, not even a first position necessarily signifies first authorship. Sometimes, the asterisk after the author whose contact details are given is interpreted as a sign of first authorship. Sometimes the asterisk means that this author is subordinate and handles all practicalities associated with the publication.

Sometimes the second position is of particular importance. Sometimes not. Sometimes the next to last position has a particular interpretation. Sometimes another. Helgesson and Eriksson talk about group traditions and describe conventions in different scientific fields. Are there really no guidelines to follow? No, actually not. Author guidelines at most recommend that authors agree well in advance on the order of authors. However, since the guidelines do not specify what the order signifies, the meaning of the agreed-upon authorship order is unclear!

Considering how meritorious authorship is in academic competition for positions and grants, this lack of order is surprising. Is the question too sensitive? Will an overly clear order lead to time-consuming quarrels between authors about who should be first, last, second, second to last, with asterisk, without asterisk, and so forth?

Helgesson and Eriksson discuss different proposals for clarifying authorship order. One proposal they encountered is that the first and last positions are each worth 40% of the total value of the paper. The remaining 20% is shared equally by the authors in the intermediate positions. For five authors, the authorship value would thus be divided: 40, 6.7, 6.7, 6.7 and 40%. This type of proposal is dismissed, because fixed values would be fair only if work efforts actually happened to be distributed just that way (which is unlikely).
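As a side note, the arithmetic of the fixed-share proposal is simple enough to spell out. The sketch below is only my own illustration of the proposal's logic; the function name and rounding are mine, not the article's:

```python
def fixed_value_split(n_authors: int) -> list[float]:
    """Authorship value (in percent) per position under the fixed-share proposal:
    the first and last positions get 40% each, and the intermediate positions
    share the remaining 20% equally. (The proposal does not say how papers with
    fewer than three authors should be handled.)"""
    assert n_authors >= 3, "the split is only defined when there are intermediate authors"
    middle = n_authors - 2
    return [40.0] + [20.0 / middle] * middle + [40.0]

# Five authors: 40, 6.7, 6.7, 6.7 and 40 percent, as in the example above.
print([round(share, 1) for share in fixed_value_split(5)])
```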

A more flexible system could be to provide actual percentages, on a case-by-case basis. But how are actual percentages determined? Different authors contribute qualitatively differently: by designing the study; by analyzing data; by drafting the paper. What kind of contribution has most weight?

Another suggestion is not to assign a relative value to the authors’ contributions. Instead, one specifies what each one contributed. Contributorship instead of authorship, where the contribution is described in absolute terms rather than relative. For example: “contributed to designing the study,” “contributed to data analysis,” “contributed to drafting the paper.” A problem with this proposal, Helgesson and Eriksson point out, is that it in fact says very little about absolute contributions. “Contributed to designing the study” can mean both substantial and lightweight contributions.

The article ends by taking a step back. For perhaps we took a step in the wrong direction when we required a more orderly authorship order? The problem about the meaning of the sequence of author names presupposes an individualistic and competitive outlook on science. Today, there are also other tendencies, which may be more worthwhile, such as striving to make science open and socially responsive. Perhaps we should avoid attaching too much importance to authorship order?

Should our focus be on collective contributions to science, with and for society, rather than on individual merit in the competition for employment and funding?

Thus the article ends, with a question calling for more contemplation.

Pär Segerdahl

Helgesson, G. & Eriksson, S. Authorship order. Learned Publishing, 2018. doi: 10.1002/leap.1191

This post in Swedish

We want to be just - the Ethics Blog

Nurses’ vulnerable position when care and research coincide

A new article highlights ethical challenges that nurses face in their profession when more and more clinical trials are conducted on cancer patients.

Nursing is stressful in itself. Studies have shown how heavy workload and time pressure can cause moral blindness and emotional immunization among nurses. In clinical trials, the situation is even more complicated, due to dual professional roles. The nurses have to accommodate both the values of care and the values of research. Caring for cancer patients coincides with recruiting patients as research participants and coordinating clinical trials on them according to detailed research protocols.

The article by Tove Godskesen et al. describes challenges faced by nurses burdened with this dual professional identity. The most difficult challenges concern cancer patients near the end of life, who no longer respond to the standard therapy. They often hope desperately that research participation will give them access to the next generation of cancer drugs, which may work more effectively on them. This unrealistic hope creates difficulties for the nurses. They must recruit cancer patients to clinical trials, while the patients are often so terminally ill that, from a caring perspective, they should perhaps rather be allowed to end their lives in peace and quiet.

An additional complication, next to the heavy workload in nursing and the dual identity as a nurse in the service of research, is that the number of clinical trials is increasing. There is a political ambition to accelerate this development, in order to support the Nordic pharmaceutical industry. This means that more and more nurses are engaged to coordinate trials: a task for which they were rarely trained, for which they hardly have time to prepare, and over which they lack power, given their position in the healthcare hierarchy.

In view of the political ambition to increase the number of clinical trials, there should be a corresponding ambition to support the increasing number of nurses who will have to assume dual professional roles. Godskesen’s study indicates that there is a lack of systematic strategies to handle the situation. Nurses who coordinate trials on patients support each other, to the best of their abilities, over a quick cup of coffee.

Godskesen recommends more strategic training and better support for nurses working with clinical trials. For the nurses’ sake, and not least for the sake of patient safety.

Pär Segerdahl

Tove E. Godskesen, Suzanne Petri, Stefan Eriksson, Arja Halkoaho, Margrete Mangset, Merja Pirinen, Zandra Engelbak Nielsen. 2018. When Nursing Care and Clinical Trials Coincide: A Qualitative Study of the Views of Nordic Oncology and Hematology Nurses on Ethical Work Challenges. Journal of Empirical Research on Human Research Ethics. doi.org/10.1177/1556264618783555

This post in Swedish

We have a clinical perspective : www.ethicsblog.crb.uu.se

Sharing a blog post on consciousness

Michele Farisco at CRB has written an interesting post for the BMC blog On Medicine. He says that “whereas ethical analyses of disorders of consciousness traditionally focus on residual awareness, there may be a case to be made for the ethical relevance of the retained unawareness.”

Interested to read more? Here is a link to the post: On consciousness and the unconscious.

Pär Segerdahl

We recommend readings - the Ethics Blog

Ethical competence for the decision not to resuscitate

Sometimes, physicians have to decide that a cancer patient has such a poor prognosis that he or she should not be resuscitated with cardiopulmonary resuscitation (CPR) if found in cardiac arrest. The procedure is violent and would in these cases cause unnecessary suffering.

The situation is stressful for the healthcare team no matter which decision is taken. Providing violent CPR to a terminally ill cancer patient can be perceived as poor care at the end of life. At the same time, one of course wishes to treat the patient, so the decision not to resuscitate can be stressful, too. The decision requires ethical competence.

Mona Pettersson, PhD student at CRB, examines in her dissertation decisions not to resuscitate patients in oncology and hematology. In an article in BMC Medical Ethics, she describes physicians’ and nurses’ reflections on ethical competence in relation to the decision not to resuscitate. Even if the physician makes the decision, the nurses are deeply involved. They are responsible for the care of the patient and the relatives.

The ethical difficulties concern not just the decision itself. The difficulties also concern how patients and relatives are informed about the decision, as well as how the entire healthcare team is informed, involved and functions. What competence is required to ethically handle this care decision? How can such ethical competence be supported?

According to Pettersson, ethical competence involves personal qualities and knowledge, as well as the ability to reflect on how decisions are best made and implemented. In practice, all of this interacts. For example, a physician may have knowledge that the patient should be informed about the decision not to resuscitate. At the same time, after reflection, the physician may choose not to inform, or choose to inform the patient using other words.

The physicians and nurses in Mona Pettersson’s study expressed that their ethical competence would be supported by greater opportunities for reflection and discussion of ethics near the end of life within oncology and hematology. This is because healthcare is always situated. The ethical difficulties have a definite context. Healthcare professionals are not ethically competent in general. Their ethical competence is linked to their specific professional practices, which moreover differ for physicians and nurses.

If you want to read more about Mona Pettersson’s dissertation, please read the presentation of her work on CRB’s website: Healthcare, ethics and resuscitation.

Pär Segerdahl

Pettersson, M., Hedström, M. and Höglund, A. T. Ethical competence in DNR decisions – a qualitative study of Swedish physicians and nurses working in hematology and oncology care. BMC Medical Ethics (2018) 19:63. https://doi.org/10.1186/s12910-018-0300-7

This post in Swedish

We have a clinical perspective : www.ethicsblog.crb.uu.se

 

Philosophy in responsible research and innovation

The honorable discipline of philosophy is hardly something we associate with groundbreaking research and innovation. Perhaps it is time we began to see a connection.

To begin with, we can let go of the image of philosophy as an honorable discipline. Instead, let us talk about the spirit of philosophy. People who think for themselves, as philosophers do, rarely find themselves at home within the narrow bounds of disciplines and fields. Not even if they are called philosophical. On the contrary, if such a person encounters boundaries that restrict her thought, she investigates the boundaries. And removes them, if necessary.

Forget the reverent representation of philosophy as an honorable discipline.

The spirit of philosophy is to avoid discipline, submission, tradition and all forms of dependence. Someone who functions as a loyal representative of a philosophical school is hardly a genuine thinker. A philosopher is someone who, in a spirit of absolute independence, questions everything that makes a pretense of being true, right and correct. Therefore, it has been said that one cannot learn philosophy, one can only learn to philosophize. As soon as a philosophy crystallizes, the philosophical spirit awakens and investigates the boundaries of what usually turns out to be a fad that attracts insecure intellects who shun independent thinking. No system of thought restricts a freely investigating thinker. Especially not the philosophy that is in fashion.

How does this spirit of philosophy connect to research and innovation? The connection I see is different from what you probably guess. It is not about boosting development by removing all boundaries, but about taking responsibility for it. Philosophical thinking does not resemble an overheated research field’s fast flow of ideas, or an entrepreneur’s grandiose visions for the future. On the contrary, a philosopher takes a step back to calmly investigate the flow of ideas and visions.

Philosophy’s freedom is basically a responsibility.

Responsible Research and Innovation has become an important political theme for the European Commission. This responsibility is understood as an interactive process that engages social actors, researchers and innovators. Together, they are supposed to work towards ethically permissible research activities and products. This presupposes also addressing the underlying societal visions, norms and priorities.

For this to work, however, separate actors cannot simply propagate their separate interests. You need to take a step back and make yourself independent of your own special interests. You need to make yourself independent of yourself! Reflect more open-mindedly than you were disciplined to, and see beyond the bounds of your fragmentary little field (and self). This spacious spirit of philosophy needs to be awakened: the freedom of thought that is basically the responsibility of thought.

Concrete examples of what this means are given in the journal Neuroethics. In an article, Arleen Salles, Kathinka Evers and Michele Farisco describe the role that philosophical reflection currently plays in the European flagship project, the Human Brain Project. Here, philosophy and neuroethics are no mere appendages of neuroscientific research. On the contrary, by reflecting on central concepts in the project, philosophers contribute to the project’s overall self-understanding. Not by imposing philosophy as a special interest, or as a competing discipline with its own concepts, but by open-mindedly reflecting on neuroscientific concepts, clarifying the questions they give rise to.

The authors describe three areas where philosophy contributes within the Human Brain Project, precisely through awakening the spirit of philosophy. First, conceptual questions about connections between the brain and human identity. Second, conceptual questions about connections between the brain and consciousness, and between consciousness and unconsciousness. Third, conceptual questions about links between neuroscientific research and political initiatives, such as poverty reduction.

Let us drop the image of philosophy as a discipline. For we need the spirit of philosophy.

Pär Segerdahl

Salles, A., Evers, K. & Farisco, M. Neuroethics and Philosophy in Responsible Research and Innovation: The Case of the Human Brain Project. Neuroethics (2018). https://doi.org/10.1007/s12152-018-9372-9

(By the way, anyone can philosophize. If you have the spirit, you are a philosopher. A demanding education in philosophy as a separate discipline can actually be an obstacle that you have to overcome.)

This post in Swedish

We transgress disciplinary borders - the Ethics Blog

Intellectual asceticism

We dismiss the magician’s claim to be in touch with the spirit world. We dismiss the priest’s claim to be in touch with the divine. We do not believe in supernatural contact with a purer world beyond this one!

Nevertheless, similar claims permeate our enlightened rationalist tradition. Even philosophers promised contact with a purer sphere. The difference is that they described the pure sphere in intellectual terms. They promised control of “concepts,” “categories,” “principles” and so on. They lived, like monks and magicians, as ascetics. They sought power over life itself, but they did it through intellectual self-discipline.

If you want to think about asceticism as a trait of our philosophical tradition, you may want to take a look at an article I wrote: Intellectual asceticism and hatred of the human, the animal, and the material.

In the article, I try to show that philosophy’s infamous anthropocentrism is illusory. Philosophers never idealized the human. They idealized something much more exclusive. They idealized the ascetically purified intellect.

Pär Segerdahl

Segerdahl, P. 2018. Intellectual asceticism and hatred of the human, the animal, and the material. Nordic Wittgenstein Review 7 (1): 43-58. DOI 10.15845/nwr.v7i1.3494

This post in Swedish

We recommend readings - the Ethics Blog

Driverless car ethics

Self-driving robot cars are controlled by computer programs containing vast numbers of traffic rules. But in traffic, not everything happens smoothly according to the rules. Suddenly a child runs out onto the road. Two people try to help a cyclist who has collapsed on the road. A motorist tries to make a U-turn on a road that is too narrow and gets stuck, blocking the traffic.

Assuming that the robots’ programs are able to categorize traffic situations through image information from the cars’ cameras, the programs must select the appropriate driving behavior for the robot cars. Should the cars override important traffic rules by, for example, steering onto the sidewalk?

It is more complicated than that. Suppose that an adult is standing on the sidewalk. Should the adult’s life be compromised to save the child? Or to save the cyclist and the two helpful persons?

The designers of self-driving cars have a difficult task. They must program the cars’ choice of driving behavior in ethically complex situations that we call unexpected, but that the engineers have to anticipate far in advance. Already at the factory, they must determine how the car model will behave in future “unexpected” traffic situations, perhaps ten years later. (I assume the software is not updated, but even updated software anticipates what we normally see as unexpected events.)

At the societal level, efforts are now being made to agree on ethical guidelines for how future robot cars should behave in tragic traffic situations where it may not be possible to completely avoid injuries or fatalities. A commission initiated by the German Ministry for Transportation, for example, suggests that passengers of robot cars should never be sacrificed to save a larger number of lives in the traffic situation.
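To make concrete what it means that such a guideline ends up as preprogrammed behavior, here is a deliberately naive sketch. It is my own illustration under stated assumptions: the situation fields and the rule are hypothetical, not an actual manufacturer’s software or the commission’s text.

```python
from dataclasses import dataclass

@dataclass
class EmergencySituation:
    # Hypothetical, highly simplified description of a sudden traffic scenario.
    victims_if_stay_in_lane: int        # expected casualties if the car keeps its lane
    victims_if_swerve_to_sidewalk: int  # expected casualties on the sidewalk if it swerves
    passengers_endangered_by_swerving: bool

def choose_maneuver(situation: EmergencySituation) -> str:
    """A fixed decision rule, settled at the factory long before any 'unexpected'
    event occurs. It encodes, in toy form, the suggestion that passengers should
    never be sacrificed to save a larger number of lives."""
    if situation.passengers_endangered_by_swerving:
        return "stay in lane"  # passengers are never traded against other lives
    if situation.victims_if_swerve_to_sidewalk < situation.victims_if_stay_in_lane:
        return "swerve to sidewalk"  # otherwise minimize the number of victims
    return "stay in lane"
```

The point is not this particular rule but its form: every branch is settled in advance, which is what the rest of this post describes as the disappearance of “the unexpected” as a living reality.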

Who, by the way, would buy a robot car that is programmed to sacrifice its owner’s life? Who would choose such a driverless taxi? Yet, as drivers we may be prepared to sacrifice ourselves in unexpected traffic situations. Some researchers decided to investigate the matter. You can read about their study in ScienceDaily, or read the research article in Frontiers in Behavioral Neuroscience.

The researchers used Virtual Reality (VR) technology to expose subjects to ethically difficult traffic situations. Thereafter, they studied the subjects’ choice of traffic behavior. The researchers found that the subjects were surprisingly willing to sacrifice themselves to save others. But they also took into consideration the age of potential victims and were prepared to steer onto the sidewalk to minimize the number of traffic victims. This is contrary to norms that we hold important in society, such as the idea that age discrimination should not occur and that the lives of innocent people should be protected.

In short, humans are inclined to drive their cars politically incorrectly!

Why was the study done? As far as I understand, because the current discussion about ethical guidelines does not take into account empirical data on how living drivers are inclined to drive their cars in ethically difficult traffic situations. The robot cars will make ethical decisions that can make the owners of the cars dissatisfied with their cars; morally dissatisfied!

The researchers do not advocate that driverless cars should respond to ethically complex traffic situations as living people do. However, the discussion about driverless car ethics should take into account data on how living people are inclined to drive their cars in traffic situations where it may not be possible to avoid accidents.

Let me complement the empirical study with some philosophical reflections. What strikes me when I read about driverless car ethics is that “the unexpected” disappears as a living reality. A living driver who tries to handle a sudden traffic situation manages what very obviously is happening right now. The driverless car, on the other hand, takes decisions that tick automatically, as predetermined as any other decision, like stopping at a red light. Driverless car ethics is just additional software that the robot car is equipped with at the factory (or when updating the software).

What are the consequences?

A living driver who suddenly ends up in a difficult traffic situation is confronted – as I said – with what is happening right now. The driver may have to bear responsibility for his actions in this intense moment for the rest of his life. Even if the driver rationally sacrifices one life to save ten, he will bear the burden of this one death; dream about it, think about it. And if the driver makes a stupid decision that takes more lives than it saves, it may still be possible to reconcile with it, because the situation was so unexpected.

This does not apply, however, to the robot car that was programmed at the factory according to guidelines from the National Road Administration. We might want to say that the robot car was preprogrammed to sacrifice our sister’s life, when she stood innocently on the sidewalk. Had the car been driven by a living person, we would have been angry with the driver. But after some time, we might be able to start reconciling with the driver’s behavior. Because it was such an unexpected situation. And the driver is suffering from his actions.

However, if it had been a driverless car that worked perfectly according to the manufacturer’s programs and the authorities’ recommendations, then we might see it as a scandal that the car was preprogrammed to steer onto the sidewalk, where our sister stood.

One argument for driverless cars is that, by minimizing the human factor, they can reduce the number of traffic accidents. Perhaps they can. But maybe we are less accepting of how they are programmed to save lives in ethically difficult situations. Not only are they preprogrammed so that “the unexpected” disappears as a reality. They do not bear the responsibility that living people are forced to bear, even for their rational decisions.

Well, we will probably find ways to implement and accept the use of driverless cars. But another question still concerns me. If the present moment disappears as a living reality in the ethics software of driverless cars, has it not already disappeared in the ethics that prescribes right and wrong for us living people?

Pär Segerdahl

This post in Swedish

We like real-life ethics : www.ethicsblog.crb.uu.se

Can neuroscience and moral education be united? (By Daniel Pallarés Domínguez)

People have started to talk about neuroeducation, but what is it? Is it just another example of the fashion of adding the prefix neuro- to the social sciences, like neuroethics, neuropolitics, neuromarketing and neurolaw?

Those who remain sceptical consider it a mistake to link neuroscience with education. However, for some authors, neuroscience can provide useful knowledge about the brain, and they see neuroeducation as a young field of study with many possibilities.

Since its birth in the decade of the brain (the 1990s), neuroeducation has been understood as an interdisciplinary field that studies developmental learning processes in the human brain. It is one of the youngest of the social neurosciences. It has the progressive aim of improving teaching and learning methodologies by applying the results of neuroscientific research.

Neuroscientific research already plays an important role in education. Taking into account the neural bases of human learning, neuroeducation looks not only for theoretical knowledge but also for practical implications, such as new teaching methodologies, and it reviews classical assumptions about learning and studies disorders of learning. Neuroeducation studies offer possibilities such as early detection of special learning needs or even monitoring and comparing different teaching methodologies implemented in school.

Although neuroeducation primarily focuses on disorders of learning, especially in mathematics and language (dyscalculia and dyslexia), can it be extended to other areas? If neuroscience can shed light on the development of ethics in the brain, can such explorations form the basis of a new form of neuroeducation, moral neuroeducation, which studies the learning or development of ethics?

Before introducing a new term (moral neuroeducation), prudence and critical discussion are needed. First, what would the goal of moral neuroeducation be? Should it consider moral disorders in the brain or just immoral behaviours? Second, neuroscientific knowledge is used in neuroeducation to help design practices that allow more efficient teaching to better develop students’ intellectual potentials throughout their training process. Should this be the goal also of moral neuroeducation? Should we strive for greater efficiency in teaching ethics? If so, what is the ethical competence we should try to develop in students?

It seems that we still need a critical and philosophical approach to the promising union of neuroscience and moral education. In my postdoctoral project, Neuroethical Bases for Moral Neuroeducation, I will contribute to developing such an approach.

Daniel Pallarés Domínguez

My postdoctoral research at the Centre for Research Ethics and Bioethics (CRB) is linked to a research project funded by the Ministry of Economy and Competitiveness in Spain. That project is entitled, Moral Neuroeducation for Applied Ethics [FFI2016-76753-C2-2-P], and is led by Domingo García-Marzá.

We care about education

Can a robot learn to speak?

There are self-modifying computer programs that “learn” from success and failure. Chess-playing computers, for example, become better through repeated games against humans.
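As a minimal illustration of what “learning” from success and failure can mean, here is a toy sketch of a self-modifying preference table. It is my own simplification and makes no claim about how chess programs actually work:

```python
import random

# The program's "knowledge": a preference weight for each available move.
preferences = {"opening_a": 1.0, "opening_b": 1.0}

def choose_move() -> str:
    # Prefer moves that have been reinforced by earlier successes.
    moves = list(preferences)
    return random.choices(moves, weights=[preferences[m] for m in moves])[0]

def update(move: str, won: bool) -> None:
    # The program modifies itself: success strengthens the choice, failure weakens it.
    preferences[move] *= 1.1 if won else 0.9

# One simulated game against a human: pick a move, observe the outcome, adjust.
move = choose_move()
update(move, won=random.random() < 0.5)  # the outcome is faked here for illustration
```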

Could a similar robot also learn to speak? If the robot gets the same input as a child gets when it learns to speak, should it not be possible in principle?

Notice how the question zigzags between child and machine. We say that the robot learns. We say that the child gets input. We speak of the robot as if it were a child. We speak of the child as if it were a robot. Finally, we take this linguistic zigzagging seriously as a fascinating question, perhaps even a great research task.

An AI expert and prospective father who dreamed of this great research task took the following ambitious measures. He equipped his whole house with cameras and microphones, to document all parent-child interactions during the child’s first years. Why? He wanted to know exactly what kind of linguistic input a child gets when it learns to speak. At a later stage, he might be able to give a self-modifying robot the same input and test if it also learns to speak.

How did the project turn out? The personal experience of raising the child led the AI expert to question the whole project of teaching a robot to speak. How could a personal experience lead to the questioning of a seemingly serious scientific project?

Here, I could start babbling about how amiably social children are compared to cold machines. How they learn in close relationships with their parents. How they curiously and joyfully take the initiative, rather than calculatingly await input.

The problem is that such babbling on my part would make it seem as if the AI expert simply was wrong about robots and children. That he did not know the facts, but is now better informed. It is not that simple. For the idea behind the project presupposed unnoticed linguistic zigzagging. Already in asking the question, the boundaries between robots and children are blurred. Already in the question, we have half answered it!

We cannot be content with responding to the question in the headline with a simple, “No, it cannot.” We must reject the question as nonsense. Deceitful zigzagging creates the illusion that we are dealing with a serious question, worthy of scientific study.

This does not rule out, however, that computational linguistics increasingly uses self-modifying programs, and with great success. But that is another question.

Pär Segerdahl

Beard, Alex. How babies learn – and why robots can’t compete. The Guardian, 3 April 2018

This post in Swedish

We like critical thinking : www.ethicsblog.crb.uu.se

Bioethics dissolving misdirected worldliness

When we feel low, we often make the mistake of scanning the external environment to find the cause of our state of mind out there. One could speak of the depressed person’s misdirected worldliness. We are convinced that something in the world makes us depressed. We rule out the possibility that we ourselves play a role in the drama: “I am depressed because he/she/they/society is so damned…”

The depressed person naturally believes that the way to happiness lies in eliminating the external cause of the depression: “If I just could be spared from dealing with him/her/them/society, I would feel a lot better.” That is what the depressed person’s worldliness looks like. We are unable to turn around and see (and treat) the emergence of the problem within ourselves.

Xenophobia might be a manifestation of the depressed person’s misunderstanding of life. We could speak of the insecure person’s misdirected worldliness. One scans the external environment to find the cause of one’s insecurity in the world. When one “finds” it, one apparently “proves” it beyond doubt. The moment one thinks about immigration, one is attacked by strong feelings of insecurity: no doubt, that’s the cause! The alternative possibility that one carries the insecurity within oneself is excluded: “I’m suffering because society is becoming increasingly insecure.”

Finally, one makes politics of the difficulty of scrutinizing oneself. One wants to eliminate the external cause of the insecurity one feels: “If we stop immigration, society will become safer and I will feel more secure!” That is what the insecure person’s misdirected worldliness looks like.

You might be surprised that even anti-xenophobia can exhibit the depressed person’s misunderstanding of life. If we lack a deep understanding of how xenophobia can arise within a human being, we will believe that there are evil people who in their stupidity spread fake statistics about increasing social insecurity. These groups must be eliminated, we think: “If there were no xenophobic groups in society, then I would feel much better.” That is what the good activist’s worldliness can look like.

And so we go on and on, in our misdirected worldliness, because we fail to see our own role in the drama. We make politics of our inner states, which flood the world as if they were facts that should appear in the statistics. (Therefore, we see them in the statistics.)

Now you may be surprised again, because even bioethics can exhibit the depressed person’s misunderstanding of life. I am thinking of the tendency to make ethics an institution that maintains moral order in society. Certainly, biomedical research needs regulation, but sometimes regulation runs the errands of a misdirected worldliness.

A person who feels moral unease towards certain forms of research may think, “If researchers did not kill human embryos, I would feel a lot better.” Should we make policy of this internal state by banning embryonic stem cell research? Or would that be misdirected projection of an inner state on the world?

I cannot answer the question in this post; it requires more attention. All I dare to say is that we, more often than we think, are like depressed people who seek the cause of our inner states in the world. Just being able to ask if we manifest the depressed person’s misunderstanding of life is radical enough.

I imagine a bioethics that can ask the self-searching question and seek practical ways to handle it within ourselves. So that our inner states do not flood the world.

Pär Segerdahl

This post in Swedish

We think about bioethics : www.ethicsblog.crb.uu.se
