To be an ethicist and philosopher is to be an advocate for time: “Wait, we need time to think this through.” This idea of letting things take their time rarely gains traction in society. It begins already in school, where the focus is often on calculating quickly and reciting as many words as possible in one minute, and it continues at the societal level.
A good example is technological development, which is moving faster than ever. Humans have always used more or less advanced and functional technology, always searching for better ways to solve problems. With the Industrial Revolution, things began to accelerate, and since then, the pace has only increased. We got factories, car traffic, air travel, nuclear power, genetically modified crops, and prenatal diagnostics. We got typewriters, computers, and telephones. We got different ways to play and reproduce music. Now we have artificial intelligence (AI), which it is often said will revolutionize most parts of society.
The development and implementation of AI is progressing at an unparalleled speed. Various government authorities use AI, and healthcare allows AI tools to take on more and more tasks. Schools and universities wrestle with the question of how AI should be used by students, teachers, and researchers. Teachers have been left at a loss because AI established itself so quickly, and different teachers draw different boundaries for what counts as cheating, leaving students greatly uncertain about which rules apply. People use AI for everything from planning their day to getting help with mental health issues. AI is used as a relationship expert, but also as the very object of romantic or friendly relationships. Today, there are AI systems that can call elderly and sick people to ask how they are feeling, whether they have taken their medication, and perhaps whether they have had any social contact recently.
As with all technology, AI has advantages and disadvantages, and it can be used in both good and bad ways. AI can improve life for people and the environment, and it can help people and societies do things better and more easily. But like all technology, it can also cause harm, with negative consequences such as environmental damage, unemployment, and discrimination.
Researchers in the Netherlands have discussed the problems that arise with new technology in terms of “social experiments.” They argue that there is an important difference compared to the careful testing that, for example, new pharmaceuticals undergo before they are approved. New technologies are not tested in such a structured way.
The EU has introduced a basic legal framework for AI (the EU AI Act), which can be seen as an attempt to introduce the new technology in a way that is less experimental on people and societies: more “responsible” and “trustworthy” AI. The new law is criticized by some European tech companies, who claim that it will make us fall behind countries with no such regulations, such as the USA and China. Doing things in a thoughtful and ethically sound way is apparently considered less important than getting the technology in place quickly. Instead, caution itself is seen as risky. This says something about the concept of risk that currently drives a pace of development so rapid that the technology may not even be able to deliver what the market expects.
Just as with previous important technologies, we need to think things through beforehand. If AI is to help us without harmful consequences, development must be allowed to take its time. This is even more important with AI than with previous technologies, because AI has an unusually large potential to affect our lives. Ethical research points to several problems related to justice and trust. One problem is that we cannot explain why AI in, for example, healthcare reaches a certain conclusion about a specific individual. With previous technology, some human being – if not the user, then at least the developer – has always been able to explain the causality in the system. Can we trust a technology in healthcare that we cannot control or explain in essential ways?
There are technology optimists and technology pessimists. Some are enthusiastic about new technologies and believe they are the solution to all our problems. Others think the precautionary principle should apply to all new technology and do not want to accept any risks at all. Instead, we should seek the middle way. The middle way consists of letting things take the time needed to show their real possibilities, beyond the optimists’ and pessimists’ preconceived notions. Advocating an ethical approach is not about stopping development but about slowing down the process. We need time to reflect on where it might be appropriate to introduce AI and where we should refrain from using the technology. We should also consider how to introduce the AI we do choose to use in a good way, so that we have time to detect risks of injustice, discrimination, and reduced trust, and can minimize them.
It is not easy, and not popular, to be the one who says, “Wait, we need to think this through.” Yet it is so important that we take the time. We must think ahead so that things do not go wrong when they could so easily have gone right. It might be worth considering what could happen if we learned in school that it is more important to do things right than to do them quickly.

Written by…
Jessica Nihlén Fahlquist, senior lecturer in biomedical ethics and associate professor in practical philosophy at the Centre for Research Ethics & Bioethics.
Approaching future issues