Sometimes you read articles at the intersection of philosophy and science that contain exciting visionary thoughts, which are at the same time difficult to fully understand and assess. The technical elaboration of the thoughts grows as you read, and in the end you do not know whether you are capable of thinking independently about the ideas, or whether they concern new scientific findings and trends that you lack the expertise to judge.
Today I dare to recommend reading such an article. A blog post must, of course, be short. But the fundamental ideas in the article are so interesting that I hope some readers of this post will also become readers of the article and make a serious attempt to understand it.
What is the article about? It is about an alternative approach to the highest aims and claims in artificial intelligence. Instead of trying to create machines that can do what humans can do, machines with higher-level capacities such as consciousness and morality, the article focuses on the possibility of creating machines that augment the intelligence of already conscious, morally thinking humans. However, this idea is not entirely new. It has existed for over half a century in, for example, cybernetics. So what is new in the article?
Something I myself was struck by was the compassionate voice in the article, which is otherwise not prominent in the AI literature. The article focuses not on creating super-smart problem solvers, but on strengthening our connections with each other and with the world in which we live. The examples that are given in the article are about better moral considerations for people far away, better predictions of natural disasters in a complex climate, and about restoring social contacts in people suffering from depression or schizophrenia.
But perhaps the most original idea in the article is the suggestion that the development of these human self-augmenting machines would draw inspiration from how the brain already maintains contact with its environment. Here one should keep in mind that we are dealing with mathematical models of the brain and with innovative ways of thinking about how the brain interacts with the environment.
It is tempting to see the brain as an isolated organ. But the brain, via the senses and nerve pathways, is in constant dynamic exchange with the body and the world. You would not experience the world if the world did not constantly make new imprints in your brain, and if you did not constantly act on those imprints. This intense interactivity, on multiple levels and time scales, serves to maintain stable and comprehensible contact with the surrounding world. The way of thinking in the article reminds me of the concept of a “digital twin,” which I previously blogged about. But here it is the brain that appears to be a neural twin of the world. The brain resembles a continuously updated neural mirror image of the world, which it simultaneously and continuously changes.
Here, however, I find it difficult to properly understand and assess the thoughts in the article, especially regarding the mathematical model that is supposed to describe the “adaptive dynamics” of the brain. But as I understand it, the article suggests the possibility of recreating a similar dynamic in intelligent machines, which could enhance our ability to see complex patterns in our environment and be in contact with each other. A little poetically, one could perhaps say that it is about strengthening our neural twinship with the world. A kind of neural-digital twinship with the environment? A digitally augmented neural twinship with the world?
I dare not say more here about the visionary article. Maybe I have already taken too many poetic liberties? I hope that I have at least managed to make you interested in reading the article and assessing it for yourself: Augmenting Human Selves Through Artificial Agents – Lessons From the Brain.
Well, maybe one concluding remark. I mentioned the difficulty of sometimes understanding and assessing visionary ideas that are formulated at the intersection of philosophy and science. Is not that difficulty itself an example of how our contact with the world can sometimes weaken? However, I do not know whether I would have been helped by digital intelligence augmentation that quickly took me through the philosophical difficulties that can arise during reading. Some questions seem essentially to require time: that you stop and think!
Giving yourself time to think is a natural way to deepen your contact with reality, known by philosophers for millennia.
Written by…
Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.
Northoff G, Fraser M, Griffiths J, Pinotsis DA, Panangaden P, Moran R and Friston K (2022) Augmenting Human Selves Through Artificial Agents – Lessons From the Brain. Front. Comput. Neurosci. 16:892354. doi: 10.3389/fncom.2022.892354