This is an age in which Artificial Intelligence (AI) is rapidly expanding into almost every aspect of our lives. From entertainment to work, from economics to medicine, from education to marketing, we deal with a number of disparate AI systems that make our lives much easier than they were a few years ago, but that also raise new ethical issues or revive old, still open questions.
A basic fact about AI is that it is progressing at an impressive pace while remaining limited with regard to many specific contexts and goals. We often read, even in non-specialized media, that AI systems are not robust (they are not good at dealing with data that differ too much from the data they were trained on, which also keeps the risk of cyber-attacks high), that they are not fully transparent, and that their capacity to generalize is limited. This suggests that the reliability of AI systems, in other words the possibility of using them to achieve different goals, is limited, and that we should not blindly trust them.
A strategy increasingly chosen by AI researchers to improve the systems they develop is to take inspiration from biology, and specifically from the human brain. In fact, this is not really new: the first wave of AI already took inspiration from the brain, which was (and still is) the most familiar intelligent system in the world. This trend towards brain-inspired AI is gaining much more momentum today for two main reasons among others: the availability of big data and the very powerful technology for handling it. And yet, brain-inspired AI raises a number of questions of an even deeper nature, which urge us to stop and think.
Indeed, when compared to the human brain, present AI reveals several differences and limitations with regard to different contexts and goals. For instance, present machine learning cannot generalize the abilities it acquires from specific data in order to use them in different settings and for different goals. Also, AI systems are fragile: a slight change in the characteristics of the processed data can have catastrophic consequences. These limitations arguably depend both on how AI is conceived (technically speaking, on its underlying architecture) and on how it works (on its underlying technology). I would like to offer some reflections on the choice to use the human brain as a model for improving AI, including the apparent limitations of this choice.
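For readers with a programming background, the fragility just mentioned can be made concrete with a minimal sketch of a so-called adversarial perturbation, here computed with the fast gradient sign method on a toy PyTorch classifier. The model, input and label below are hypothetical stand-ins chosen for illustration, not any specific deployed system:

```python
# Minimal sketch of ML fragility: an input change that is barely
# perceptible to a human can alter a classifier's prediction.
# The model, input and label are illustrative stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy, untrained classifier
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # a stand-in "image"
label = torch.tensor([3])                         # its assumed true class

# Gradient of the classification loss with respect to the input pixels
loss = nn.functional.cross_entropy(model(x), label)
loss.backward()

# Fast gradient sign method: nudge every pixel by at most 0.03
# in the direction that increases the loss.
x_adv = x + 0.03 * x.grad.sign()

# With a trained model, such a barely visible change is often
# enough to flip the predicted class entirely.
print(model(x).argmax().item(), model(x_adv).argmax().item())
```

The point is not the specific numbers but the mechanism: the very gradients that make these systems trainable also expose them to tiny, targeted perturbations.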
Very roughly, AI researchers are looking at the human brain to infer operational principles, translate them into AI systems, and eventually make these systems better at a number of tasks. But is a brain-inspired strategy the best we can choose? What justifies it? In fact, there are already AI systems that work in ways that do not conform to the human brain. We cannot exclude a priori that AI will eventually develop more successfully along lines that do not fully conform to, or that even deviate from, the way the human brain works.
Also, we should not forget that there is no such thing as the brain: there is huge diversity both among different people's brains and within the brain itself. The development of our brains reflects a complex interplay between our genetic make-up and our life experiences. Moreover, the brain is a multilevel organ with different structural and functional levels.
Thus, claiming that an AI is brain-inspired without clarifying which specific model of the brain is used as a reference (for instance, neurons' action potentials rather than the connectome's network structure) is possibly misleading, if not nonsensical.
There is also a more fundamental philosophical point worth considering. Postulating that the human brain is paradigmatic for AI risks implicitly endorsing a form of anthropocentrism and anthropomorphism, both of which are evidence of our intellectual self-centeredness and of our limited ability to think beyond what we think we are.
While pragmatic reasons might justify the choice to take the brain as a model for AI (after all, in many respects the brain is the most efficient intelligent system that we know of in nature), I think we should avoid the risk of turning this legitimate technical effort into yet another narcissistic, self-referential anthropological model. Our history is already full of such models, and they have not been ethically or politically harmless.
Written by…
Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.