Three contemporary trends pose great challenges for researchers. First, science is expected to become increasingly open, among other things by making collected data available to new users and for new purposes. At the same time, data protection laws are being strengthened to protect privacy. Finally, artificial intelligence is finding new ways to reveal the individuals behind data where this was previously impossible.
Neuroimaging is an example of how open science, stronger data protection legislation and more powerful AI challenge the research community. Perhaps you do not think that the person whose brain is imaged in an MRI scanner could be identified from the image? But the image actually also depicts the shape of the skull and face, including any scars, so you could recognize the person. To be able to share neuroimaging data without revealing the person, it has hitherto been considered sufficient to remove the shape of the skull and face from the images, or to blur their contours. The problem is the third trend: more powerful AI.
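To make the idea of "defacing" concrete, here is a minimal, purely illustrative sketch in Python. It crudely zeroes out an assumed anterior slab of the image volume where the face would be; the file name and the fixed cut are invented placeholders, and real defacing tools instead use anatomically registered face masks.

```python
# Illustrative defacing sketch: blank the part of the scan assumed to
# contain the face. The fixed cut below is a toy assumption; real tools
# locate the face with registered anatomical masks.
import nibabel as nib

img = nib.load("subject01_T1w.nii.gz")  # hypothetical input scan
data = img.get_fdata()

# Zero out the assumed anterior quarter of the volume (assumption: the
# face lies along the second axis of this particular image).
data[:, -data.shape[1] // 4:, :] = 0

defaced = nib.Nifti1Image(data, img.affine, img.header)
nib.save(defaced, "subject01_T1w_defaced.nii.gz")
```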
AI can learn to identify people where human eyes fail. Brain images in which the shape of the skull and face has been made unrecognizable often turn out to contain enough information for self-learning face recognition software to identify the people in the defaced images. AI can thus re-identify what had been de-identified. In addition, the anatomy of the brain itself is individual. Just as our fingers have unique fingerprints, our brains have unique “brainprints.” This makes it possible to link neuroimaging data to a person, provided that identified neuroimaging data from that person already exists somewhere: for example, in another database, or because the person has shared their brain images on social media, so that “brainprint” and person are connected.
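A toy example may clarify how such “brainprint” linkage works in principle. Suppose each scan has been reduced to a numerical feature vector of anatomical measurements (all names and numbers below are invented); an attacker who holds identified vectors can then link an “anonymous” scan to a person by simple nearest-neighbour matching.

```python
# Toy illustration of "brainprint" re-identification: link an anonymous
# scan to an identified database by nearest-neighbour matching of
# anatomical feature vectors. All names and numbers are invented.
import numpy as np

identified_db = {
    "person_A": np.array([0.91, 0.34, 0.58]),
    "person_B": np.array([0.12, 0.77, 0.40]),
}

anonymous_scan = np.array([0.90, 0.35, 0.57])  # features from a shared image

best_match = min(
    identified_db,
    key=lambda name: np.linalg.norm(identified_db[name] - anonymous_scan),
)
print(best_match)  # -> person_A: the "de-identified" scan is re-identified
```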
Making the persons completely unidentifiable would change the images so drastically that they would lose their value for research. The three contemporary trends – open science, stronger data protection legislation and more powerful AI – thus seem to be on a collision course. Is it at all possible to share scientifically useful neuroimaging data in a responsible way, when AI seems to be able to reveal the people whose brains have been imaged?
Well, not everything unwanted that can happen has to happen. If the world were as insidiously constructed as a conspiracy theory, no safety measures could save us from the imminent end of the world. On the contrary, such totalized safety measures would themselves undermine safety, as I recently blogged about.
So what should researchers do in practice when building international research infrastructures to share neuroimaging data (in line with the first trend above)? A new article in Neuroimage: Reports presents a constructive proposal. The authors emphasize, among other things, increased and continuously updated awareness among researchers of realistic data protection risks. Researchers doing neuroimaging need to be trained to think in terms of data protection and to see this as a natural part of their research.
Above all, the article proposes several concrete measures for technically and organizationally building research infrastructures where data protection is included from the beginning, by design and by default. Because completely anonymized neuroimaging data is an impossibility (such data would lose its scientific value), pseudonymization and encryption are emphasized instead. Furthermore, technical systems for access control are proposed, as well as clear data use agreements that limit what users may do with the data. Informed consent from study participants is, of course, also part of the proposed measures.
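To illustrate what pseudonymization means in practice, here is a minimal sketch under simplifying assumptions: direct identifiers are replaced by random pseudonyms, and the linking key table is kept separate from the data (in a real infrastructure it would also be encrypted and access-controlled). The file and column names are invented for illustration.

```python
# Minimal pseudonymisation sketch: replace direct identifiers with random
# pseudonyms and keep the linking key apart from the shared data.
import csv
import secrets

key_table = {}  # pseudonym -> real identifier; stored separately in practice

def pseudonymise(subject_id: str) -> str:
    """Replace a direct identifier with a random, unguessable pseudonym."""
    pseudonym = f"sub-{secrets.token_hex(4)}"
    key_table[pseudonym] = subject_id
    return pseudonym

# Assumes a hypothetical participants.csv with a "subject_id" column.
with open("participants.csv", newline="") as src, \
     open("participants_pseudonymised.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row["subject_id"] = pseudonymise(row["subject_id"])
        writer.writerow(row)
```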
Taken together, these safety measures, built in from the beginning, would make it possible to construct research infrastructures that satisfy stronger data protection rules, even in a world where artificial intelligence can in principle see what human eyes cannot. The three contemporary trends need not be on a collision course after all. If data protection is built in from the beginning, by design and by default, researchers can share data without being forced to destroy the scientific value of the images, and people may continue to want to participate in research.
Written by…
Pär Segerdahl, Associate Professor at the Centre for Research Ethics & Bioethics and editor of the Ethics Blog.
Damian Eke, Ida E.J. Aasebø, Simisola Akintoye, William Knight, Alexandros Karakasidis, Ezequiel Mikulan, Paschal Ochang, George Ogoh, Robert Oostenveld, Andrea Pigorini, Bernd Carsten Stahl, Tonya White and Lyuba Zehl. “Pseudonymisation of neuroimages and data protection: Increasing access to data while retaining scientific utility,” Neuroimage: Reports, 2021, Volume 1, Issue 4.
Approaching future issues