The Datafied Citizen and AI Error
Have you ever worried – as you were passing through border control – that your social media data would be scrutinized to determine whether you could enter the country? I have; many times. So I was not surprised when, last year, a Palestinian Harvard student was denied entry into the U.S. because of his friends' social media posts. That is what it means to live in a world in which we are all becoming datafied citizens: a world where the data traces we produce – and those that others associated with us produce – are made to speak for and about us in public and determine our civic rights.
One of the most deeply problematic aspects of the techno-historical transformation we are experiencing lies not only in the fact that individual data is constantly surveilled and monetized, but also in the fact that companies use data traces to construct individuals as ‘data subjects’ and make data-driven decisions about their lives. Hence we are witnessing the rise of a new type of public self: the datafied citizen. In contrast to the digital citizen, who uses online technologies (and especially social media) to self-construct in public (Isin and Ruppert, 2015), the datafied citizen is defined by the narratives produced through the processing of data traces; it is the product of practices of data inference and digital profiling (Barassi, 2016, 2017; Hintz et al., 2018).
The datafied citizen is often governed by what Cheney-Lippold (2017) has described as ius algorithmi, the law of algorithms. According to Cheney-Lippold, the law of algorithms is analogous to other types of law that govern citizenship (e.g. ius soli, by which citizenship is granted on the basis of birth territory, or ius sanguinis, by which citizenship is granted on the basis of hereditary blood ties). Although ius algorithmi may not provide individuals with passports, it determines what rights they have access to on the basis of their datafied behavior.
We live in a world where a plurality of agents can cross-reference large amounts of our personal data and profile us in often obscure ways. They use the data that we produce – as well as the data others produce about us – to track us throughout our lives and identify our behavioral patterns. With this data they make assumptions about our psychological tendencies and construct narratives about who we are. What is becoming increasingly clear, however, is that, as citizens, we have no control over the narratives produced through private algorithmic profiling and AI systems, even when these narratives are discriminatory and wrong (Eubanks, 2018).
I have been working on the idea of the datafied citizen for five years now, and have just written a book, Child | Data | Citizen: How Tech Companies Are Profiling Us from Birth (MIT Press, 2020), which shows how citizens today are being datafied from before birth. During my research on the datafication of children, I came to the conclusion that even if tech companies are tracking citizens from before birth and are bringing different forms of highly contextual data (educational data, health data, home life data, social media data, etc.) together under unique ID profiles, this does not imply that these profiles are accurate, explainable or fair.
Hence it was thanks to the Child | Data | Citizen project that I concluded that we need more in-depth research into algorithmic fallacy when it comes to human profiling. I also concluded that, as a society, we need a serious debate about whether it is fair and right to use AI technologies and systems to profile humans. It was to explore these questions about algorithmic profiling and human rights that I designed the Human Error Project.