Alexa Hagerty
03.07.2024–29.07.2024
Algorithmic Expectations
Artificial intelligence, an umbrella term for technologies such as predictive algorithms, automated decision-making systems, machine vision, and biometric surveillance, is increasingly deployed in a wide range of everyday applications, including public health interventions. Until very recently, impact assessments of artificial intelligence were largely confined to technical analysis; only in the past five years has there been sustained analysis of the social implications of these systems. Ethnographic approaches are uniquely suited to understanding how these systems shape social worlds (Royer 2020). This chapter contributes to emerging ethnographic and socially contextualized studies of AI technologies, specifically in the Global South.
Influential studies in North America have documented continuities between algorithmic harms and historical forms of exclusion. For example, Virginia Eubanks coined the term ‘digital poorhouse’ to describe the similarities between the punitive treatment of the poor in the nineteenth century and contemporary forms of algorithmic surveillance and exclusion perpetuated by predictive risk models and automated decision-making technologies deployed in U.S. welfare services. Similarly, the links between puericulture and the predictive platform for teenage pregnancy reveal connections between historical exclusions and contemporary harms that are not neatly causal, but neither are they merely metaphorical.
Instead, they speak to affinities and intertextualities, as well as material links and ideological continuities, between emerging technologies and histories of marginalization. The Plataforma Tecnológica de Intervención Social shares features with other AI systems, both in technological design and in potential social harm, but it is also uniquely situated in an Argentine socio-historical context, linked with the ‘stolen babies’ of the dictatorship, the ‘green wave’ abortion rights movement, and histories of Latin American eugenics such as immigration policies promoting blanquismo, the ‘whitening’ of the population.
This chapter, and the book more generally, is well-positioned to contribute to the public debate about the ever-increasing deployment of AI technologies in public health interventions. Public conversation is imperative to inform emerging legislation such as the European Union Artificial Intelligence Act (EU AI Act). While the move to regulate a largely self-regulated industry is important, it is not clear whether the EU AI Act is adequate to achieve its aim of protecting fundamental human rights. Analysts have pointed out that the Act addresses individual harm while largely ignoring social harm. This project adds to our understanding of the social impacts of AI technologies through contextual analysis, which reveals that while AI systems are new, the social logics encoded in them are not.
American diplomat and political scientist Madeleine Albright famously observed that “institutions designed in the nineteenth century are using technology from the twentieth century to try and solve the challenges of the twenty-first century.” This project complicates that claim by asserting that nineteenth-century ideologies are encoded in our twenty-first-century technologies, with potentially perilous consequences for public health and social equity.