Code Dependent: Living in the Shadow of AI
Author: Madhumita Murgia
Publisher: Picador
Pages: 336
Price: Rs 699
Everyone reading this has engaged with artificial intelligence, or AI, to some degree, even if they don’t know it. Apart from the integration of ChatGPT and other large language models into all sorts of applications, we’ve grown used to Alexa, Google Assistant and Siri. We use Uber, play AI-driven games and rely on neural-net trading programs. You may even have been offered a credit card or a personal loan by AI!
The book’s focus is the impact AI has on everyday life as it rapidly becomes part of normal existence. AI uses the data we all spew out in unimaginable, almost magical ways, often to our benefit but often not. This is ultimately a book about people — though it looks at them through the lens of data — and the influence that AI crunching that data has on their lives. The technology is dealt with in that context.
The advent of AI represents a new stage in “tech colonialism”. Cheap labour in the Third World does the scutwork of labelling and annotating data, while large tech companies running the algorithms generate massive profits. One of Madhumita Murgia’s points of reference, for example, is the Nigerian firm that does data-annotation for OpenAI and how its workers are treated.
One of the strengths and weaknesses of AI is that it does things its creators don’t understand. This can result in fantastic breakthroughs, where AI cracks intractable problems such as protein-folding, or learns to manage magnetic fields in nuclear fusion reactors. It can also translate into absurdities, where AI latches on to ridiculous or outright harmful correlations. For example, an algorithm tasked with sifting medical data about pneumonia and Covid-19 sorted patients only on the basis of age.
That “black box” quality makes AI a very dangerous tool when it comes to profiling people, because AI isn’t great at explaining how it reaches conclusions. Another of the cited examples is ProKid, algorithmic profiling software used by the Dutch police to predict “propensity to commit crime” based on data from previous contacts with the police, addresses, relationships and “roles as witness or victim.” It flagged hundreds of innocent youngsters. Teenage girls from low-income groups in Argentina were flagged into databases because AI believed they were at risk of pregnancy. Young boys of colour and immigrants are treated as criminals by AI profiling. Similarly, given credit-score or scholastic data, AI amplifies existing biases pertaining to gender, race and caste inequalities.
There’s a lot of new material cited here across various fields, along with first-hand accounts from interviews with affected people. The author met gig workers, tech workers, healthcare professionals, teenagers and activists, including many from marginalised communities at the bottom end of the AI value-chain in places such as Nigeria, Bulgaria, Kenya and China. Non-technical writing about AI and its impacts can swing from the wildly optimistic to the apocalyptic. Yes, AI could trigger a nuclear holocaust, or enable genocide or repression on monstrous scales, as it has in Gaza, or in the Xinjiang Region of China, where it has been weaponised against the Uyghur community. It may also solve a lot of problems concerning climate change and healthcare.
But the daily impacts of AI are more mundane than a nuclear holocaust. Take the tectonic shifts it may cause in employment patterns, for instance. This book does readers a service by its focus on the less spectacular, and while its tone is generally pessimistic, it is not all doom and gloom.
The chapter plan is designed to provide a broad spectrum of narratives, as headings like “Your Livelihood,” “Your Body,” “Your Health,” and “Your Freedom” indicate. Regulation, and pathways to it, are discussed, while the viewpoints of “victims” are presented in a personalised way.
The “anecdata” is important in that it can evoke empathy in a way raw data does not. Interviews with Uber drivers, doctors, researchers, teenagers, and mothers give us nuanced narratives about the harms AI can cause. Women have had their lives destroyed by pornographic deepfakes. Gig workers, delivery drivers, and similar platform workers are cheated at worst and underpaid at best. Repressive regimes, and wannabe repressive regimes, use facial recognition as a tool for targeting activists — India’s farmer agitation comes up in this context.
The exploration of “surveillance capitalism”, “data colonialism” and the labour dynamics within the AI/IT industry as well as its enormous and growing impact on macro-labour dynamics are all important themes. Other worrying aspects that Ms Murgia documents are the feedback loops where AI could strengthen existing biases and the force multiplier it provides for police states.
I had mixed feelings about the epilogue. It reiterates many of the earlier points at what seems like excessive length, but it also puts down some important questions readers need to ponder. This book isn’t balanced, in the sense that it dwells far more on mundane potential harms than on the potential benefits. But we do need to think about the possibility that the harms arising from AI deployment may outweigh the benefits. The book is well-researched and well-written, and it deserves to be essential reading.