“Neither artificial nor intelligent”: making the materialities of AI visible

Our postdoctoral researcher Olga Dovbysh has written a book review discussing the materiality and power dimensions of artificial intelligence. AI and algorithmic computation are often presented as a new hope for solving humanity's challenges and building a better world. But what are the less visible sides of AI, and what are its impacts on the environment?

Book review: Atlas of AI by Kate Crawford, New Haven, CT, Yale University Press, 2021, 336 pp., $28.00 (hardback), ISBN: 9780300209570

Artificial intelligence (AI) has been on the research agenda since the 1960s. Since the mid-2000s, however, we have observed a wave of hype around AI and machine learning techniques, which have been rapidly expanding into various fields and industries. Today, AI systems are perceived as rational, neutral, and reliable, comparable or even superior to human intelligence. From smart homes to smart cities, and from search engines to social media platforms, AI-driven algorithms condition ever more spheres of people's lives, affecting how we eat, sleep, consume, make decisions and vote, obtain bank credit or social subsidies, come under legal investigation, and even when we die (Tangerman 2019).

Scholarly and public discussions of the possible negative effects and dark sides of AI remain marginal amid the broader claims of AI-driven solutionism for a better world. The recent book by Kate Crawford, Research Professor at USC Annenberg and Senior Principal Researcher at Microsoft Research New York, brings a not-so-pleasant, sobering perspective to the future of AI-driven social, political, cultural, and economic worlds. Crawford argues that "AI is neither artificial nor intelligent" (p. 8).

Instead of seeing AI as a purely technical domain, the author focuses on two aspects that make AI fundamentally political: materiality and power. Both contest two widespread abstractions about AI: its immateriality and its neutrality. Just as disparate maps collected in one atlas give us an image of the globe, the six chapters of this book (Earth, Labor, Data, Classification, Affect, State), together with the Introduction, Conclusion, and Coda, critically contest and contextualise AI as "a multitude of interlaced systems of power" (p. 12).

Each chapter opens with a captivating story drawn from the author's own ethnographic observations, followed by a literature-based review of the chapter's central problem, and leads the reader to more general questions and concerns. Crawford visits an underground lake of lithium in Nevada and an Amazon fulfilment center in New Jersey, studies the National Institute of Standards and Technology dataset used for facial recognition software, and reads Edward Snowden's archives. All these cases open up for the reader a vivid map of the political interventions that undermine the justified and neutral reflections of the world that AI systems are supposed to perform.

The environmental effects of AI are one of the core themes of the book. Focusing on the specific materialities of AI, Crawford analyses and visualises how things, places, and people, as the non-human and human parts of AI, operate within broader systems of power. From this perspective, AI is seen as a very material object with a significant environmental footprint: "relying on manufacturing, transportation, and physical work; data centers and the undersea cables, <…>, personal devices and their raw components; transmission signals passing through the air; datasets produced by scraping the internet; and continual computational cycles" (p. 49). Such a broad understanding unpacks AI systems and algorithmic computation as highly resource-demanding and environmentally unclean activities. This contrasts with the public image of a sustainable and "green" industry promoted by the tech corporations.

In fact, the environmental responsibility of the (big) tech companies, some of which deploy AI systems at a planetary scale, remains under-discussed and under-studied. For now, their environmental footprint is significantly lower than that of, for instance, manufacturing companies, which are still responsible for one-fifth of carbon emissions (World Economic Forum 2022). However, forecasts suggest that the tech industry's energy consumption will grow exponentially in the coming years (Andrae 2017). There are already cases of localities refusing new applications for data center construction because of a lack of energy in the area (Swinhoe 2022).

A broader perspective on tech companies also includes their collaboration with other industries. For instance, Microsoft, which has committed to carbon negativity by 2030, simultaneously holds a contract with ExxonMobil to locate hard-to-find oil fields. This contract alone could "lead to emissions greater than 20% of Microsoft's annual carbon footprint" (Greenpeace 2020). Understanding and problematising the environmental impact of AI is therefore only the first step. The next steps are to ask: how can this impact be assessed and measured, how should it be regulated at the national and global level (see, for instance, the Climate Neutral Data Centre Pact[1]), and how does it affect the appropriateness of tech companies and advanced computation in society?

In FLOWISION, we strive to understand the visibility of fossil and renewable energy and of waste as they traverse society in Russia and Finland, and how these commodities are being (de)politicized. Studying AI as a material and highly political thing opens an important avenue for understanding the actual (environmental, economic, cultural, etc.) impact of these technologies and of the companies and organisations standing behind them. Moreover, in the time of the Russian-Ukrainian war, other political and ethical dimensions come to the stage. For instance, can waste heat from a Russian data center still be used to heat Finnish households (Velkova 2021)? Which has the more negative impact under current circumstances: continuing to process Russian data with the use of Finnish energy, or the higher carbon emissions of returning to heat production from wood chips and pellets (Svahn 2022)?

“Atlas of AI” is a great endeavour to see and ponder a non-mainstream understanding of AI as a material and highly politicised technology. The author does not offer ready-made answers about the future of AI and algorithmic computation but calls for a rethinking of the dogma of inevitability. A renewed politics of refusal is needed: the emphasis should move from asking where else AI can be applied to asking why AI ought to be applied and who will really benefit from it.

References

Andrae, A. (2017). Total consumer power consumption forecast. Nordic Digital Business Summit. Retrieved from: https://www.researchgate.net/publication/320225452_Total_Consumer_Power_Consumption_Forecast

Greenpeace (19 March 2020). Oil in the cloud: How tech companies are helping big oil profit from climate destruction. Retrieved from: https://www.greenpeace.org/usa/reports/oil-in-the-cloud/

Svahn, N. (18 March 2022). Mäntsälä laski lämmityksensä venäläisen datakeskuksen varaan – vihreä ratkaisu mureni yhdessä yössä, eikä kukaan tiedä, mitä ensi talvena tapahtuu [Mäntsälä relied on a Russian data centre for its heating – the green solution crumbled overnight, and no one knows what will happen next winter]. Yle. Retrieved from: https://yle.fi/uutiset/3-12360900

Swinhoe, D. (2022). EirGrid says no new applications for data centers in Dublin until 2028 – report. Data Center Dynamics. Retrieved from: https://www.datacenterdynamics.com/en/news/eirgrid-says-no-new-applications-for-data-centers-in-dublin-till-2028/

Tangerman, V. (11 November 2019). This AI knows when you’ll die and its creators don’t know how. Neoscope. Retrieved from: https://futurism.com/neoscope/ai-knows-when-youll-die-how

World Economic Forum (23 March 2022). Reducing the carbon footprint of the manufacturing industry through data sharing. Retrieved from: https://www.weforum.org/impact/carbon-footprint-manufacturing-industry/

Velkova, J. (2021). Thermopolitics of data: Cloud infrastructures and energy futures. Cultural Studies, 35(4-5), 663-683.


[1] https://www.climateneutraldatacentre.net/