AI’s Hidden Costs: Justice, Exploitation & the Future of Tech

by Rachel Kim – Technology Editor

A classroom discussion at Boston College, sparked by a professor’s opening question about feelings toward artificial intelligence, has illuminated growing concerns about the technology’s ethical implications, particularly its impact on vulnerable populations and the labor practices underpinning its development.

The conversation, recounted by a student who requested anonymity, quickly moved from the conveniences of generative AI – streamlining tasks and aiding study – to a more critical examination of its potential for weaponization and inherent injustices. This internal debate mirrors a broader, increasingly urgent discourse surrounding AI’s societal costs, even as investment in the sector continues to surge.

A central concern raised was the human cost of “data labeling,” the process of preparing raw data for AI training. Although users interact only with the polished output of Large Language Models (LLMs) such as ChatGPT, those models depend on vast quantities of labeled training data, often prepared by workers in the Global South earning as little as $2 per hour. Reports have surfaced detailing exploitative conditions, with data labelers in Kenya describing their work as “modern-day slavery” in a recent open letter to President Joe Biden.

Beyond labor practices, the environmental impact of AI infrastructure is also drawing scrutiny. While precise figures remain elusive – operators of American AI data centers shield consumption data as “trade secrets” – projections indicate that generative AI could consume as much energy as 22 percent of U.S. households by 2028. These data centers also place significant strain on water resources, particularly in arid regions. Microsoft’s planned expansion in Maricopa County, Arizona, for example, would require an estimated 1 million gallons of water daily per building for cooling purposes.

The environmental burden is compounded by the reliance on diesel generators to power these facilities, releasing significantly higher levels of nitrogen oxides than traditional power plants. These generators are often located in economically disadvantaged communities, contributing to “digital smog” and noise pollution. Despite these concerns, states continue to offer tax breaks to attract AI companies, prioritizing economic development over environmental and public health considerations.

These issues were framed within the context of distributive justice, drawing on the philosophical framework of John Rawls, which emphasizes the importance of basic rights, equal opportunity, and benefits for the disadvantaged. Participants in the Boston College discussion argued that residents should have greater input on the development of AI infrastructure in their communities, workers deserve fair compensation, and citizens should have a clear understanding of how their data is being used and its potential health impacts.

The discussion also referenced the work of journalist Karen Hao, author of Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, who characterizes AI companies as empires with quasi-religious ambitions. Hao suggests that OpenAI may overestimate its capabilities, given plans to spend $1 trillion in the coming years against roughly $20 billion in annualized revenue last year. Concerns were raised that the current investment boom represents a “bubble” that could burst, leaving companies $800 billion short by 2030, even after accounting for AI-related savings.

The student who shared details of the discussion expressed concern that the current power dynamic within the AI industry allows companies to exploit limited resources and mistreat workers with little accountability. Even those who believe they are using generative AI responsibly, they argued, are contributing to a fundamentally unjust system. The imperative, they said, is to mandate transparency and accountability through legislation.

The debate also touched on the increasing integration of AI into surveillance technologies. Reports indicate that Immigration and Customs Enforcement (ICE) is increasingly relying on AI-powered facial recognition and phone-hacking software, raising concerns about privacy and civil liberties, both domestically and abroad. Military interest in AI further underscores the potential for the technology to be used in ways that endanger lives.

While systemic change is seen as crucial, participants emphasized the power of individual choices. Shifting the conversation and opting out of excessive AI usage, even if difficult, are seen as important steps toward challenging the prevailing narrative and advocating for a more just and equitable future.
