Sam Altman’s Troubling Comparison of AI and Humans | The Atlantic

by Emma Walker – News Editor

At an artificial intelligence summit in New Delhi last Friday, OpenAI CEO Sam Altman defended his company’s substantial energy consumption by comparing the resources required to train AI models to the energy needed to “train a human.” Altman argued that, on an energy-efficiency basis, AI has already caught up with humans when considering the energy used to answer a single question.

“It too takes a lot of energy to train a human,” Altman said, according to reports. “It takes, like, 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took, like, the remarkably widespread evolution of the hundred billion people that have ever lived and learned not to get eaten by predators and learned how to, like, figure out science and whatever to produce you, and then you took whatever, you know, you took.”

Altman’s comments, widely reported by outlets including Yahoo Finance and Fortune, drew immediate criticism for what many saw as a misleading and ultimately dismissive response to legitimate concerns about the environmental impact of increasingly powerful AI systems. The core concern, critics argue, is not the energy needed to create intelligence but AI infrastructure’s contribution to climate change.

The debate over AI’s energy footprint comes as companies like OpenAI rapidly expand their data-center capacity. OpenAI is constructing large-scale data centers, such as the “Stargate” facility, that require significant power. Operators of other data centers are also building private, gas-fired power plants, collectively capable of generating enough electricity to power dozens of American cities while emitting substantial greenhouse gases, according to reports.

Altman’s framing—equating the development of AI with the evolution of humanity—is not isolated. Dario Amodei, CEO of Anthropic, a leading AI company and competitor to OpenAI, made a similar analogy at the same India AI Summit. Amodei likened the training of AI models to human evolution and everyday learning. This mindset, observers note, extends to product development, with Anthropic even studying whether its chatbot, Claude, is conscious or can experience “distress,” and allowing the chatbot to terminate conversations deemed “persistently harmful or abusive” based on “risks to model welfare.”

The tendency to anthropomorphize AI, some experts suggest, may stem from a belief that these systems are genuinely comparable to humans, or from a calculated marketing strategy. Both possibilities, however, raise alarms. A genuine conviction that AI represents a higher power could justify prioritizing technological advancement over environmental or human concerns. Altman himself has stated his belief that superintelligence is only a few years away.

Even if the comparison is purely a public-relations tactic, it is a potentially damaging one, particularly as OpenAI is reportedly in the midst of a fundraising round that could value the company at over $800 billion, approaching the market capitalization of Walmart. Tech companies often profess a desire to develop AI for the benefit of humanity, yet the comparison to human life underscores a disconnect between the industry and the realities of human existence. The process of “training a human,” in other words living a life, involves struggle, the acceptance of failure, and the pursuit of wonder, elements absent from the instant, efficient output of generative AI.

OpenAI did not respond to a request for comment regarding Altman’s remarks.
