Is AI Just Another Technology? Rethinking Superintelligence
Table of Contents
- Is AI Just Another Technology? Rethinking Superintelligence
- The AI Hype Machine: A Necessary Reality Check
- ‘AI as Normal Technology’: A New Perspective
- Challenging the Superintelligence Narrative
- Beyond Intelligence: The Importance of Environmental Control
- Real Dangers and Necessary Prescriptions
- Controlling AI: A Path Forward
- FAQ: Understanding AI as Normal Technology
CITY — May 6, 2024 —
A new perspective on the burgeoning field of artificial intelligence (AI) suggests we have entered a period of hype and calls for a more grounded approach. Arvind Narayanan and Sayash Kapoor's upcoming book, "AI as Normal Technology," seeks to reframe how the technology is understood, challenging the prevailing superintelligence narrative that has taken hold. The authors make a compelling argument that requires us to consider how the development of AI affects society.
The AI Hype Machine: A Necessary Reality Check
Artificial intelligence, a field simultaneously novel and established, has become a focal point of both immense anticipation and considerable apprehension. Over the past several years, AI hype has been amplified by technological advancements and substantial social investment. Even presidential candidates are making AI investment a cornerstone of their platforms.
Did you know? The term "artificial intelligence" was coined in 1956 at the Dartmouth Workshop, marking the formal beginning of AI research.
However, this enthusiasm warrants careful examination. The media plays a significant role in a self-reinforcing cycle of AI hype. Recognizing this, some researchers are advocating a more grounded perspective.
‘AI as Normal Technology’: A New Perspective
Arvind Narayanan and Sayash Kapoor, researchers who share these concerns, present a compelling argument in their upcoming book, "AI as Normal Technology." The work challenges the prevailing narrative of AI as an extraordinary, almost sentient entity.
The core idea is that AI should be viewed as a general-purpose technology, akin to the Internet, computers, or electricity. While its rapid progress is indeed remarkable, it remains, fundamentally, just another technology.
Pro Tip: When evaluating AI claims, consider the historical context of other technological advancements. How were they initially perceived, and what were the eventual societal impacts?
Challenging the Superintelligence Narrative
The authors argue that AI is increasingly perceived as a separate species, one possessing autonomy and a potential for superintelligence that could threaten humanity. Their book aims to counter this narrative by framing AI as a normal technology.
Why is this distinction significant? The authors contend that focusing on AI as a unique entity can be misleading. As they explain, understanding AI requires a shift in perspective:
Defining artificial intelligence in terms of human intellectual ability may aid understanding and imagination. But it can obscure how AI actually works and the mechanisms by which it produces social, economic, technological, cultural, and political ripple effects.
Rather than focusing solely on intellectual ability, the authors emphasize the importance of understanding how AI interacts with and shapes our social, economic, and political systems.
Beyond Intelligence: The Importance of Environmental Control
The authors argue that human intelligence should be understood not just as intellectual ability, but also as the capacity to control one's environment. This control is largely facilitated by the social and technical systems we have built.
The prevailing view often presents a disconnected spectrum of intelligence, ranging from animals to humans to hypothetical superintelligent AI. This perspective can lead to the fear that a much smarter superintelligence could dominate humanity.

However, modern human beings have advanced through the accumulation of technology. The ability to leverage technology for control matters more than raw intelligence. It is not immediately apparent that an AI with superior intelligence would automatically surpass us.

Real Dangers and Necessary Prescriptions
One major concern is that AI will reach a point where its technical capabilities trigger an intelligence explosion, rendering human control impractical. A commonly proposed solution is "model alignment": instilling the right values in AI models to ensure they align with human interests.
However, the authors argue that model alignment alone is insufficient. An AI that learns specific values could still act contrary to those values depending on the situation.
The real danger, according to the "AI as normal technology" view, lies not in AI itself, but in the construction of the sociotechnical system in which AI operates. Concerns include:
- Discrimination and inequality
- Large-scale unemployment
- Power concentrated in the hands of a few
- Weakening social trust
- Environmental and intellectual pollution
- Collapse of democracy and rise of dictatorship
- Mass surveillance
Preventing these problems requires controlling accidents, abuse, and excessive military competition that may arise from AI.
In the authors' "normal technology" view, the most serious risk is losing the ability to control AI. Strictly speaking, however, the main concern is not AI itself but the construction of the sociotechnical system in which AI operates.
Instead of allowing AI agents to make critical decisions, it is more realistic to limit AI's decision-making power on important issues.
Controlling AI: A Path Forward
Currently, there is a strong push from politicians and companies to invest in AI for national competitiveness. Experts frequently fuel anxiety and fear, warning of dependence on foreign technology or even human extinction.
However, viewing AI as ordinary technology allows a more measured response. None of the technologies we have used so far is inherently safe; over time, we have built systems, however imperfect, to steer their development in directions that benefit humanity. The same approach can be applied to AI.
The starting point for effectively controlling AI is recognizing that it is not fundamentally different from other technologies.