
AI: The Next Normal Skill?


Is AI Just Another Technology? Rethinking Superintelligence

CITY — May 6, 2024 —

A new perspective on the burgeoning field of artificial intelligence (AI) suggests we have entered a period of hype and calls for a more grounded approach. Arvind Narayanan and Sayash Kapoor's upcoming book, "AI as Normal Technology," seeks to reframe how the technology is understood, challenging the superintelligence narrative that has taken hold. The authors make a compelling argument that asks us to reconsider how the development of AI actually affects society.


The AI Hype Machine: A Necessary Reality Check

Artificial intelligence, a field simultaneously novel and established, has become a focal point of both immense anticipation and considerable apprehension. Over the past several years, the AI hype has been amplified by technological advancements and substantial social investment. Even presidential candidates are making AI investment a cornerstone of their platforms.

Did you know? The term "artificial intelligence" was coined in 1956 at the Dartmouth Workshop, marking the formal beginning of AI research.

However, this enthusiasm warrants careful examination. The media plays a significant role in a self-reinforcing cycle of AI hype. Recognizing this, some researchers are advocating for a more grounded perspective.

‘AI as Normal Technology’: A New Perspective

Arvind Narayanan and Sayash Kapoor, researchers addressing these same concerns, present a compelling argument in their upcoming book, "AI as Normal Technology." The work challenges the prevailing narrative of AI as an extraordinary, almost sentient entity.

The core idea is that AI should be viewed as a general-purpose technology, akin to the Internet, computers, or electricity. While its rapid progress is indeed remarkable, it remains, fundamentally, just another technology.

Pro Tip: When evaluating AI claims, consider the historical context of other technological advancements. How were they initially perceived, and what were the eventual societal impacts?

Challenging the Superintelligence Narrative

The authors argue that AI is increasingly perceived as a separate species, even possessing autonomy and the potential for superintelligence that could threaten humanity. Their book aims to counter this narrative by framing AI as a normal technology.

Why is this distinction significant? The authors contend that focusing on AI as a unique entity can be misleading. As they explain, understanding AI requires a shift in perspective:

Defining artificial intelligence in terms of human intellectual ability may be helpful as an aid to understanding and imagination. But it can obscure the actual principles of AI and the mechanisms by which it produces social, economic, technological, cultural, and political ripple effects.

Rather than focusing solely on intellectual ability, the authors emphasize the importance of understanding how AI interacts with and shapes our social, economic, and political systems.

Beyond Intelligence: The Importance of Environmental Control

The authors argue that human intelligence should be understood not just as intellectual ability, but also as the capacity to control our environment. This control is largely facilitated by the social and technical systems we have built.

The prevailing view often presents a disconnected spectrum of intelligence, ranging from animals to humans to hypothetical superintelligent AI. This perspective can lead to the fear that a much smarter superintelligence could dominate humanity.

[Figure: Intelligence spectrum — viewed along this spectrum, a superintelligent AI is bound to inspire fear. (source)]

However, modern human beings have advanced through the accumulation of technology. The ability to leverage technology for control is more critical than intelligence itself. It is not immediately apparent that an AI with superior intelligence would automatically surpass us.

[Figure: Control spectrum — by contrast, superior intelligence does not imply a high degree of control. (source)]

Real dangers and Necessary Prescriptions

One major concern is that AI will reach a point where its technical capabilities lead to an intelligence explosion, rendering human control impractical. A commonly proposed solution is "model alignment," which involves instilling the right values in AI models to ensure they align with human interests.

However, the authors argue that model alignment alone is insufficient. An AI that learns specific values could still act contrary to those values depending on the situation.

The real danger, according to the "AI as normal technology" view, lies not in AI itself, but in the construction of the sociotechnical system in which AI operates. This includes concerns such as:

  • Discrimination and inequality
  • Large-scale unemployment
  • Power concentrated in the hands of a few
  • Weakening social trust
  • Environmental and intellectual pollution
  • Collapse of democracy and rise of dictatorship
  • Mass surveillance

Preventing these problems requires controlling accidents, abuse, and excessive military competition that may arise from AI.

In the authors' normal-technology framing, the most serious scenario is one in which AI can no longer be controlled. Strictly speaking, though, the primary concern is not AI itself but the sociotechnical system in which AI operates.

Instead of allowing AI agents to make critical decisions, it is more realistic to limit AI's decision-making power on important issues.

Controlling AI: A Path Forward

Currently, there is a strong push from politicians and companies to invest in AI for national competitiveness. Experts often fuel anxiety and fear, warning of dependence on foreign technology or even human extinction.

However, viewing AI as ordinary technology allows for a more measured response. We have developed systems to control other technologies — often imperfectly, but with the goal of benefiting humanity. The same approach can be applied to AI.

The starting point for effectively controlling AI is recognizing that it is not fundamentally different from other technologies.

The technologies we have relied on so far have never been free of hazard. Over time, we have built systems to keep their development in check — systems that often fail, admittedly — so that they can be used in directions that benefit us. The same holds for AI: the starting point for properly controlling it is recognizing that it is no different from other technologies.

FAQ: Understanding AI as Normal Technology

What does it mean to view AI as "normal technology"?
It means recognizing that AI, like other technologies such as the Internet or electricity, is a tool developed by humans and subject to human control and regulation.

Why is it important to challenge the superintelligence narrative?
The superintelligence narrative can lead to exaggerated fears and misguided priorities, diverting attention from the more immediate and practical challenges posed by AI’s integration into society.

What are the real dangers associated with AI?
The real dangers include discrimination, inequality, job displacement, concentration of power, erosion of social trust, environmental damage, and threats to democratic institutions.

How can we effectively control AI?
By focusing on the social and technical systems in which AI operates, regulating its use, and preventing accidents, abuse, and excessive military competition.
