
Artificial Intelligence Critics Face a Setback, but Could Still Succeed in Safeguarding Humanity

Critics of artificial intelligence lose a round… Will they win the battle to “save humanity”?

The organizers of a high-profile open letter last March calling for a “pause” in work on advanced artificial intelligence lost a round, but they may win a long-term battle to persuade the world to slow down artificial intelligence, according to Axios.

Almost six months after the Future of Life Institute letter, signed by Elon Musk, Steve Wozniak and more than 1,000 others, called for a six-month pause on the development of advanced artificial intelligence, work is still moving forward. But the huge controversy the letter stirred has deepened public anxiety about the technology.

The letter helped normalize the expression of deep-seated AI concerns. Voters began repeatedly voicing their worries to pollsters, the White House mobilized technology executives to make safety commitments, and regulators from Europe to China raced to enact AI rules.

The robot Charlie at the Robotics Innovation Center of the German Research Center for Artificial Intelligence (DFKI) in Bremen, Germany (AFP)

What’s between the lines?

In recent months, the global AI conversation has focused intensely on the social, political, and economic risks associated with generative AI, and voters have been vocal in telling pollsters about their concerns, according to Axios.

The British government is bringing together an elite group of deep thinkers in the field of artificial intelligence safety at a global summit on November 1-2. British Deputy Prime Minister Oliver Dowden said at a conference in Washington on Thursday afternoon that the event is “particularly aimed at frontier artificial intelligence.”

OpenAI, Meta, and regulators often use the term “frontier AI” to distinguish larger, riskier AI models from less capable technologies.

“You can never change the world in one summit, but you can take it a step further and create an institutional framework for AI safety,” Dowden said.

In this context, Anthony Aguirre, executive director of the Future of Life Institute, which organized the "pause" letter, told Axios that he is "very optimistic" that the UK process is now the best bet for achieving the aim of his original call for a six-month pause: slowing the development of artificial intelligence and reshaping it carefully.

Aguirre believes it is “very important” for China to play a leading role at the UK summit.

While acknowledging the surveillance implications of AI regulation by Beijing, Aguirre noted that China's approval process for foundation-model products such as chatbots is evidence that governments can slow down the rollout of AI if they want to.

He said: "Racing headlong toward fundamental disruption is not necessarily a race you want to win. The general public does not want runaway technologies."

Aguirre dismissed the voluntary safety commitments organized by the White House as “not up to the task,” but expressed hope that US legislation would be passed in 2024.

For his part, Reid Hoffman, co-founder of Inflection AI, said in an interview with Axios that whatever public interest the letter may have sparked, its authors have undermined their credibility with the AI developer community, whose cooperation they will need to achieve their goals.

The authors of the letter were “virtue signaling and claiming that they are the only ones who truly care about humanity,” Hoffman said. “This harms the case.”

The original letter described "a dangerous race" toward ever-larger, unpredictable black-box models and urged AI labs to pause the training of systems more powerful than the recently released GPT-4, warning that AI might one day destroy humanity.
