
AI Firms ‘Unprepared’ for Dangers of Building Human-Level Systems, Report Warns



Leading artificial intelligence developers are significantly underprepared to manage the potential risks associated with their rapidly advancing technology, according to a critical new assessment. The Future of Life Institute (FLI) has released a report highlighting a concerning gap between the ambitious goals of AI companies, such as achieving artificial general intelligence (AGI) within the next decade, and their actual safety planning.

The FLI’s comprehensive index evaluated seven major AI players – Google DeepMind, OpenAI, Anthropic, Meta, xAI, and China’s Zhipu AI and DeepSeek – across six key areas, including the mitigation of current harms and safeguards against existential threats. The findings were stark: not a single company achieved a passing grade in existential safety planning, with most scoring a ‘D’. Anthropic emerged as the frontrunner in safety, securing a ‘C+’ overall score. OpenAI followed with a ‘C’, and Google DeepMind received a ‘C-’.

The FLI, a US-based non-profit dedicated to promoting the responsible development of advanced technologies, operates independently thanks to a significant donation from crypto entrepreneur Vitalik Buterin.

Adding to these concerns, another safety-focused non-profit, SaferAI, published a report on the same day cautioning that advanced AI firms exhibit “weak to very weak risk management practices,” deeming their current approach “unacceptable.”

The FLI’s safety evaluations were conducted and reviewed by a distinguished panel of AI experts, including renowned computer scientist Stuart Russell and AI regulation advocate Sneha Revanur, founder of Encode Justice. Max Tegmark, a co-founder of FLI and a professor at the Massachusetts Institute of Technology, expressed alarm at the situation. He likened the AI industry’s approach to building a massive nuclear power plant without a plan to prevent a meltdown, emphasizing the disconnect between aiming for super-intelligent systems and the absence of published safety protocols.

Tegmark noted the accelerating pace of AI development, which has surpassed previous expert predictions. Challenges once thought to be decades away for AGI are now, according to the companies themselves, possibly only a few years off. He pointed to the remarkable progress made since the global AI summit in Paris earlier this year, citing advancements such as xAI’s Grok 4, Google’s Gemini 2.5, and its Veo 3 video generator as evidence of this rapid evolution.

A spokesperson for Google DeepMind said the reports did not fully encompass all of the company’s AI safety initiatives, asserting that its approach to AI safety and security is far more extensive than what was captured in the assessment. Representatives from OpenAI, Anthropic, Meta, xAI, Zhipu AI, and DeepSeek have also been contacted for comment.
