
California Passes AI Disclosure Law, Sparks Industry Debate

by David Harrison – Chief Editor

California Sets New Standard for AI Safety with Landmark Legislation

California Governor Gavin Newsom recently signed Senate Bill 53 (SB 53) into law, establishing a first-in-the-nation framework for regulating artificial intelligence. The legislation aims to increase openness and accountability within the AI sector, requiring companies developing powerful AI models to disclose information and report incidents to the state.

The bill has garnered support from major tech companies. Meta spokesperson Christopher Sgro described SB 53 as “a positive step” toward “balanced AI regulation,” adding that it would facilitate cooperation between state and federal governments on AI deployment. Similarly, a representative from Google stated the company believes the law represents an “effective approach to AI safety.”

SB 53’s impact is expected to be global, given that 32 of the world’s top 50 AI companies are based in California. The law mandates that AI firms report incidents to California’s Office of Emergency Services and provides protections for whistleblowers, enabling employees to voice safety concerns without fear of retribution. Noncompliance will be subject to civil penalties enforced by the state attorney general, though experts note these penalties are relatively mild compared to those outlined in the EU’s AI Act.

While acknowledging the bill as “a step forward,” Miles Brundage, former head of policy research at OpenAI, emphasized the need for “actual transparency” in reporting, stronger minimum risk thresholds, and robust third-party evaluations. Collin McCune, head of government affairs at Andreessen Horowitz, expressed concern that the law “risks squeezing out startups, slowing innovation, and entrenching the biggest players,” possibly creating a complex regulatory landscape for smaller companies. Several AI companies echoed these concerns during lobbying efforts.

However, Thomas Woodside, a cofounder of the Secure AI Project, a cosponsor of the law, dismissed concerns about the impact on startups as “overblown.” He clarified that the reporting requirements primarily apply to companies training AI models that demand substantial computational resources, investments beyond the reach of most startups. “This bill is only applying to companies that are training AI models with a huge amount of compute that costs hundreds of millions of dollars, something that a tiny startup can’t do,” Woodside told Fortune. He further noted that many obligations only apply to companies with annual revenue exceeding $500 million.
