Can one state save us from AI disaster? Inside California's new legislative crackdown
With no federal rules in place, state lawmakers are stepping in to regulate AI safety themselves.

ZDNET's key takeaways
- California's new AI safety law goes into effect Jan. 1.
- It centers on transparency and whistleblower protections.
- Some AI safety experts say the tech is evolving too quickly.
A new California law going into effect Thursday, Jan. 1, aims to add a measure of transparency and accountability to the AI industry at a time when some experts are warning that the technology could potentially escape human control and cause catastrophe.
Authored by state Sen. Scott Wiener, a Democrat, the law requires companies developing frontier AI models to publish information on their websites detailing their plans and policies for responding to "catastrophic risk," and to notify state authorities about any "critical safety incident" within 15 days. Fines for failing to meet these terms can reach up to $1 million per violation.
Also: Why complex reasoning models could make misbehaving AI easier to catch
The new law also provides whistleblower protections to employees of companies developing AI models.
The legislation defines catastrophic risk as a scenario in which an advanced AI model kills or injures more than 50 people or causes material damages exceeding $1 billion, for example, by providing instructions for developing chemical, biological, or nuclear weapons.
"Unless they are developed with careful diligence and reasonable precaution, there is concern that advanced artificial intelligence systems could have capabilities that pose catastrophic risks from both malicious uses and malfunctions, including artificial intelligence-enabled hacking, biological attacks, and loss of control," wrote the authors of the new law.
Safety concerns
California's new law underscores -- and aims to mitigate -- some of the fears that have been weighing on the minds of AI safety experts as the technology quickly proliferates and evolves.
Canadian computer scientist and Turing Award winner Yoshua Bengio recently told The Guardian that the AI industry has a responsibility to build a kill switch into its most powerful models in case they escape human control, citing research showing that such systems can occasionally hide their objectives and mislead human researchers.
Last month, a paper published by Anthropic claimed some versions of Claude were showing signs of "introspective awareness."
Also: Claude wins high praise from a Supreme Court justice - is AI's legal losing streak over?
Meanwhile, others have been making the case that advancements in AI are moving dangerously quickly -- too quickly for developers and lawmakers to implement effective guardrails.
A statement published online in October by the nonprofit Future of Life Institute argued that unconstrained AI development could lead to harms ranging from "human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction," and called for a pause on the development of advanced models until rigorous safety protocols could be established.
The FLI followed up with a study showing that eight leading developers were falling short on safety-related criteria, including "governance & accountability" and "existential risk."
Federal, state, and private sector
California's new law also stands in stark contrast to the Trump administration's approach to AI, which has thus far been, essentially, "Go forth and multiply."
President Donald Trump has scrapped Biden-era regulation of the technology and has given the industry wide leeway to push ahead with the development and deployment of new models, eager to maintain a competitive edge over China's own AI efforts.
Also: China's open AI models are in a dead heat with the West - here's what happens next
The responsibility to protect the public from the possible harms of AI has therefore largely fallen to state lawmakers such as Wiener, and to tech developers themselves. On Saturday, OpenAI announced that its Safety Systems team is hiring for a new "Head of Preparedness" role, responsible for building frameworks to test model safety; the position offers a $555,000 salary, plus equity.
"This is a critical role at an important time," company CEO Sam Altman wrote in a X post about the new position, "models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges."
(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)