
Client Newsletter: California AI Safety Bill

Published on October 6, 2025

Context

On September 29, 2025, California Governor Gavin Newsom signed Senate Bill 53 (“SB 53”), officially known as the Transparency in Frontier Artificial Intelligence Act. The law, which enters into force on January 1, 2026, is the first U.S. state-level statute to impose binding safety and transparency requirements on developers of the most advanced AI systems.

The law follows extensive political debate and the creation of an expert working group by the Governor, after a previously stricter bill (SB 1047) was vetoed over concerns that it would stifle innovation. The resulting legislation aims to strike a balance between public safety and continued AI leadership, positioning California as a pioneer in responsible AI governance. California has already demonstrated its commitment to Responsible AI through sector-specific guardrails: requiring transparency and human fallback in healthcare communications (AB 3030) and imposing anti-bias, testing, and recordkeeping obligations on employment-related AI systems (ADS and SB 7), with further workplace protections under consideration.

The Bill has triggered mixed reactions: some companies have welcomed the initiative, while others argue that fragmented, state-by-state regulation could slow innovation. In the absence of federal AI legislation, California’s approach may serve as a template for other states.

Who Is Impacted?

The Bill creates binding obligations for a defined set of actors in the AI ecosystem:

  • Frontier AI developers: companies that train or build frontier models.
  • Large frontier developers: firms with annual revenues above $500 million that have developed at least one frontier model.
  • Model deployers in California: organizations that release or make foundation or frontier models available in the state.

Key Points

The Bill sets out new obligations focused on transparency of safety frameworks and incident reporting (within 15 days, or 24 hours where there is imminent risk), as well as whistleblower protections and a state-led public computing initiative, CalCompute. The key elements include:

  • Transparency: developers must publish on their websites a Frontier AI Framework explaining how they integrate recognized standards and best practices into their AI development. This Framework must be reviewed annually, and if substantially modified, the updated version and justification must be published within 30 days. Transparency reports are also required at the release of each new or significantly updated model. These reports should include a summary of catastrophic risk assessments, the risk mitigation strategy, restrictions on deployments, and any involvement of external evaluators.
  • Safety: developers must promptly notify California’s Office of Emergency Services of any critical safety incidents and provide a channel for the public to submit such reports. This includes events such as misuse in cyberattacks or models displaying deceptive behavior.
  • Accountability: whistleblowers receive legal protection when disclosing safety concerns. Companies must maintain anonymous internal reporting channels, provide employees with monthly status updates on submitted concerns, and share management-level responses on a quarterly basis, unless conflicts of interest arise.
  • Innovation: the law establishes CalCompute, a public computing consortium that will provide shared infrastructure to support research and the development of safe, ethical, and sustainable AI.
  • Ongoing Review: the Department of Technology must annually review and update certain definitions contained in the Bill – such as “frontier model” and “large frontier developer” – to ensure they keep pace with technological progress and international standards.
  • Civil Penalties: non-compliance – including failure to publish frameworks or reports, missing reporting deadlines, or making materially false or misleading statements – may trigger civil penalties of up to $1 million per violation, enforceable solely by the Attorney General.
  • Public Oversight: from 2027 onward, the Office of Emergency Services will publish annual anonymized, aggregate reports on incidents and disclosures, while safeguarding trade secrets and security-sensitive information.

What This Means For AI Developers

For companies operating in California, the Bill brings new compliance obligations:

  • maintaining detailed documentation of safety protocols;
  • regularly updating and disclosing risk assessments;
  • establishing internal processes for handling critical incidents and whistleblower reports;
  • ensuring annual review and transparent updates of published frameworks.

While the law applies primarily to large-scale AI developers, its influence is expected to extend across the industry as smaller players and other states align with California’s approach.

Noema’s Key Takeaways

California’s AI Safety Bill underscores a shift from voluntary commitments to binding regulatory obligations for advanced AI developers. While intended to protect the public and build trust, it also keeps the door open for continued innovation by creating state-backed infrastructure and adapting regulations over time.

For organizations, early compliance is critical. Those that proactively align with these obligations – including robust risk reporting, whistleblower protections, and transparent governance practices – will be best positioned to maintain trust, mitigate liability, and stay ahead as AI regulation accelerates globally.
