Alston & Bird Consumer Finance Blog

Artificial Intelligence

California Focuses on Large AI Models

What Happened?

On September 29, 2025, California Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), making California the first U.S. state to mandate standardized public safety disclosures for developers of sophisticated AI models that are made available to users in California. The law takes effect January 1, 2026, and it applies to:

  • Frontier Developers: Developers that train frontier AI models using extremely high levels of computing power (more than 10^26 integer or floating-point operations).
  • Large Frontier Developers: Frontier developers with annual revenue above $500 million (including affiliates), which are subject to additional reporting and governance obligations.

Key obligations of TFAIA include:

  • Safety Framework Publication: Large frontier developers must publish (and update at least annually, as appropriate) a publicly accessible safety framework describing how the company has incorporated national standards, international standards, and industry-consensus best practices into its frontier AI framework.
  • Transparency Reports: All frontier developers must issue reports when deploying or materially modifying models, detailing model capabilities, intended and restricted uses, identified risks, and mitigation steps. Large frontier developers must also submit quarterly summaries to the California Office of Emergency Services (COES).
  • Critical Incident Reporting: Frontier developers must report critical safety incidents to COES, and members of the public may report safety incidents directly to COES as well.
  • Whistleblower Protections: Employees who report substantial public-safety risks are protected from retaliation, and large frontier developers must maintain anonymous internal reporting channels.

The California Attorney General may bring civil actions with penalties of up to $1 million per violation. Developers that comply with federal AI standards COES deems equivalent to or stricter than TFAIA's requirements may qualify for a safe harbor. TFAIA also creates CalCompute, a public-sector computing consortium under the Government Operations Agency, to advance safe, ethical, and equitable AI research statewide. The California Department of Technology will review the law annually and recommend updates to its definitions and thresholds.

Why Is It Important?

For the private sector, TFAIA signals that AI risk-governance expectations are maturing beyond voluntary principles. Developers, investors, and enterprises deploying advanced AI should expect heightened scrutiny of model transparency, catastrophic-risk assessment, and cybersecurity practices. Governor Newsom described TFAIA as a “blueprint for balanced AI policy,” and the Act positions California as a standard-setter at a time when comprehensive federal AI regulation remains uncertain.

What to Do Now?

As a first step, companies should assess whether TFAIA applies: that is, whether the organization qualifies as a frontier developer or a large frontier developer under the computing-power and revenue thresholds. If it does, companies should update their AI safety and governance policies and procedures, including reviewing and aligning internal risk-management, cybersecurity, and third-party assessment frameworks with TFAIA's requirements. Companies should also plan for transparency reporting by establishing internal protocols for producing and publishing model-specific transparency documentation. Finally, in-scope companies should continue to monitor COES guidance to track additional requirements, safe-harbor determinations, and the Department of Technology's annual reviews.