Alston & Bird Consumer Finance Blog

California Focuses on Large AI Models

What Happened?

On September 29, 2025, California Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), making California the first U.S. state to mandate standardized public safety disclosures for developers of sophisticated AI models that are made available to users in California. The law takes effect January 1, 2026, and it applies to:

  • Frontier Developers: Developers that train large-scale AI models using extremely high levels of computing power (more than 10^26 integer or floating-point operations).
  • Large Frontier Developers: Frontier developers with annual revenue above $500 million (including affiliates), which are subject to additional reporting and governance obligations.

Key obligations of TFAIA include:

  • Safety Framework Publication: Large frontier developers must publish (and update annually, as appropriate) a publicly accessible safety framework describing how the company has incorporated national standards, international standards, and industry-consensus best practices into its frontier AI framework.
  • Transparency Reports: All frontier developers must issue reports when deploying or materially modifying models, detailing model capabilities, intended and restricted uses, identified risks, and mitigation steps. Large frontier developers must also submit quarterly summaries to the California Office of Emergency Services (COES).
  • Critical Incident Reporting: Frontier developers must report critical safety incidents to COES, and members of the public may also report incidents directly.
  • Whistleblower Protections: Employees who report substantial public-safety risks are protected from retaliation; large frontier developers must maintain anonymous internal reporting channels.

The California Attorney General may pursue civil actions with penalties up to $1 million per violation. Developers meeting federal AI standards deemed equivalent or stricter by COES may qualify for a safe harbor. TFAIA also creates CalCompute, a public-sector computing consortium under the Government Operations Agency, to advance safe, ethical, and equitable AI research statewide. The California Department of Technology will review and recommend annual updates to the law’s definitions and thresholds.

Why Is It Important?

For the private sector, TFAIA signals that AI risk-governance expectations are maturing beyond voluntary principles. Developers, investors, and enterprises deploying advanced AI should expect heightened scrutiny of model transparency, catastrophic-risk assessment, and cybersecurity practices. Governor Newsom described TFAIA as a “blueprint for balanced AI policy,” and the Act positions California as a standard-setter at a time when comprehensive federal AI regulation remains uncertain.

What to Do Now?

As a first step, companies should assess whether TFAIA applies, that is, whether the organization qualifies as a frontier developer or large frontier developer under the law’s computing and revenue thresholds. If it does, companies should update AI safety and governance policies and procedures, including reviewing and aligning internal risk-management, cybersecurity, and third-party assessment frameworks with TFAIA’s requirements. Companies should also plan for transparency reports and establish internal protocols for producing and publishing model-specific transparency documentation. Finally, companies in scope should continue to monitor COES guidance to track additional requirements, safe-harbor determinations, and annual reviews by the Department of Technology.

Large AI Model Developers in Focus for New York

What Happened?

On June 12, 2025, the New York State legislature passed the Responsible AI Safety and Education (RAISE) Act, which awaits Governor Kathy Hochul’s signature or veto. The RAISE Act addresses developers of “frontier” AI models, meaning large AI models that cost over $100 million to train or use massive amounts of compute, and it aims to reduce the risk of “critical harm,” defined as the death of or serious injury to 100 or more people, or $1 billion or more in damages. The law applies only to large-scale frontier AI models and excludes smaller AI models and start-up initiatives.

Why Is It Important?

If the RAISE Act is signed by the Governor, it would mean that:

  • Developers of frontier AI models must create robust safety and security plans before making those models available in New York; publish redacted versions of those plans; retain unredacted copies; and permit annual external reviews and audits.
  • Any “safety incident,” from model failure to unauthorized access, must be reported to the New York Attorney General and the New York Division of Homeland Security and Emergency Services within 72 hours.
  • The New York AG can penalize violations: up to $10 million for a first offense and $30 million for repeat infractions.
  • Employees and contractors are protected when reporting serious safety concerns.

What to Do Now?

Pursuant to the New York Senate rules, the RAISE Act must be delivered to the Governor by July 27, 2025; once delivered, Governor Hochul will have 30 days to sign or veto the bill.  If it is signed, it will take effect 90 days later.

If it is enacted, in-scope AI firms and developers will need to establish appropriate internal protocols, engage third-party auditors, and maintain incident-reporting and whistleblower channels.

5 Things to Think About When Using AI

What Happened?

As the Trump Administration’s deregulatory, pro-innovation approach to emerging technology moves forward, the use of artificial intelligence has taken center stage, and it is clear that the Administration views AI as a competitive economic force. The Administration’s emphasis on growth and deregulation was evident from its early days, when it revoked the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

Why Is It Important?

In an environment that is supportive of the use of AI and technology innovation, we expect more companies to consider the use of AI tools to realize efficiencies, create and offer new products and services, and grow market share.  As the technology itself continues to grow by leaps and bounds, the opportunities seem limitless; correspondingly, there is a growing market for these tools.

What to Do Now?

As the use of AI tools expands, it is important to remember that, while the broader environment is supportive of technology and innovation, fundamental considerations remain relating to consumer protection, risk management, and the governance of technology tools and the third parties that may provide them. While each company will have its own risk management program tailored to its specific needs, we have highlighted below five elements that should be part of every program. These are enduring principles of risk management that are particularly important in the context of using AI tools.

#1: Use Case Drives Risk Profile

When using AI tools, the how, when, where, and why all matter. These factors have a direct impact on the legal and regulatory risks that companies should consider, as well as the corresponding compliance obligations and governance mechanisms to put in place. Not all AI use cases carry the same level of risk, so it’s important to begin each project with a risk assessment that examines the specific use of AI alongside legal and regulatory considerations, including the potential impact on customers, discrimination and bias, and the ability to explain to a customer or a regulator how a decision was made using the AI tool.

#2: Watch Over It: Governance and Oversight

Use of AI tools should be monitored and managed to ensure accountability and oversight throughout the lifecycle of their use. Clearly defined roles and responsibilities, with ownership and accountability, should be established and maintained. It’s important to bring stakeholders from across the company to the table, including functional areas like Technology, Operations, Data Management, Privacy, Legal, Compliance, and Risk, whether they participate in decision-making or serve in an advisory capacity.

#3: Maintaining AI Tool Integrity Through Testing

Whether using a vendor-provided AI solution or a model created by one’s own developers, ongoing monitoring and testing of the AI tool is important to ensure reliability and adherence to performance standards. Testing should occur throughout the AI tool lifecycle, from pre-deployment through ongoing use, performance, and maintenance.

#4: Data Governance: Check the Oil that Fuels Your AI

AI runs on vast amounts of data, and how that data is sourced, collected, stored, managed, used, and ultimately disposed of has important legal and regulatory ramifications, as well as implications for the quality of the output of any AI tool. Strong data governance, therefore, is a critical part of AI governance. Inadequate data management and governance can result in ineffective and harmful AI tools that may introduce bias and potentially expose companies to legal, regulatory, and reputational risks.

#5: Third-Party Risk: Manage Vendors and Other Third Parties

Many companies use third-party providers for AI solutions that can be customized to their needs. While these arrangements offer convenience and some flexibility, they can introduce risk. From a regulatory perspective, companies that use third-party solutions are viewed as responsible for the outcomes of those tools. Effective oversight of third-party providers, therefore, is critical and should include contractual provisions addressing transparency with respect to the AI tools provided, including the “explainability” of the tools and models used, as well as audit rights and other accountability measures.

Trump Administration Rescinds Biden Executive Order on Artificial Intelligence

What Happened?

Last week, President Trump signed an Executive Order that rescinded the Biden Administration’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

Titled “Removing Barriers to American Leadership in Artificial Intelligence,” the new Executive Order “revokes certain existing AI policies and directives that act as barriers to American AI innovation, [and will] clear a path for the United States to act decisively to retain global leadership in artificial intelligence.” The Trump Administration’s Executive Order directs executive departments and agencies to develop and submit to the President an action plan designed to meet that objective.

Why Does It Matter?

AI is expected to be a focus for the new Administration, and policy likely will focus on AI development and innovation as a matter of economic competitiveness and national security. In December, then President-elect Trump named David Sacks, a prominent Silicon Valley venture capitalist, as the White House “AI and Crypto Czar.” When announcing the appointment, President Trump characterized AI as “critical to the future of American competitiveness…David will focus on making America the clear global leader…” We also expect the Administration to focus on national security concerns, including export controls where the technology could be used in military applications by non-U.S. governments.

What’s Next?

In contrast to the deregulatory approach at the federal level, a number of states already have passed legislation relating to the use of AI, particularly in the consumer space, including laws relating to data use, consent, and disclosures. Additionally, state Attorneys General, particularly in “blue states,” have expressed concern about the risk of “high-risk” AI that can negatively impact consumers’ access to financial goods and services and employment opportunities. With growing use of AI, we expect more activity at the state level.