Alston & Bird Consumer Finance Blog

Artificial Intelligence

President Trump Signs Executive Order Aiming to Curb State AI Regulation

Originally published December 18, 2025.

Executive Summary

President Trump issued an Executive Order aimed at discouraging state artificial intelligence (AI) regulation through federal agency action and funding conditions, without directly preempting state law. Our Privacy, Cyber & Data Strategy Team discusses the resulting uncertainty and what businesses should watch as agencies begin implementing the Order.

  • The Order relies on indirect federal measures, not express preemption, to constrain state AI regulation
  • State laws addressing AI outputs, disclosures, and alleged bias may face increased scrutiny
  • Businesses should continue complying with applicable state AI laws while monitoring federal agency actions

On December 11, 2025, following several unsuccessful congressional attempts to pass a statutory moratorium on state-level artificial intelligence (AI) regulation, President Trump signed an Executive Order that seeks to limit states’ ability to regulate AI under their existing legal frameworks, and to deter them from passing new AI laws. Per the Order, the Trump Administration sees the United States in an AI arms race with adversaries, which it seeks to win—and argues that burdensome state-level AI regulation could impede American innovation, competitiveness, and national security in this effort. The Order also takes the position that a state-by-state AI regulation “patchwork” adds challenging compliance burdens, mandates ideological bias within models, and impermissibly regulates beyond state borders.

The Order seeks to encourage a “minimally burdensome national policy framework for AI” through a variety of measures. Because an Executive Order applies only to federal agencies, a key gating question arises: how, if at all, can an Executive Order affect the effectiveness of state law or the state lawmaking process? Notably, the Order does not purport to preempt state law. Instead, it adopts various indirect measures, to be implemented by federal agencies, intended to encourage states not to enforce or pass overly burdensome AI regulation, or to penalize them if they do. It also asks certain agencies to use their existing authorities in ways that may preempt state AI laws.

Summary of the Order

The Order takes several actions in pursuit of its goal of a “minimally burdensome” regulatory framework for AI. These broadly fit into two categories. First, the Order establishes a framework through which the Trump Administration can challenge state AI laws or incentivize states not to pass or enforce AI laws. Second, the Order instructs certain agencies to implement policies that the Administration hopes may preempt state AI laws.

Key actions established by the Order include:

  • DOJ AI Litigation Task Force. The Order directs Attorney General Pam Bondi to create an “AI Litigation Task Force” within the Department of Justice (DOJ). Its task is to challenge state AI laws inconsistent with the Administration’s policy, or with the goal of “global AI dominance” by the U.S. It remains unclear which specific AI statutes and regulations would be challenged, or on what basis. The Order’s language appears to give the DOJ broad discretion.
  • Evaluation of “Onerous” State AI Laws. The Order directs Secretary of Commerce Howard Lutnick to publish a report identifying “onerous” existing state AI laws. Per the Order, laws will be deemed onerous if they (1) require AI models to alter truthful outputs; or (2) require AI developers or deployers to engage in impermissible compelled speech or otherwise “disclose or report information in a manner that violates the First Amendment or any other provision of the Constitution.”
  • Restrictions on Broadband Funding. The Order also directs the Department of Commerce (DOC) to issue a policy notice tying states’ receipt of federal broadband funding to their “onerous AI” practices. Specifically, the DOC is directed to specify the conditions under which states that pass “onerous” AI legislation are ineligible for federal broadband funding. This is conceptually similar to the AI moratoria previously proposed in Congress, which would have tied a state’s receipt of federal broadband funding to a prohibition on regulating AI. The Order maintains the tie between broadband funding and restrictions on AI regulation, but the tie will now be based on a DOC policy statement, along with a DOC finding that a state is engaged in “onerous” AI regulation. The Order suggests that any state the DOC identifies in its report on “onerous” state AI laws may be deemed ineligible for federal broadband funding.
  • Restrictions on Other Discretionary Spending. Other executive departments and agencies are ordered to assess their discretionary grant programs to determine whether they can condition discretionary funding on states not passing AI laws. For states that have already passed AI laws, agencies are instructed to try to enter into binding agreements with the states that tie receipt of discretionary funding to a commitment not to enforce the AI laws. In these efforts, federal agencies must work together with Special Advisor for AI and Crypto David Sacks.
  • Agency-Created Preemption. The Order directs the Federal Communications Commission to draft a federal reporting and disclosure standard to preempt conflicting state laws that regulate AI. It also directs the Federal Trade Commission (FTC) to publish a policy statement outlining how the FTC Act’s prohibition on unfair and deceptive trade practices preempts any state laws that require alterations to the truthful outputs of AI models. It remains to be seen whether these policy statements could in fact have preemptive effect.

Analysis of Impact on State AI Laws

The Order does not define what constitutes “minimally burdensome” or “onerous” AI regulation, giving wide interpretive discretion to the various agencies within the Administration empowered to enforce the Order (and leaving uncertainty for affected businesses). However, it gives clues on the types of AI laws that the Administration is likely to prioritize in any future actions.

First, the Order takes aim at state AI laws that require models to “alter their truthful outputs.” The Order explicitly calls out the Colorado AI Act and its provisions banning “algorithmic discrimination,” taking the position that it, and similar laws, may induce AI models to produce “false results in order to avoid a ‘differential treatment or impact’ on protected groups.” Although not named in the Order, similar state AI laws banning algorithmic discrimination or bias, such as Illinois’s HB 3773, or new California privacy regulations on “automated decisionmaking technology,” may also be in the crosshairs.

Second, the Order explicitly references state AI laws that require disclosure or reporting of information in violation of the First Amendment or other constitutional provisions. As an example of a law that may be targeted, California recently passed the Transparency in Frontier AI Act, a first-in-the-nation “frontier” AI regulation requiring developers of powerful AI models to publish a safety framework and report certain safety incidents to regulators. This law and similar legislation that imposes significant safety obligations on, or requires publication or disclosure of information by, frontier model developers (e.g., New York’s Responsible AI Safety & Education (RAISE) Act) and other AI companies may be targets of the Order.

In citing the First Amendment as a potential bar to such statutes, the Trump Administration may be thinking of prior constitutional challenges on “compelled speech” grounds. One example was a successful challenge to California’s Age-Appropriate Design Code, which required companies providing digital services “likely to be accessed by minors” to draft detailed “data protection impact assessments” (DPIAs) and produce them to regulators upon request. The Ninth Circuit found that the DPIA requirement was unconstitutional compelled speech that deputized businesses to act as “censors for the state.”

The Order’s broad, discretionary language casts a wide net of uncertainty over which state AI laws the Administration may challenge. For example, the AI Litigation Task Force is directed to sue states with AI laws “on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful in the Attorney General’s judgment.” This language gives the Administration wide discretion in enforcing the Order. At the same time, in its legislative recommendation provision, the Order expressly excludes laws relating to children’s safety, AI computing and data center infrastructure, and state government procurement and use of AI, which suggests these types of laws may not be targeted.

Reactions and Legal Challenges

The Order has been praised by various industry groups, such as the Consumer Technology Association, while receiving pushback from advocacy groups like the American Civil Liberties Union. It remains to be seen whether the Order itself will face legal challenges, or whether challenges will be reserved for the agency actions it requires, such as a DOC policy restricting federal broadband funding for states that pass “onerous” AI laws, or an FTC policy statement asserting that states cannot apply their UDAP statutes to AI.

The Order also calls on Congress to establish a single “minimally burdensome national standard” and tasks Sacks and Assistant to the President for Science and Technology Michael Kratsios with drafting a proposed federal AI statute, including its preemption provisions. Congressional action would ameliorate many of the legal concerns that stem from broad unilateral executive action; however, there is growing skepticism about federal deregulation of AI among Republicans, both on Capitol Hill and in state governments.

What Should Businesses Do?

Governors in California, Colorado, and New York have issued statements indicating the Order will not stop them from passing, or enforcing, their AI statutes and regulations. The DOC’s report on “onerous” state AI laws is not due until spring 2026 and has no effect on its own; any impact on the effectiveness of AI statutes will require challenges by the DOJ, agreements between states and executive agencies, or similar resolutions with significant lead times. Businesses should therefore continue endeavoring to comply with AI laws, rules, and regulations that may apply to their operations.

We will continue to monitor developments arising from the Order, including any legal challenges and the complex state AI law landscape. Please contact our team if you have questions about the impact of the Order or the applicability of state AI laws to your company.

Executive Order, Action & Proclamation Task Force

Alston & Bird’s multidisciplinary Executive Order, Action & Proclamation Task Force advises clients on the business and legal implications of President Trump’s Executive Orders.

Learn more about administrative actions on our tracker.


If you have any questions, or would like additional information, please contact one of the attorneys on our Privacy, Cyber & Data Strategy team.

You can subscribe to future advisories and other Alston & Bird publications by completing our publications subscription form.


California Focuses on Large AI Models

What Happened?

On September 29, 2025, California Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), making California the first U.S. state to mandate standardized public safety disclosures for developers of sophisticated AI models that are made available to users in California. The law takes effect January 1, 2026, and it applies to:

  • Frontier Developers: Developers that train large-scale AI models using extremely high levels of computing power.
  • Large Frontier Developers: Frontier developers with annual revenue above $500 million (including affiliates), subject to additional reporting and governance obligations.

Key obligations of TFAIA include:

  • Safety Framework Publication: Large frontier developers must publish (and update annually, as appropriate) a publicly accessible safety framework describing how the company has incorporated national standards, international standards, and industry-consensus best practices into its frontier AI framework.
  • Transparency Reports: All frontier developers must issue reports when deploying or materially modifying models, detailing model capabilities, intended/restricted uses, risks identified, and mitigation steps. Large frontier developers must submit quarterly summaries to the California Office of Emergency Services (COES).
  • Critical Incident Reporting: Developers and the public can report safety incidents directly to COES.
  • Whistleblower Protections: Employees who report substantial public-safety risks are protected from retaliation; large frontier developers must maintain anonymous internal reporting channels.

The California Attorney General may pursue civil actions with penalties up to $1 million per violation. Developers meeting federal AI standards deemed equivalent or stricter by COES may qualify for a safe harbor. TFAIA also creates CalCompute, a public-sector computing consortium under the Government Operations Agency, to advance safe, ethical, and equitable AI research statewide. The California Department of Technology will review and recommend annual updates to the law’s definitions and thresholds.

Why Is It Important?

For the private sector, TFAIA signals that AI risk-governance expectations are maturing beyond voluntary principles. Developers, investors, and enterprises deploying advanced AI should expect heightened scrutiny of model transparency, catastrophic-risk assessment, and cybersecurity practices. Governor Newsom described TFAIA as a “blueprint for balanced AI policy,” and the Act positions California as a standard-setter at a time when comprehensive federal AI regulation remains uncertain.

What to Do Now?

As a first step, companies should assess whether TFAIA applies, that is, whether the organization qualifies as a frontier or large frontier developer based on computing thresholds or revenue. If it does, companies should update AI safety and governance policies and procedures, including reviewing and aligning internal risk-management, cybersecurity, and third-party assessment frameworks with TFAIA’s requirements. Companies should also plan for transparency reports and establish internal protocols for producing and publishing model-specific transparency documentation. Finally, companies in scope should continue to monitor COES guidance to track additional requirements, safe-harbor determinations, and annual reviews by the Department of Technology.

Large AI Model Developers in Focus for New York

What Happened?

On June 12, 2025, the New York State legislature passed the Responsible AI Safety and Education (RAISE) Act, which awaits Governor Kathy Hochul’s signature or veto. The RAISE Act addresses developers of “frontier” AI models (large AI models that cost over $100 million to train or use massive compute) and aims to reduce the risks of “critical harm,” defined as the death of or serious injury to 100 or more people, or $1 billion or more in damages. The law applies only to large-scale frontier AI models and excludes smaller AI models and start-up initiatives.

Why is it Important?

If the RAISE Act is signed by the Governor, it would mean that:

  • Developers of frontier AI models must create robust safety and security plans before making those models available in New York; publish redacted versions of those plans; retain unredacted copies; and permit annual external reviews and audits.
  • Any “safety incident”—from model failure to unauthorized access—must be reported to New York’s Attorney General and the New York Division of Homeland Security within 72 hours.
  • The New York AG can penalize violations: up to $10 million for a first offense and $30 million for repeat infractions.
  • Employees and contractors are protected when reporting serious safety concerns.

What to do Now?

Pursuant to the New York Senate rules, the RAISE Act must be delivered to the Governor by July 27, 2025; once delivered, Governor Hochul will have 30 days to sign or veto the bill.  If it is signed, it will take effect 90 days later.

If it is enacted, in-scope AI firms and developers will need to ensure appropriate internal protocols, engage with third-party auditors, and maintain incident reporting and whistleblower channels.

5 Things to Think About When Using AI

What Happened?

As the Trump Administration’s deregulatory, pro-innovation approach to emerging technology moves forward, the use of artificial intelligence has taken center stage, and the Administration clearly views it as a competitive economic force. The Administration signaled its emphasis on growth and deregulation from its early days, when it revoked the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

Why is it Important?

In an environment that is supportive of the use of AI and technology innovation, we expect more companies to consider the use of AI tools to realize efficiencies, create and offer new products and services, and grow market share.  As the technology itself continues to grow by leaps and bounds, the opportunities seem limitless; correspondingly, there is a growing market for these tools.

What to do Now?

As the use of AI tools expands, it is important to remember that, while the broader environment is supportive of the use of technology and innovation, fundamental considerations remain relating to consumer protection, risk management, and governance of technology tools and the third parties that may provide them. While each company will have its own risk management program tailored to its specific needs, we have highlighted below five important elements that should be considered as part of all programs. These are enduring principles of risk management that are particularly important in the context of using AI tools.

#1: Use Case Drives Risk Profile

When using AI tools, the how, when, where, and why matter. These factors have a direct impact on the legal and regulatory risks that companies should consider, as well as the corresponding compliance obligations and governance mechanisms to put into place. Not all AI use cases carry the same level of risk, so it is important to begin each project with a risk assessment that examines the specific use of AI, together with legal and regulatory considerations, including the potential impact on customers, discrimination and bias, and the ability to explain to a customer or a regulator how a decision was made using the AI tool.

#2: Watch Over It: Governance and Oversight

Use of AI tools should be monitored and managed to ensure accountability and oversight throughout the lifecycle of their use. Clearly defined roles and responsibilities, with ownership and accountability, should be established and maintained. It is important to bring stakeholders from across the company to the table, including functional areas like Technology, Operations, Data Management, Privacy, Legal, Compliance, and Risk, as well as others who should be involved in decision-making or serve in an advisory role.

#3: Maintaining AI Tool Integrity Through Testing

Whether using a vendor-provided AI solution or a model created by one’s own developers, ongoing monitoring and testing of the AI tool is important to ensure reliability and adherence to performance standards. Testing should occur throughout the AI tool lifecycle, from pre-deployment through ongoing use, performance, and maintenance.

#4: Data Governance: Check the Oil that Fuels Your AI

AI runs on vast amounts of data, and how that data is sourced, collected, stored, managed, used, and ultimately disposed of has important legal and regulatory ramifications, as well as implications for the quality of the output from any AI tool. Strong data governance, therefore, is a critical part of AI governance. Inadequate data management and governance can result in ineffective and harmful AI tools that may introduce bias and potentially expose companies to legal, regulatory, and reputational risks.

#5: Third-Party Risk: Manage Vendors and Other Third Parties

Many companies use third-party providers for AI solutions that can be customized to their needs. While these arrangements offer convenience and some flexibility, they can introduce risk. From a regulatory perspective, companies that use third-party solutions are viewed as responsible for the outcomes of those tools. Effective oversight of third-party providers, therefore, is critical and should include transparency with respect to the AI tools to be provided, including “explainability” of the tools and the models used, as well as audit rights and other transparency and accountability measures that should be contracted for.

Trump Administration Rescinds Biden Executive Order on Artificial Intelligence

What Happened?

Last week, President Trump signed an Executive Order that rescinded the Biden Administration’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

Titled “Removing Barriers to American Leadership in Artificial Intelligence,” the new Executive Order “revokes certain existing AI policies and directives that act as barriers to American AI innovation, [and will] clear a path for the United States to act decisively to retain global leadership in artificial intelligence.” The Trump Administration’s Executive Order directs executive departments and agencies to develop and submit to the President an action plan designed to meet that objective.

Why does it Matter?

AI is expected to be a focus for the new Administration, and policy likely will focus on AI development and innovation as a matter of economic competitiveness and national security. In December, then President-elect Trump named David Sacks, a prominent Silicon Valley venture capitalist, as the White House “AI and Crypto Czar.” When announcing this appointment, President Trump characterized AI as “critical to the future of American competitiveness… David will focus on making America the clear global leader…” We expect the Administration to focus on national security issues, including export controls where the technology could be used in military applications by non-U.S. governments.

What’s Next?

In contrast to the deregulatory approach at the federal level, a number of states have already passed legislation relating to the use of AI, particularly in the consumer space, including laws relating to data use, consent, and disclosures. Additionally, state Attorneys General, particularly in “blue states,” have expressed concern about “high-risk” AI that can negatively affect consumers’ access to financial goods and services and employment opportunities. With the growing use of AI, we expect more activity at the state level.