Originally published December 18, 2025.
Executive Summary
President Trump issued an Executive Order aimed at discouraging state artificial intelligence (AI) regulation through federal agency action and funding conditions, without directly preempting state law. Our Privacy, Cyber & Data Strategy Team discusses the resulting uncertainty and what businesses should watch as agencies begin implementing the Order.
- The Order relies on indirect federal measures, not express preemption, to constrain state AI regulation
- State laws addressing AI outputs, disclosures, and alleged bias may face increased scrutiny
- Businesses should continue complying with applicable state AI laws while monitoring federal agency actions
On December 11, 2025, following several unsuccessful congressional attempts to pass a statutory moratorium on state-level artificial intelligence (AI) regulation, President Trump signed an Executive Order that seeks to limit states’ ability to regulate AI under their existing legal frameworks and to deter them from passing new AI laws. Per the Order, the Trump Administration sees the United States as locked in an AI arms race with its adversaries, a race it seeks to win, and argues that burdensome state-level AI regulation could impede American innovation, competitiveness, and national security in that effort. The Order also takes the position that a state-by-state “patchwork” of AI regulation imposes challenging compliance burdens, mandates ideological bias within models, and impermissibly regulates beyond state borders.
The Order seeks to encourage a “minimally burdensome national policy framework for AI” through a variety of measures. Because an Executive Order binds only federal agencies, this raises a key gating question: how, if at all, can an Executive Order affect the effectiveness of state law or the state lawmaking process? Notably, the Order does not purport to preempt state law. Instead, it adopts various indirect measures, to be implemented by federal agencies, intended to encourage states not to enforce or pass overly burdensome AI regulation, or to penalize them if they do. It also directs certain agencies to use their existing authorities in ways that may preempt state AI laws.
Summary of the Order
The Order takes several actions in pursuit of its goal of a “minimally burdensome” regulatory framework for AI. These broadly fit into two categories. First, the Order establishes a framework through which the Trump Administration can challenge state AI laws or incentivize states not to pass or enforce AI laws. Second, the Order instructs certain agencies to implement policies that the Administration hopes may preempt state AI laws.
Key actions established by the Order include:
- DOJ AI Litigation Task Force. The Order directs Attorney General Pam Bondi to create an “AI Litigation Task Force” within the Department of Justice (DOJ). Its task is to challenge state AI laws that are inconsistent with the Administration’s policy or with the goal of U.S. “global AI dominance.” It remains unclear which specific AI statutes and regulations would be challenged, or on what basis; the Order’s language appears to give the DOJ broad discretion.
- Evaluation of “Onerous” State AI Laws. The Order directs Secretary of Commerce Howard Lutnick to publish a report identifying “onerous” existing state AI laws. Per the Order, laws will be deemed onerous if they (1) require AI models to alter truthful outputs; or (2) require AI developers or deployers to engage in impermissible compelled speech or otherwise “disclose or report information in a manner that violates the First Amendment or any other provision of the Constitution.”
- Restrictions on Broadband Funding. The Order also directs the Department of Commerce (DOC) to issue a policy notice tying states’ receipt of federal broadband funding to their “onerous” AI practices. Specifically, the DOC is directed to specify the conditions under which states that pass “onerous” AI legislation become ineligible for federal broadband funding. This is conceptually similar to the AI moratoria previously proposed in Congress, which would have conditioned a state’s receipt of federal broadband funding on its agreement not to regulate AI. The Order maintains that link, but it will now rest on a DOC policy statement coupled with a DOC finding that a state is engaged in “onerous” AI regulation. The Order suggests that any state the DOC identifies in its report on “onerous” state AI laws may be deemed ineligible for federal broadband funding.
- Restrictions on Other Discretionary Spending. Other executive departments and agencies are ordered to assess their discretionary grant programs to determine whether they can condition discretionary funding on states not passing AI laws. For states that have already passed AI laws, agencies are instructed to seek binding agreements tying those states’ receipt of discretionary funding to a commitment not to enforce their AI laws. In these efforts, federal agencies must work together with Special Advisor for AI and Crypto David Sacks.
- Agency-Created Preemption. The Order directs the Federal Communications Commission to draft a federal reporting and disclosure standard to preempt conflicting state laws that regulate AI. It also directs the Federal Trade Commission (FTC) to publish a policy statement outlining how the FTC Act’s prohibition on unfair and deceptive trade practices preempts any state laws that require alterations to the truthful outputs of AI models. It remains to be seen whether these policy statements could in fact have preemptive effect.
Analysis of Impact on State AI Laws
The Order does not define what constitutes “minimally burdensome” or “onerous” AI regulation, giving wide interpretive discretion to the various agencies within the Administration empowered to enforce the Order (and leaving uncertainty for affected businesses). However, it offers clues about the types of AI laws the Administration is likely to prioritize in future actions.
First, the Order takes aim at state AI laws that require models to “alter their truthful outputs.” The Order explicitly calls out the Colorado AI Act and its provisions banning “algorithmic discrimination,” taking the position that the Act and similar laws may induce AI models to produce “false results in order to avoid a ‘differential treatment or impact’ on protected groups.” Although not named in the Order, similar state AI laws banning algorithmic discrimination or bias, such as Illinois’s HB 3773, or new California privacy regulations on “automated decisionmaking technology,” may also be in the crosshairs.
Second, the Order explicitly references state AI laws that require disclosure or reporting of information in violation of the First Amendment or other constitutional provisions. As an example of a law that may be targeted, California recently passed the Transparency in Frontier AI Act, a first-in-the-nation “frontier” AI regulation requiring developers of powerful AI models to publish a safety framework and report certain safety incidents to regulators. This law, and similar legislation that imposes significant safety obligations on, or requires publication or disclosure of information by, frontier model developers (e.g., New York’s Responsible AI Safety & Education (RAISE) Act) and other AI companies, may be a target of the Order.
In citing the First Amendment as a potential bar to such statutes, the Trump Administration may be thinking of prior constitutional challenges on “compelled speech” grounds. One example was a successful challenge to California’s Age-Appropriate Design Code, which required companies that provide digital services “likely to be accessed by minors” to draft detailed “data protection impact assessments” (DPIAs) and produce them to regulators upon request. The Ninth Circuit found that the DPIAs constituted unconstitutional compelled speech and that the requirement deputized businesses to become “censors for the state.”
The Order’s broad, discretionary language creates significant uncertainty about which state AI laws the Administration may challenge under the Order. For example, the AI Litigation Task Force is directed to sue states with AI laws “on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful in the Attorney General’s judgment.” This language gives the Administration wide discretion in enforcing the Order. At the same time, in its legislative recommendation provision, the Order expressly excludes laws that relate to children’s safety, AI computing and data center infrastructure, and state government procurement and use of AI, which suggests these types of laws may not be targeted.
Reactions and Legal Challenges
The Order has been praised by various industry groups, such as the Consumer Technology Association, while also receiving pushback from advocacy groups like the American Civil Liberties Union. It remains to be seen whether the Order itself will face legal challenges, or whether challenges will instead be reserved for the agency actions it requires, such as a DOC policy withholding federal broadband funding from states that pass “onerous” AI laws or an FTC policy statement asserting that states cannot apply their unfair or deceptive acts and practices (UDAP) statutes to AI.
The Order also calls on Congress to establish a single “minimally burdensome national standard” and tasks Sacks and Assistant to the President for Science and Technology Michael Kratsios with drafting a proposed federal AI statute, including its preemption provisions. Congressional action would ameliorate many of the legal concerns that stem from broad unilateral executive action; however, skepticism about federal deregulation of AI is growing among Republicans, both on Capitol Hill and in state governments.
What Should Businesses Do?
Governors in California, Colorado, and New York have issued statements indicating that the Order will not stop their states from passing or enforcing AI statutes and regulations. The DOC’s report on “onerous” state AI laws will not be issued until spring 2026, and it has no effect on its own; any impact on the effectiveness of state AI statutes will require challenges by the DOJ, agreements between states and executive agencies, or similar resolutions with significant lead times. Businesses should continue endeavoring to comply with AI laws, rules, and regulations that may apply to their operations.
We will continue to monitor developments arising from the Order, including any legal challenges and the complex state AI law landscape. Please contact our team if you have questions about the impact of the Order or the applicability of state AI laws to your company.
Executive Order, Action & Proclamation Task Force
Alston & Bird’s multidisciplinary Executive Order, Action & Proclamation Task Force advises clients on the business and legal implications of President Trump’s Executive Orders.
Learn more about administrative actions on our tracker.
If you have any questions, or would like additional information, please contact one of the attorneys on our Privacy, Cyber & Data Strategy team.