Executive Order ‘Ensuring a National Policy Framework for Artificial Intelligence’
On December 11, 2025, President Donald J. Trump signed an Executive Order (the "EO") entitled "Ensuring a National Policy Framework for Artificial Intelligence."
Aimed at establishing a unified national policy framework for AI, the EO attempts to significantly restrict states from regulating AI in ways that are "onerous and excessive" or that conflict with federal priorities, including America's AI innovation, leadership, and global dominance. The EO's stated goal is to reduce "cumbersome" state regulations that "stymie" innovation.
Targeting of State Laws
The main objective of the EO is to prevent a patchwork of state laws, which may introduce overlapping, incongruous, or burdensome compliance requirements that could slow AI innovation, hinder U.S. competitiveness, and impose costs that may be particularly challenging for start-up companies. These state-level requirements, according to the President, place U.S. companies at a disadvantage compared with their international competitors.
Through the EO, the Administration also intends to challenge the legality of state laws on a variety of bases:
- The EO targets state laws that arguably "requir[e] entities to embed ideological bias within [AI] models." As an example, the EO specifically references the Colorado AI Act, which prohibits "Algorithmic Discrimination," defined as any condition in which use of an AI system results in unlawful differential treatment or impact based on an individual's protected status.
- The EO also appears poised to target state-level AI regulation that extends beyond a state's borders in a manner that arguably infringes on interstate commerce.
Artificial Intelligence Litigation Task Force
The EO directs the U.S. Attorney General to form an "AI Litigation Task Force" to challenge state AI laws that are inconsistent with the goal of the "United States' global AI dominance." Further, the EO calls upon the Secretary of Commerce to identify, by March 11, 2026, potentially unconstitutional state AI laws and other regulations that should be considered for challenge by the AI Litigation Task Force. States identified as having onerous laws may also be ineligible for federal Broadband Equity, Access, and Deployment (BEAD) funding.
Carve-out Exceptions
Notably, the EO indicates that the resulting framework must protect children, prevent censorship, respect copyrights, and safeguard communities. Specifically, the EO directs that legislative recommendations should not include proposals to preempt state AI laws relating to child safety protection, AI computer and data center infrastructure, state government procurement of AI, and “other topics as shall be determined.” This carve-out language suggests the potential for further negotiation regarding where the federal AI framework should govern versus state AI regulation.
Considerations for Healthcare & Life Sciences Companies
While this EO aims to shift policymaking from states to the federal government, healthcare and life sciences companies developing or implementing AI should continue to develop AI governance, risk management, and contracting approaches to ensure proper compliance with existing federal and state law.
Next Steps for Employers
In light of this EO, employers should continue to:
1. Develop Comprehensive AI Governance Programs that Comply with Existing State AI Laws, Anti-Discrimination Statutes, and Industry-Specific Regulations
Employers who have invested in robust governance frameworks—including algorithmic impact and risk assessments, transparency protocols, and bias testing—will be better positioned to defend against potential litigation, as states, advocacy groups, and trade associations are likely to challenge the EO, and individual plaintiffs continue to pursue claims under applicable anti-discrimination statutes.
Good governance measures—including those that follow NIST, CISA, and similar federal guidelines—not only increase compliance with current state and local AI requirements that are still in full force and effect (many of which expressly reference federal compliance standards), but also demonstrate good-faith efforts to prevent discriminatory outcomes, which remains a legal obligation under longstanding federal and state employment laws, as well as civil rights laws.
2. Conduct Internal Workplace AI Audits and Assessments
Employers should conduct regular audits of all workplace AI to ensure the tools are functioning as intended and are not causing disparities across protected categories. Effective AI audits and assessments could serve as useful defensive evidence in potential discrimination litigation.
3. Ensure Compliance with Federal, State, and Local Laws, Regulations, and Guidance
Employers should frequently check for updates on the ever-changing legal landscape in the AI space and consult with counsel on best practices to remain compliant with the law.
EBG attorneys continue to actively monitor AI legal developments and have considerable experience guiding companies in the areas of AI compliance, as well as litigating AI-related policy and impact matters. Please contact your EBG attorney for more information.