Executive Order N-5-26: AI Certification Standards

On March 30, 2026, California Governor Gavin Newsom issued Executive Order N-5-26 (the “Order”), directing California state agencies to create new artificial intelligence1 (AI) vendor certification standards, reshape the state’s procurement process for AI technologies and build out the state’s AI governance infrastructure. The Order, which applies to vendors nationwide that seek to do business with California state agencies, is forward-looking and does not direct agencies to re-evaluate existing contracts.
While the Order does not have the force of law, it attempts to leverage California’s purchasing power to “shape market behavior,” even as recent federal initiatives have sought to roll back AI regulation and limit states’ abilities to set AI policy.
The Order’s Directives
AI Vendor Certification Requirements
Within 120 days, the Order directs the Department of General Services (DGS) and the California Department of Technology (CDT) to develop certification criteria requiring AI vendors seeking to contract with the State of California to “attest to and explain their policies and safeguards” in three priority areas:
- “Exploitation or distribution of illegal content, such as child sexual abuse material and non-consensual intimate imagery,”
- “Utilization of models that display harmful bias or lack governance to reduce the risk of such harmful bias,” and
- “Violation of civil rights and civil liberties such as free speech, voting, human autonomy, and protections against unlawful discrimination, detention, and surveillance.”
For vendors, these certification requirements may require coordination across product, engineering, compliance, and other teams.
Independent Review of Federal Supply Chain Risk Designations
The Order authorizes the CDT’s State Chief Information Security Officer (CISO) to independently assess federal AI “supply chain risk determinations”—a statutory term referring to the risk that an adversary may sabotage a U.S. national security system. Where appropriate, the Order instructs the CISO to facilitate continued procurement by California agencies notwithstanding federal restrictions based on such designations. The Order also empowers the CISO to “review other federal procurement changes to assess whether they improperly restrict procurement and to recommend appropriate measures in response.”
The Order’s directive for the CISO to review, and potentially act independently of, federal national security designations arguably raises questions of federal preemption, among other issues. The Order does not address how California might respond to such a challenge or provide guidance for vendors that operate in both state and federal markets.
Reforms to Contractor Responsibility Standards
Within 120 days, the Order instructs the Government Operations Agency (GovOps) to consult with DGS and CDT to recommend “reforms to contractor responsibility provisions” that would establish grounds to suspend or disqualify vendors that have been “judicially determined to have unlawfully undermined privacy or civil liberties, as applicable, such as but not limited to freedom of speech, voting and protections from unlawful discrimination and surveillance.”
Public-Facing Reforms
Within 120 days, the Order directs GovOps to coordinate with several agencies to adopt AI tools and practices in line with the Order. These deliverables include:
- “Facilitat[ing] employee access to vetted GenAI [generative AI] tools for general use cases,”
- Sharing procurement and governance best practices,
- Updating the State Digital Strategy to identify GenAI use cases,
- Piloting a GenAI-enabled public-facing services platform “organized by life event, such as disaster relief, starting a business, and finding a job,”
- Expanding AI training for the state workforce, and
- Publishing a “data minimization toolkit” for departments and agencies handling sensitive information.
Watermarking
Within 120 days, the Order directs CDT and GovOps to develop best-practice guidance for agencies to watermark “AI-generated or significantly manipulated images or video.” This guidance must comport with the requirements outlined in California Business & Professions Code sections 22757.2 and 22757.3, which impose transparency and provenance obligations on large GenAI providers, including AI detection tools and disclosures about AI-generated content.
Legislative and Regulatory Context
This Order marks the latest point of divergence between the federal government's and California’s approaches to AI policy and regulation. In September 2023, Newsom issued Executive Order N-12-23, directing state agencies to develop guidelines and recommendations for a “safe and responsible innovation ecosystem that puts AI systems and tools to the best uses for Californians.” And on January 1, 2026, California’s “Transparency in Frontier AI Act” took effect, requiring large AI developers to adopt safety frameworks, publicly report on their AI risk-management practices and report certain “critical safety incidents” to the state.
Concurrently, the federal government has promoted deregulation of AI. In July 2025, the White House released its “AI Action Plan,” which called for the removal of “red tape and onerous regulation” of AI and asserted that American “AI systems must be free from ideological bias” (see prior alert here). In December 2025, the White House issued an executive order directing federal agencies to block or override “burdensome” state AI regulations and state laws that “require entities to embed ideological bias within [AI] models” (see prior alert here).
Newsom’s Order responds directly to these recent federal measures regarding AI. In February 2026, the Pentagon declared the AI company Anthropic a supply chain risk, marking the first time this designation, which has historically been reserved for entities associated with adversarial nations, was applied to an American company. This designation required U.S. defense contractors to certify that they do not use Anthropic’s product in their military work. U.S. District Judge Rita Lin recently enjoined the U.S. Department of War and other federal agencies from implementing the President’s order to cease all use of Anthropic’s technology and from implementing the Pentagon’s supply chain risk designation, among other things. Newsom’s Order, as noted, directs California agencies to independently assess federal AI supply chain risk determinations.
Newsom’s Order is also in tension with the White House’s recently released National Policy Framework for Artificial Intelligence, which reiterated the need for a preemptive national standard related to AI development in the face of “burdensome” state AI laws, including laws that “dictate right and wrong-think” (see prior alert here). On the heels of this framework, Newsom doubled down on California’s approach, declaring in the March 2026 Order that “no state has taken more aggressive action to strengthen the safety and security of technology and online platforms” and identifying the prevention of the misuse of AI that may “display harmful bias” as a core concern.
Newsom’s Order places AI vendors in the crosshairs of competing federal and state AI regimes. Notably, both regimes have so far taken shape primarily through executive orders and legislative recommendations rather than comprehensive legislation. Nevertheless, businesses that supply AI systems or services to California agencies may soon be required to “attest to and explain their policies and safeguards” relating to Newsom’s policy priorities, as named in his Order, even as federal policies and procurement practices pull in the opposite direction. Pending judicial resolution or further policy clarification, vendors may need to navigate both regimes simultaneously.
Looking Ahead
We expect California agency recommendations by late July 2026 and will report on them as they become available.
If you have questions about how this Order may affect your business, we welcome you to reach out to our team.
1 California Assembly Bill 2885 amended the Government Code to codify a standardized definition of AI for use across state agencies. The statute defines AI as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.” Cal. Gov’t Code § 11546.45.5(a)(1).