White House Releases Long-Awaited Artificial Intelligence Framework, Setting the Stage for Federal Preemption Debate and Further Legislative Action

Key Points
- The White House has released its long-awaited AI legislative recommendations, which serve as a legislative roadmap organized around seven broad policy goals spanning multiple Congressional committees.
- The framework reiterates the need for a preemptive national standard addressing AI development, unnecessary limits on use, and liability for third-party misuse.
- While key Republican lawmakers remain supportive of federal preemption of state AI laws, significant Democratic opposition, particularly among Members serving on panels of jurisdiction, may complicate the path forward in Congress, especially given the razor-thin GOP majority in the House.
Introduction
On March 21, 2026, the White House released its Legislative Recommendations for a National Policy Framework for AI (AI Framework), which reiterates the need for a preemptive national standard addressing AI development, unnecessary limits on use, and liability for third-party misuse. The AI Framework is a legislative roadmap organized around seven broad policy goals: (1) protecting children and empowering parents; (2) strengthening AI infrastructure, security and economic access; (3) respecting intellectual property (IP) and creator rights; (4) preventing censorship and protecting free speech; (5) removing barriers to AI innovation; (6) educating Americans and developing an AI-ready workforce; and (7) establishing a preemptive federal policy framework. These priorities span multiple Congressional committees, including the House Energy and Commerce (E&C) and Senate Commerce Committees; House Science, Space, and Technology Committee; House Oversight and Government Reform Committee; House and Senate Judiciary Committees; House and Senate Homeland Security Committees; House and Senate Small Business Committees; House Education and the Workforce Committee; and Senate Health, Education, Labor and Pensions (HELP) Committee. Key federal agencies and offices would also play a central role in advancing and implementing these efforts, including the White House Office of Science and Technology Policy (OSTP), National Institute of Standards and Technology (NIST), National Science Foundation (NSF), Federal Trade Commission (FTC), U.S. Department of Energy (DOE), U.S. Department of Justice (DOJ), U.S. Department of War (DoW) and Small Business Administration (SBA), among others.
The AI Framework follows President Trump’s December 2025 Executive Order (EO) establishing a coordinated federal effort to block or override burdensome state AI regulations (see prior alert here). The order directed the White House Office of Legislative Affairs to prepare legislative recommendations for Congress, with Special Advisor for AI and Crypto David Sacks playing a central role. In the absence of a unified national framework, a state-by-state regulatory landscape has emerged—underscored by the 2025 surge in AI legislation across all 50 states and new 2026 compliance regimes in jurisdictions such as California, New York and Colorado.
Prior to the issuance of the EO, President Trump urged lawmakers to include preemption of state AI laws in the fiscal year (FY) 2026 National Defense Authorization Act (NDAA). Despite reported support from House GOP Leadership, this proposal was quickly opposed by House Armed Services Committee (HASC) Chair Mike Rogers (R-AL) and Senate Armed Services Committee (SASC) Ranking Member Jack Reed (D-RI), and federal preemption of state AI laws was ultimately excluded from the final package (see prior alert here).
However, key Republican lawmakers have remained supportive of preemption of state AI laws, including Senate Commerce Committee Chair Ted Cruz (R-TX), who stated, “I look forward to working with the White House and members of the Commerce Committee to advance meaningful AI legislation that safeguards free speech, establishes regulatory sandboxes, protects children and provides a national standard for AI in the United States.” House Speaker Mike Johnson (R-LA), House Majority Leader Steve Scalise (R-LA), House E&C Chair Brett Guthrie (R-KY), House Judiciary Chair Jim Jordan (R-OH), and Science, Space, and Technology Chair Brian Babin (R-TX) have indicated that Republicans are willing to work with Democrats to “enact a national framework that unleashes the full potential of AI, cements the U.S. as the global leader and provides important protections for American families.”
Sen. Marsha Blackburn (R-TN), who recently released her own discussion draft of legislation to codify President Trump’s AI preemption EO, the TRUMP AMERICA AI Act, has outlined the need to focus on legislation that can pass both chambers. The sweeping TRUMP AMERICA AI Act would establish protections around safety, intellectual property and platform accountability (section-by-section summary available here). The framework notably includes the Kids Online Safety Act (KOSA; S. 1748), which would require social media platforms to implement tools and safeguards to protect users and visitors under the age of 17, as well as the NO FAKES Act (S. 1367), which would hold AI companies liable for use of a creator’s name, image and likeness without their consent.
Key Democrats, however, continue to voice concern about preemption of state AI laws, including members of the newly convened House Democratic Commission on AI and the Innovation Economy. Commission Co-Chair Josh Gottheimer (D-NJ), who also serves as Vice Chair of the Problem Solvers Caucus focusing on bipartisan solutions, swiftly criticized the AI Framework as lacking meaningful accountability, emphasizing that federal preemption is only appropriate if it replaces state regimes with a comprehensive and protective national standard. He argued that the framework falls short, citing the need for enforceable guardrails, workforce solutions, stronger STEM incentives and enhanced protections against deepfakes and unsafe AI systems. Further, Commission Co-Chair Valerie Foushee (D-NC) echoed those concerns, warning that the AI Framework “lacks meaningful guardrails” and overlooks AI’s real-world impacts on jobs, communities and resources, while taking “the wrong approach” by limiting state and local authority.
Moreover, House Democrats, including those serving on key committees of jurisdiction like the House E&C Committee, have joined Sen. Brian Schatz (D-HI) in introducing the GUARDRAILS Act (H.R. 8031) to repeal President Trump’s AI preemption EO. This growing Democratic opposition to preemption, particularly among Members serving on panels of jurisdiction, may significantly complicate the path forward in Congress, especially given the razor-thin GOP majority in the House.
A summary of the AI Framework and related legislative efforts of interest in the 119th Congress is below.
Summary of AI Framework
1. Protecting Children and Empowering Parents
This Section of the AI Framework calls on Congress to (1) equip parents with tools to manage children’s privacy, screen time, content exposure and accounts; (2) establish privacy-protective age assurance requirements for AI services likely accessed by minors; (3) require AI platforms and services likely to be accessed by minors to implement safety features to mitigate risks such as sexual exploitation and self-harm; (4) clarify that existing child privacy protections apply to AI, including limits on data use for training and targeted advertising; and (5) preserve states’ ability to enforce child protection laws, including those addressing AI-generated child sexual abuse material (CSAM).
On March 5, 2026, the House E&C Committee advanced the Kids Internet and Digital Safety (KIDS) Act (H.R. 7757)—a consolidated package of 12 kids’ safety bills that could potentially map to the above-mentioned goals. The package includes the Kids Online Safety Act (KOSA; H.R. 6484), which requires parental tools to manage privacy, purchases and time spent, controls over recommendation systems and content exposure, and default high-protection settings for minors. The House version of KOSA notably removes the duty of care provision included in the Senate version of the bill (S. 1748). The E&C-approved package also includes the SCREEN Act, which would require websites where a substantial portion of content is sexual material harmful to minors to implement technology-based age verification measures to determine whether users are likely minors and prevent minors from accessing that content, as well as the Safeguarding Adolescents From Exploitative BOTs Act (SAFE BOTs Act; H.R. 6489), which would regulate chatbots used by minors by requiring chatbot providers to clearly disclose that the system is AI and not a human, and to provide suicide and crisis hotline resources when a minor raises suicide or self-harm topics.
The E&C Committee also approved, via a 36-16 vote, Sammy’s Law (H.R. 2657), which would require “large social media platforms” to provide real-time APIs that allow a child, or a parent/guardian, to authorize an FTC-registered third-party safety software provider to manage the child’s online interactions, content, and account settings on the same terms as the child. Finally, the Committee approved, via a 26-23 vote, the App Store Accountability Act (H.R. 3149), which would set a national framework requiring app stores with over five million U.S. users to verify users’ age categories and, for minors, obtain verifiable parental consent (via a linked parental account) before app downloads, purchases, or in-app purchases, using clear disclosures about data practices, content and age ratings.
The E&C Committee refrained from voting on the House version of the Children and Teens’ Online Privacy Protection Act (COPPA 2.0; H.R. 6291) as the Senate simultaneously passed its version (S. 836) by unanimous consent. The measure would ban online platforms from collecting personal information from teenagers aged 13 to 16 without their consent. It would also provide parents and children with new safety and privacy tools and require websites to create an “eraser button” for parents to delete their children’s personal data.
In the Senate, Senate Commerce Committee Chair Ted Cruz (R-TX) has said that he hopes to advance legislation aimed at strengthening online protections for children in the coming weeks, including KOSA, COPPA 2.0, and the Kids Off Social Media Act (KOSMA; S. 278/H.R. 7433), which would prohibit social media platforms from knowingly allowing children under the age of 13 to create or maintain accounts.
2. Strengthening AI Infrastructure, Security and Economic Access
This Section directs Congress to: (1) protect residential ratepayers from increased electricity costs driven by AI data center expansion; (2) streamline federal permitting to accelerate AI infrastructure buildout, including on-site and behind-the-meter power generation; (3) strengthen law enforcement efforts to combat AI-enabled scams and fraud targeting vulnerable populations; (4) ensure national security agencies have the technical capacity to assess and mitigate risks from frontier AI models in coordination with developers; and (5) expand access to AI through grants, tax incentives and technical assistance for small businesses.
Lawmakers have increasingly examined the strain that large-scale AI data center expansion could place on the electric grid and the potential for costs to be shifted onto residential ratepayers, with some proposals and industry commitments emphasizing that developers, not consumers, should bear the costs of new capacity. At the same time, there is growing bipartisan interest in streamlining federal permitting processes to accelerate the buildout of AI infrastructure and associated energy resources, including support for on-site and behind-the-meter power generation to reduce grid congestion and improve reliability.
Sen. Blackburn’s TRUMP AMERICA AI Act proposes codifying the Ratepayer Protection Pledge, directing the Secretary of Energy to enter into agreements with owners and operators of data centers to protect consumers from rate increases and adverse impacts of data center development. Should a covered entity decline to enter into such an agreement, it would be deemed ineligible for “such Federal incentives and assistance, including loans, loan guarantees, grants, tax incentives, land incentives and other incentives and assistance, as the Secretary shall identify.”
Lawmakers in the 119th Congress have also focused heavily on AI-enabled fraud and consumer protection, advancing bills such as the AI Scam Prevention Act (S. 3495) and the QUIET Act (H.R. 1027/S. 3354) to combat impersonation scams and protect vulnerable populations.
At the same time, Congress has prioritized national security and government capacity, with proposals like the AI Talent Act (H.R. 6573) and the AI Risk Evaluation Act (S. 2938) aimed at strengthening federal expertise and improving the government’s ability to assess risks from advanced AI systems. In parallel, several bills (discussed in further detail below) seek to expand access to AI by providing small businesses with training, technical assistance and support through existing institutions.
3. Respecting IP and Creator Rights
This Section of the AI Framework calls on Congress to refrain from interfering with ongoing judicial determinations on whether AI training on copyrighted material constitutes fair use, while expressing the Administration’s view that such training is lawful. It also calls on Congress to: (1) explore licensing or collective rights frameworks that allow creators to negotiate compensation from AI developers without triggering antitrust liability; (2) consider a federal regime to protect individuals from unauthorized use of AI-generated digital replicas of their voice or likeness, with safeguards for First Amendment–protected uses; and (3) monitor evolving copyright law and assess whether additional legislative action is needed to address gaps created by AI.
Most notably, the NO FAKES Act (S. 1367/H.R. 2794) would create a federal right of publicity protecting individuals from unauthorized AI-generated replicas of their voice, likeness, or identity, and allow individuals to bring civil actions against misuse and seek damages and injunctive relief. The measure includes specific exclusions intended to comply with First Amendment protections for free speech.
4. Preventing Censorship and Protecting Free Speech
This Section calls on Congress to (1) prohibit federal agencies from pressuring technology and AI providers to moderate or alter content based on partisan or ideological considerations, and (2) establish clear mechanisms for individuals to seek redress when government actions improperly influence or censor expression on AI platforms.
In recent months, Congressional Republicans, particularly in the Senate, have increasingly focused on concerns over “jawboning,” or the practice of federal agencies pressuring technology platforms to moderate or suppress lawful speech. Senate Commerce Committee hearings in 2025 examined whether agencies such as the Cybersecurity and Infrastructure Security Agency (CISA) improperly influenced content moderation decisions, raising broader questions about government authority, free expression and platform governance. Lawmakers have introduced measures such as the Transparency in Bureaucratic Communications Act (S. 66), which would require agency inspectors general to include in their semiannual reports to Congress information on communications between their agencies and online platforms, including details on the content and context of such interactions, particularly those related to content moderation, specific online content, or platform technologies such as algorithms and data systems, along with related proposals tied to Section 230 reform and broader free speech protections.
5. Removing Barriers to AI Innovation
This Section of the AI Framework calls on Congress to (1) establish regulatory sandboxes to promote AI innovation and leadership; and (2) expand access to federal datasets in AI-ready formats for use by industry and academia. It also emphasizes that no new federal AI regulator should be created, instead directing oversight to existing sector-specific agencies and encouraging industry-led standards to support AI development and deployment.
Chair Cruz has previously introduced the SANDBOX Act (S. 2750) as the first step in a broader legislative framework to promote American leadership in AI. The Act aims to create a regulatory sandbox to give AI developers space to test and launch new AI technologies. Under the bill, AI deployers and developers would apply to modify or waive regulations that could impede their work. OSTP would coordinate with relevant federal agencies to evaluate requests under their purview. Congress would collect regular reports on how often rules were waived or modified to better inform future policy decisions and the regulatory structure applicable to AI.
With respect to the second directive, Senate Commerce Ranking Member Maria Cantwell (D-WA) and Sens. Todd Young (R-IN) and Marsha Blackburn (R-TN) have reintroduced the Future of AI Innovation Act (S. 3952), a revised version of earlier legislation that previously sought to codify the AI Safety Institute at NIST. The updated bill aligns with the Commerce Department’s rebranded Center for AI Standards and Innovation (CAISI), shifting the focus from AI safety toward innovation, standards development and industry collaboration. The legislation would authorize AI testbeds at national laboratories, establish grand challenge prize competitions to spur private-sector innovation and expand the use of publicly available datasets for AI research.
Further, the CREATE AI Act (H.R. 2385), introduced by Rep. Jay Obernolte (R-CA), would establish the National Artificial Intelligence Research Resource (NAIRR), a national program administered by the NSF to provide U.S. researchers, educators, and students with access to AI data, computational resources, educational tools and testbeds. By enabling access to these resources, including data contributed by federal agencies and the private sector, the program would expand the availability of AI-relevant datasets and infrastructure for use by academia and other eligible entities.
6. Educating Americans and Developing an AI-Ready Workforce
This Section calls on Congress to (1) use non-regulatory approaches to integrate AI training into existing education and workforce programs; (2) expand federal research on AI-driven workforce shifts to inform policy; and (3) strengthen land-grant institutions’ capacity to provide technical assistance, demonstration projects, and youth AI development initiatives.
With respect to integration of AI training into education and workforce programs, several bipartisan small business proposals have been introduced, including the Small Business AI Advancement Act (H.R. 3679), which passed the House in February 2026 and would direct NIST to develop or identify generally applicable, technology-neutral resources for small businesses to address concerns relating to the use of AI. The bipartisan, bicameral AI for Mainstreet Act (S. 3586/H.R. 5764), which passed the House in January alongside the AI-WISE Act (H.R. 5784), would also direct the SBA’s Small Business Development Centers (SBDCs) to help small businesses evaluate and adopt AI by providing guidance, training and outreach. The Small Business AI Training Act (S. 3888) would authorize the U.S. Department of Commerce to work with the SBA to create and distribute AI training resources and tools to help small businesses leverage AI in their operations. More broadly, the proposed AI Workforce Training Act (H.R. 7576) would allow companies to claim a 30% tax credit on qualified expenses for training employees in AI.
Several bipartisan proposals to expand federal research on workforce shifts have been introduced this Congress, including the AI Workforce PREPARE Act (S. 3339), which would establish an AI Workforce Research Hub to improve understanding of how AI affects jobs, including enhancing data collection, expanding access to workforce data and supporting research through pilots, prize competitions and public-private partnerships.
Further, the bipartisan, bicameral NSF Artificial Intelligence Education Act (S. 3957/H.R. 5351) aims to expand AI education and workforce development by funding scholarships, professional training and community college centers of excellence, while also supporting AI research and extension programs, particularly through land-grant institutions, to advance applications in sectors such as agriculture and manufacturing. The Land Grant Research Prioritization Act (H.R. 7734) would expand federal agricultural research and extension priorities by authorizing grants, primarily through land-grant universities, to support projects in areas such as AI applications in agriculture, mechanized harvesting technologies, invasive species management, and aquaculture.
7. Establishing a Preemptive Federal Policy Framework
While the AI Framework reiterates the need for a preemptive standard, including with respect to AI development, unnecessary limits on use, and liability for third-party misuse, it clarifies that a national standard should not preempt the traditional police powers retained by the states to enforce laws of general applicability against AI developers and users, nor state zoning laws or requirements governing a state’s own use of AI.
Conclusion
Akin’s lobbying & public policy team continues to advise clients on navigating the evolving AI regulatory landscape and will closely track implementation of the AI Framework and keep clients apprised of key developments.