House/Senate Advance AI Provisions Via NDAA, Kicking Off Conference Process

Introduction
On October 9, 2025, the Senate passed its National Defense Authorization Act for Fiscal Year 2026 (NDAA; S. 2296) by a vote of 77 to 20. Notably, the Senate-passed package, which also folds in the FY 2026 Intelligence Authorization Act (IAA), includes the GAIN AI Act, a provision that would require chipmakers to give U.S. customers priority access to advanced semiconductors before selling abroad. The House version of the NDAA (H.R. 3838), approved on September 10, 2025, does not include the GAIN AI Act; the provision, which Senate lawmakers continue to review, will therefore need to survive conference negotiations.
The broader artificial intelligence (AI) provisions in the House and Senate bills overlap substantially in their objectives but differ in focus and approach. The Senate bill largely concentrates on frameworks, governance and industry partnerships that span the U.S. Department of Defense (DoD), including specific directives for AI model oversight, digital sandbox environments and securing AI systems against foreign adversaries. In contrast, the House bill emphasizes institutional capacity-building and operational applications, including the establishment of pilot programs for generative AI (GAI) in training and maintenance, AI integration into business processes and comprehensive cybersecurity and AI/machine learning (ML) security policies. While both bills target workforce development, AI governance and risk mitigation, the Senate provisions lean more heavily toward cross-agency coordination and regulatory frameworks, whereas the House provisions focus on programmatic implementation, operational integration and research-to-application pathways.
The House and Senate must now reconcile their respective versions of the NDAA. Senate Armed Services Committee (SASC) Ranking Member Jack Reed (D-RI) indicated that leadership of the House and Senate Armed Services Committees will meet in the coming days as part of NDAA conference negotiations, and that staff continue to work on a compromise measure amid the ongoing government shutdown. Committee leaders reportedly aim to finish the conference report by Thanksgiving. Stakeholders continue to weigh in on the process, with the Business Software Alliance (BSA) recently sending a letter to House and Senate leadership recommending several changes to the bills’ AI provisions.
A full summary of the House and Senate AI provisions is below.
Summary of House/Senate AI Provisions
Senate
NDAA
- Cyberspace-Related Matters:
- United States Cyber Command AI Industry Collaboration Roadmap (Sec. 1602): Directs United States Cyber Command to, by August 1, 2026, complete development of a roadmap for industry collaboration on AI-enabled cyber capabilities for DoD cyberspace operations. The roadmap must establish a framework for coordination between the private sector and DoD, including by (1) convening U.S. commercial AI developers, cybersecurity experts and relevant Federal Government offices; and (2) facilitating information exchange on AI technology and capabilities for cyber operations.
- Public-Private Cybersecurity Partnership for Highly Capable AI Systems (Sec. 1621): Requires the Assistant Secretary of Defense for Cyber Policy to establish, within 180 days of enactment, a public-private partnership to address cybersecurity and physical security threats to advanced AI and ML systems. The partnership will serve as a formal forum for collaboration between DoD and industry to strengthen security frameworks and practices for AI systems vulnerable to state-sponsored cyber threats. Participants will include DoD components (e.g., Cyber Policy, Chief Digital and AI Officer (CDAO), National Security Agency (NSA), Defense Advanced Research Projects Agency (DARPA), U.S. Cyber Command); leading AI, cyber and telecom companies; and academic or federally funded research institutions.
- Digital Sandbox Environments for AI (Sec. 1622): Requires DoD to, by April 1, 2026, establish a Task Force co-chaired by the CDAO and the Chief Information Officer (CIO) to develop and coordinate “AI sandbox” environments across the Department. The platforms would support AI experimentation, training and model development for users of all technical levels. The Task Force will identify shared requirements, inventory existing tools, streamline approval processes and issue guidance on responsible use. A briefing to Congress is due by August 1, 2026, and the Task Force would terminate on January 1, 2030.
- AI Model Assessment and Oversight (Sec. 1623): Directs DoD to, by June 1, 2026, establish a cross-functional team led by the CDAO to create a standardized framework for evaluating, governing and coordinating AI models across the Department. The team will develop model performance and testing standards, ethical and security compliance requirements and governance structures for AI development and deployment. Functional leads for AI applications will be designated by January 1, 2027, and all major AI systems must be assessed using the framework by January 1, 2028. The team will brief Congress after each milestone, transition its duties to a successor organization by June 30, 2030 and sunset on December 31, 2030.
- DoD Ontology Governance Working Group (Sec. 1624): Directs DoD to, by June 1, 2026, establish a working group to develop and implement a common data ontology and governance structure across the Department. Led by the CDAO and CIO, the group will include representatives from military departments, combatant commands, defense intelligence entities and other relevant stakeholders. The working group will coordinate existing ontology efforts, create domain-specific and enterprise-wide ontologies, designate functional domain leads, evaluate security risks and establish governance frameworks for version control, access and integration. Key milestones include designating functional domain leads by August 1, 2026, issuing Department-level policy by June 1, 2027 and overseeing implementation by June 1, 2028.
- Modification of High-Performance Computing Roadmap (Sec. 1625): Expands the scope of the DoD high-performance computing roadmap to include both DoD-owned and maintained computing assets and commercially procured cloud services or other infrastructure-as-a-service contracts. For any new or expanded data centers on military installations, the roadmap must provide estimates of additional needs, including physical space, electricity and water usage, impacts on the installation and surrounding community, mitigation measures and strategies to prevent local utility disruptions while coordinating with local, state and federal agencies.
- AGI Steering Committee (Sec. 1626): Requires DoD to, by April 1, 2026, establish a steering committee on artificial general intelligence (AGI). The committee, co-chaired by the Deputy Secretary of Defense and the Vice Chairman of the Joint Chiefs of Staff, would analyze the development trajectory of AGI technologies, assess adversary capabilities, evaluate military applications and implications, develop an adoption strategy with ethical and policy guardrails and analyze threats from adversarial AGI use with counter-strategies.
- Physical and Cybersecurity Procurement Requirements for AI Systems (Sec. 1627): Requires DoD to develop a comprehensive, risk-based framework for implementing cybersecurity and physical security standards for covered AI and ML systems. The framework must address workforce risks, AI-specific threats and vulnerabilities, supply chain risks, adversarial tampering, data theft and security posture management, while leveraging existing frameworks, including the National Institute of Standards and Technology (NIST) Special Publication 800 series and the Cybersecurity Maturity Model Certification (CMMC) framework. Higher security levels are required for AI systems of greatest national security concern, including protection against highly capable cyber threat actors, with additional components designed specifically for advanced AI systems. The Secretary may amend the Defense Federal Acquisition Regulation Supplement (DFARS) or take similar actions to mandate adoption of best practices by covered entities. The framework must include a detailed implementation plan with timelines, resource requirements and progress metrics. The Secretary must report to congressional defense committees on implementation within 180 days of enactment. Covered AI and ML technologies include all aspects of the system lifecycle, and covered entities are those contracted by DoD to develop, deploy, store or host such systems.
- Guidance and Prohibition on Use of Certain AI (Sec. 1628): Requires DoD to mandate that all Department offices and components exclude or remove “covered AI” from all DoD systems and devices within 30 days of enactment, and to consider issuing guidance to remove AI developed by foreign adversary entities posing national security risks. Contractors with active DoD contracts are similarly prohibited from using such AI. Waivers may be granted on a case-by-case basis for scientifically valid research or national security evaluation; training, testing, counterterrorism or counterintelligence operations; or mission-critical functions. “Covered AI” includes AI developed by specific foreign companies (DeepSeek, High Flyer) and related entities, while “foreign adversary entity” includes any foreign adversary country (North Korea, China, Russia or Iran), its nationals or entities tied to it, companies with substantial foreign ownership (20% or more) or any entity that is directed or controlled by such actors.
- Roadmap for Advancing Digital Content Provenance Standards (Sec. 1629): Requires DoD to, by June 1, 2026, develop a roadmap guiding potential adoption and integration of digital content provenance capabilities across the Department. The roadmap must assess current and proposed open standards, identify strategic objectives for securing public-facing digital media, clarify roles and responsibilities across DoD components, explore standardized processes for embedding and verifying content credentials, outline acquisition approaches, develop metrics for effectiveness and establish stakeholder engagement with industry, academia and federally funded research centers.
- Enhanced Protection of Data Affecting Operational Security of DoD Personnel (Sec. 1630): Requires DoD to identify and prioritize protection of personal data related to or affecting the operational security of Armed Forces members and civilian employees, ensuring compliance with pre-existing privacy laws and practices. By June 1, 2026, the Secretary must review all applicable guidance and, if necessary, issue new or revised guidance for enhanced protection. Personal data may only be stored on non-Department servers or cloud services under authorized contracts or with data subject permission, with waivers allowed in limited national security cases. Congress must be notified within 30 days of any changes to Department issuances or certain events involving operational security data, including unauthorized storage, exfiltration, waivers or cybersecurity incidents, with the notification requirement sunsetting after five years.
- Department of Energy (DOE) National Security Programs:
- Appropriate Scoping of AI Research within the National Nuclear Security Administration (Sec. 3118): Limits use of funds for AI research, development, program execution or associated computing infrastructure within the National Nuclear Security Administration (NNSA) to activities that directly support the Administration’s nuclear security missions. Clarifies that this limitation does not prevent the establishment of enduring national security AI research and development (R&D) programs in other DOE components or federal agencies.
 
- General Provisions:
- Report on Implementation of AI Into Certain Anti-Money Laundering Investigations (Sec. 6032): Requires the Director of the Financial Crimes Enforcement Network (FinCEN), in consultation with key banking regulators (e.g., Federal Deposit Insurance Corporation (FDIC), Federal Reserve, Office of the Comptroller of the Currency (OCC), National Credit Union Administration (NCUA)), to submit a report within 180 days on the feasibility of using AI in anti-money laundering (AML) investigations targeting foreign terrorist organizations, drug cartels and other transnational criminal organizations.
 
- Export Controls for Advanced Artificial Intelligence Chips:
- GAIN AI Act—Prohibition on Prioritizing Countries of Concern Over U.S. Persons for Exports of Advanced Integrated Circuits (Sec. 6083): Requires the Bureau of Industry and Security (BIS) to mandate a license for exports, reexports or in-country transfers of advanced integrated circuits (AICs), with certain exceptions for countries listed in Country Group A:4, A:5 or A:6. License applicants exporting to “countries of concern” or to destinations under a U.S. arms embargo must (1) certify that U.S. persons had a right of first refusal, including by ensuring U.S. persons are notified of the intended sale and its terms, allowing at least 15 business days to request all or part of the product and prioritizing U.S. requests over foreign buyers; and (2) ensure that no backlog exists, that exports will not reduce U.S. production capacity and that foreign buyers are not receiving preferential terms. Applications without certification must be denied. BIS must issue regulations within 90 days establishing procedures, portals, recordkeeping, penalties and metrics to ensure compliance. “Country of concern” is defined as a country that the Director of National Intelligence assesses is hosting, or intends to host, a military or intelligence facility associated with a country subject to a comprehensive U.S. arms embargo.
 
Intelligence Authorization Act for Fiscal Year 2026
- Matters Concerning Foreign Countries:
- Additional Functions and Requirements of AI Security Center (Sec. 607): Expands the responsibilities of the NSA’s AI Security Center by requiring it to provide a subsidized research testbed for private-sector and academic AI security research, including secure access to proprietary third-party AI models with vendor consent. The Director must establish terms of use that restrict publication only as necessary to protect classified or proprietary information, and must ensure that federal agencies can access the testbed on a cost-recovery basis.
- AI Development and Usage by Intelligence Community (Sec. 608): Requires the Intelligence Community (IC) to, within one year, identify commonly used AI systems and functions that have high potential for reuse across IC elements. The Chief of AI for each IC element must implement policies to share custom-developed code, models and model weights while protecting intelligence sources and methods. The CIO of the IC will provide model contract terms to prevent vendor lock-in and encourage competition with interoperable AI products.
- High-Impact AI Systems (Sec. 609): Establishes requirements for the IC regarding high-impact AI systems and use cases. Defines a “use case” as the specific mission performed by an AI system. Within 30 days of enactment, the Office of the Director of National Intelligence (ODNI) must issue guidance to ensure consistent definitions of high-impact AI systems and use cases across all IC elements.
- Application of AI Policies of the Intelligence Community to Publicly Available Models Used for Intelligence Purposes (Sec. 610): Extends IC AI policies to publicly available AI models when used for intelligence purposes in classified environments. ODNI must ensure that existing IC AI policies apply to such models to the greatest extent possible. The Chief AI Officer (CAIO) of the IC, or a designated provider, must establish common testing standards and benchmarks for AI models across common use cases, with higher standards for high-impact use cases, including tasks with potential lethal application.
- Revision of Interim Guidance Regarding Acquisition and Use of Foundation Models (Sec. 611): Directs ODNI to revise existing IC guidance on acquiring and using foundation AI models and clarifies that evaluating training data, labeling methods and model weights does not constitute collection by the IC. The revised guidance must require that IC elements consider the data used to train any foundation model under review, including sources, labeling methods and third-party vendor functions, to inform mitigation measures, usage policies or retention practices before adoption. Additionally, IC elements must, to the greatest extent practicable, avoid using publicly available models that are found to contain information obtained unlawfully by a vendor.
- Strategy on Intelligence Coordination and Sharing Relating to Critical and Emerging Technologies (Sec. 612): Requires ODNI to, within 60 days of enactment, develop a strategy to coordinate the collection, processing, analysis and dissemination of intelligence on critical and emerging technologies across the IC.
 
House
NDAA
- Research, Development, Test and Evaluation:
- National Security and Defense AI Institute (Sec. 219): Authorizes DoD to establish at least one National Security and Defense AI Institute at an eligible U.S. host institution. The Institute would focus on addressing cross-cutting challenges and foundational AI research for national security and defense, building partnerships across public and private organizations, fostering innovation ecosystems, supporting interdisciplinary research and developing the U.S. AI workforce.
- Initiative on Studying Advanced AI, National Security and Strategic Competition (Sec. 235): Requires DoD to establish an initiative to prepare the Department to leverage advanced AI, assess its national security and defense implications and analyze strategic competition, particularly regarding China’s pursuit of advanced AI. A designated lead office within DoD would carry out the initiative’s duties, including reviewing industry assessments, engaging with AI developers and researchers, identifying strategies for AI adoption and infrastructure development, monitoring China’s AI progress and associated strategic risks, evaluating U.S. AI developer security, developing preparedness and crisis plans and recommending measures to prevent adversarial acquisition of sensitive AI.
 
- Military Personnel Policy:
- Pilot Program for GAI and Spatial Computing for Performance Training and Proficiency Assessment (Sec. 549): Requires the Navy to, within 90 days, develop and implement a pilot program to optimize the use of GAI and spatial computing for immersive training and proficiency assessment. The program would terminate one year after its establishment.
 
- General Provisions:
- Use of Technology Using AI to Facilitate Audit of the Financial Statements of DoD for FY 2026 (Sec. 1010): Directs DoD and the service secretaries to encourage, to the greatest extent practicable, the use of AI and ML technologies to support audits of the Department’s financial statements.
- Responsible Use of AI for Logistics, Intelligence, Maintenance, Cyber Defense and Other Mission Areas (Sec. 1070M): Expresses the sense of Congress that AI adoption is essential for U.S. defense readiness and competitiveness, and that DoD should expand pilot programs and deployment of AI-enabled systems to enhance decision-making, reduce costs and improve warfighter effectiveness. Requires DoD to submit a report to congressional defense committees every six months on current and planned AI integration efforts, including implementation barriers and recommendations to accelerate adoption, with the reporting requirement set to expire five years after enactment.
 
- Cyberspace-Related Matters:
- Incorporation of AI Considerations into Annual Cybersecurity Training (Sec. 1512): Requires the CIO to, within one year, revise the mandatory annual cybersecurity training for Armed Forces members and civilian employees to include content addressing the unique cyber challenges posed by AI.
- Strategy to Defend Against Risks Posed by the Use of AI (Sec. 1515): Requires DoD to submit to Congress within 180 days, and annually thereafter, a report outlining interagency policies and procedures to protect the defense industrial base, cybersecurity, supply chains and U.S. operational security from AI-enabled information espionage and cyber attacks. Within 90 days of the report, the Secretary must provide Congress with recommendations, including legislative proposals and best practices to help U.S. businesses and government entities mitigate and respond to AI-enabled cyber threats.
- Biological Data for AI (Sec. 1521): Requires DoD to, within one year, develop and implement standards ensuring that biological data generated by Department-funded research is collected and stored to enable use in advanced computational methods, including AI. The requirements must define “qualified biological data resources” based on data type, size, funding, sensitivity or other factors; establish metrics and metadata for data quality, usability and interoperability; mandate tiered cybersecurity safeguards and access controls; provide exceptions for national security; and protect individual privacy.
- AI and ML Security in DoD (Sec. 1531): Requires DoD to, within 180 days, develop and implement a Department-wide policy for cybersecurity and governance of AI and ML, including models used in national defense. The Secretary must conduct a comprehensive review of current AI/ML security practices and submit a report by August 31, 2026, assessing risks, gaps, alignment with industry frameworks and recommendations for enhancing AI/ML security and governance. Additionally, policies governing software bills of materials must apply to AI software, with implementation oversight and reporting requirements within one year.
- Pilot Program for Data-Enabled Fleet Maintenance (Sec. 1532): Requires each covered armed force (Army, Navy and Air Force) to, within 90 days, establish a pilot program using commercially available AI technologies to improve ground vehicle maintenance. The program must assess feasibility, effectiveness, cost savings and potential risks, including cybersecurity concerns.
- GAI for National Defense (Sec. 1533): Requires DoD, subject to appropriations, to implement between two and twelve GAI initiatives to enhance U.S. national security, improve DoD capabilities and accelerate AI adoption. Within 180 days, the Secretary must designate a responsible organization to manage and coordinate these efforts.
- Reports on AI Use for Business Processes (Sec. 1534): Requires DoD’s CIO to submit a report to congressional defense committees within 180 days of enactment and annually as needed, analyzing the use of AI tools and capabilities across Department business processes to establish guidelines for their appropriate use.
 
- Other Defense Matters:
- SAFE Research Act (Secs. 1736-1740): Establishes restrictions and disclosure requirements aimed at protecting U.S. federally funded research from foreign influence, espionage and technology theft, particularly from “hostile foreign entities” and “foreign adversary countries.” “Foreign adversary country” is defined to include “covered nations” under 10 U.S.C. § 4872(f) (North Korea, China, Russia and Iran). “Hostile foreign entity” is defined to include any organization, or its subsidiaries or affiliates, that is based in or legally organized under the laws of a “foreign adversary country” and (1) appears on a U.S. government sanctions or restricted list; (2) is controlled or influenced by an adversary government and engages in harmful or sensitive national-security-related activities; or (3) participates in a foreign talent recruitment program linked to an adversary government.
 