Another Piece in the AI Policy Puzzle for the Health and Life Sciences Sector: White House Releases AI Action Plan

September 3, 2025

Reading Time: 3 min

AI policy for the health and life sciences sector has continued to take shape. Building on recent activity, on July 23, 2025, the White House released its highly anticipated AI Action Plan, setting forth the Trump Administration’s recommended policy actions to accelerate AI innovation and build American AI infrastructure. The Plan recommends policies that would promote AI adoption, the creation of “AI-ready” scientific datasets and the establishment of real-world AI evaluation systems by and for the health care and life sciences industries.

The Plan maintains that many of the “most critical sectors” in America, such as health care, are “especially slow to adopt” AI “due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards.” To enable AI adoption, including in the health care industry, the White House recommends:

  • Establishing “regulatory sandboxes” or “AI Centers of Excellence” where researchers, startups and established enterprises, enabled by regulatory agencies such as the Food and Drug Administration (FDA), can “rapidly deploy and test AI tools while committing to open sharing of data and results.”
  • Launching domain-specific efforts, including in health care, led by the National Institute of Standards and Technology (NIST), to “convene a broad range of public, private, and academic stakeholders to accelerate the development and adoption of national standards for AI systems and to measure how much AI increases productivity at realistic tasks in those domains.”

The AI Action Plan also states that the United States “must lead the creation of the world’s largest and highest quality AI-ready scientific datasets, while maintaining respect for individual rights and ensuring civil liberties, privacy, and confidentiality protections.” The White House recommends, for example, directing the National Science and Technology Council (NSTC) Machine Learning and AI Subcommittee to “make recommendations on minimum data quality standards for the use of biological, materials science, chemical, physical, and other scientific data modalities in AI model training.”

The Plan sets forth recommendations for building an “AI evaluations ecosystem,” including investing in the development of “AI testbeds for piloting AI systems in secure, real-world settings, allowing researchers to prototype new AI systems and translate them to the market.” The Plan recommends that these testbeds “span a wide variety of economic verticals touched by AI,” including health care delivery.

The Plan also addresses the growing patchwork of state AI regulations, recommending that federal agencies with AI-related discretionary funding programs “ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.” An increasing number of states have adopted or are considering legislation regulating AI, including several laws that affect the use of AI in health care settings. California, for example, regulates the use of generative AI in provider-patient communications pertaining to patient clinical information (effective since January 1, 2025). Texas has established disclosure requirements for AI systems used in connection with health care services or treatment (effective January 1, 2026), as well as requirements for Texas health care practitioners who use AI for purposes such as making recommendations on a diagnosis or course of treatment based on a patient’s medical record (effective since September 1, 2025). Some states, including Utah and Nevada, are also beginning to regulate consumer mental and behavioral health care chatbots.

The AI Action Plan aligns with recent AI developments at the Department of Health and Human Services (HHS), including FDA’s recent announcement of AI councils, FDA’s release of a Regulatory Accelerator initiative for digital health innovators and FDA’s and HHS’s appointments of chief AI officers. FDA has been active in this space for some time, with a significant focus on clinical decision support (CDS) tools, and periodically updates the AI-Enabled Medical Device List, which is intended to serve as a resource for identifying AI-enabled medical devices that are authorized for marketing in the United States. The Centers for Medicare and Medicaid Services (CMS) also included patient conversational AI assistants among priorities adjacent to the recently announced CMS Interoperability Framework.

© 2025 Akin Gump Strauss Hauer & Feld LLP. All rights reserved. Attorney advertising. This document is distributed for informational use only; it does not constitute legal advice and should not be used as such. Prior results do not guarantee a similar outcome. Akin is the practicing name of Akin Gump LLP, a New York limited liability partnership authorized and regulated by the Solicitors Regulation Authority under number 267321. A list of the partners is available for inspection at Eighth Floor, Ten Bishops Square, London E1 6EG. For more information about Akin Gump LLP, Akin Gump Strauss Hauer & Feld LLP and other associated entities under which the Akin Gump network operates worldwide, please see our Legal Notices page.