AI policy for the health and life sciences sector has continued to take shape. Building on recent activity, on July 23, 2025, the White House released its highly anticipated AI Action Plan, setting forth the Trump Administration’s recommended policy actions to accelerate AI innovation and build American AI infrastructure. The Plan recommends policies that would promote AI adoption, the creation of “AI-ready” scientific datasets and the establishment of real-world AI evaluation systems by and for the health care and life sciences industries.
The Plan maintains that many of the “most critical sectors” in America, such as health care, are “especially slow to adopt” AI “due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards.” To enable AI adoption, including in the health care industry, the White House recommends:
- Establishing “regulatory sandboxes” or “AI Centers of Excellence” where researchers, startups and established enterprises, enabled by regulatory agencies such as the Food and Drug Administration (FDA), can “rapidly deploy and test AI tools while committing to open sharing of data and results.”
- Launching domain-specific efforts, including in health care, led by the National Institute of Standards and Technology (NIST), to “convene a broad range of public, private, and academic stakeholders to accelerate the development and adoption of national standards for AI systems and to measure how much AI increases productivity at realistic tasks in those domains.”
The AI Action Plan also states that the United States “must lead the creation of the world’s largest and highest quality AI-ready scientific datasets, while maintaining respect for individual rights and ensuring civil liberties, privacy, and confidentiality protections.” The White House recommends, for example, directing the National Science and Technology Council (NSTC) Machine Learning and AI Subcommittee to “make recommendations on minimum data quality standards for the use of biological, materials science, chemical, physical, and other scientific data modalities in AI model training.”
The Plan sets forth recommendations for building an “AI evaluations ecosystem,” including investing in the development of “AI testbeds for piloting AI systems in secure, real-world settings, allowing researchers to prototype new AI systems and translate them to the market.” The Plan recommends that these testbeds “span a wide variety of economic verticals touched by AI,” including health care delivery.
The Plan also addresses the growing patchwork of state AI regulations, recommending that federal agencies with AI-related discretionary funding programs “ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.” A growing number of states have adopted or are considering legislation regulating AI, including several laws that affect the use of AI in health care settings. California, for example, regulates the use of generative AI in provider-patient communications pertaining to patient clinical information (effective since January 1, 2025). Texas has established disclosure requirements for AI systems used in relation to health care services or treatment (effective January 1, 2026), as well as certain requirements for Texas health care practitioners using AI for purposes such as making recommendations on a diagnosis or course of treatment based on a patient’s medical record (effective since September 1, 2025). Some states, including Utah and Nevada, are also beginning to regulate consumer mental and behavioral health care chatbots.
The AI Action Plan aligns with recent AI developments at the Department of Health and Human Services (HHS), including FDA’s announcement of AI councils, its release of a Regulatory Accelerator initiative for digital health innovators and the appointments of chief AI officers at both FDA and HHS. FDA has been active in this space for some time, with a significant focus on clinical decision support (CDS) tools and periodic updates to the AI-Enabled Medical Device List, which is intended to serve as a resource for identifying AI-enabled medical devices authorized for marketing in the United States. The Centers for Medicare and Medicaid Services (CMS) has also included patient conversational AI assistants among the priorities adjacent to the recently announced CMS Interoperability Framework.