FDA Announces AI Councils Amid Calls for Greater Agency Transparency

July 16, 2025

Reading Time: 2 min

Recently, it was reported that the U.S. Food and Drug Administration (FDA) is launching two cross-agency artificial intelligence (AI) councils. One council will be tasked with addressing how the agency uses AI internally, and the other will focus on policy governing AI’s use in FDA-regulated products (reportedly, pre-existing AI councils in various FDA divisions will continue to operate) (Politico Pro).

The agency’s internal use of AI has drawn particular interest in recent months, after the agency announced in May that its first AI-assisted scientific review pilot was successful and directed all FDA centers to begin integrating certain AI capabilities into FDA’s internal data platforms by the end of the following month. Then, in June, FDA launched Elsa, a generative AI tool designed to help employees work more efficiently. According to FDA, Elsa is designed to prepare information so that FDA staff can make decisions more efficiently, with a human remaining in the decision-making loop. FDA also reports that Elsa’s models do not train on data submitted by regulated industry, a measure intended to safeguard the research and data handled by FDA staff.

Regarding Elsa’s capabilities, several weeks after the platform’s initial launch, FDA’s chief AI officer, Jeremy Walsh, noted that Elsa is unlikely to be connected to the internet, which would prevent it from accessing real-time information (Regulatory Focus). While this approach was framed as a necessary security precaution, it could also hinder Elsa’s ability to produce up-to-date responses. In the days following these announcements, there were reports that the model, which is currently trained only on information through April 2024, provided inaccurate or incomplete information during its first week in use (NBC News).

FDA is actively updating and improving Elsa, but questions and concerns persist within industry that FDA may be using a tool that would not meet the agency’s own expectations for validation, governance and transparency when such tools are used for FDA-regulated functions. Presumably, the internally focused AI council will be tasked with creating internal policies and procedures to ensure effective use of Elsa and other AI tools. However, the timeline for those policies and procedures, and the extent to which the agency will be transparent about them, remain unclear.

