ARTICLE
22 July 2025

FDA Announces AI Councils Amid Calls For Greater Agency Transparency

Akin Gump Strauss Hauer & Feld LLP

Recently, it was reported that the U.S. Food and Drug Administration (FDA) is launching two cross-agency artificial intelligence (AI) councils. One AI council will be tasked with addressing how the agency uses AI internally, and the other will focus on policy governing AI's use in FDA-regulated products; pre-existing AI councils in various FDA divisions will reportedly continue to operate (Politico Pro).

The agency's internal use of AI has drawn particular interest in recent months, following its May announcement that its first AI-assisted scientific review pilot was successful and its directive that all FDA centers begin integrating generative AI capabilities into the agency's internal data platforms by the end of the following month. Then, in June, FDA launched Elsa, a generative AI tool designed to help employees work more efficiently. According to FDA, Elsa is designed to prepare information so that FDA staff can make decisions more efficiently, with a human remaining in the decision-making loop. FDA reports that Elsa's models do not train on data submitted by regulated industry, a measure intended to safeguard research and data handled by FDA staff.

Regarding Elsa's capabilities, several weeks after the platform's initial launch, FDA's chief AI officer, Jeremy Walsh, noted that Elsa is unlikely to be connected to the internet, which would prevent it from accessing real-time information (Regulatory Focus). While this approach was framed as a necessary security precaution, it could also hinder Elsa's ability to produce up-to-date responses. In the days following these announcements, there were reports that the model, which is currently trained only on information through April 2024, provided inaccurate or incomplete information during its first week in use (NBC News).

FDA is actively updating and improving Elsa, but questions and concerns persist across industry that the agency may be relying on a tool that would not meet its own expectations for validation, governance and transparency when such tools are used for FDA-regulated functions. Presumably, the internally focused AI council will be tasked with creating internal policies and procedures to ensure effective use of Elsa and other AI tools. However, the timeline and extent to which the agency will be transparent about these internal policies and procedures remain unclear.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
