Lawmakers Introduce New Standalone Bills

December 22, 2023

United States | U.S. Congress

Summary

• AI-Generated Fakes: Reps. Maria Elvira Salazar (R-FL), Madeleine Dean (D-PA), Nate Moran (R-TX), Joe Morelle (D-NY), and Rob Wittman (R-VA) introduced a discussion draft of the NO AI Fraud Act. The bill would establish a federal framework protecting Americans’ individual right to their likeness and voice against AI-generated fakes and forgeries. A one-pager on the bill is available here.

• Agency Guidelines/Procurement: Reps. Ted Lieu (D-CA), Zach Nunn (R-IA), Don Beyer (D-VA), and Marc Molinaro (R-NY) have introduced the Federal Artificial Intelligence Risk Management Act, which would require U.S. federal agencies and their vendors to adhere to NIST’s AI Risk Management Framework (RMF). The Senate version of the bill (S. 3205) was introduced by Sens. Jerry Moran (R-KS) and Mark Warner (D-VA) in November. A one-pager on the bill is available here.

• Training Data Disclosure: Reps. Don Beyer (D-VA) and Anna Eshoo (D-CA) have introduced the AI Foundation Model Transparency Act of 2023 (H.R. 6881), which would require entities deploying AI models above a certain size to disclose their training data to avoid copyright violations. Specifically, the bill would (1) direct the FTC, in consultation with NIST, the Copyright Office, and OSTP, to set transparency standards for foundation model deployers; and (2) direct companies to provide consumers and the FTC with information on a model’s training data, its training mechanisms, and whether user data is collected during inference. “Covered entities” are defined as those using, or providing services from, a foundation model that generates over 100,000 monthly output instances or has over 30,000 monthly users.

• Financial Services: Sens. Mark Warner (D-VA) and John Kennedy (R-LA) have introduced the Financial Artificial Intelligence Risk Reduction (FAIRR) Act (S. 3554), which would require the Financial Stability Oversight Council (FSOC) to (1) coordinate financial regulators’ responses to AI-related threats to market stability; (2) identify gaps in existing regulations, guidance, and examination standards that could hinder effective responses to those threats; and (3) implement specific recommendations to address such gaps.
