Lawmakers Introduce New Standalone Bills

Summary
• AI-Generated Fakes: Reps. Maria Elvira Salazar (R-FL), Madeleine Dean (D-PA), Nate Moran (R-TX), Joe Morelle (D-NY), and Rob Wittman (R-VA) introduced a discussion draft of the NO AI Fraud Act. The bill would establish a federal framework protecting Americans’ individual right to their likeness and voice against AI-generated fakes and forgeries. A one-pager on the bill is available here.

• Agency Guidelines/Procurement: Reps. Ted Lieu (D-CA), Zach Nunn (R-IA), Don Beyer (D-VA), and Marc Molinaro (R-NY) have introduced the Federal Artificial Intelligence Risk Management Act, which would require U.S. federal agencies and their vendors to adhere to NIST’s AI Risk Management Framework (RMF). The Senate version of the bill (S. 3205) was introduced by Sens. Jerry Moran (R-KS) and Mark Warner (D-VA) in November. A one-pager on the bill is available here.

• Training Data Disclosure: Reps. Don Beyer (D-VA) and Anna Eshoo (D-CA) have introduced the AI Foundation Model Transparency Act of 2023 (H.R. 6881), which would require entities deploying AI models above a certain size to disclose their training data in order to avoid copyright violations. Specifically, the bill would (1) direct the FTC, in consultation with NIST, the Copyright Office, and OSTP, to set transparency standards for foundation model deployers; and (2) direct companies to provide consumers and the FTC with information on a model’s training data, its training mechanisms, and whether user data is collected during inference. “Covered entities” are defined to include those that use, or provide services from, a foundation model that either generates over 100,000 monthly output instances or has over 30,000 monthly users.

• Financial Services: Sens. Mark Warner (D-VA) and John Kennedy (R-LA) have introduced the Financial Artificial Intelligence Risk Reduction (FAIRR) Act (S. 3554), which would require the Financial Stability Oversight Council (FSOC) to (1) coordinate financial regulators’ responses to AI-related threats to the stability of the markets; (2) identify gaps in existing regulations, guidance, and examination standards that could hinder effective responses to those threats; and (3) implement specific recommendations to address such gaps.