White House Partners with Industry on AI Commitments and Develops Broader Executive Order

Summary
On July 21, 2023, the White House announced new, voluntary commitments made by seven leading AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—to manage the risks of AI development and use, based on three overarching principles: safety, security, and trust. The companies' commitments include:

• Safety: internal and external security testing of their AI systems, conducted in part by independent experts, as well as information sharing among industry and with governments, civil society, and academia on managing AI risks.

• Security: investments in cybersecurity and insider-threat safeguards, as well as enabling third-party discovery and reporting of vulnerabilities in their AI systems.

• Trust: (1) developing comprehensive technical mechanisms, such as watermarking, to notify users of AI-generated content; (2) publicly reporting their AI systems' capabilities and limitations; (3) prioritizing research on the potential societal risks of AI systems; and (4) deploying advanced AI systems to "help address society's greatest challenges."

Concurrently, the White House indicated that the Biden-Harris Administration is developing an executive order (EO) and will pursue bipartisan legislation to help the United States lead in AI innovation. Following the White House announcement, four of the companies—Anthropic, Google, Microsoft, and OpenAI—announced the creation of the Frontier Model Forum, which aims to, among other things, advance AI safety research; formulate best practices for the development and deployment of frontier models; and facilitate information sharing with lawmakers, industry, academics, and civil society.