NIST Hosts Workshop on Secure AI Software Development

Summary
On January 17, 2024, NIST held a virtual workshop examining secure software development practices for AI models. The workshop informs the agency's mandate under the AI Executive Order (EO) to "develop[] a companion resource to the [Secure Software Development Framework (SSDF)] to incorporate secure development practices for generative AI and for dual-use foundation models."

The workshop was divided into three sessions, each featuring presenters from government agencies and the private sector followed by a Q&A segment. Participants included the Cybersecurity and Infrastructure Security Agency (CISA), the Software Alliance (BSA), IBM, Google, OpenAI, AWS, Microsoft, and HiddenLayer. The three sessions were:

• Secure Software Development Challenges with Large Language Models (LLMs) and Generative AI Systems, which focused on the cybersecurity challenges and impacts of AI development.

• Secure Development of LLMs and Generative AI Systems, which discussed security practices specific to AI development.

• Secure Use of LLMs and Generative AI Systems, which covered security practices for deploying LLMs and generative AI.

The presenters addressed both current practices and steps being taken to address "unknown unknowns," i.e., novel issues that may arise as more powerful models are developed. Recurring themes included commonalities with existing cloud and "big data" security practices, an emphasis on trust and security as ongoing requirements throughout AI development and deployment, and the need for a whole-system approach. A recording of the workshop is available on the event page, and the presentation slides are expected to be posted soon.