President Biden’s AI EO: Key Takeaways for Cybersecurity & Data Privacy

December 1, 2023

Reading Time: 7 min

Key Points

  • The Biden administration’s highly anticipated executive order (EO) on artificial intelligence (AI) contains a wide range of directives for addressing risks associated with the development and deployment of AI, many of which are entwined with data privacy and cybersecurity.
  • The EO directs agencies to develop and strengthen techniques to preserve individuals’ privacy and to identify personal information collected from commercially available sources such as data brokers. The EO also orders agencies to increase their use of privacy-enhancing technologies (PETs), such as encryption tools.
  • The EO aims to build new standards and guidelines for AI cybersecurity, focusing on critical infrastructure owners and operators. In a bid to prevent the use of U.S. infrastructure as a service (IaaS) products by foreign cyber attackers, the EO also requires the Secretary of Commerce to set reporting requirements for IaaS providers to verify the identities of foreign buyers.


Introduction

On October 30, 2023, the Biden administration released a far-reaching executive order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI). The EO issues directives related to the use of AI across several areas, with special attention paid to the critical areas of cybersecurity and data privacy. Here, we discuss the EO directives pertaining to data privacy and cybersecurity. For a general overview of the EO, Akin’s coverage is available here.

AI’s breakneck pace of development has left government agencies and the private sector scrambling to determine how best to regulate these burgeoning tools without compromising their potential benefits. The EO attempts to balance these concerns by considering new standards for governing AI safety while also promoting innovation and competition. Many of the EO’s directives include questions for agencies to resolve through planning and stakeholder engagement. Data privacy and cybersecurity will play a pivotal role as the EO’s directives are put into practice.

Privacy and cybersecurity are significant factors throughout the EO, which tasks multiple agencies with directives that include privacy and cybersecurity elements. These agencies include the Office of the Attorney General (OAG), the Department of Homeland Security (DHS), the Department of Health and Human Services (HHS), the Office of Management and Budget (OMB) and the Department of Commerce (DoC), among others.

EO Data Privacy Directives

AI models require enormous amounts of data for training. These datasets can include personal data, and even sensitive data, acquired from publicly available sources, purchased from vendors or collected directly from individuals. Given the privacy risks of using personal data as training data, the EO treats privacy as a major factor throughout, with Section 9 focusing specifically on privacy protection. The EO generally directs agencies to use “privacy-enhancing technologies” (PETs) as appropriate, along with other technical and policy tools, to protect privacy and fend off potential legal and societal risks.1

Section 9 attempts to address AI-related risks, such as the collection and use of personal data and the ability of AI systems to make inferences about individuals. Among other things, it mandates that the Director of OMB evaluate and take steps to identify commercially available information (CAI) procured by federal agencies through appropriate agency inventory and reporting processes.2 This directive focuses on CAI that contains personally identifiable information (PII) and CAI procured from data brokers or processed indirectly through vendors, other than CAI used for national security purposes. The OMB Director must also evaluate potential guidance to agencies on ways to mitigate privacy risks from agencies’ activities related to CAI, in consultation with the Federal Privacy Council and the Interagency Council on Statistical Policy.3 Within 180 days of the date of the EO, the OMB Director is further required to issue a request for information (RFI) on potential revisions to guidance to agencies on the privacy provisions of the E-Government Act of 2002 (Public Law 107-347), in consultation with the Attorney General, the Assistant to the President for Economic Policy and the Director of the Office of Science and Technology Policy (OSTP). Within 365 days, the Secretary of Commerce, acting through the National Institute of Standards and Technology (NIST), must create guidelines for agencies on evaluating “differential-privacy-guarantee protections” to improve the use of PETs in the face of risks from AI.4 Within 120 days, the Secretary of Energy and the Director of the National Science Foundation (NSF) must collaborate on a new organization to advance privacy research, especially the development and deployment of PETs.5
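
The EO does not prescribe how a “differential-privacy guarantee” should be implemented, but the concept it defines in Section 3(j) can be illustrated with the Laplace mechanism, a standard differential-privacy technique (not one named in the EO): calibrated random noise is added to an aggregate statistic so that any single individual’s presence in a dataset has a provably bounded effect on the published result. The following minimal Python sketch is illustrative only; the dataset, query and epsilon value are hypothetical assumptions.

```python
import numpy as np

def dp_count(data, predicate, epsilon):
    """Differentially private counting query using the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the true count by at most 1), so noise drawn from
    Laplace(0, 1/epsilon) yields an epsilon-differentially-private answer.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical attribute from a training dataset: individuals' ages.
ages = [23, 35, 41, 29, 52, 61, 38]

# Smaller epsilon means a stronger privacy guarantee but a noisier answer.
print(dp_count(ages, lambda age: age > 40, epsilon=0.5))
```

Under this mechanism, the noisy count can be shared while provably limiting what the output reveals about any particular individual, which mirrors the EO’s definition of a differential-privacy guarantee.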

Citing the need to address AI’s risk to Americans’ privacy, President Biden called on Congress to pass federal data privacy legislation in the Fact Sheet accompanying the EO. Data privacy law in the United States currently exists as a patchwork of state and local laws, many of which address AI either directly or indirectly.

EO Cybersecurity Directives

Cybersecurity is a clear concern throughout the EO, with special attention given to the potential for increasingly sophisticated AI-enabled threat vectors. For example, the Secretary of Commerce, acting through NIST, is required to coordinate with the Secretaries of Energy and Homeland Security to develop guidelines on AI security, including guidance for evaluating AI’s ability to compromise cybersecurity.6 The Department of Commerce and NIST announced the U.S. Artificial Intelligence Safety Institute (USAISI) as part of these obligations. The EO directs the Secretary of Commerce to require that developers of certain “dual-use foundation models”7 submit reports to the Department of Commerce outlining their training and testing procedures, including cybersecurity protections for defending the training process against sophisticated threats.8 To directly target “significant malicious cyber-enabled activities,” the EO further tasks the Secretary of Commerce with proposing regulations within 90 days that require U.S. infrastructure as a service (IaaS) providers to submit a report to the Secretary when a foreign entity transacts with the provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity (a process the EO refers to as a “training run”).9

Within 90 days, and at least annually thereafter, the EO requires each agency head with authority over critical infrastructure to coordinate with DHS in providing assessments of risks from the use of AI in critical infrastructure sectors, including the potential for increased threats from cyberattacks.10 Within 150 days, the Secretary of the Treasury is required to issue best practices for financial institutions on managing AI-specific cybersecurity risks.11 Critical infrastructure owners and operators will also receive safety and security guidelines from the Secretary of Homeland Security within 180 days; these guidelines must incorporate the NIST AI Risk Management Framework.12 The EO also establishes an Artificial Intelligence Safety and Security Board (AISSB) at DHS, composed of AI experts from the private sector, academia and government, to provide recommendations for improving security, resilience and incident response related to AI usage in critical infrastructure.13

The EO also attempts to leverage AI to boost U.S. cyber defenses, directing the Secretaries of Defense and Homeland Security to conduct an operational pilot project within 180 days to identify, develop, test, evaluate and deploy AI capabilities that can help uncover vulnerabilities in critical government software, systems and networks.14 Within 270 days, the Secretaries of Defense and Homeland Security are required to provide a report on the effective deployment of AI for cyber defense.15 The EO also focuses on mitigating the risk of AI being used for chemical, biological, radiological and nuclear (CBRN) threats: within 180 days, the Secretary of Homeland Security must evaluate how AI could be used for CBRN threats, and how it might be used to counter such threats, through a process that includes consulting with government and third-party experts.16

Next Steps

The EO’s reach is incredibly broad, and its impact will be felt across an even wider range of agencies, programs and industries than those named in its directives. Agencies will be developing guidance, standards and, eventually, regulations that will help establish best practices for the evolving field of privacy and cybersecurity in AI. Much like the NIST Cybersecurity Framework, these new standards and guidance may become important elements of best practices for AI governance, privacy and security. Stakeholders should keep abreast of these developments and any collaborative processes that follow so they can take the opportunity to be heard, particularly where their businesses may be affected.

The Akin data privacy & cybersecurity team and Akin’s cross-practice AI team regularly counsel clients that develop and deploy AI/machine learning (ML) technologies and will continue to monitor federal efforts to regulate the use of AI, including implementation of this EO.


1 “[P]rivacy enhancing technologies” in the EO refer to “any software or hardware solution, technical process, technique, or other technological means of mitigating privacy risks arising from data processing, including by enhancing predictability, manageability, disassociability, storage, security, and confidentiality.” Section 3(z).

2 Exec. Order No. 14110, 88 FR 75191 (2023), Section 9(a).

3 Id.

4 Id. at Section 9(b); the term “differential-privacy guarantee” is defined by the EO as “protections that allow information about a group to be shared while provably limiting the improper access, use, or disclosure of personal information about particular entities.” Section 3(j).

5 Id. at Section 9(c)(i).

6 Id. at Section 4.1(a)(i)(C).

7 A “dual-use foundation model” is defined by the EO to mean “an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by:

. . .

(ii)  enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or

(iii) permitting the evasion of human control or oversight through means of deception or obfuscation.”

Id. at Section 3(k).

8 Id. at Section 4.2(a)(i).

9 Id. at Section 4.2(c).

10 Id. at Section 4.3(a)(i).

11 Id. at Section 4.3(a)(ii).

12 Id. at Section 4.3(a)(iii).

13 Id. at Section 4.3(a)(v).

14 Id. at Section 4.3(b).

15 Id.

16 Id. at Section 4.4(a)(i).

