White House Reveals Long-Anticipated 'AI Bill of Rights'

October 21, 2022

Reading Time: 5 min

Key Points

  • On October 4, 2022, the White House released its “Blueprint for an AI Bill of Rights,” a document detailing the Biden-Harris administration’s proposed approach towards algorithmic regulation.
  • The blueprint consists of non-binding guidelines for the design, use and deployment of AI systems in public and private sectors.
  • The framework for AI systems in the blueprint is organized into five key principles. These principles apply to automated systems that could meaningfully impact rights, opportunities or access to critical resources or services.
  • The blueprint represents a change from previous guidance on AI regulation, taking a harms-based approach and emphasizing the importance of protecting those subject to automated decision making from certain algorithmic harms.
  • It is unlikely that the blueprint will result in new enforceable regulations implementing its principles, but those principles may inform future federal agency actions.

Background

Issued by the White House Office of Science and Technology Policy (OSTP) on October 4, 2022, the “Blueprint for an AI Bill of Rights” is the Biden-Harris administration’s seminal work on its vision for the future of artificial intelligence (AI) regulation. Previous final guidance issued on November 17, 2020, by the U.S. Office of Management and Budget (OMB) adopted a risk-based approach that stressed the need to promote AI innovation. The new blueprint is much more harms-focused, centering on the potential for algorithmic harms in sectors such as education, employment, health care, financial services access and commercial surveillance, among others.1

The new blueprint is somewhat similar to a document published for AI systems under the European Commission in 2019: the Ethics Guidelines for Trustworthy AI. The European Parliament is currently considering the EU Artificial Intelligence Act, which would make many of these guidelines enforceable and features a private right of action, along with detailed requirements for AI system developers on accuracy, data security, data governance and transparency.

The blueprint lays out five principles to govern the design, use and deployment of AI, as well as longer technical explanations and guidance on implementing the five principles.

Principles for AI Systems in America

The blueprint applies to automated systems that could meaningfully impact the rights, opportunities or access to critical resources or services of the American public,2 and consists of the following five key principles:

1. Safe and Effective Systems Principle

This principle states that individuals should be protected from unsafe or ineffective AI systems. To this end, this principle encourages development involving “diverse communities, stakeholders, and domain experts” to identify risks. This principle also calls for thorough pre-deployment testing and risk identification, along with mitigating potential harms through continuous monitoring of AI systems.3

2. Algorithmic Discrimination Protections Principle

According to the blueprint, algorithmic discrimination occurs when “automated systems contribute to unjustified different treatment or impacts disfavoring people” based on a variety of factors such as race, sex, religion, age, national origin, disability, genetic information and other classifications protected by law.4 This principle encourages AI system developers to use proactive and continuous measures to guard against algorithmic discrimination, including equity assessments and algorithmic impact assessments featuring both independent evaluation and plain language reporting.

3. Data Privacy Principle

This principle calls for designing AI systems with built-in data privacy protections, using the minimum amount of data necessary and conforming to “reasonable expectations.” Where possible, AI system developers are encouraged to seek consent before collecting, using, accessing, transferring or deleting personal data. Consent should only be used to justify data collection, according to this principle, in cases where it can be “appropriately and meaningfully given.” If it is not possible to obtain consent in advance, developers are encouraged to implement privacy-by-design safeguards. This principle also states that uses of data in “sensitive domains”5 (such as health, work, education and finance) should be subject to additional review or outright prohibition. For surveillance technologies, this principle advocates for increased protections, including “at least” pre-deployment assessment of harms and scope.6

4. Notice and Explanation Principle

According to this principle, AI system developers should provide descriptions of their AI system’s functionality in plain language to explain that AI decision making is in use, the role AI plays, who is responsible for the AI system and the expected outcomes of AI decision making. This principle also calls for explanations of AI systems that account for risk and are “technically valid, meaningful and useful[.]”7

5. Human Alternatives, Consideration, and Fallback Principle

The final principle recommends that AI systems allow subjects to opt out of automated decision making where appropriate, granting them the alternative of a human decision maker. According to the blueprint, appropriateness should be determined “based on reasonable expectations in a given context” while ensuring the public is protected from impacts that would be especially harmful.8

Regarding its relationship to existing law, the blueprint is intended to offer a “broad, forward-leaning vision of recommended principles” for AI system development, use and deployment, informing private and public action rather than mapping onto any current or proposed laws regulating AI.9

Takeaway

While the blueprint is nonbinding, it gives the public and private sectors further insight into the Biden-Harris administration’s stance on AI regulation. Much of the blueprint is consistent with the European stance emphasizing algorithmic harms, transparency and data privacy. The blueprint also provides perspective on future federal agency action on AI regulation. Similar concerns were raised when the Department of Commerce established the National AI Advisory Committee. The Federal Trade Commission has also ordered the deletion of AI algorithms developed through alleged illegal data practices, and has proposed rulemaking on unfair or deceptive practices in commercial surveillance. The Food and Drug Administration’s (FDA) approach to AI likewise emphasizes the need for transparency and for methods to monitor performance and mitigate risks throughout the product lifecycle.10 Although it is nonbinding guidance, companies should review the blueprint as well as recent federal agency actions and guidance, and determine to what degree their current or planned AI systems align with these principles.

See more of our articles covering AI developments:

UK Gov’t Proposals for New AI Regulatory Framework

Artificial Intelligence in the CHIPS Act of 2022

New EEOC Initiative on Use of AI in Hiring Decisions

Administrative and Congressional Update on Artificial Intelligence in the U.S.

FDA Releases Action Plan for Artificial Intelligence/Machine Learning-Enabled Software as a Medical Device

FDA’s AI White Paper: To Be or Not to Be, That is the Question


1 The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, White House Off. of Science and Technology Policy (October 4, 2022), available here

2 Id. at 8.

3 Id. at 5.

4 Id. Note that “automated system” is defined as “any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities.”

5 “Sensitive domains” are defined as “those in which activities being conducted can cause material harms, including significant adverse effects on human rights such as autonomy and dignity, as well as civil liberties and civil rights.”

6 Id. at 6.

7 Id.

8 Id. at 7.

9 Id. at 9.

10 See FDA, Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan (Jan. 2021), available at https://www.fda.gov/media/145022/download.


© 2024 Akin Gump Strauss Hauer & Feld LLP. All rights reserved. Attorney advertising. This document is distributed for informational use only; it does not constitute legal advice and should not be used as such. Prior results do not guarantee a similar outcome. Akin is the practicing name of Akin Gump LLP, a New York limited liability partnership authorized and regulated by the Solicitors Regulation Authority under number 267321. A list of the partners is available for inspection at Eighth Floor, Ten Bishops Square, London E1 6EG. For more information about Akin Gump LLP, Akin Gump Strauss Hauer & Feld LLP and other associated entities under which the Akin Gump network operates worldwide, please see our Legal Notices page.