FTC Report on AI: A Cautionary Tale: Combatting Online Harms Through Innovation

July 1, 2022

On June 16, 2022, the Federal Trade Commission (FTC) issued a report to Congress titled “Combatting Online Harms Through Innovation,” examining whether artificial intelligence (AI) is a useful tool to combat the proliferation of harmful content online. In the Consolidated Appropriations Act, 2021 (P.L. 116-260), Congress tasked the FTC with conducting a study on whether and how AI could be used to identify, remove or otherwise address a variety of specified “online harms.” The act references deceptive, fraudulent, manipulated, misleading or illegal content and activities, citing examples such as deepfake videos, fake reviews, hate crimes, election-related disinformation, the illegal sale of opioids and product counterfeiting.

The FTC’s report concludes that governments, platforms and other stakeholders must exercise great caution in mandating the use of AI, or in over-relying on it as a solution. The FTC notes that the datasets supporting automated tools are often “not robust or accurate enough to avoid false positives or false negatives,” and that tools trained on previously identified data have difficulty identifying new phenomena. The agency also remains concerned that overreliance on such tools can result in censorship, bias and discrimination. The FTC further concludes that where AI is not the optimal solution, and where scale makes meaningful human oversight challenging, other means, both regulatory and otherwise, should be explored to prevent these harms from spreading.

In exercising caution, the FTC recommends that Congress, regulators, platforms and scientists focus attention on several related considerations:

  • AI-based tools aimed at preventing harmful content, and the decisions those tools make, are insufficient on their own without human intervention.
  • AI use in this area needs to be transparent, meaning it must be explainable and contestable, especially when the rights of people are at stake or when personal data is being collected and used.
  • Platforms and other companies relying on AI tools to remove harmful content must be held responsible for their data practices as well as their results. This must include requirements to implement meaningful consumer appeal and redress mechanisms.
  • AI developers, as well as the companies that obtain and deploy their tools, are responsible for both inputs and outputs. They should hire and retain diverse teams, which may help reduce inadvertent bias or discrimination and avoid data classifications that reflect societal inequities.
  • Platforms and others can use a number of interventions and tools, such as ad targeting, downranking, labeling or inserting interstitial pages, to mitigate the viral spread of certain harmful content and limit its impact.
  • AI tools can also help individuals limit their own exposure to harmful content, for example, through filters that allow users to block sensitive or harmful content at their discretion, or through middleware (third-party content moderation systems).
  • Smaller platforms and organizations, which may not be able to build AI tools themselves, would benefit from access to such tools to prevent online harm, though access to user data should be contingent on robust privacy safeguards.
  • The use of complementary (and possibly blockchain-enabled) authentication tools can aid AI by tracking the source of content and whether that content has been altered.

The FTC also warns that laws or regulations addressing AI and online harms must be carefully considered, recognizing that mandating AI use to address harmful content, including imposing overly quick takedown requirements on platforms, can be highly problematic and lead to additional harms. Among other concerns, the FTC highlighted that such mandates can lead to “overblocking,” put smaller platforms at a disadvantage and conflict with the First Amendment.

Federal regulators and Congress continue to scrutinize and take action to ensure the responsible use of AI and other automated tools across web and mobile platforms. The Akin Gump cross-practice AI team continues to monitor forthcoming congressional, administrative, private-stakeholder and international initiatives in this area.

Contact Information 

If you have any questions concerning this alert, please contact:

Lamar Smith
Washington, D.C.
+1 202.887.4031

Ed Pagano
Washington, D.C.
+1 202.887.4255

Hans Christopher Rickhoff
Washington, D.C.
+1 202.887.4145

Davina Garrod
London
+44 20.7661.5480

Natasha G. Kohne
San Francisco
+1 415.765.9505

David C. Vondle
Washington, D.C.
+1 202.887.4184

Christina Barone
Policy Advisor
Washington, D.C.
+1 202.416.5543
