FCC Ruling on AI-Generated Robocalls Reflects Focus on Artificial Intelligence

February 9, 2024

Reading Time: 7 min
  • The FCC released a declaratory ruling on February 8 confirming that AI-generated voices in robocalls qualify as “artificial” under the Telephone Consumer Protection Act, effectively making AI-generated robocalls illegal without the prior consent of the called party.
  • The declaratory ruling follows the attempted disruption of the New Hampshire Democratic presidential primary by AI-generated robocalls imitating President Joe Biden’s voice, which urged voters not to vote in the primary.
  • Prior to the declaratory ruling, the FCC had already sought information on the issues posed by the use of AI in the telecommunications industry, launching a Notice of Inquiry (NOI) on November 15, 2023, to assess AI’s impact on illegal robocalls and texts, and adopting an NOI on August 3, 2023, on potential uses of AI for non-federal spectrum usage and management.

Background

The proliferation of artificial intelligence (AI) technologies is likely to have a profound impact on the telecommunications industry. The technology underlying deepfakes, which convincingly mimic a person via digitally altered video, audio or images, has recently been used maliciously to spread false information. Robocalls are a prime opportunity to use deepfakes to reach a large number of people, as evidenced in the lead-up to the 2024 New Hampshire Democratic presidential primary on January 23. Prior to the primary, New Hampshire voters received calls containing a message purportedly in the voice of President Biden. The call encouraged voters not to vote in the New Hampshire primary, warning that doing so would enable a victory by former President Donald Trump in November. The calls were found to be AI-generated robocalls leveraging deepfake technology and did not, in fact, originate from President Biden’s campaign or actually contain the President’s voice.

To address these and similar robocalls that attempt to defraud or prey on consumers, Chairwoman Rosenworcel announced on January 31 that the Federal Communications Commission (FCC or Commission) would take action to make AI-generated robocalls illegal. Stating that these calls are “already sowing confusion,” the Chairwoman indicated that the FCC would support State Attorneys General in protecting consumers from these fraudulent callers. The full Commission adopted the declaratory ruling on February 2 and released it publicly on February 8. The ruling is effective as of its release date; per the Chairwoman, it gives State Attorneys General “another tool to go after voice cloning scams and get this junk off the line.”

Notably, the FCC’s growing focus on AI aligns with other government efforts following the issuance of the White House Executive Order on AI in October 2023. The Biden administration continues its AI-focused initiatives with the launch of the U.S. AI Safety Institute Consortium (AISIC) through the Department of Commerce’s National Institute of Standards and Technology (NIST). AISIC, announced by Commerce Secretary Gina Raimondo on February 8, will support 200 member companies and organizations in setting industry-wide safety standards while protecting innovation. The Federal Trade Commission (FTC) has its own AI robocall program, offering a Voice Cloning Challenge to “encourage the development of multidisciplinary approaches . . . aimed at protecting consumers from AI-enabled voice cloning harms.” Even Congress is looking to act, with three bills introduced in the House of Representatives in January 2024 targeting AI robocalls. These measures, alongside various other AI-related proposals and measures by the U.S. government, signal a nationwide strategy to better understand and regulate AI technologies.

Robocall Declaratory Ruling

The FCC is authorized by the Telephone Consumer Protection Act of 1991 (TCPA) to restrict junk calls. The TCPA was designed to limit telemarketing calls and the use of automatic telephone dialing systems, as well as artificial or prerecorded voice messages. The FCC’s rules implementing the TCPA mandate that telemarketers and advertisers acquire prior express written consent from consumers before calling or texting them through an automatic telephone dialing system or by using a prerecorded or artificial voice.

The declaratory ruling clarifies that calls made with AI technologies that generate human voices qualify as “artificial or prerecorded” voices for purposes of the TCPA and the Commission’s implementing rules. The Commission emphasizes that messages using voice cloning technologies count as “artificial” because a person is not speaking them, and that their use exemplifies the type of call the TCPA is designed to protect consumers from. Moreover, the Commission deems the use of AI technologies that communicate with consumers through prerecorded messages to be subject to the TCPA, as such calls qualify as “using” a “prerecorded voice.”

Because the TCPA restricts calls initiated using “artificial or prerecorded” voices, the ruling reaches the voice cloning technology used in common robocall scams, exposing those who initiate calls with these technologies to steep fines and the threat of legal action if the calls do not comply with the FCC’s restrictions on the use of artificial or prerecorded voice messages. Such calls, per the Commission’s TCPA rules, require the prior express consent of the called party unless an emergency purpose or exemption applies. Moreover, entities responsible for initiating calls using an artificial or prerecorded voice must disclose their identifying information and offer opt-out mechanisms for advertising and telemarketing calls, per the Commission’s existing rules. The Commission specifies that the rules apply to AI technologies, including tools that mimic or resemble a real person’s voice to simulate a conversation between the person and the consumer.

While the Commission acknowledges that some AI-generated calls may be useful and not deceptive, it views the significant potential for harm, such as calls that imitate a loved one’s voice in an attempt to extort or scam the consumer, as the greater concern. The requirement of consumer consent thus allows the consumer to decide which AI-generated calls they want to receive.

The FCC’s robocall initiative has significant support, with a coalition of 26 State Attorneys General backing the approach. The Commission also has a Memorandum of Understanding with 48 State Attorneys General to cooperate in the battle against robocalls. Consequently, the declaratory ruling provides State Attorneys General with new tools to hold the malicious actors behind these robocalls legally accountable and to seek damages under the law.

AI-Related Robocall Initiatives

This latest action comes after the Commission voted on November 15, 2023, to advance a Notice of Inquiry (NOI) seeking public comment on how the agency can combat illegal robocalls and how AI might be involved. In particular, the Commission asked how AI might be used in scams that arise from junk calls and whether the technology should be subject to oversight under the TCPA.

The record developed from the inquiry will provide the Commission with a stronger understanding of the risks and benefits of AI technologies and their impact on consumers. In particular, the NOI reflects the FCC’s view that AI offers significant potential with respect to ongoing efforts to block unwanted robocalls and increase consumer trust. Conversely, the NOI also acknowledged that the use of AI in the marketing and robocalling context carries significant and potentially dangerous implications for consumer safety and privacy, as the New Hampshire primary deepfake robocalls illustrate.

Comments in the proceeding were due December 18, 2023, and reply comments were due January 16, 2024. Based on the record generated by the NOI, the FCC will likely next develop a notice of proposed rulemaking that addresses the benefits and risks identified in the comments and proposes rules and guidance to oversee the use of AI both in combating unwanted robocalls and malicious actors and in generating and distributing marketing messages.

AI and Spectrum Management

The robocall actions supplement additional FCC efforts to understand the impact of AI on the telecommunications industry. On August 3, 2023, the FCC launched an NOI exploring the feasibility of tools to support and enhance the FCC’s understanding of non-federal spectrum usage. Given the growth of AI and machine learning (ML) technologies, the Commission is interested in how these technologies can help analyze the large and complex datasets involved in non-federal spectrum management. The FCC sought comment on a range of topics, including how to define spectrum usage, technological advancements for real-time spectrum usage monitoring, and data collection issues. In particular, the Commission aimed to develop a better understanding of how technology can improve data collection and extrapolation, as well as spectrum utilization modeling. Chairwoman Rosenworcel, in her accompanying statement, indicated that the goal of the proceeding is to “increase our understanding of spectrum utilization and support the development of AI tools in wireless networks.” Comments in the proceeding were due on October 3, 2023, and replies were due on November 2, 2023.

The NOI is consistent with the White House Executive Order on AI, which encourages the FCC to consider how AI will impact spectrum management, spectrum sharing, and the efficiency of non-federal spectrum usage. The FCC will likely examine possible actions on this front through its Communications Security, Reliability, and Interoperability Council (CSRIC), which it re-launched this year with a focus on AI/ML technologies. CSRIC will convene for two years beginning in June 2024 and will likely explore how best to respond to and implement provisions of the executive order, including examining the potential for AI to streamline spectrum management and sharing and to use technology to enhance the nation’s communications networks.

Takeaways and Next Steps

With these actions, the FCC continues its engagement with the intersection of AI and communications. In her statement accompanying the AI robocall declaratory ruling, Commissioner Anna Gomez noted that the benefits created by AI should be “harnessed to protect consumers from harm rather than amplify the risks they face in an increasingly digital landscape.” Addressing AI-generated robocalls and improving telecommunications infrastructure with AI technology remain priorities for the Commission, alongside AI-related actions from other agencies and government entities, so we expect to see movement on this front in 2024.


