Digital Lords Committee Hears Evidence from Artificial Intelligence (AI) Firms and Regulators

November 27, 2023


On 21 November 2023, as part of its inquiry into large language models (LLMs), the UK’s Communications and Digital Lords Committee considered the role of regulators and emerging trends relating to LLMs, hearing evidence from UK regulators, Google DeepMind and Aleph Alpha. This follows the UK Competition and Markets Authority’s (CMA) recent review of artificial intelligence (AI) foundation models (including LLMs), published in September 2023 (see here for our previous client alert).

Key Takeaways

  • Regulators are broadly confident in their ability to deal with the issues that are arising, and will continue to arise, in relation to LLMs. However, they stressed the importance of cross-regulator collaboration in cases that stretch beyond the remits of individual regulators, and they are supportive of a ‘central risk function’ as described in the UK government’s AI White Paper (the White Paper) (see here).
  • Industry experts were keen to see regulation develop in a way that is sensitive to the various contexts in which LLMs are deployed. Striking an effective balance between regulatory intervention and allowing innovation to flourish will be key for developers moving forward.

Addressing LLMs From a Regulator’s Perspective

The discussion involved representatives from four UK regulators: Ofcom (Dr Yih-Choung Teh), the Information Commissioner’s Office (the ICO) (Stephen Almond), the Equality and Human Rights Commission (the EHRC) (Anna Boaden) and the CMA (Hayley Fletcher).

For the most part, the regulators agreed that their existing powers and the current regulatory regime will be sufficient to manage the problems and challenges that LLMs may pose over the next three years. However, smaller regulators, such as the EHRC, foresee resource and funding constraints challenging their ability to fulfil their role to the fullest extent possible.

Ofcom, the ICO and the CMA explained that they have taken steps to bolster their professional and technical capacity to address the increasing volume and technical complexity of LLM-related cases. The CMA, in particular, noted that it has built up a specialist data unit staffed by experienced data scientists (the Digital Markets Unit). For the EHRC, budgetary constraints have prevented it from developing significant in-house technical expertise.

Regulators were questioned on their ability to deal with challenges associated with LLMs throughout the value chain (e.g., upstream with LLM developers, downstream with end users). Ms Boaden (EHRC) noted that looking upstream presents more jurisdictional challenges for the EHRC; Ms Fletcher (CMA) and Mr Almond (ICO) noted that the CMA’s and the ICO’s powers are sufficient to approach the entire LLM value chain; and Dr Teh (Ofcom) highlighted that Ofcom’s sectoral focus means it is better equipped to investigate downstream concerns, although examining LLM inputs may be necessary in some cases.

Whilst there is some clarity over the issue of liability in LLM-related cases (e.g., the ICO will look to investigate the ‘data controller’ for the purposes of data protection law), it is clear that this issue will be informed by consultation and experience. The EHRC, in particular, noted that end users will bear a high onus to prove discrimination in LLM cases, given the scale and complexity of LLM algorithms.

The key takeaway to emerge from the discussion is that cross-regulator collaboration will be key to delivering the UK government’s vision for AI regulation, particularly where cross-cutting AI issues are concerned. Regulators were broadly positive about their existing relationships with one another. Mr Almond (ICO) drew attention to the UK government’s planned AI and Digital Hub, an advisory service benefiting from approximately £2 million in funding, which will aim to help innovators access advice on regulatory requirements and to provide a joined-up regulatory response to their questions. Dr Teh (Ofcom) also highlighted the work of the Digital Regulation Cooperation Forum (the DRCF), which provides for significant cooperation between the ICO, Ofcom, the CMA and the Financial Conduct Authority on issues relating to online regulation, as well as regulators’ statutory duties to consult one another.

Regulators also addressed the government’s ‘central risk function,’ as outlined in the White Paper. There was broad consensus that a central risk function will be key to addressing cross-cutting AI risks that fall through the gaps between the different regulatory regimes. Dr Teh (Ofcom) and Mr Almond (ICO) agreed that any central risk function should be located within, or at least close to, the UK government to ensure that the function holds influence and that any gaps are prioritised. It is understood that the Department for Science, Innovation and Technology is continuing to consider the most appropriate framework for the central risk function, with regulators engaging as appropriate.

The Future of LLMs and Opportunities for Regulation

The discussion involved representatives from Google DeepMind (Professor Zoubin Ghahramani) and Aleph Alpha (Jonas Andrulis).

Both witnesses accepted that there are significant risks associated with LLMs, and with AI more broadly, that must be balanced against the aim of fostering innovation. Mr Andrulis noted that too much regulatory focus is placed on the foundational technologies themselves, which limits the ability of models to be applied to different use cases. He is keen to see regulation that is sensitive to the levels of risk people are willing to accept in different contexts, so that models can get off the ground more quickly (e.g., the harms associated with hallucinations in an entertainment context might be less problematic than risks in medical-related AI technology).

Professor Ghahramani also accepted that a contextual approach to regulation might be preferable. He broadly welcomed the UK’s approach to regulation, including the recent AI Safety Summit in November 2023. He also noted that the UK’s AI Safety Institute will be a useful tool for developers when assessing overall risk.

On the use of copyrighted materials to train LLMs, Professor Ghahramani highlighted that LLMs cannot evaluate the copyright status of all information accessed during web scraping. However, Google DeepMind has adopted an ‘opt-out approach,’ enabling parties to exclude their data from being used to train LLMs. Mr Andrulis doubted the suggestion that copyright holders should be allowed to access developer records containing all of the information used to train LLMs, noting that this would stifle meaningful development of models.

Akin’s lawyers would be delighted to advise on any AI-related regulatory activity, including in relation to LLMs. Our newsletter Akin Intelligence also covers the latest news and developments on the AI revolution and its impact on businesses (subscribe here).


© 2024 Akin Gump Strauss Hauer & Feld LLP. All rights reserved. Attorney advertising. This document is distributed for informational use only; it does not constitute legal advice and should not be used as such. Prior results do not guarantee a similar outcome. Akin is the practicing name of Akin Gump LLP, a New York limited liability partnership authorized and regulated by the Solicitors Regulation Authority under number 267321. A list of the partners is available for inspection at Eighth Floor, Ten Bishops Square, London E1 6EG. For more information about Akin Gump LLP, Akin Gump Strauss Hauer & Feld LLP and other associated entities under which the Akin Gump network operates worldwide, please see our Legal Notices page.