AI Auditing Framework – Draft ICO Guidance Published for Consultation

Feb 26, 2020

Reading Time: 2 min

By: Jenny Arlington, Cassandra Padget (Trainee Solicitor)

The ICO states that it understands the distinct benefits that AI can bring, but also the risks it can pose to the rights and freedoms of individuals. AI is therefore one of the ICO’s top three strategic priorities, which is why the ICO decided to develop a framework for auditing AI compliance with data protection obligations. The framework comprises (i) auditing tools and procedures that the ICO will use in its audits and investigations and (ii) the draft guidance, which includes indicative risk and control measures that organizations and individuals can deploy when they use AI to process personal data. The framework connects with other ICO work streams relating to AI, including the draft ExplAIn guidance on how organizations can best explain their use of AI to individuals (due to be published in final form later this year) and the ICO’s investigation into the use of live facial recognition technology.

The draft guidance is the first of its kind in that it focuses broadly on the management of several different risks arising from AI systems, as well as on governance and accountability measures. The guidance sets out “best practice” and is not a statutory code. It is structured in four parts, each of which corresponds to different data protection principles. It recommends organizational and technical measures aimed at mitigating the risks to individuals that AI may cause or exacerbate. For example, the guidance includes sample risk statements and proposed controls, which are either preventative, detective or corrective.

The ICO has stressed that it aims for the guidance to be both conceptually sound and applicable to real-life situations, because it will shape how the ICO regulates in this space. The ICO considers feedback from those developing and implementing AI systems to be essential, and it therefore seeks input from two groups in particular:

  • Those with a compliance focus, such as data protection officers, general counsel and risk managers, in particular those with responsibilities for signing off on the implementation of AI systems in their organizations.
  • Technology specialists, including machine learning experts, data scientists, software developers and engineers, and cybersecurity and information technology (IT) risk managers.

The consultation on the draft guidance will close on April 1, 2020, and any feedback must be provided by then, either via the online questionnaire or by e-mailing the ICO at AIAuditingFramework@ico.org.uk.

The final guidance is expected to be published in the summer of 2020.

© 2024 Akin Gump Strauss Hauer & Feld LLP. All rights reserved. Attorney advertising. This document is distributed for informational use only; it does not constitute legal advice and should not be used as such. Prior results do not guarantee a similar outcome. Akin is the practicing name of Akin Gump LLP, a New York limited liability partnership authorized and regulated by the Solicitors Regulation Authority under number 267321. A list of the partners is available for inspection at Eighth Floor, Ten Bishops Square, London E1 6EG. For more information about Akin Gump LLP, Akin Gump Strauss Hauer & Feld LLP and other associated entities under which the Akin Gump network operates worldwide, please see our Legal Notices page.