European Commission Launches Consultation to Develop Guidelines and Code of Practice on Transparent AI Systems

September 4, 2025

European Union | International

Summary

On September 4, 2025, the European Commission (Commission) launched a public consultation to aid in developing guidelines and a code of practice on AI transparency obligations applicable to providers and deployers of certain AI systems under the EU’s Artificial Intelligence Act (AI Act). One of the key aims of the AI Act is to encourage responsible and trustworthy AI development and deployment in the EU; the consultation therefore marks an important step toward establishing common guidelines on transparency for AI systems.

The consultation targets a wide range of stakeholders, including but not limited to providers and deployers of AI systems, supervisory authorities, governments, civil society organizations, academia, research institutions, and the general public.

Among other things, the AI Act subjects providers and deployers of certain interactive and generative AI systems to transparency obligations, including the following:

  1. Providers of AI systems designed to directly interact with natural persons must ensure that those persons are informed they are interacting with an AI system unless it is otherwise obvious.
  2. Providers of AI systems that generate synthetic audio, image, video or text content must ensure outputs are marked in a machine-readable format and detectable as artificially generated or manipulated.
  3. Deployers of emotion recognition or biometric categorization AI systems must ensure individuals exposed to these systems are informed about their operation.
  4. Deployers of AI systems that generate or manipulate image, audio, or video content constituting deepfakes must disclose that the content has been artificially generated or manipulated.
  5. All of the above must be communicated clearly and distinguishably no later than the time of the first interaction or exposure.

These transparency obligations (see Article 50 of the AI Act) will apply from August 2, 2026. The AI Act also requires the Commission to issue guidelines on the practical implementation of these obligations, while the AI Office will encourage and facilitate the drawing up of codes of practice to aid effective implementation.

The purpose of the consultation is to collect feedback from stakeholders, which will inform the Commission’s guidelines and a code of practice on the detection and labeling of artificially generated or manipulated content. Stakeholders are encouraged to submit feedback before the consultation’s closing date on October 2, 2025.
