European Commission Launches Consultation to Develop Guidelines and Code of Practice on Transparent AI Systems

Summary
On September 4, 2025, the European Commission (Commission) launched a public consultation to inform the development of guidelines and a code of practice on the AI transparency obligations that apply to providers and deployers of certain AI systems under the EU's Artificial Intelligence Act (AI Act). Because a key aim of the AI Act is to encourage responsible and trustworthy AI development and deployment in the EU, the consultation marks an important step toward common guidelines on transparent AI systems.
The consultation targets a wide range of stakeholders, including but not limited to providers and deployers of AI systems, supervisory authorities, governments, civil society organizations, academia, research institutions, and the general public.
Among other things, the AI Act subjects providers and deployers of interactive and generative AI systems to certain transparency obligations, including the following:
- Providers of AI systems designed to directly interact with natural persons must ensure that those persons are informed they are interacting with an AI system unless it is otherwise obvious.
- Providers of AI systems must ensure outputs are marked in a machine-readable format and detectable as artificially generated or manipulated.
- Deployers of emotion recognition or biometric categorization AI systems must ensure individuals exposed to these systems are informed about their operation.
- Deployers of AI systems that generate or manipulate image, audio, or video content constituting deepfakes must disclose that the content has been artificially generated or manipulated.
- All of the above must be communicated clearly and distinguishably, at the latest at the time of the first interaction or exposure.
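To illustrate the machine-readable marking obligation above in concrete terms: the AI Act does not prescribe a specific technical format, and real deployments typically rely on standards such as C2PA or IPTC metadata. The following is a minimal, hypothetical sketch of attaching a machine-readable provenance record to a generated output; all field names and the `label_output` helper are illustrative assumptions, not a prescribed schema.

```python
import json

def label_output(content: bytes, generator: str) -> dict:
    """Attach a simple machine-readable provenance record to an AI output.

    Hypothetical sketch only: the AI Act does not mandate this schema.
    Production systems would use an established provenance standard
    (e.g., C2PA manifests or watermarking) rather than a bare JSON record.
    """
    return {
        "ai_generated": True,            # discloses artificial origin
        "generator": generator,          # name of the generating system
        "content_length": len(content),  # ties the record to the payload
    }

# Example: label a piece of generated content and serialize the record
record = label_output(b"example model output", generator="example-model-v1")
print(json.dumps(record))
```

A record like this would be shipped alongside (or embedded within) the output so that downstream tools can detect the artificial origin programmatically, which is the substance of the marking obligation.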
These transparency obligations (see Article 50 of the AI Act) will apply from August 2, 2026. The AI Act also requires the Commission to issue guidelines on the practical implementation of these obligations, while the AI Office will encourage and facilitate the drawing up of codes of practice to aid effective implementation.
The purpose of the consultation is to collect stakeholder feedback that will inform the Commission's guidelines and a Code of Practice on the detection and labeling of artificially generated or manipulated content. Stakeholders are encouraged to submit feedback before the consultation's closing date of October 2, 2025.