🔴 ⚖️ Regulation · Friday, May 8, 2026 · 2 min read

EU AI Office: Commission opens consultation on draft AI Act transparency guidelines


The European Commission has published draft guidelines on AI Act transparency obligations and opened a public consultation. The consultation runs until 3 June 2026, with obligations taking effect on 2 August 2026. Providers must label AI-generated content with machine-readable markers and notify users when they interact with AI systems.

🤖 This article was generated using artificial intelligence from primary sources.

On 7 May 2026, the European Commission published draft guidelines on the AI Act’s transparency obligations and opened a public consultation to gather feedback before final adoption. The document operationalises Article 50 of the regulation and clarifies what providers and deployers of AI systems must do from August 2026.

What are the deadlines and who must act?

The public consultation is open until 3 June 2026 and is addressed to AI system providers and developers, deployers, public bodies, academic institutions, research organisations and citizens. The final guidelines are expected to be adopted before 2 August 2026, when Article 50 of the AI Act takes legal effect. The Commission simultaneously announced a separate, voluntary code of practice on the labelling of AI-generated content, being developed by independent experts and expected in June 2026 as a tool for demonstrating compliance.

What do the transparency obligations concretely require?

Providers of AI systems will be required to notify users in the European Union when they are interacting with an AI system or when they have encountered AI-generated content. In addition, any AI-generated or AI-manipulated content must carry machine-readable markers: technical metadata or watermarks that enable automated detection of its artificial origin. Beyond the machine-readable layer, visible notices are required for four categories: deepfake recordings, AI-generated publications of public interest, emotion recognition systems, and biometric categorisation tools.
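To make the idea of a machine-readable marker concrete, here is a minimal illustrative sketch in Python: a provenance record containing a disclosure claim, the name of the generating system, and a cryptographic hash of the content. This is a hypothetical schema for illustration only, loosely inspired by provenance manifests such as C2PA; real C2PA Content Credentials are signed binary structures embedded in the asset itself, not plain JSON.

```python
import hashlib
import json

def make_provenance_manifest(content: bytes, generator: str) -> str:
    """Build a minimal, illustrative provenance record for AI-generated
    content. NOTE: hypothetical schema for demonstration; it is not the
    C2PA format and would not satisfy Article 50 on its own."""
    manifest = {
        "claim": "ai_generated",          # disclosure that the content is AI-generated
        "generator": generator,           # which AI system produced the content
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds marker to content
    }
    return json.dumps(manifest, sort_keys=True)

# Example: mark a piece of AI-generated text (model name is made up).
marker = make_provenance_manifest(b"example AI-generated text", "example-model-v1")
print(marker)
```

Because the record includes a hash of the content, a verifier can recompute the digest and detect whether the marked content was altered after generation, which is the basic property watermarking and metadata standards aim to provide.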

How does this differ from the code of practice?

The guidelines now in consultation are the official interpretation of the regulation and are legally binding in the sense that supervisory authorities use them when assessing compliance. The code of practice announced for June is a voluntary instrument providing concrete technical solutions — for example which watermark formats to use or how to embed them in a model — and makes it easier for companies that adopt it to demonstrate compliance. The Commission positions both documents as complementary: the guidelines define “what”, the code defines “how”.

What does this mean for providers outside the EU?

The obligations apply extraterritorially — any AI system whose outputs are consumed in the EU must comply, regardless of where the provider is headquartered. This includes generative models for text, images, audio and video, as well as classifiers performing emotion recognition or biometric categorisation. Penalties for non-compliance can reach up to €35 million or 7% of global annual turnover, whichever is higher.

Frequently Asked Questions

Until when does the public consultation on the draft guidelines run?
The consultation is open until 3 June 2026 and is addressed to providers, deployers, public bodies, academia and citizens.
When do the transparency obligations take effect?
The obligations under Article 50 of the AI Act apply from 2 August 2026, regardless of when the final guidelines are adopted.
What are machine-readable markers?
Technical metadata or watermarks embedded in AI-generated content that enable automated detection of its artificial origin, typically through standards such as C2PA.