AI Act Article 50 (transparency)

Article 50 of the EU AI Act sets transparency rules: chatbots, deepfakes, and AI-generated content must be clearly disclosed. The obligations apply from 2 August 2026.

Article 50 of the EU AI Act introduces transparency obligations for AI systems that interact directly with humans or generate synthetic content. The aim is that users know when they are talking to an AI or looking at AI-generated material.

Key obligations (enforced from 2 August 2026):

  • Chatbots and AI assistants: must clearly disclose that the counterpart is an AI system, unless this is obvious from context
  • Deepfakes: synthetic video, image, or audio depicting real persons must carry a visible label indicating it is AI-generated
  • AI-generated text of public interest: news articles or informational content produced by AI must be labelled, unless human editorial review applies
  • Emotion-recognition or biometric-categorization systems: must notify the person exposed to the system

Labelling must work on two layers: machine-readable marking embedded in the output itself (e.g. C2PA provenance metadata or watermarks) and human-perceivable disclosure (visible text or a visual marker). Technical standards for AI-content watermarking are still under development.
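To illustrate the two layers, here is a minimal sketch that wraps article HTML with both a machine-readable marker and a visible notice. The IPTC `trainedAlgorithmicMedia` digital source type is a real vocabulary term, but the `meta` name, page structure, and helper function are illustrative assumptions; real deployments would typically embed provenance in C2PA manifests or XMP metadata rather than an HTML meta tag.

```python
# Hedged sketch of dual-layer labelling for an AI-generated HTML page.
# Real vocabulary term from the IPTC Digital Source Type NewsCodes:
IPTC_AI_GENERATED = (
    "https://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def label_page(body_html: str, title: str) -> str:
    """Wrap article HTML with a machine-readable marker (meta tag, an
    assumed convention) and a human-perceivable notice (visible banner)."""
    return (
        "<!doctype html><html><head>"
        f"<title>{title}</title>"
        # Machine-readable layer: declared digital source type.
        f'<meta name="iptc:digitalsourcetype" content="{IPTC_AI_GENERATED}">'
        "</head><body>"
        # Human-perceivable layer: visible disclosure before the content.
        '<p class="ai-label">This content was generated by an AI system.</p>'
        f"{body_html}"
        "</body></html>"
    )
```

The same idea transfers to images and video, where the machine-readable layer lives in the media file's metadata or an invisible watermark instead of the surrounding markup.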

Penalties for non-compliance with Article 50 reach up to €15 million or 3% of global annual turnover, whichever is higher (less severe than the fines for prohibited practices, but still significant). The Act also has extraterritorial reach: any provider serving users in the EU falls under it.

Our site applies Article 50 voluntarily: every article carries a visible AI-generated label placed above the body, so the reader knows before reading.
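The "label above the body" rule can be checked mechanically. The sketch below is our own convention, not something mandated by the Act; the label text and the `<article>` marker are assumptions about how pages are rendered.

```python
# Minimal sketch: verify that the visible AI disclosure label appears,
# and appears before the article body, in rendered HTML.

AI_LABEL = "This article was generated with the help of an AI system."

def label_precedes_body(html: str, body_marker: str = "<article") -> bool:
    """True if the disclosure label is present and precedes the body."""
    label_pos = html.find(AI_LABEL)
    body_pos = html.find(body_marker)
    return label_pos != -1 and (body_pos == -1 or label_pos < body_pos)
```

A check like this can run in CI, so a page without its disclosure banner never ships.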
