Regulation
EU AI Act
The European Union's regulation governing AI systems by risk tier (unacceptable, high, limited, minimal); the world's first comprehensive AI law, with enforcement phasing in from 2024 to 2027.
The EU AI Act (Regulation (EU) 2024/1689) is the European Union’s horizontal regulation of AI systems, in force since August 2024 and phasing in through 2027. It is the first comprehensive AI law in any major jurisdiction and follows a risk-based approach.
Risk tiers:
- Unacceptable risk (prohibited from Feb 2025): social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), subliminal or manipulative techniques, exploiting vulnerabilities of specific groups, untargeted scraping of facial images
- High risk (full requirements from Aug 2026): AI in employment, education, critical infrastructure, healthcare, law enforcement, migration, essential services. Requires risk management, data governance, transparency, human oversight, accuracy, cybersecurity, conformity assessment.
- Limited risk: chatbots, emotion-recognition systems, and deepfakes must disclose AI use (Article 50)
- Minimal risk: no mandatory obligations (e.g., AI spam filters)
General-Purpose AI (GPAI) has its own track. All GPAI providers must publish training data summaries, respect copyright, and document the model. Systemic-risk GPAI (training compute > 10²⁵ FLOPs) faces additional safety, evaluation, and red-teaming obligations.
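The systemic-risk threshold is a plain compute cutoff, which can be checked with simple arithmetic. The sketch below uses the common 6 × parameters × training-tokens approximation for transformer training compute; that approximation is a widely used estimate, not part of the Act, and the model sizes in the example are hypothetical.

```python
# Rough check against the EU AI Act's systemic-risk presumption threshold
# (cumulative training compute > 10^25 FLOPs). The 6*N*D formula is a common
# estimate of transformer training compute, not something the Act prescribes.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute as 6 * parameters * tokens."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated compute exceeds the Act's 10^25 FLOP threshold."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the threshold.
print(presumed_systemic_risk(7e10, 1.5e13))   # False
# Hypothetical 1T-parameter model on the same data: 9e25 FLOPs, above it.
print(presumed_systemic_risk(1e12, 1.5e13))   # True
```

In practice a provider near the cutoff would need a careful accounting of cumulative compute, not a back-of-envelope estimate, but the threshold logic itself is this simple.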
Key dates:
- Aug 2024: in force
- Feb 2025: unacceptable risk + AI literacy obligations
- Aug 2025: GPAI obligations begin
- Aug 2026: full high-risk obligations + Article 50 transparency
- Aug 2027: high-risk AI embedded in regulated products (Annex I) and GPAI models already on the market must comply
Penalties scale with severity, up to €35M or 7% of global annual turnover, whichever is higher, for prohibited practices. The Act has extraterritorial reach: it applies to providers placing AI systems on the EU market or whose systems' output is used in the EU, regardless of where the provider is established.
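The top fine is a "whichever is higher" formula, so the effective cap depends on turnover. A minimal sketch (the turnover figure in the example is hypothetical):

```python
# Maximum fine for prohibited-practice violations under the EU AI Act:
# the higher of EUR 35 million or 7% of worldwide annual turnover.

def max_fine_prohibited_practices(global_annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Small firm, EUR 100M turnover: the EUR 35M floor dominates.
print(max_fine_prohibited_practices(100e6))    # 35000000.0
# Large firm, EUR 2B turnover: 7% = EUR 140M dominates.
print(max_fine_prohibited_practices(2e9))      # 140000000.0
```

The same higher-of structure applies at lower amounts for less severe violations (e.g., other obligations and supplying incorrect information carry lower caps).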