Definition
The EU AI Act (Regulation (EU) 2024/1689) is an EU regulation that governs AI systems using a risk-based approach. It prohibits certain practices outright, imposes strict obligations on high-risk systems, and sets transparency requirements for some AI interactions, such as chatbots and AI-generated content.
Why it matters
- Compliance: it can impose concrete requirements on how AI systems are built and used.
- Contracts and procurement: buyers and vendors increasingly ask for evidence of compliance (technical documentation, risk controls).
- Operational impact: governance, oversight, and monitoring become mandatory for some systems.
How it works
Identify role + classify system risk -> apply requirements -> document -> monitor
Practical steps usually include: define intended purpose, determine whether a system falls into a regulated category, implement risk controls, and maintain the required documentation and logs.
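To make the workflow concrete, here is a minimal Python sketch of the triage step: classify a system into a risk tier from its intended purpose, then list indicative duties for the role. The category names, use-case tags, and duty lists are illustrative assumptions, not the Act's legal definitions; real classification turns on the Act's articles and annexes and typically needs legal review.

```python
from dataclasses import dataclass

# Hypothetical tier membership; the Act defines the real categories
# in its articles and annexes, and classification needs legal analysis.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {"recruitment_screening", "credit_scoring", "exam_proctoring"}
TRANSPARENCY_USES = {"chatbot", "content_generation"}

@dataclass
class AISystem:
    name: str
    intended_purpose: str   # short tag describing the regulated use
    role: str               # "provider" or "deployer"

def classify(system: AISystem) -> str:
    """Rough triage into risk tiers (illustrative only)."""
    if system.intended_purpose in PROHIBITED_USES:
        return "prohibited"
    if system.intended_purpose in HIGH_RISK_USES:
        return "high-risk"
    if system.intended_purpose in TRANSPARENCY_USES:
        return "transparency"
    return "minimal"

def obligations(system: AISystem) -> list[str]:
    """Map (tier, role) to indicative duties from the workflow above."""
    tier = classify(system)
    if tier == "prohibited":
        return ["do not place on the market or use"]
    if tier == "high-risk" and system.role == "provider":
        return ["risk management", "technical documentation",
                "conformity assessment", "post-market monitoring"]
    if tier == "high-risk" and system.role == "deployer":
        return ["human oversight", "use per instructions", "log retention"]
    if tier == "transparency":
        return ["disclose AI interaction / AI-generated content"]
    return ["no specific obligations (voluntary codes may apply)"]

print(obligations(AISystem("screening-bot", "recruitment_screening", "provider")))
```

The point of the sketch is the shape of the decision, not the mappings: intended purpose and role drive everything downstream, which is why defining them precisely is the first practical step.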
Practical example
An AI tool used in a regulated workflow is assessed to determine whether it qualifies as high-risk. If it does, the provider prepares technical documentation and completes the conformity assessment, while deployers ensure human oversight and use consistent with the provider's instructions.
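Both roles also need evidence that the assessment happened. The record below is a minimal, hypothetical sketch of what such evidence might capture for audits or procurement reviews; the field names are assumptions, not terms defined by the Act.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AssessmentRecord:
    """Illustrative evidence record; field names are assumptions."""
    system_name: str
    intended_purpose: str
    risk_tier: str                 # outcome of the classification step
    conformity_completed: bool     # provider-side step for high-risk systems
    oversight_owner: str           # deployer-side named human overseer
    assessed_on: str

record = AssessmentRecord(
    system_name="screening-bot",
    intended_purpose="recruitment_screening",
    risk_tier="high-risk",
    conformity_completed=True,
    oversight_owner="hr-compliance-team",
    assessed_on=date.today().isoformat(),
)

# Persist as an audit-friendly JSON line alongside the system's logs.
print(json.dumps(asdict(record)))
```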
Common questions
Q: Does the EU AI Act apply to every AI tool?
A: No. Obligations depend on the type of system, intended use, role (provider/deployer), and risk classification.
Q: Is this only a provider problem?
A: Not always. Deployers can have obligations too, especially for high-risk systems and certain operational controls.
Related terms
- High-Risk AI System — when strict obligations apply
- AI Conformity Assessment — required compliance procedure
- AI Documentation Requirements — evidence for audits and users
- Human Oversight — meaningful human control
- Algorithmic Transparency — disclosures and limits
- AI Risk Management — risk controls in practice
References
Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).