EU AI Act

The EU AI Act is the European Union’s risk-based regulation for artificial intelligence, setting obligations for providers and deployers depending on system risk.

Also known as: AI Act, Regulation (EU) 2024/1689

Definition

The EU AI Act is an EU regulation that governs AI systems using a risk-based approach. It prohibits certain practices outright, imposes strict obligations on high-risk systems, and sets transparency requirements for some AI interactions, such as chatbots and AI-generated content.

Why it matters

  • Compliance: it can impose concrete requirements on how AI systems are built and used.
  • Contracts and procurement: vendors and clients will ask for evidence (documentation, controls).
  • Operational impact: governance, oversight, and monitoring become mandatory for some systems.

How it works

Identify role + classify system risk -> apply requirements -> document -> monitor

Practical steps usually include: define intended purpose, determine whether a system falls into a regulated category, implement risk controls, and maintain the required documentation and logs.
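
As a rough illustration of that flow (a minimal sketch, not a legal assessment: the tier names, criteria flags, and obligation lists below are simplified placeholders, not the Act's actual definitions), the following models how a team might triage a system before consulting the regulation itself:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical, simplified risk tiers; the Act's real categories and
# criteria are defined in the regulation and must be checked there.
class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    TRANSPARENCY = "transparency"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    intended_purpose: str
    role: str                          # "provider" or "deployer"
    prohibited_practice: bool = False  # matches a banned practice?
    regulated_use_case: bool = False   # falls in a listed high-risk area?
    interacts_with_people: bool = False

def classify(system: AISystem) -> RiskTier:
    """Toy classification: check tiers in order of severity."""
    if system.prohibited_practice:
        return RiskTier.PROHIBITED
    if system.regulated_use_case:
        return RiskTier.HIGH_RISK
    if system.interacts_with_people:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

def obligations(system: AISystem) -> list[str]:
    """Map tier + role to illustrative (non-exhaustive) obligations."""
    tier = classify(system)
    if tier is RiskTier.PROHIBITED:
        return ["do not place on the market or use"]
    duties: list[str] = []
    if tier is RiskTier.HIGH_RISK:
        if system.role == "provider":
            duties += ["risk management", "technical documentation",
                       "conformity assessment", "logging"]
        else:  # deployer
            duties += ["human oversight", "use per instructions",
                       "monitor operation"]
    if tier is RiskTier.TRANSPARENCY:
        duties.append("disclose AI interaction to users")
    return duties or ["no specific obligations beyond general law"]
```

Calling `obligations(AISystem("scoring", "credit decisions", "deployer", regulated_use_case=True))` would return the deployer-side duties, which mirrors the point above: the same system triggers different obligations depending on your role.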

Practical example

An AI tool used in a regulated workflow is first assessed to determine whether it qualifies as high-risk. If it does, the provider prepares technical documentation and completes the conformity steps, while deployers ensure human oversight and proper use.
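
To make the deployer side concrete, here is a minimal sketch of how human-oversight decisions might be recorded. It assumes an in-memory log and invented field names (`reviewer`, `decision`, `rationale`); a real deployment would need durable, auditable storage and retention aligned with the Act's logging requirements:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical record a deployer might keep to evidence human
# oversight of a high-risk system; field names are illustrative.
@dataclass
class OversightRecord:
    system_name: str
    reviewer: str
    decision: str   # e.g. "accepted" or "overridden"
    rationale: str
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

log: list[OversightRecord] = []

def record_review(system_name: str, reviewer: str,
                  decision: str, rationale: str) -> OversightRecord:
    """Append an auditable entry each time a human reviews an AI output."""
    entry = OversightRecord(system_name, reviewer, decision, rationale)
    log.append(entry)
    return entry

# Example: a reviewer overrides a model output and documents why.
record_review("credit-scoring-model", "j.doe",
              "overridden", "applicant data was outdated")
```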

Common questions

Q: Does the EU AI Act apply to every AI tool?

A: No. Obligations depend on the type of system, intended use, role (provider/deployer), and risk classification.

Q: Is this only a provider problem?

A: Not always. Deployers can have obligations too, especially for high-risk systems and certain operational controls.


References

Regulation (EU) 2024/1689 of the European Parliament and of the Council (Artificial Intelligence Act).