
High-Risk AI System

A high-risk AI system is an AI system classified as high-risk under the EU AI Act due to its intended use, triggering stricter obligations and controls.

Also known as: High-risk system, Annex III system

Definition

A high-risk AI system is a system that falls into a category defined by the EU AI Act as high-risk, typically because its intended use can significantly affect people’s rights, safety, or access to essential services.

Why it matters

  • More obligations: high-risk classification triggers requirements such as risk management, data governance, and conformity assessment.
  • Operational controls: documentation, logging, and human oversight become central to day-to-day operation.
  • Enforcement: penalties and audits concentrate on high-impact uses.

How it works

Define intended use -> check AI Act categories -> implement controls -> document -> assess conformity

The exact obligations depend on the organization's role (provider vs. deployer) and the specific high-risk category.
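The workflow above can be sketched as a simple intake check. This is a hedged illustration, not legal logic: the category names paraphrase Annex III areas, the role-based obligation lists are simplified, and a real classification requires analysis of Annex III together with the Act's classification rules.

```python
# Illustrative high-risk areas, loosely paraphrasing Annex III (not exhaustive,
# not a legal test).
HIGH_RISK_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration",
    "justice",
}

def classify(intended_use_area: str, role: str) -> dict:
    """Return a rough classification record for one intended use.

    Obligations differ by role; this provider/deployer split is a
    simplification of the Act's actual requirements.
    """
    high_risk = intended_use_area in HIGH_RISK_AREAS
    obligations = []
    if high_risk:
        if role == "provider":
            obligations = ["risk management", "technical documentation",
                           "logging capability", "conformity assessment"]
        elif role == "deployer":
            obligations = ["human oversight", "input data checks",
                           "usage monitoring"]
    return {"high_risk": high_risk, "role": role, "obligations": obligations}

# A hiring-support use case would land in the employment area:
record = classify("employment", "deployer")
print(record["high_risk"], record["obligations"])
```

The point of the sketch is that classification is driven by intended use and role, not by model quality: the same model scored against a different `intended_use_area` can come out non-high-risk.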

Practical example

An AI system used to support decisions in a regulated process is treated as potentially high-risk. The organization implements human oversight, logging, and documentation before relying on it in production.

Common questions

Q: Does “high-risk” mean the model is unsafe?

A: Not necessarily. It’s a legal classification based on use case and impact, not a statement about model quality.

Q: Can a general-purpose model become high-risk?

A: It depends on how it’s used and integrated. Intended use and deployment context matter.


References

Regulation (EU) 2024/1689 (EU AI Act).