Definition
A high-risk AI system is one that falls into a category the EU AI Act designates as high-risk, typically because its intended use can significantly affect people’s rights, safety, or access to essential services.
Why it matters
- More obligations: high-risk classification triggers stricter requirements than lower-risk categories.
- Operational controls: risk management, documentation, and oversight become central.
- Enforcement: penalties and audits focus on high-impact uses.
How it works
Define intended use -> check AI Act categories -> implement controls -> document -> assess conformity
The exact obligations depend on role (provider vs deployer) and the specific high-risk category.
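As a rough sketch only, that workflow can be modeled as a checklist. The category names and control labels below are invented for illustration; the Act defines its categories in legal language, not as identifiers like these:

```python
from dataclasses import dataclass, field

# Hypothetical category labels loosely echoing Annex III themes (illustrative only).
HIGH_RISK_CATEGORIES = {"employment", "essential_services", "education", "law_enforcement"}

@dataclass
class AISystemRecord:
    name: str
    intended_use: str          # step 1: define intended use
    category: str              # step 2: map the use to an Act category
    controls: list[str] = field(default_factory=list)   # step 3: implement controls
    documented: bool = False                            # step 4: document
    conformity_assessed: bool = False                   # step 5: assess conformity

def is_high_risk(record: AISystemRecord) -> bool:
    return record.category in HIGH_RISK_CATEGORIES

def ready_for_production(record: AISystemRecord) -> bool:
    if not is_high_risk(record):
        return True  # lighter obligations apply; not modeled here
    required = {"risk_management", "human_oversight", "logging"}
    return (required.issubset(record.controls)
            and record.documented
            and record.conformity_assessed)

# Example: a CV-screening tool mapped to the (hypothetical) employment label.
screener = AISystemRecord(
    name="cv-screener",
    intended_use="rank incoming job applications",
    category="employment",
    controls=["risk_management", "human_oversight", "logging"],
    documented=True,
    conformity_assessed=True,
)
assert is_high_risk(screener) and ready_for_production(screener)
```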
Practical example
An AI system used to support decisions in a regulated process, such as screening job applications (employment is an Annex III category), is treated as potentially high-risk. The organization implements human oversight, logging, and documentation before relying on it in production.
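A minimal sketch of what the logging and human-oversight controls in this example might look like in application code. The function decide_with_oversight, the model callable, and the reviewer callback are all hypothetical names invented for this sketch, not part of the Act or any library:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("decision-support")

def decide_with_oversight(case_id, features, model, reviewer):
    """Propose a decision, log an audit trail, and defer to a human reviewer.

    `model(features)` returns a suggested outcome; `reviewer(case_id, suggestion)`
    is the human in the loop who confirms or overrides it.
    """
    suggestion = model(features)

    # Logging control: capture inputs, output, and a timestamp for audits.
    audit_log.info(json.dumps({
        "case_id": case_id,
        "at": datetime.now(timezone.utc).isoformat(),
        "features": features,
        "suggestion": suggestion,
    }))

    # Human oversight control: the system only recommends; a person decides.
    final = reviewer(case_id, suggestion)
    if final != suggestion:
        audit_log.info(json.dumps({"case_id": case_id, "override": final}))
    return final

# Example with toy stand-ins for the model and the reviewer.
outcome = decide_with_oversight(
    "case-42",
    {"score": 0.81},
    model=lambda f: "approve" if f["score"] > 0.7 else "refer",
    reviewer=lambda cid, s: s,  # this reviewer accepts the suggestion
)
```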
Common questions
Q: Does “high-risk” mean the model is unsafe?
A: Not necessarily. It’s a legal classification based on use case and impact, not a statement about model quality.
Q: Can a general-purpose model become high-risk?
A: It can. When a general-purpose model is integrated into a use case the Act lists as high-risk, the resulting system carries high-risk obligations; intended use and deployment context drive the classification.
Related terms
- EU AI Act — classification and obligations
- AI Conformity Assessment — compliance process
- AI Documentation Requirements — required evidence
- Human Oversight — mandatory controls in many contexts
- AI Risk Management — structured controls
References
Regulation (EU) 2024/1689 (EU AI Act).