Definition
AI risk management is the disciplined practice of understanding how an AI system can fail, who it can harm, and which controls reduce that risk. It is continuous: risks change whenever the data, models, users, or regulations change.
Why it matters
- High-stakes domains (law, tax) need predictable and defensible behavior.
- Regulation increasingly expects documented risk controls.
- Incidents are expensive: reputational damage often exceeds technical cost.
How it works
Identify -> assess -> mitigate -> monitor -> respond -> improve
Typical risks include incorrect outputs, bias, data leakage, misuse, overreliance, and unclear accountability.
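As a rough illustration of how that cycle can be tracked, the sketch below models a lightweight risk register in Python; the field names, five-point scoring scale, and stage labels are assumptions made for illustration, not part of any specific framework.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Stage(Enum):
    """Lifecycle stages mirroring the cycle above."""
    IDENTIFIED = "identified"
    ASSESSED = "assessed"
    MITIGATED = "mitigated"
    MONITORED = "monitored"
    RESPONDED = "responded"


@dataclass
class Risk:
    """One entry in an AI risk register (fields and scale are illustrative)."""
    name: str                               # e.g. "data leakage"
    harm: str                               # who or what is affected
    likelihood: int                         # 1 (rare) .. 5 (frequent)
    impact: int                             # 1 (minor) .. 5 (severe)
    controls: list[str] = field(default_factory=list)
    stage: Stage = Stage.IDENTIFIED
    last_reviewed: date | None = None

    @property
    def score(self) -> int:
        # Simple likelihood x impact product; real programs often use richer models.
        return self.likelihood * self.impact


# Walk one risk through part of the cycle.
leakage = Risk(name="data leakage", harm="client confidentiality", likelihood=2, impact=5)
leakage.controls.append("access control on source documents")   # mitigate
leakage.stage = Stage.MONITORED                                  # monitor
leakage.last_reviewed = date.today()
print(leakage.name, leakage.score, leakage.stage.value)          # data leakage 10 monitored
```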
Practical example
For an AI research tool, you track risks such as hallucinated citations, reliance on stale law, and breaches of client confidentiality. Controls include source citation, freshness checks, access control, and mandatory human review for client-facing advice.
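The sketch below shows how two of those controls, freshness checks and mandatory human review, might be wired into the tool's output path; the Citation and Answer structures, the control_checks function, and the 90-day staleness threshold are all illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Citation:
    source_id: str
    as_of: date            # when the cited authority was last verified as current


@dataclass
class Answer:
    text: str
    citations: list[Citation]
    client_facing: bool


# Illustrative threshold; a real program would set this per jurisdiction and practice area.
MAX_STALENESS = timedelta(days=90)


def control_checks(answer: Answer, verified_sources: set[str], today: date) -> list[str]:
    """Return control findings; an empty list means the answer passes the automated checks."""
    findings: list[str] = []
    if not answer.citations:
        findings.append("no citations: possible hallucinated or unsupported claim")
    for c in answer.citations:
        if c.source_id not in verified_sources:
            findings.append(f"citation {c.source_id} is not in the verified source list")
        elif today - c.as_of > MAX_STALENESS:
            findings.append(f"citation {c.source_id} may reflect stale law (verified {c.as_of})")
    if answer.client_facing:
        findings.append("client-facing advice: route to mandatory human review")
    return findings
```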
Common questions
Q: Is AI risk management only for “high-risk” systems?
A: No. The controls scale with impact. Even low-risk tools benefit from basic monitoring and documentation.
Q: What is the most common failure mode in practice?
A: Usually not the model itself but the process around it: unclear intended use, weak oversight, and no monitoring.
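As a minimal sketch of what basic monitoring can mean in practice, the example below writes one structured log record per interaction, capturing the documented intended use and any control findings; the field names, logger name, and log format are assumptions, and real deployments would feed these records into whatever monitoring stack is already in place.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_risk_monitoring")


def log_interaction(tool: str, intended_use: str, findings: list[str]) -> None:
    """Write one structured monitoring record per model interaction (fields are illustrative)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "intended_use": intended_use,  # documenting intended use counters the most common failure mode
        "findings": findings,          # output of automated control checks, if any
        "needs_review": bool(findings),
    }
    logger.info(json.dumps(record))


# Example: record one interaction flagged by the automated checks.
log_interaction("research-assistant", "internal drafting support",
                ["no citations: possible hallucinated or unsupported claim"])
```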
Related terms
- Responsible AI — the broader practice
- AI Governance Framework — roles and controls
- EU AI Act — risk-based obligations
- High-Risk AI System — compliance triggers
- Human Oversight — intervention and review
- Model Accountability — ownership and traceability
- Data Ethics — safe data handling
- Bias Mitigation — reduce unfair outcomes
References
NIST (2023), AI Risk Management Framework (AI RMF 1.0).