Definition
An AI governance framework defines how an organization makes decisions about AI: who owns each system, who approves changes, what rules apply, and how risks and incidents are handled. It turns “Responsible AI” principles into concrete, repeatable processes.
Why it matters
- Clear ownership prevents unmanaged models and shadow deployments.
- Repeatable approvals reduce compliance and operational risk.
- Auditability makes it possible to prove what happened, when, and why.
How it works
Policies -> roles (RACI) -> controls -> documentation -> monitoring -> incident response
Policies set the rules, a RACI matrix assigns accountability, controls enforce the rules in day-to-day work, documentation creates an audit trail, monitoring detects drift and misuse, and incident response handles failures.
Common elements include: an AI inventory, acceptable-use rules, vendor due diligence, change management, and a review board for high-impact use cases.
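As an illustration only, the sketch below shows what a single entry in an AI inventory might capture. The field names (owner, intended_use, risk_tier, and so on) are assumptions made for this example, not fields taken from any specific standard.

```python
# Minimal sketch of an AI inventory entry; all names are illustrative.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AISystemRecord:
    system_id: str                     # unique identifier in the AI inventory
    name: str
    owner: str                         # accountable person or team
    intended_use: str                  # documented purpose and scope
    risk_tier: RiskTier                # drives how much control is applied
    vendor: Optional[str] = None       # populated for third-party tools
    approved: bool = False             # set by the review board for high-impact uses
    last_review: Optional[str] = None  # ISO date of the most recent review


# Example entry for a hypothetical third-party support chatbot.
chatbot = AISystemRecord(
    system_id="ai-042",
    name="Support chatbot",
    owner="support-platform-team",
    intended_use="Answer routine support questions; escalate contractual issues.",
    risk_tier=RiskTier.MEDIUM,
    vendor="ExampleVendor",
)
```

Even a record this small covers the basics auditors ask about first: what the system is, who owns it, and what it is allowed to do.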
Practical example
A tax advisory firm maintains a registry of AI tools used in client work, defines when human review is mandatory, and requires documented testing before any model update goes live.
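To make that approval rule concrete, here is a minimal sketch, assuming a hypothetical ChangeRequest record and an approve_for_release check; it mirrors the firm's rules in spirit (documented testing plus mandatory human review for client-facing work), not any real workflow or tool.

```python
# Sketch of a pre-deployment gate: a model update only goes live when
# testing is documented and, for client-facing work, a human has signed off.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ChangeRequest:
    system_id: str
    description: str
    test_report_url: Optional[str]  # link to the documented test results
    reviewer: Optional[str]         # human sign-off, where review is mandatory
    client_facing: bool             # True for work delivered to clients


def approve_for_release(change: ChangeRequest) -> bool:
    """Return True only if the governance checks for this change pass."""
    if not change.test_report_url:
        return False  # no documented testing, no release
    if change.client_facing and not change.reviewer:
        return False  # human review is mandatory for client-facing changes
    return True


# A client-facing change with test evidence but no reviewer is held back.
pending = ChangeRequest(
    system_id="ai-042",
    description="Upgrade underlying model version",
    test_report_url="https://example.internal/tests/ai-042-v2",
    reviewer=None,
    client_facing=True,
)
assert approve_for_release(pending) is False
```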
Common questions
Q: What is the minimum viable governance?
A: Inventory, ownership, intended use, approval process, monitoring, and an incident playbook.
Q: Does governance slow down delivery?
A: It can, if it’s bureaucratic. Good governance is lightweight and risk-based: higher-risk systems get more controls.
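One way to picture risk-based scaling is a simple mapping from risk tier to required controls; the tiers and control names below are illustrative assumptions, not a prescribed list.

```python
# Illustrative only: governance effort scales with risk tier instead of
# applying every control to every system.
CONTROLS_BY_TIER = {
    "low": ["inventory entry", "owner assigned"],
    "medium": ["inventory entry", "owner assigned",
               "documented testing", "monitoring"],
    "high": ["inventory entry", "owner assigned",
             "documented testing", "monitoring",
             "review board approval", "incident playbook", "periodic audit"],
}
```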
Related terms
- Responsible AI — principles into practice
- AI Risk Management — risk controls and monitoring
- Model Accountability — ownership and traceability
- Data Ethics — responsible data handling
- Algorithmic Transparency — explainability and disclosure
- EU AI Act — compliance obligations
References
NIST (2023), AI Risk Management Framework (AI RMF 1.0).