Definition
Bias mitigation is the set of technical and process measures used to reduce unfair or harmful disparities produced by an AI system. It can target any stage of the lifecycle: data collection, training, evaluation, and deployment.
Why it matters
- Fairness and harm: biased outputs can disadvantage groups or individuals.
- Reputation: perceived unfairness erodes trust quickly.
- Compliance: regulated uses may require evidence of testing and controls.
How it works
Define fairness goal -> measure -> mitigate -> re-measure -> monitor in production
Mitigations include improving data coverage, reweighting training examples, adding fairness constraints during training, post-processing scores or decision thresholds, and routing sensitive cases to human review.
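To make the measure -> mitigate -> re-measure loop concrete, here is a minimal sketch in plain Python. Demographic parity difference and inverse-frequency reweighting are illustrative choices among many; the toy data and both helper functions are fabricated for this example.

```python
from collections import Counter

def demographic_parity_difference(decisions, groups):
    """Largest gap in positive-decision rate between any two groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(decisions[i] for i in idx) / len(idx)
    return max(rates.values()) - min(rates.values())

def reweight(labels, groups):
    """Inverse-frequency weight per (group, label) cell, so no cell dominates a weighted loss."""
    counts = Counter(zip(groups, labels))
    n, k = len(labels), len(counts)
    return [n / (k * counts[(g, y)]) for g, y in zip(groups, labels)]

# Toy outcomes: group "a" has a far higher positive rate than group "b".
labels = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
groups = ["a"] * 5 + ["b"] * 5

print(demographic_parity_difference(labels, groups))  # 0.8 -> measure
print(reweight(labels, groups))                       # -> mitigate via a weighted training loss
```

After retraining with the weights, you would recompute the metric on held-out data (re-measure) and keep tracking it in production.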
Practical example
If an AI system is used to triage client requests, you test whether certain groups systematically receive slower responses, then adjust features, decision thresholds, and review rules to reduce that disparity.
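A minimal sketch of such a disparity check, assuming triage records shaped as (group, hours-to-first-response) pairs; both the data and the 20% tolerance are placeholders that a real review would set deliberately.

```python
from statistics import median

# Hypothetical triage log: (requester group, hours until first response).
records = [
    ("a", 2.0), ("a", 3.5), ("a", 1.5), ("a", 4.0),
    ("b", 6.0), ("b", 5.5), ("b", 7.0), ("b", 4.5),
]

by_group = {}
for group, hours in records:
    by_group.setdefault(group, []).append(hours)

medians = {g: median(h) for g, h in by_group.items()}
baseline = min(medians.values())  # fastest group as the reference point

for g, m in sorted(medians.items()):
    ratio = m / baseline
    flag = "REVIEW" if ratio > 1.2 else "ok"  # placeholder 20% tolerance
    print(f"group {g}: median {m:.1f}h ({ratio:.2f}x fastest group) {flag}")
```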
Common questions
Q: Is bias the same as inaccuracy?
A: Not always. A model can be accurate on average while consistently underperforming for specific groups.
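A toy illustration of that point, with fabricated predictions: overall accuracy looks acceptable while one group's accuracy is poor.

```python
# Toy predictions: 90% correct for group "a", only 50% correct for group "b".
y_true = [1] * 20
y_pred = [1] * 9 + [0] + [1] * 5 + [0] * 5
groups = ["a"] * 10 + ["b"] * 10

def accuracy(t, p):
    return sum(int(a == b) for a, b in zip(t, p)) / len(t)

print("overall:", accuracy(y_true, y_pred))  # 0.70 -- looks fine on average
for g in ("a", "b"):
    idx = [i for i, grp in enumerate(groups) if grp == g]
    print(f"group {g}:", accuracy([y_true[i] for i in idx], [y_pred[i] for i in idx]))
    # group a: 0.90, group b: 0.50 -- the average hides a serious gap
```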
Q: Can we “remove bias” completely?
A: Usually no. The goal is to reduce harmful disparities, document tradeoffs, and monitor outcomes over time.
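One common way to monitor outcomes over time is to recompute a disparity metric per reporting window and alert when it exceeds a documented tolerance. The sketch below assumes weekly batches of (group, decision) pairs and an arbitrary 0.10 threshold.

```python
def positive_rate_gap(batch):
    """Gap in positive-decision rate between groups within one reporting window."""
    rates = {}
    for g in {grp for grp, _ in batch}:
        vals = [d for grp, d in batch if grp == g]
        rates[g] = sum(vals) / len(vals)
    return max(rates.values()) - min(rates.values())

THRESHOLD = 0.10  # placeholder: a tolerance agreed on and documented with the tradeoffs

weekly_batches = [
    [("a", 1), ("a", 1), ("b", 1), ("b", 1)],            # week 1: no gap
    [("a", 1), ("a", 1), ("a", 1), ("b", 1), ("b", 0)],  # week 2: gap of 0.5
]

for week, batch in enumerate(weekly_batches, start=1):
    gap = positive_rate_gap(batch)
    status = "ALERT" if gap > THRESHOLD else "ok"
    print(f"week {week}: gap={gap:.2f} {status}")
```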
Related terms
- Data Ethics — responsible data practices
- Algorithmic Transparency — disclose limitations and tests
- AI Risk Management — bias as a managed risk
- Model Accountability — ownership of fairness decisions
References
Barocas, Hardt & Narayanan (2019), Fairness and Machine Learning.