AI Governance

Bias in AI

Systematic errors in AI outputs that unfairly favor or disadvantage certain groups based on characteristics like race, gender, age, or socioeconomic status. Bias can originate from training data, model design, or deployment context.

Why It Matters

AI bias can lead to discrimination in hiring, lending, healthcare, and criminal justice. Addressing it is both an ethical imperative and often a legal requirement.

Example

A hiring AI trained on historical data in which most executives were male learns to score male candidates higher, perpetuating existing biases rather than correcting them.
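One common way to detect this kind of disparity is to compare selection rates across groups. The sketch below is a minimal, hypothetical illustration using invented decision data; it computes the disparate impact ratio, which a widely used screening heuristic (the "four-fifths rule") flags when it falls below 0.8.

```python
# Hypothetical illustration of a selection-rate disparity check.
# All decision data below is invented for demonstration purposes.

def selection_rate(decisions):
    """Fraction of candidates selected (decision == 1)."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions (1 = advance, 0 = reject) by group.
male_decisions = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]    # 8 of 10 selected
female_decisions = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 3 of 10 selected

rate_m = selection_rate(male_decisions)    # 0.8
rate_f = selection_rate(female_decisions)  # 0.3

# Disparate impact ratio: lower group's rate over higher group's rate.
# The "four-fifths rule" heuristic flags ratios below 0.8 for review.
ratio = min(rate_m, rate_f) / max(rate_m, rate_f)

print(f"Selection rates: male={rate_m:.2f}, female={rate_f:.2f}")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.38, well below 0.8
```

A ratio this low would prompt a closer audit of the training data and model, though passing the threshold alone does not establish fairness.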

Think of it like...

Like a mirror that slightly distorts reality — if training data reflects societal biases, the AI model reflects and potentially amplifies those same biases.

Related Terms