AI Governance

Red Teaming

The practice of systematically testing AI systems by attempting to find failures, vulnerabilities, and harmful behaviors before deployment. Red teamers actively try to break the system.

Why It Matters

Red teaming catches problems before real users encounter them. Every major AI lab conducts extensive red teaming before releasing models to the public.

Example

A team of testers trying to get an LLM to generate harmful content, reveal confidential training data, or produce biased outputs through creative prompting strategies.

Think of it like...

Like hiring professional burglars to test your home security — they find the weaknesses before actual criminals do, so you can fix them.

Related Terms