Hallucination Rate
The proportion of an AI model's responses that contain incorrect or fabricated information, typically expressed as a percentage of all responses evaluated.
Why It Matters
Hallucination rate is a key metric for evaluating LLM trustworthiness. Reducing it from 20% to 2% can make the difference between a useful and a dangerous system.
Example
Testing an LLM on 1,000 factual questions and finding that 35 responses contained fabricated information — a 3.5% hallucination rate.
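The calculation in the example above can be sketched in a few lines of Python. This is a minimal illustration, assuming a hypothetical list of per-response judgments (one boolean per tested answer) produced by some upstream grading step:

```python
# Minimal sketch of computing a hallucination rate from evaluation results.
# `judgments` is a hypothetical list with one entry per tested response;
# True means a grader found fabricated information in that response.
judgments = [False] * 965 + [True] * 35  # 35 hallucinations in 1,000 answers

hallucination_rate = sum(judgments) / len(judgments)
print(f"{hallucination_rate:.1%}")  # 3.5%
```

In practice the hard part is producing the judgments themselves, whether via human review or automated grading; the rate is then just the fraction of flagged responses.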
Think of it like...
Like an error rate in manufacturing — a 5% defect rate might be acceptable for toys but catastrophic for medical devices.
Related Terms
Hallucination
When an AI model generates information that sounds plausible and confident but is factually incorrect, fabricated, or not grounded in its training data or provided context. The model essentially 'makes things up'.
Hallucination Detection
Methods and systems for automatically identifying when an AI model has generated false or unsupported information. Detection can compare outputs against source documents or use consistency checks.
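One of the consistency checks mentioned above can be sketched as self-consistency sampling: ask the model the same question several times and flag the answer when the samples disagree. The sketch below assumes a hypothetical `ask_model` callable and a made-up agreement threshold; real systems would also normalize answers before comparing them:

```python
from collections import Counter

def self_consistency_flag(ask_model, question, n_samples=5, threshold=0.6):
    """Flag a possible hallucination when repeated samples disagree.

    `ask_model` is a hypothetical callable returning the model's answer
    string for a question. Low agreement across samples suggests the
    answer is not grounded in stable knowledge.
    """
    answers = [ask_model(question) for _ in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    agreement = count / n_samples
    return agreement < threshold, top_answer

# Usage with a stub model that always gives the same answer:
flagged, answer = self_consistency_flag(lambda q: "Paris", "Capital of France?")
print(flagged)  # False: all samples agree, so nothing is flagged
```

The other approach the definition names, comparing outputs against source documents, instead checks each claim for support in retrieved text rather than relying on agreement between samples.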
Evaluation
The systematic process of measuring an AI model's performance, safety, and reliability using various metrics, benchmarks, and testing methodologies.
Grounding
The practice of connecting AI model outputs to verifiable sources of information, ensuring responses are based on factual data rather than the model's potentially unreliable internal knowledge.