Artificial Intelligence

Hallucination

When an AI model generates information that sounds plausible and confident but is factually incorrect, fabricated, or not grounded in its training data or the provided context. In effect, the model "makes things up."

Why It Matters

Hallucinations are one of the biggest barriers to enterprise AI adoption, because users cannot easily tell a confident falsehood from a correct answer. Understanding and mitigating them is critical for building trustworthy AI applications; one common first line of defense is sketched below.
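
That first line of defense is retrieval grounding: give the model the relevant source text, then check that its answer actually draws on it. Below is a minimal sketch of such a grounding check; the function name, the word-overlap heuristic, and the threshold are illustrative assumptions, not a standard API (production systems typically use entailment models or citation verification instead).

```python
def is_grounded(answer: str, context: str, min_overlap: float = 0.6) -> bool:
    """Heuristic hallucination check: flag answers whose substantive words
    mostly do not appear in the provided source context.

    A deliberately crude word-overlap sketch, not a production detector.
    """
    def words(text: str) -> set[str]:
        return {w.lower().strip(".,;:!?\"'") for w in text.split()}

    # Ignore short, stopword-like tokens; keep the substantive ones.
    content = {w for w in words(answer) if len(w) > 3}
    if not content:
        return True  # nothing substantive to check
    overlap = len(content & words(context)) / len(content)
    return overlap >= min_overlap


context = "The Crimean War ended in 1856 with the Treaty of Paris."
print(is_grounded("The war ended in 1856 with the Treaty of Paris.", context))  # True
print(is_grounded("The war ended in 1859 after the Siege of Vienna.", context))  # False
```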

Example

An LLM confidently citing a research paper that does not exist, or inventing a historical event that never happened, complete with specific dates and details.
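
Fabricated citations are among the easier hallucinations to catch mechanically, since each cited work either appears in a trusted bibliography or it does not. A toy sketch of that check follows; the hard-coded KNOWN_PAPERS set stands in for a real bibliographic database or lookup service, and the flagged title is an invented example, not a real paper.

```python
# Stand-in for a real bibliographic source (a database, library catalog, etc.).
KNOWN_PAPERS = {
    "attention is all you need",
    "deep residual learning for image recognition",
}

def find_suspect_citations(cited_titles: list[str]) -> list[str]:
    """Return cited titles not found in the trusted list -- likely fabricated."""
    return [t for t in cited_titles if t.lower().strip() not in KNOWN_PAPERS]


print(find_suspect_citations([
    "Attention Is All You Need",           # real paper, passes
    "Quantum Gradient Descent at Scale",   # invented title, gets flagged
]))
# -> ['Quantum Gradient Descent at Scale']
```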

Think of it like...

Like a confident storyteller who fills in gaps in their memory with plausible-sounding but completely fabricated details, and delivers them with total conviction.

Related Terms