Machine Learning

Perplexity

A metric that measures how well a language model predicts text. Formally, it is the exponential of the average negative log-likelihood per token. Lower perplexity indicates the model is less 'surprised' by the text, meaning it assigns higher probability to the actual next token at each step.

Why It Matters

Perplexity is the standard intrinsic evaluation metric for language models. It enables quick comparison of model quality during development without expensive human evaluation.

Example

A model with a perplexity of 10 on a text is, on average, as uncertain as if it were choosing among 10 equally likely next words at each step — lower is better.
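The example above can be sketched in a few lines of Python. This is a minimal illustration, assuming you already have the probability the model assigned to each actual token; real evaluations compute these from the model's logits over a test corpus.

```python
import math

def perplexity(token_probs):
    # Perplexity is the exponential of the average negative
    # log-probability the model assigns to each observed token.
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# A model that assigns probability 0.1 to every token is, at each step,
# as uncertain as a uniform choice among 10 options: perplexity ≈ 10.
print(perplexity([0.1, 0.1, 0.1]))
```

Note that higher probabilities on the observed tokens drive the average negative log-likelihood down, and with it the perplexity.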

Think of it like...

Like a guessing game where you predict the next word in a sentence — a perplexity of 5 means you are as confused as if you had to choose between 5 equally likely options.

Related Terms