Artificial Intelligence

Context Window

The maximum amount of text (measured in tokens) that a language model can process in a single interaction. It includes both the input prompt and the generated output. Larger context windows allow models to handle longer documents.
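Because the window must hold both the prompt and the reserved output, a common pre-flight check is to estimate the prompt's token count before sending a request. The sketch below uses the rough rule of thumb of ~4 characters per token; real counts come from the model's own tokenizer, so the helper names and ratio here are illustrative assumptions, not any provider's API.

```python
# Sketch: checking whether a request fits in a model's context window.
# The 4-chars-per-token ratio is a rough heuristic, not an exact count.

def estimate_tokens(text: str) -> int:
    """Approximate token count, assuming ~4 characters per token."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, max_output_tokens: int, context_window: int) -> bool:
    """The window must hold both the input prompt and the reserved output."""
    return estimate_tokens(prompt) + max_output_tokens <= context_window

prompt = "Summarize this document. " * 100
print(fits_in_context(prompt, max_output_tokens=1000, context_window=4096))  # True
```

If the check fails, typical remedies are truncating the prompt, summarizing earlier content, or switching to a model with a larger window.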

Why It Matters

Context window size determines what tasks an LLM can handle — from short Q&A to analyzing entire codebases or books. It is a key differentiator between models.

Example

Claude's 200K token context window can process an entire novel in one go, while earlier models limited to 4K tokens could only handle a dozen or so pages.

Think of it like...

Like the size of a desk — a small desk forces you to work with only a few papers at a time, while a massive desk lets you spread out an entire project and see everything at once.

Related Terms