Machine Learning

Prompt Tuning

A parameter-efficient fine-tuning technique that prepends a short sequence of learnable embedding vectors (the 'soft prompt') to the input embeddings while keeping all base model weights frozen. Only the soft prompt vectors are trained; unlike discrete prompt tokens, they are continuous values optimized directly by gradient descent.

Why It Matters

Prompt tuning achieves near full fine-tuning performance at a fraction of the cost. Each task gets its own tiny set of learnable parameters while sharing one base model.

Example

Training just 20 learnable token embeddings (a few hundred kilobytes for a 70B model's embedding dimension) that are prepended to every input, adapting the frozen 70B model to a specific task without touching its weights.
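The mechanism above can be sketched in a few lines. This is a minimal toy illustration using NumPy, not a real model: the embedding table stands in for the frozen base model, the dimensions are made-up small values, and the learnable soft prompt is simply prepended to the looked-up input embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 16             # toy embedding dimension (real models use thousands)
vocab_size = 100         # toy vocabulary size
num_prompt_tokens = 20   # number of learnable soft-prompt vectors

# Frozen "base model": the token embedding table is never updated.
embedding_table = rng.normal(size=(vocab_size, d_model))

# The ONLY trainable parameters: the soft prompt embeddings.
soft_prompt = rng.normal(scale=0.02, size=(num_prompt_tokens, d_model))

def build_input(token_ids, soft_prompt):
    """Look up frozen token embeddings and prepend the soft prompt."""
    token_embeds = embedding_table[token_ids]           # (seq_len, d_model)
    return np.concatenate([soft_prompt, token_embeds])  # (20 + seq_len, d_model)

token_ids = np.array([5, 17, 42])   # a 3-token input
x = build_input(token_ids, soft_prompt)

print(x.shape)            # (23, 16): 20 soft-prompt vectors + 3 token embeddings
print(soft_prompt.size)   # 320 trainable parameters
print(embedding_table.size)  # 1600 frozen parameters (the toy "base model")
```

During training, gradients flow back through the model into `soft_prompt` only; the optimizer is given just those 320 values, so each new task costs a tiny per-task parameter file while the base model is shared unchanged.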

Think of it like...

Like adding a personalized cover letter to a standard resume template — the template (model) stays the same, but the cover letter (soft prompt) customizes it for each application.

Related Terms