Machine Learning

Early Stopping

A regularization technique where training is halted when the model's performance on validation data stops improving, even if training loss continues to decrease. It prevents overfitting by finding the optimal training duration.

Why It Matters

Early stopping is one of the simplest and most effective regularization techniques: it limits overfitting automatically, without requiring you to guess the right number of training epochs in advance.

Example

Monitoring validation loss during training and stopping at epoch 42 of a planned 100 because validation loss began increasing after that point, even though training loss was still decreasing.
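The monitor-and-stop logic above can be sketched in a few lines. This is a minimal illustration, not a specific library's API: the loss values and the `patience` setting (how many non-improving epochs to tolerate before stopping) are made up for the example.

```python
# Minimal early-stopping sketch: halt when validation loss fails to improve
# for `patience` consecutive epochs. Values below are illustrative only.

def train_with_early_stopping(val_losses, patience=3):
    """Return (best_epoch, stop_epoch) given per-epoch validation losses."""
    best_loss = float("inf")
    best_epoch = 0
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best_loss:
            best_loss = loss              # new best: checkpoint the model here in practice
            best_epoch = epoch
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return best_epoch, epoch  # stop early; restore the best checkpoint
    return best_epoch, len(val_losses)    # ran all epochs without triggering

# Validation loss improves, then starts rising after epoch 4.
losses = [0.90, 0.70, 0.55, 0.50, 0.52, 0.56, 0.61, 0.67]
best, stopped = train_with_early_stopping(losses, patience=3)
print(best, stopped)  # → 4 7: training halts at epoch 7, best model was at epoch 4
```

In practice you would also keep a checkpoint of the model weights at `best_epoch` and restore them after stopping, since the final weights come from an epoch where validation loss had already degraded.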

Think of it like...

Like taking cookies out of the oven when they look perfectly golden, rather than waiting for the timer — leaving them too long means they burn (overfit).

Related Terms