Catastrophic Interference
The phenomenon in which learning new information in a neural network severely disrupts previously learned knowledge. It is the underlying mechanism behind catastrophic forgetting.
Why It Matters
Understanding catastrophic interference is key to building AI systems that can continuously learn — a fundamental challenge for production ML.
Example
A language model fine-tuned on medical text losing its ability to write poetry, because the medical training overwrote the weights that supported creative writing.
Think of it like...
Like remodeling one room in a house and accidentally damaging the plumbing for the entire building — changes in one area have unintended consequences elsewhere.
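The dynamic is easy to reproduce in miniature. The sketch below is a hypothetical toy, not a real training setup: it fits a single-parameter model y = w·x by gradient descent on task A (y = 2x), then on task B (y = -2x), and measures task A's error before and after the second round of training.

```python
# Toy demonstration of catastrophic interference with a one-weight model.
# Task A wants w = 2; task B wants w = -2. Training on B after A drags
# the shared parameter away from A's solution, destroying performance on A.

def mse(w, data):
    """Mean squared error of the model y = w * x on a list of (x, y) pairs."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(w, data, lr=0.1, steps=200):
    """Plain gradient descent on the MSE of y = w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

task_a = [(x, 2 * x) for x in (-2, -1, 1, 2)]   # task A: y = 2x
task_b = [(x, -2 * x) for x in (-2, -1, 1, 2)]  # task B: y = -2x

w = 0.0
w = train(w, task_a)
loss_a_before = mse(w, task_a)  # near zero: task A is learned
w = train(w, task_b)            # sequential training on task B only
loss_a_after = mse(w, task_a)   # large: task A's knowledge was overwritten
```

Because the two tasks share the single parameter, there is nowhere for the old knowledge to hide: optimizing for task B necessarily moves w away from task A's solution. Real networks have many more parameters, but naive sequential training causes the same drift in the shared weights.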
Related Terms
Catastrophic Forgetting
The tendency of neural networks to abruptly lose previously learned information when trained on new data or tasks, because the new learning overwrites old knowledge.
Continual Learning
Training a model on new data or tasks over time without forgetting previously learned knowledge. Also called lifelong learning or incremental learning.
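One family of continual-learning methods penalizes weight movement away from the solution found on the earlier task. The sketch below is a simplified illustration in the spirit of elastic weight consolidation, not the published algorithm: the toy tasks, the scalar anchor, and the penalty strength `lam` are assumptions chosen for the demo.

```python
# Continual-learning sketch: train on task B with a quadratic penalty
# that anchors the weight near w_a, the solution already learned on task A.

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2 * x) for x in (-2, -1, 1, 2)]   # old task: y = 2x
task_b = [(x, -2 * x) for x in (-2, -1, 1, 2)]  # new task: y = -2x

def train_b(w, w_anchor, lam, lr=0.02, steps=2000):
    """Gradient descent on task B's MSE plus lam * (w - w_anchor)**2."""
    for _ in range(steps):
        grad_b = sum(2 * (w * x - y) * x for x, y in task_b) / len(task_b)
        grad_pen = 2 * lam * (w - w_anchor)
        w -= lr * (grad_b + grad_pen)
    return w

w_a = 2.0                             # weight after fully learning task A
w_plain = train_b(w_a, w_a, lam=0.0)  # naive sequential fine-tuning
w_pen = train_b(w_a, w_a, lam=10.0)   # penalized: "stay close to w_a"
```

With the penalty, the model settles on a compromise weight: it gives up some accuracy on the new task but retains far more of the old one, which is exactly the trade-off continual-learning methods negotiate.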
Fine-Tuning
The process of taking a pre-trained model and further training it on a smaller, domain-specific dataset to specialize its behavior for a particular task or domain. Fine-tuning adjusts the model's weights to improve performance on the target task.
Neural Network
A computing system inspired by the biological neural networks in the human brain. It consists of interconnected nodes (neurons) organized in layers that process information and learn to recognize patterns.