Scaling Hypothesis
The theory that increasing model size, data, and compute will continue to improve AI capabilities predictably, and may eventually lead to artificial general intelligence.
Why It Matters
The scaling hypothesis drives billions of dollars in AI investment. Whether it holds true determines the future trajectory of AI development.
Example
The prediction that a 10x increase in training compute will yield a smooth, quantifiable improvement in model capability, supported by empirical scaling laws observed across model families.
Think of it like...
Like Moore's Law for AI — the hypothesis that consistent increases in resources will produce consistent capability improvements, potentially without limit.
Related Terms
Scaling Laws
Empirical findings showing predictable relationships between model performance and factors like model size (parameters), dataset size, and compute budget. Loss decreases (and performance improves) as a power law in each of these factors.
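The power-law relationship can be sketched in a few lines. This is an illustrative Chinchilla-style loss formula; the constants below are hypothetical placeholders chosen for demonstration, not fitted values from any published paper.

```python
def predicted_loss(n_params: float, n_tokens: float,
                   a: float = 400.0, alpha: float = 0.34,
                   b: float = 400.0, beta: float = 0.28,
                   irreducible: float = 1.7) -> float:
    """Toy scaling law: L(N, D) = E + A / N^alpha + B / D^beta.

    N = parameter count, D = training tokens. All constants here
    are made-up placeholders for illustration.
    """
    return irreducible + a / n_params**alpha + b / n_tokens**beta

# Scaling both parameters and data 10x lowers predicted loss,
# but with diminishing returns: each power-law term shrinks by
# a constant factor, and loss never drops below the irreducible term.
small = predicted_loss(1e9, 2e10)    # 1B params, 20B tokens
large = predicted_loss(1e10, 2e11)   # 10B params, 200B tokens
assert large < small
```

The key property this captures is predictability: under the hypothesis, you can extrapolate the curve before spending the compute.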
Compute
The computational resources (processing power, memory, time) required to train or run AI models. Compute is measured in FLOPs (floating-point operations) and is a primary constraint and cost in AI development.
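A common rule of thumb from the scaling-laws literature (an approximation, not an exact figure) is that training compute is roughly C ≈ 6 · N · D FLOPs, where N is parameter count and D is training tokens:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute via the common C ~= 6 * N * D
    rule of thumb (forward + backward pass per token)."""
    return 6.0 * n_params * n_tokens

# Hypothetical example: a 7e9-parameter model trained on 2e12 tokens.
c = training_flops(7e9, 2e12)
# c == 8.4e22 FLOPs
```

Estimates like this are why compute budgets, not just architectures, dominate frontier training plans.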
Frontier Model
The most capable and advanced AI models available at any given time, typically characterized by the highest performance across multiple benchmarks. These models push the boundaries of AI capabilities.
Artificial General Intelligence
A hypothetical AI system with human-level cognitive abilities across all domains — able to reason, learn, plan, and understand any intellectual task that a human can. AGI does not yet exist.
Parameter
Any learnable value in a machine learning model that is adjusted during training. Parameters include weights and biases in neural networks. Model size is often described by parameter count.
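Counting parameters is simple arithmetic over layer shapes. A minimal sketch for fully connected layers (weights plus biases), using a hypothetical MNIST-sized network as the example:

```python
def dense_params(n_in: int, n_out: int) -> int:
    """Parameters in one fully connected layer:
    an (n_in x n_out) weight matrix plus an n_out bias vector."""
    return n_in * n_out + n_out

# Two-layer network, 784 -> 128 -> 10:
total = dense_params(784, 128) + dense_params(128, 10)
# total == 101770
```

Published "model sizes" (7B, 70B, and so on) are exactly this kind of count, summed over every layer.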