Ensemble Learning
A strategy that combines multiple models to produce better predictions than any single model alone. Ensemble methods leverage the diversity among models to reduce errors: when the individual models make different, largely uncorrelated mistakes, combining their predictions cancels much of that error out.
Why It Matters
Ensemble methods frequently outperform individual models in practice. They are the backbone of winning solutions in ML competitions and of many high-stakes applications.
Example
A medical diagnosis system combining predictions from a random forest, a gradient boosting model, and a neural network: the consensus of three different approaches is more reliable than any one of them alone.
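One simple way to combine such models is a majority vote over their class predictions. A minimal sketch in plain Python; the label strings and model outputs here are hypothetical, standing in for three real trained models:

```python
# Hypothetical example: three independently trained models each emit a class
# label for the same patient; the ensemble returns the majority vote.
from collections import Counter

def majority_vote(predictions):
    """Return the most common label among the models' predictions."""
    return Counter(predictions).most_common(1)[0][0]

# Suppose the three models predict, for one patient:
model_outputs = ["benign", "malignant", "benign"]
print(majority_vote(model_outputs))  # benign
```

With an odd number of models, a plain majority vote can never tie; production systems often weight the votes by each model's validation accuracy instead.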
Think of it like...
Like the 'wisdom of crowds' — asking 100 people to estimate the number of jellybeans in a jar gives a more accurate answer than asking one expert.
Related Terms
Random Forest
An ensemble learning method that builds multiple decision trees during training and outputs the majority vote (classification) or average prediction (regression) of all the trees. The 'forest' of diverse trees is more robust than any single tree.
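The two ingredients above, bootstrap sampling and majority voting, can be sketched in plain Python. This is a toy illustration, not a real random forest: the "trees" are depth-1 stumps on 1-D data, and the dataset and names are hypothetical:

```python
# Toy random forest: each "tree" is a depth-1 stump trained on a bootstrap
# sample (drawn with replacement), and the forest predicts by majority vote.
import random
from collections import Counter

def train_stump(data):
    """Fit a depth-1 tree: choose the threshold that best separates the labels."""
    best_thresh, best_acc = None, -1.0
    for t in sorted(x for x, _ in data):
        acc = sum((1 if x > t else 0) == y for x, y in data) / len(data)
        if acc > best_acc:
            best_thresh, best_acc = t, acc
    return best_thresh

def train_forest(data, n_trees=25, seed=0):
    """Each tree sees its own bootstrap sample of the training data."""
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data]) for _ in range(n_trees)]

def forest_predict(forest, x):
    """Majority vote across all trees' individual predictions."""
    votes = [1 if x > t else 0 for t in forest]
    return Counter(votes).most_common(1)[0][0]

# Toy 1-D dataset: label is 1 exactly when x > 4.
data = [(1, 0), (2, 0), (3, 0), (4, 0), (6, 1), (7, 1), (8, 1), (9, 1)]
forest = train_forest(data)
print(forest_predict(forest, 2), forest_predict(forest, 8))
```

Real random forests add a second source of diversity this sketch omits: each split also considers only a random subset of the features.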
Gradient Boosting
An ensemble technique that builds models sequentially, where each new model focuses on correcting the errors made by previous models. It combines many weak learners into a single strong learner.
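The sequential error-correction idea can be shown with a tiny regression example. Under squared-error loss the "gradient" each round is simply the residual (actual minus current prediction); here the weak learner is a fixed-split stump, a deliberately simple stand-in for the small decision trees real libraries use, and all data and names are hypothetical:

```python
# Toy gradient boosting for regression: each round fits a weak learner to the
# residuals of the ensemble so far, then adds it (scaled by a learning rate).
def fit_stump(xs, residuals):
    """Weak learner: split at the median x, predict the mean residual per side."""
    thresh = sorted(xs)[len(xs) // 2]
    left = [r for x, r in zip(xs, residuals) if x < thresh]
    right = [r for x, r in zip(xs, residuals) if x >= thresh]
    lmean = sum(left) / len(left) if left else 0.0
    rmean = sum(right) / len(right) if right else 0.0
    return thresh, lmean, rmean

def stump_value(stump, x):
    thresh, lmean, rmean = stump
    return lmean if x < thresh else rmean

def gradient_boost(xs, ys, n_rounds=20, lr=0.3):
    """Sequentially fit stumps to the residuals left by earlier rounds."""
    stumps, preds = [], [0.0] * len(ys)
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, preds)]  # gradient of squared error
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump_value(stump, x) for p, x in zip(preds, xs)]
    return stumps

def boost_predict(stumps, x, lr=0.3):
    """The strong learner: the scaled sum of all weak learners."""
    return sum(lr * stump_value(s, x) for s in stumps)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.0, 2.0, 3.0, 4.0]
stumps = gradient_boost(xs, ys)
print(boost_predict(stumps, 1.0), boost_predict(stumps, 4.0))
```

Note the contrast with a random forest: there the trees are trained independently and in parallel, while here each learner only makes sense in the context of the ones fitted before it.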