Self-Consistency
A decoding strategy where the model generates multiple reasoning paths for the same question and selects the answer that appears most frequently across paths. It improves accuracy on reasoning tasks.
Why It Matters
Self-consistency can improve accuracy on math and logic problems by 10-20 percentage points, using the model's own diversity of thought to converge on correct answers.
Example
Ask the same math problem 10 times at high temperature. If 7 of the 10 answers say 42, then 42 is likely correct, even though 3 runs disagreed.
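The example above can be sketched in a few lines of Python. This is a minimal illustration, not a real LLM call: `noisy_solver` is a hypothetical stand-in for sampling a model at high temperature, and the majority vote over its answers is the self-consistency step.

```python
from collections import Counter
import random

def sample_answers(solve, n_paths=10):
    """Run the same question through `solve` n_paths times, collecting final answers."""
    return [solve() for _ in range(n_paths)]

def majority_vote(answers):
    """Self-consistency: return the most frequent answer and its agreement rate."""
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / len(answers)

# Hypothetical stand-in for a model sampled at high temperature:
# usually reaches 42, but occasionally wanders to a wrong answer.
random.seed(0)
def noisy_solver():
    return 42 if random.random() < 0.7 else random.choice([41, 44, 48])

paths = sample_answers(noisy_solver, n_paths=10)
answer, agreement = majority_vote(paths)
print(f"answer={answer}, agreement={agreement:.0%}")
```

In practice the vote is taken over only the final answer extracted from each reasoning path; the intermediate reasoning is allowed to differ, and that diversity is the point.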
Think of it like...
Like asking a room of people to independently solve the same puzzle: the answer most of them converge on is probably right.
Related Terms
Chain-of-Thought
A prompting technique where the model is encouraged to show its step-by-step reasoning process before arriving at a final answer. This improves accuracy on complex reasoning tasks.
Reasoning
An AI model's ability to think logically, make inferences, draw conclusions, and solve problems that require multi-step thought. Reasoning goes beyond pattern matching to genuine logical analysis.
Temperature
A parameter that controls the randomness or creativity of an LLM's output. Lower temperatures (closer to 0) make outputs more deterministic and focused; higher temperatures increase randomness and creativity.
Ensemble Learning
A strategy that combines multiple models to produce better predictions than any single model alone. Ensemble methods leverage the diversity of different models to reduce errors.