Chain-of-Thought
A prompting technique where the model is encouraged to show its step-by-step reasoning process before arriving at a final answer. This improves accuracy on complex reasoning tasks.
Why It Matters
Chain-of-thought prompting can dramatically improve LLM performance on math, logic, and multi-step reasoning tasks — often the difference between wrong and right answers.
Example
Prompting: 'If a store has 23 apples, sells 17, then receives 12 more, how many apples does it have? Think step by step.' The model reasons: 23 - 17 = 6, then 6 + 12 = 18, so the answer is 18.
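The example above can be sketched in code. This is a minimal illustration, not any specific API: `call_llm` is a hypothetical stand-in for a real model call and here just returns a canned reasoning trace, so the focus stays on the prompt cue and answer extraction.

```python
import re

def build_cot_prompt(question: str) -> str:
    # Appending an explicit reasoning cue is the core of the technique.
    return f"{question}\nLet's think step by step."

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for an LLM API call; returns a canned trace.
    return ("The store starts with 23 apples. 23 - 17 = 6. "
            "Then 6 + 12 = 18. The answer is 18.")

def extract_answer(response: str) -> str:
    # Take the last number in the reasoning trace as the final answer.
    numbers = re.findall(r"-?\d+", response)
    return numbers[-1] if numbers else ""

prompt = build_cot_prompt(
    "If a store has 23 apples, sells 17, then receives 12 more, "
    "how many apples does it have?")
answer = extract_answer(call_llm(prompt))
print(answer)  # → 18
```

In practice the reasoning cue alone changes behavior; the extraction step simply makes the final answer machine-readable.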
Think of it like...
Like showing your work on a math test — working through each step makes you less likely to make mistakes and helps identify where errors might creep in.
Related Terms
Prompt Engineering
The practice of designing and optimizing input prompts to get the best possible output from AI models. It involves crafting instructions, providing examples, and structuring queries to guide the model toward desired responses.
Reasoning
An AI model's ability to think logically, make inferences, draw conclusions, and solve problems that require multi-step thought. Reasoning goes beyond pattern matching to genuine logical analysis.
Tree of Thought
A prompting framework where the model explores multiple reasoning branches, evaluates intermediate states, and can backtrack from dead ends — like a deliberate tree search through thought space.
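The explore/evaluate/backtrack loop can be sketched with a toy search. In real tree-of-thought prompting an LLM both proposes and scores candidate thoughts; here, as an analogy only, a numeric puzzle (reach a target by applying simple operations) stands in for the thought space.

```python
def propose(state):
    # Generator: candidate next "thoughts" from the current state.
    return [state + 3, state * 2, state - 1]

def viable(state, target):
    # Evaluator: prune branches that have overshot the target.
    return state <= target

def search(state, target, depth=0, max_depth=5, path=None):
    path = (path or []) + [state]
    if state == target:
        return path
    if depth == max_depth:
        return None  # dead end: backtrack
    for nxt in propose(state):
        if viable(nxt, target):
            found = search(nxt, target, depth + 1, max_depth, path)
            if found:
                return found
    return None  # all branches failed: backtrack further

print(search(2, 11))  # → [2, 5, 8, 11]
```

The pruning step is what distinguishes this from plain chain-of-thought: unpromising branches are abandoned early rather than followed to a wrong answer.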
Self-Consistency
A decoding strategy where the model generates multiple reasoning paths for the same question and selects the answer that appears most frequently across paths. It improves accuracy on reasoning tasks.
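The sample-and-vote idea can be sketched directly. `sample_reasoning_path` is a hypothetical stand-in for a sampled (temperature > 0) model call; the canned answers simulate several reasoning paths where one path makes an arithmetic slip.

```python
from collections import Counter

def sample_reasoning_path(question: str, seed: int) -> str:
    # Stand-in for a sampled LLM call: most paths reach 18, one errs.
    canned = ["18", "18", "30", "18", "18"]
    return canned[seed % len(canned)]

def self_consistent_answer(question: str, n_samples: int = 5) -> str:
    answers = [sample_reasoning_path(question, i) for i in range(n_samples)]
    # Majority vote: keep the answer that appears most often across paths.
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("23 - 17 + 12 = ?"))  # → 18
```

Because independent reasoning errors rarely converge on the same wrong answer, the majority vote filters them out.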
Few-Shot Learning
A technique where a model learns to perform a task from only a few examples provided in the prompt. Instead of training on thousands of examples, the model generalizes from just 2-5 demonstrations.
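Assembling those few demonstrations into a prompt is mechanical, as this sketch shows; the example pairs are illustrative, not from any dataset, and the Input/Output template is one common convention among many.

```python
def build_few_shot_prompt(examples, query):
    # Each demonstration becomes an Input/Output pair in the prompt;
    # the final Input is left open for the model to complete.
    lines = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [
    ("The movie was wonderful", "positive"),
    ("Terrible service, never again", "negative"),
    ("Best purchase I've made all year", "positive"),
]
prompt = build_few_shot_prompt(examples, "I regret buying this")
print(prompt)
```

The model infers the task (here, sentiment labeling) purely from the pattern in the demonstrations, with no weight updates.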