Prompt Chaining
A technique where the output of one LLM call becomes the input for the next, creating a pipeline of prompts that together accomplish a complex task.
Why It Matters
Prompt chaining breaks complex tasks into reliable steps. Each step can be validated independently, making the overall system more robust and debuggable.
Example
Step 1: Extract key claims from an article.
Step 2: For each claim, generate a search query.
Step 3: Search and retrieve evidence.
Step 4: Fact-check each claim against evidence.
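The steps above can be sketched as a small pipeline. This is a minimal, hypothetical illustration: `call_llm` stands in for any model API and is stubbed so the code runs as-is, and the `search` function is a placeholder for a real retrieval step.

```python
def call_llm(prompt: str) -> str:
    # Stub for a real model API call; returns a canned string so the
    # pipeline is runnable without any external service.
    return f"[model output for: {prompt[:40]}]"

def extract_claims(article: str) -> list[str]:
    # Step 1: the article goes into the first prompt.
    response = call_llm(f"List the key factual claims in:\n{article}")
    return [response]  # a real system would parse the response into claims

def make_query(claim: str) -> str:
    # Step 2: the previous step's output becomes this prompt's input.
    return call_llm(f"Write a search query to verify: {claim}")

def fact_check(claim: str, evidence: str) -> str:
    # Step 4: the final prompt combines the claim with retrieved evidence.
    return call_llm(f"Given this evidence:\n{evidence}\nIs the claim supported? {claim}")

def run_chain(article: str, search) -> list[str]:
    verdicts = []
    for claim in extract_claims(article):
        query = make_query(claim)
        evidence = search(query)  # Step 3: external retrieval, not an LLM call
        verdicts.append(fact_check(claim, evidence))
    return verdicts

# Usage with a stub search function:
verdicts = run_chain("Some article text.", search=lambda q: "stub evidence")
```

Because each function takes and returns plain strings, every step can be logged, validated, or unit-tested on its own, which is the robustness benefit described above.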
Think of it like...
Like a relay race where each runner handles one leg — the baton (output) passes from one to the next, and each runner focuses on their specific stretch.
Related Terms
Prompt Engineering
The practice of designing and optimizing input prompts to get the best possible output from AI models. It involves crafting instructions, providing examples, and structuring queries to guide the model toward desired responses.
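As a concrete illustration of "crafting instructions, providing examples, and structuring queries", here is a sketch of a few-shot prompt builder. The task and example words are invented for demonstration.

```python
def build_prompt(word: str) -> str:
    # Instruction, two worked examples (few-shot), then the structured query.
    return (
        "Classify the sentiment of each word as positive or negative.\n"
        "Word: wonderful -> positive\n"
        "Word: dreadful -> negative\n"
        f"Word: {word} -> "
    )
```

The trailing `-> ` nudges the model to complete the pattern with a single label, which is the essence of structuring a query toward a desired response format.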
LangChain
A popular open-source framework for building applications powered by language models. It provides tools for prompt management, chains, agents, memory, and integration with external tools and data sources.
Orchestration
The coordination and management of multiple AI components, tools, and services to accomplish complex workflows. Orchestration handles routing, sequencing, error handling, and resource allocation.
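A toy sketch of two of those responsibilities, routing and error handling. The components, routing rule, and retry policy are all illustrative stand-ins, not a real orchestration framework.

```python
import time

def with_retry(fn, attempts=3, delay=0.0):
    # Error handling: retry transient failures, re-raise on the last attempt.
    for i in range(attempts):
        try:
            return fn()
        except RuntimeError:
            if i == attempts - 1:
                raise
            time.sleep(delay)

def summarize(task: str) -> str:
    return "summary of " + task.removeprefix("summarize:")

def echo(task: str) -> str:
    return task

def orchestrate(task: str) -> str:
    # Routing: pick a component based on the task (stubbed rule).
    component = summarize if task.startswith("summarize:") else echo
    return with_retry(lambda: component(task))

print(orchestrate("summarize:report"))  # → summary of report
```

Real orchestrators layer sequencing, timeouts, and resource limits on top of this same pattern.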
Tool Use
The ability of an AI model to interact with external tools, APIs, and systems to accomplish tasks beyond text generation. Tools extend the model's capabilities to include search, calculation, code execution, and more.
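The typical loop can be sketched as: the model emits a structured tool request, the application executes the tool, and the result is fed back for a final answer. The JSON request format and the stubbed `call_llm` below are assumptions for illustration, not any particular vendor's API.

```python
import json

# Available tools; eval is restricted to a builtins-free namespace for the demo.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def call_llm(prompt: str) -> str:
    # Stub: a real model decides when to request a tool. Here we hard-code
    # one case so the loop is runnable end to end.
    if "Tool result:" in prompt:
        return "Final answer: " + prompt.split("Tool result: ")[-1]
    if "What is 6 * 7" in prompt:
        return json.dumps({"tool": "calculator", "input": "6 * 7"})
    return prompt

def answer(question: str) -> str:
    response = call_llm(question)
    try:
        request = json.loads(response)  # model requested a tool
    except ValueError:
        return response                 # plain text: no tool needed
    result = TOOLS[request["tool"]](request["input"])
    # Feed the tool result back to the model for the final answer.
    return call_llm(f"{question}\nTool result: {result}")

print(answer("What is 6 * 7?"))  # → Final answer: 42
```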