Patterns Catalog
Timeless Design Patterns for Effective Communication with Large Language Models
Objective Framing
The patterns in this category set the model's worldview before any work begins.
Task Directive
Defines the exact unit of work the model must perform, independently of how it must be performed.
Perspective Framing
Sets the perspective (role or world view) the model should use to interpret, prioritize, and express information.
Constraint Scoping
Defines the boundaries within which a language model is allowed to operate.
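The three Objective Framing patterns above can be combined in a single prompt. The sketch below is a minimal, hypothetical illustration; the function name, the role, and the example task are invented for demonstration and are not part of the catalog.

```python
# Hypothetical sketch: a Task Directive states the unit of work,
# Perspective Framing assigns a role, and Constraint Scoping sets
# the boundaries -- all before any actual work is requested.

def build_framed_prompt(task: str, role: str, constraints: list[str]) -> str:
    """Assemble a prompt that sets the model's worldview up front."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n\n"               # Perspective Framing
        f"Task: {task}\n\n"                  # Task Directive
        f"Constraints:\n{constraint_lines}"  # Constraint Scoping
    )

prompt = build_framed_prompt(
    task="Summarize the incident report in three sentences.",
    role="a site-reliability engineer reviewing a postmortem",
    constraints=["Do not speculate beyond the report.",
                 "Use past tense throughout."],
)
print(prompt)
```

Keeping the three concerns in separate parameters makes each framing decision explicit and independently adjustable.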
Information Supply
The patterns in this category address what information to provide and how to control what the model treats as relevant.
Context Injection
Supplies external information, state, constraints, and references the model lacks but must rely on to complete the task correctly.
Context Selection
Specifies which provided information counts as relevant evidence and which should be ignored, without adding, removing, or rewriting context.
Example-Driven Specification
Specifies desired model behavior through concrete examples rather than abstract rules.
Structural Segmentation
Separates the prompt into distinct regions so instructions, provided information, and user input are each interpreted according to their designated purpose.
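The Information Supply patterns often appear together in one prompt. Below is a minimal hypothetical sketch: the delimiter tags, the classification task, and the example tickets are all invented for illustration.

```python
# Hypothetical sketch: Context Injection supplies external facts the model
# lacks, Example-Driven Specification shows the desired behavior concretely,
# and Structural Segmentation uses explicit delimiters so instructions,
# context, examples, and user input are not confused with one another.

def build_supply_prompt(instruction: str, context: str,
                        examples: list[tuple[str, str]],
                        user_input: str) -> str:
    example_block = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return (
        f"<instructions>\n{instruction}\n</instructions>\n\n"
        f"<context>\n{context}\n</context>\n\n"
        f"<examples>\n{example_block}\n</examples>\n\n"
        f"<user_input>\n{user_input}\n</user_input>"
    )

prompt = build_supply_prompt(
    instruction="Classify the ticket using only the context below.",
    context="Severity levels: P1 (outage), P2 (degraded), P3 (cosmetic).",
    examples=[("Login page is down", "P1"), ("Logo is blurry", "P3")],
    user_input="Search is slow for some users",
)
print(prompt)
```

The Context Selection pattern would extend this by adding an instruction naming which delimited region counts as evidence and which should be ignored.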
Method Prescription
The patterns in this category prescribe how the model should process the task.
Alternative Enumeration
Requires the model to surface multiple viable approaches before committing to one, preventing premature convergence.
Hierarchical Decomposition
Breaks a concept into its immediate constituent parts, increasing resolution through containment rather than variation.
Dependency Decomposition
Solves complex problems by progressively establishing prerequisite answers so that later steps become solvable.
Stepwise Decomposition
Breaks down a problem into ordered intermediate steps that can be solved before the final answer, to expose reasoning and prevent premature conclusions.
Multi-Path Reasoning
Evaluates multiple independent reasoning paths and delays commitment until they can be compared and the best one identified.
Semantic Lifting
Restates a problem at a higher semantic level before solving it.
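A Method Prescription prompt tells the model how to proceed, not just what to produce. The sketch below is a hypothetical combination of Alternative Enumeration and Stepwise Decomposition; the wording and the example problem are invented.

```python
# Hypothetical sketch: the prompt requires the model to surface multiple
# approaches before committing (Alternative Enumeration), then to work
# through ordered intermediate steps (Stepwise Decomposition) so reasoning
# is exposed before the final answer.

def build_method_prompt(problem: str, n_alternatives: int = 3) -> str:
    return (
        f"Problem: {problem}\n\n"
        f"First, list {n_alternatives} distinct approaches you could take.\n"
        "Then pick one, and solve the problem in numbered steps,\n"
        "stating the intermediate result of each step before the final answer."
    )

prompt = build_method_prompt(
    "Design a cache eviction policy for a read-heavy API.")
print(prompt)
```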
Quality Validation
The patterns in this category intervene between the point where an output is produced and the point where it can be trusted.
Knowledge Externalization
Requires the model to expose the background knowledge it will rely on before performing the task.
Deferred Commitment
States an explicit decision early, then defers final commitment until it is evaluated and confirmed or revised.
Reflective Evaluation
Requires the model to evaluate its own output before revising or committing to a final answer.
Claim Enumeration
Requires the model to list the claims its output asserts as true, making them visible for verification.
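Two of the Quality Validation patterns can be scripted into one prompt. This is a hypothetical sketch; the numbered protocol, the FINAL marker, and the example task are illustrative assumptions, not prescribed by the catalog.

```python
# Hypothetical sketch: Claim Enumeration makes the draft's factual claims
# visible for verification, and Reflective Evaluation requires the model to
# check its draft against them before committing to a final answer.

def build_validation_prompt(task: str) -> str:
    return (
        f"Task: {task}\n\n"
        "Produce a draft answer, then:\n"
        "1. List every factual claim the draft asserts as true.\n"
        "2. Evaluate the draft against those claims and against the task.\n"
        "3. Write FINAL: followed by the confirmed or revised answer."
    )

prompt = build_validation_prompt("Summarize the release notes.")
print(prompt)
```

The trailing FINAL marker also sets up the Answer Extractor pattern from the next category, since it gives downstream code a fixed token to search for.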
Output Representation
The patterns in this category shape the model's output into a form that can be consumed, compared, parsed, or acted on.
Semantic Compression
Compresses context by preserving meaning and constraints while discarding surface wording to fit within token limits.
Response Tail
Reserves a consistent slot in every response for secondary content (for example, disclaimers, state, or next steps).
Answer Extractor
Forces the model to commit to a final, discrete answer after reasoning, so the result is explicit and extractable.
Output Template
Declares the response structure in advance to produce predictable, reusable outputs.
Formal Representation
Constrains output to a rule-governed symbolic form (for example JSON, SQL, DOT) so meaning is carried by structure rather than prose.
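Output Template and Formal Representation pay off when responses are consumed by code. The sketch below is hypothetical: the JSON schema is invented, and the `model_response` string is a stand-in for what an LLM might return, not real model output.

```python
import json

# Hypothetical sketch: the prompt declares a JSON template in advance
# (Output Template) and constrains the reply to a rule-governed symbolic
# form (Formal Representation), so meaning is carried by structure and
# the result can be parsed mechanically rather than read as prose.

schema_prompt = (
    "Answer ONLY with JSON matching this template:\n"
    '{"verdict": "<pass|fail>", "reasons": ["<string>", ...]}'
)

# Stand-in for a model reply, shown only to demonstrate downstream parsing.
model_response = '{"verdict": "pass", "reasons": ["All tests green."]}'
parsed = json.loads(model_response)
assert parsed["verdict"] in {"pass", "fail"}
print(parsed["reasons"][0])
```

Because the structure is fixed, a malformed reply fails loudly at `json.loads` instead of being silently misread.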
Iterative Alignment
The patterns in this category shift the design lever from what the prompt says to how the interaction unfolds.
Interaction Reversal
Flips control of the conversation so the model drives discovery before answering.
Interaction Language
Defines explicit meanings for terms, symbols, and commands so the model follows convention instead of inference.
Meta Prompting
Uses one prompt to generate or refine another prompt, using the LLM to craft how to interact with it.
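Meta Prompting can be as simple as one prompt that produces another. The sketch below is hypothetical; the role, the requested ingredients, and the example goal are invented to illustrate the pattern.

```python
# Hypothetical sketch of Meta Prompting: the first prompt asks the model to
# write a better-specified second prompt for the real task, so the LLM
# itself crafts how it will be interacted with.

def build_meta_prompt(goal: str) -> str:
    return (
        "You are a prompt engineer.\n"
        f"Goal: {goal}\n\n"
        "Write a prompt that another LLM could follow to achieve this goal.\n"
        "Include a role, explicit constraints, and an output template.\n"
        "Return only the prompt text."
    )

prompt = build_meta_prompt("Extract action items from meeting notes.")
print(prompt)
```

Note that the requested ingredients (role, constraints, template) reuse patterns from the earlier categories, which is why Meta Prompting sits in Iterative Alignment: it moves the design work into the interaction itself.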
