Patterns Catalog
Timeless Design Patterns for Effective Communication with Large Language Models
Objective Framing
The patterns in this category set the model's worldview before any work begins.
Task Directive
Defines the exact unit of work the model must perform, independently of how it must be performed.
Perspective Framing
Sets the communicative perspective (role, audience, or situation) the model should adopt for interpreting, prioritizing, and expressing information.
Judgment Criteria
Declares the criteria the model should use to evaluate evidence, weigh trade-offs, and handle uncertainty while generating.
Constraint Scoping
Defines the boundaries within which a language model is allowed to operate.
Example-Driven Specification
Specifies desired model behavior through concrete examples rather than abstract rules.
Information Supply
The patterns in this category address what information to provide and how to control what the model treats as relevant.
Context Loading
Supplies external information, state, constraints, and references the model lacks but must rely on to complete the task correctly.
Context Curation
Specifies which provided information counts as relevant evidence and which should be ignored, without adding, removing, or rewriting context.
Structural Segmentation
Separates the prompt into distinct regions so instructions, provided information, and user input are interpreted by their designated purpose.
Method Prescription
The patterns in this category prescribe how the model should process the task.
Alternative Enumeration
Requires the model to surface multiple viable approaches before committing to one, preventing premature convergence.
Hierarchical Decomposition
Breaks a concept into its immediate constituent parts, increasing resolution through containment rather than variation.
Dependency Decomposition
Solves complex problems by progressively establishing prerequisite answers so that later steps become solvable.
Stepwise Decomposition
Breaks down a problem into ordered intermediate steps that can be solved before the final answer, to expose reasoning and prevent premature conclusions.
Multi-Path Reasoning
Evaluates multiple independent reasoning paths and delays commitment until they can be compared and the best one identified.
Governing Abstraction
Derives a governing principle or broader frame before solving the specific instance, so reasoning proceeds from a stable abstraction rather than surface details.
Quality Validation
The patterns in this category are interventions between output produced and output trusted.
Knowledge Externalization
Requires the model to expose the background knowledge it will rely on before performing the task.
Deferred Commitment
States an explicit decision early, then defers final commitment until it is evaluated and confirmed or revised.
Reflective Evaluation
Requires the model to evaluate its own output before revising or committing to a final answer.
Claim Enumeration
Requires the model to list the claims its output asserts as true, making them visible for verification.
Output Representation
The patterns in this category shape the model's output into a form that can be consumed, compared, parsed, or acted on.
Semantic Compression
Compresses context by preserving meaning and constraints while discarding surface wording to fit within token limits.
Response Tail
Reserves a consistent slot in every response for secondary content (for example disclaimers, state, next steps).
Answer Boundary
Forces the model to commit to a final, discrete answer after reasoning, so the result is explicit and separable.
Output Template
Declares the response structure in advance to produce predictable, reusable outputs.
Formal Representation
Constrains output to a rule-governed symbolic form (for example JSON, SQL, DOT) so meaning is carried by structure rather than prose.
Interaction Design
The patterns in this category shift the design lever from what the prompt says to how the interaction between the user and the model is structured.
Control Reversal
Transfers control of the conversation to the model, so it leads before committing to a response.
Interpretation Grammar
Defines explicit rules for how terms, operators, and notation are interpreted, replacing the model's inference with declared definitions.
Meta Prompting
Uses one prompt to generate or refine another prompt, using the LLM to craft how to interact with it.
