Declare Intent
Task Framing
Defines the unit of work a language model is expected to perform.
Role Anchoring
Assigns a social or professional role to the language model, establishing a shared situational context that shapes how information is expressed.
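A minimal Python sketch of this pattern; the `anchor_role` helper and its wording are illustrative, not a fixed API:

```python
def anchor_role(role: str, task: str) -> str:
    """Prefix a task with an explicit role assignment (hypothetical helper)."""
    return (
        f"You are {role}. Answer with the vocabulary and priorities of that role.\n\n"
        f"Task: {task}"
    )

prompt = anchor_role("a senior security auditor",
                     "Review this login handler for vulnerabilities.")
```

The role line comes first so it conditions everything that follows.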
Constraint Scoping
Defines the boundaries within which a language model is allowed to operate.
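A sketch of constraint scoping as a numbered rule list appended to the task; the helper name and constraint phrasing are assumptions:

```python
def scope_constraints(task: str, constraints: list[str]) -> str:
    """Append an explicit, numbered list of operating constraints (hypothetical helper)."""
    rules = "\n".join(f"{i}. {c}" for i, c in enumerate(constraints, 1))
    return f"{task}\n\nOperate only within these constraints:\n{rules}"

prompt = scope_constraints(
    "Summarize the incident report.",
    ["Maximum 50 words", "No speculation beyond the report", "Plain language, no jargon"],
)
```

Numbering the constraints makes each one individually referenceable in later turns.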
Build Context
Context Injection
Supplies the background information, state, and references a language model must rely on to perform a task correctly.
Context Reconstruction
Forces the model to deliberately decide what to attend to by regenerating a cleaner, task-relevant context before answering, reducing distraction, bias-copying, and spurious correlations.
Context Scoping
Constrains a model's response by explicitly declaring what information is relevant and what should be ignored, without adding, removing, or rewriting context.
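One way to sketch this in Python, declaring relevance without rewriting the underlying context; the helper and section labels are illustrative:

```python
def scope_context(task: str, relevant: list[str], ignore: list[str]) -> str:
    """Declare what is in and out of scope without altering the context itself."""
    rel = "\n".join(f"- {r}" for r in relevant)
    ign = "\n".join(f"- {i}" for i in ignore)
    return (
        f"{task}\n\n"
        f"Relevant (use this):\n{rel}\n\n"
        f"Out of scope (ignore this):\n{ign}"
    )

prompt = scope_context(
    "Diagnose the crash.",
    relevant=["the attached stack trace", "the config diff from yesterday"],
    ignore=["the deprecated v1 module", "styling and naming issues"],
)
```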
Example-Driven Specification
Specifies desired model behavior through concrete examples rather than abstract rules.
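A minimal few-shot builder sketching this pattern; the input/output labels and the helper name are assumptions:

```python
def few_shot(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Drive the specification with worked examples, then pose the real query."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{instruction}\n\n{shots}\n\nInput: {query}\nOutput:"

prompt = few_shot(
    "Classify the sentiment of each input as positive or negative.",
    [("The service was great", "positive"), ("I waited an hour", "negative")],
    "The food arrived cold",
)
```

Ending the prompt at `Output:` invites the model to complete the established pattern.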
Structural Segmentation
Separates a prompt into clearly defined regions so instructions, inputs, constraints, and reference material are interpreted according to their intended roles rather than inferred from context.
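A sketch using XML-style tags as region delimiters, one common way to segment a prompt; the tag names are illustrative:

```python
def segment(sections: dict[str, str]) -> str:
    """Wrap each prompt region in a named tag so its role is explicit."""
    return "\n".join(
        f"<{name}>\n{body}\n</{name}>" for name, body in sections.items()
    )

prompt = segment({
    "instructions": "Summarize the document in three bullet points.",
    "constraints": "Do not quote more than five consecutive words.",
    "document": "…pasted source text…",
})
```

Tagged regions let instructions be stated once and keep untrusted input clearly separated from them.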
Guide Reasoning
Alternative Enumeration
Explicitly requires the model to surface multiple viable approaches to a task before committing to one, preventing premature convergence on a single solution.
Hierarchical Decomposition
Explicitly requires the model to break a concept into its immediate constituent parts, increasing structural resolution through containment rather than variation.
Dependency Decomposition
Solves complex problems by progressively establishing prerequisite answers, so later steps become solvable using the smallest amount of help required.
Stepwise Decomposition
Explicitly requires the model to solve a task by producing ordered intermediate steps, preventing premature answers and making the reasoning process visible.
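A sketch of the pattern as a reusable suffix; the exact wording is an assumption, not a canonical phrasing:

```python
# Hypothetical suffix that demands ordered, visible intermediate steps.
STEPWISE_SUFFIX = (
    "Solve this in ordered steps. Number each step, show its intermediate "
    "result, and only state the final answer after the last step."
)

def stepwise(task: str) -> str:
    return f"{task}\n\n{STEPWISE_SUFFIX}"

prompt = stepwise("A train leaves at 9:40 and arrives at 12:05. How long is the trip?")
```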
Multi-Path Reasoning
Deliberately generates multiple independent reasoning paths for the same task and delays commitment to a final answer until those paths can be compared or aggregated.
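The aggregation half of this pattern can be sketched as a majority vote over final answers drawn from independently sampled reasoning paths (the sampling itself is assumed to happen elsewhere):

```python
from collections import Counter

def majority_answer(answers: list[str]) -> str:
    """Commit to the final answer most often reached across independent paths."""
    return Counter(answers).most_common(1)[0][0]

# Three independent reasoning paths produced these final answers:
majority_answer(["42", "41", "42"])  # → "42"
```

Delaying commitment until after aggregation is what lets a single faulty path be outvoted.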
Semantic Lifting
Forces the model to move a problem to a higher semantic level before reasoning, replacing a detail-heavy instance with a more general concept, principle, or canonical representation.
Accuracy Control
Knowledge Externalization
Forces the model to generate and write down the background knowledge it will rely on, then perform the task while conditioning on that explicit knowledge.
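A two-phase prompt sketch for this pattern; the step labels and wording are illustrative:

```python
def externalize(task: str) -> str:
    """Ask for explicit background knowledge first, then condition the task on it."""
    return (
        "Step 1 - Background: list the facts, definitions, and formulas this "
        "task depends on.\n"
        "Step 2 - Task: using only the background you wrote in step 1, " + task
    )

prompt = externalize("estimate the orbital period of a satellite at 400 km altitude.")
```

Writing the knowledge down first makes it reviewable and anchors the answer to it.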
Deferred Commitment
Forces an explicit provisional decision early, while deliberately deferring final commitment until that decision has been evaluated.
Reflective Evaluation
Requires the model to explicitly evaluate its own output before revising or committing to a final answer.
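A draft-critique-revise prompt sketch; the three-phase wording is one possible phrasing, not a fixed formula:

```python
def reflect(task: str) -> str:
    """Require a draft, an explicit self-critique, and only then a final answer."""
    return (
        f"{task}\n\n"
        "First draft an answer. Then critique the draft against the task "
        "requirements, listing any errors or gaps. Finally, output a revised "
        "answer that fixes every issue you found."
    )

prompt = reflect("Write a regex that matches ISO 8601 dates.")
```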
Claim Enumeration
Requires the model to explicitly list the claims its output asserts as true, making those claims visible and reviewable without attempting to verify them.
Shape Output
Output Template
Declares the structure of the model's response in advance, guiding generation toward a stable, consumable artifact rather than an open-ended stream of text.
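A sketch that declares the response shape up front as a JSON template; the field names and placeholder conventions are assumptions:

```python
import json

# Hypothetical response template; placeholders show the expected content of each field.
TEMPLATE = {
    "summary": "<one sentence>",
    "risks": ["<risk>"],
    "confidence": "<low|medium|high>",
}

def templated(task: str) -> str:
    return (
        f"{task}\n\nRespond with JSON matching exactly this shape:\n"
        f"{json.dumps(TEMPLATE, indent=2)}"
    )

prompt = templated("Review this deployment plan.")
```

A declared template makes the response parseable by the same code that built the prompt.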
Semantic Compression
Compresses information by preserving intent and meaning rather than exact wording, enabling substantially larger effective context within fixed model token limits.
Response Tail
Appends a deliberate, structured segment to every model response that contains information secondary to the user's primary goal, ensuring consistent disclosure, guidance, or context regardless of the main content.
Formal Representation
Forces the model to express its output in a rule-governed symbolic system rather than natural language prose, so meaning is carried by structure instead of explanation.
Answer Extractor
Forces a language model to explicitly commit to a final, machine-readable answer after reasoning, separating thinking from decision and reducing ambiguity.
Interaction Design
Meta Prompting
Uses one prompt to generate, refine, or optimize another prompt, allowing humans and models to collaborate on better prompts instead of writing them directly.
Interaction Language
Defines explicit rules for how inputs, symbols, and commands are interpreted, reducing ambiguity and shaping reasoning by establishing a shared interaction language between the user and the model.
Interaction Reversal
Explicitly flips control of the interaction so the model drives questioning and discovery until a task can be responsibly completed.
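A sketch of the control-flipping instruction as a reusable suffix; the `READY` convention and wording are assumptions:

```python
# Hypothetical suffix that hands questioning control to the model.
REVERSAL = (
    "Do not answer yet. You are the interviewer: ask me one question at a time "
    "about anything you need to know to do this well. When you have enough "
    "information, say READY and then complete the task."
)

def reversed_prompt(task: str) -> str:
    return f"{task}\n\n{REVERSAL}"

prompt = reversed_prompt("Plan my database migration.")
```

The explicit `READY` token gives the exchange a detectable end condition.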