
The Prompt Taxonomy

Not all prompts serve the same purpose. Understanding the different types helps you choose the right approach for each situation.

Exploration Prompts

Purpose: Understand code, patterns, or architecture without making changes.

Context cost: High (reads many files)
Best practice: Always use sub-agents for exploration

# Direct exploration (avoid — fills main context)
How does our auth system handle token refresh?
# Delegated exploration (preferred)
Use sub-agents to investigate how our auth system handles
token refresh. Summarize the flow, key files, and any
potential issues.

Planning Prompts

Purpose: Design an approach before committing to implementation.

Context cost: Low-medium
Best practice: Switch to planning mode if your tool supports it

I want to add OAuth2 support. Based on the current auth
architecture, create a detailed plan covering:
- Which files need changes
- New files that need to be created
- Database migrations required
- Test scenarios
- Potential risks
Don't implement anything yet.

Implementation Prompts

Purpose: Write code to fulfill a specification or plan.

Context cost: Medium-high
Best practice: Reference a plan or spec; use TDD

Follow the plan in .sdlc/plans/oauth.md.
Start with step 1. For each step:
1. Write failing tests
2. Implement to pass
3. Run tests and verify
4. Move to next step
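The test-first loop in that prompt can be sketched in code. This is a minimal, hypothetical example (the `parse_scope` function and its behavior are invented for illustration, not taken from any real OAuth plan):

```python
# One TDD iteration, using a toy function as the unit under test.

# Step 1: write the failing test first.
def test_parse_scope():
    assert parse_scope("read write") == ["read", "write"]
    assert parse_scope("") == []

# Step 2: implement just enough to make it pass.
def parse_scope(raw: str) -> list[str]:
    """Split a space-delimited OAuth scope string into individual scopes."""
    return raw.split()

# Step 3: run the tests and verify before moving to the next step.
test_parse_scope()
print("step complete: tests green")
```

The point of the loop is that each step leaves the suite green, so the agent (or you) can stop at any step with working code.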

Verification Prompts

Purpose: Check that the implementation meets requirements.

Context cost: Medium
Best practice: Use a separate agent for unbiased review

Use the reviewer agent to check the OAuth implementation
against the spec in .sdlc/specs/oauth.md.
For each requirement, verify it's implemented and tested.
Flag any deviations or missing coverage.
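The per-requirement check the reviewer performs can be sketched mechanically. This is a hypothetical sketch: the `REQ-n` line format and the helper names are invented, not a real spec convention from this project:

```python
import re

def coverage_report(spec_text: str, implemented: set[str], tested: set[str]) -> dict:
    """Cross-check requirement IDs found in a spec against the sets of IDs
    known to be implemented and tested. The '- REQ-n:' format is assumed."""
    reqs = re.findall(r"-\s*(REQ-\d+)", spec_text)
    return {
        req: {"implemented": req in implemented, "tested": req in tested}
        for req in reqs
    }

spec = """
- REQ-1: support authorization-code flow
- REQ-2: rotate refresh tokens
"""
report = coverage_report(spec, implemented={"REQ-1", "REQ-2"}, tested={"REQ-1"})
# REQ-2 is implemented but untested, so a reviewer should flag it.
```

A report like this gives the reviewer agent a concrete checklist to fill in, rather than an open-ended "does it look right?" question.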

Maintenance Prompts

Purpose: Refactoring, dependency updates, migrations.

Context cost: Varies widely
Best practice: Fan-out pattern for bulk changes

# Single file
Refactor the OrderProcessor to use the Strategy pattern.
Keep all existing tests green.
# Bulk migration (fan-out)
Here are 200 files that use the old API format.
Process each file: update imports, change method calls
from oldApi.fetch() to newApi.get(), run tests.
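The per-file transformation in a fan-out like this is usually simple enough to express as a small script. A minimal sketch, assuming the `oldApi.fetch()` to `newApi.get()` rename and an `old-api` to `new-api` import rename (both patterns are illustrative, not a real library's API):

```python
import re
from pathlib import Path

def migrate_source(src: str) -> str:
    """Rewrite one file's contents from the old API to the new one."""
    out = src.replace("oldApi.fetch(", "newApi.get(")
    # Update quoted module names in import statements.
    return re.sub(r"(['\"])old-api\1", r"\1new-api\1", out)

def migrate_tree(root: str) -> list[Path]:
    """Fan-out driver: apply the rewrite to every matching file and return
    the paths that changed. Run the test suite after each batch."""
    changed = []
    for path in Path(root).rglob("*.ts"):
        src = path.read_text()
        out = migrate_source(src)
        if out != src:
            path.write_text(out)
            changed.append(path)
    return changed
```

Keeping the rewrite in a pure function (`migrate_source`) makes each file's change reviewable and testable independently of the filesystem walk.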

Diagnostic Prompts

Purpose: Debug issues, understand failures, trace problems.

Context cost: High (error logs, stack traces, file reads)
Best practice: Provide the symptom and narrow the search space

The /api/orders endpoint returns 500 for this request:
[paste request]
The error log shows: [paste error]
Check the order processing flow in src/services/orders.ts
and the database query in src/repos/orders.ts.
Find the root cause and suggest a fix.
Choosing a Prompt Type

Is the task clear and small?
├── Yes → Implementation prompt (skip planning)
└── No
    ├── Do you know where the code is?
    │   ├── No → Exploration prompt (with sub-agents)
    │   └── Yes
    │       ├── Is the approach clear?
    │       │   ├── No → Planning prompt
    │       │   └── Yes → Implementation prompt
    │       └── Is it a bug?
    │           └── Yes → Diagnostic prompt
    └── Is it a bulk change?
        └── Yes → Maintenance prompt (fan-out)
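The tree reads naturally as a function. A sketch of one possible encoding; where the tree forks in parallel (bulk change versus knowing the code's location), the evaluation order chosen here is interpretive:

```python
def choose_prompt_type(*, clear_and_small: bool, is_bulk: bool = False,
                       know_where: bool = False, approach_clear: bool = False,
                       is_bug: bool = False) -> str:
    """Walk the decision tree and return the recommended prompt type."""
    if clear_and_small:
        return "implementation (skip planning)"
    if is_bulk:
        return "maintenance (fan-out)"
    if not know_where:
        return "exploration (with sub-agents)"
    if is_bug:
        return "diagnostic"
    return "implementation" if approach_clear else "planning"
```

For example, an unclear task in an unfamiliar part of the codebase routes to exploration, while a known bug in a known file routes straight to a diagnostic prompt.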

The most effective workflows chain prompt types:

Exploration (sub-agents) → Planning (planning mode) → Implementation (TDD) → Verification (reviewer agent)

Each type uses a different context strategy:

  • Exploration: sub-agent context isolation
  • Planning: clean main context
  • Implementation: fresh context with plan loaded
  • Verification: separate agent context