
# Spec-Driven Development

Spec-driven development (SDD) places well-crafted specifications at the center of the development process. Instead of describing features in chat prompts, you write formal specifications in markdown — and AI agents “compile” those specs into code.

In traditional development, the code is the source of truth. In spec-driven development, the specification is the source of truth, and the code is a generated artifact that must conform to it.

This shift has profound implications:

  • Specifications are reviewed and iterated independently of code
  • Multiple agents can generate code from the same spec
  • Specs survive context compaction — they’re files, not chat messages
  • Specs serve as documentation, not just implementation guides
The workflow proceeds in five steps:

  1. Write the specification

    Create a detailed markdown spec that describes the feature:

    ````markdown
    # Rate Limiting Middleware

    ## Purpose
    Protect API endpoints from abuse by limiting request frequency per client.

    ## Behavior
    - Track requests per API key using Redis
    - Default limit: 100 requests per minute
    - Custom limits configurable per endpoint via route metadata
    - When limit exceeded: return 429 with Retry-After header
    - Sliding window algorithm (not fixed windows)

    ## Interface
    ```typescript
    interface RateLimitConfig {
      windowMs: number;     // Window size in milliseconds
      maxRequests: number;  // Max requests per window
      keyGenerator: (req: Request) => string;
    }

    function rateLimit(config: RateLimitConfig): Middleware
    ```

    ## Edge Cases
    - Multiple API keys from same IP: independent limits
    - Redis unavailable: fail open (allow request, log warning)
    - Clock skew between servers: use Redis time, not local time

    ## Test Scenarios
    1. Client makes 100 requests → all succeed
    2. Client makes 101st request → 429 with Retry-After
    3. Wait for window expiry → requests succeed again
    4. Redis down → requests succeed with warning log
    5. Custom limit on specific endpoint → respected
    ````
  2. Review the specification

    This is the highest-leverage review point. Validate:

    • Is the behavior correct and complete?
    • Are edge cases covered?
    • Are test scenarios sufficient?
    • Does the interface match existing patterns?
  3. Agent compiles the spec

    Implement the specification in .sdlc/specs/rate-limiting.md.
    Follow TDD: write tests matching the test scenarios first,
    then implement to pass them.
  4. Verify against the spec

    Review the implementation against .sdlc/specs/rate-limiting.md.
    For each requirement, confirm it's implemented and tested.
    Flag any deviations.
  5. Update the spec

    If implementation revealed new requirements or design changes, update the spec to remain the source of truth.
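The sliding-window behavior the spec calls for can be sketched directly from it. The following is a minimal in-memory illustration in TypeScript, not the Redis-backed implementation the spec requires; the `SlidingWindowLimiter` name, the injectable clock, and the `retryAfterSeconds` helper are assumptions made for this sketch, not part of the spec's interface:

```typescript
// In-memory sliding-window limiter; the spec's production store is Redis.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>(); // key -> timestamps (ms) of recent requests

  constructor(
    private windowMs: number,
    private maxRequests: number,
    private now: () => number = Date.now, // injectable clock, so tests control time
  ) {}

  // Record the request and return true if it is within the limit.
  allow(key: string): boolean {
    const cutoff = this.now() - this.windowMs;
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.maxRequests) {
      this.hits.set(key, recent);
      return false; // caller responds 429 with Retry-After
    }
    recent.push(this.now());
    this.hits.set(key, recent);
    return true;
  }

  // Seconds until the oldest recorded request leaves the window (for Retry-After).
  retryAfterSeconds(key: string): number {
    const recent = this.hits.get(key) ?? [];
    if (recent.length === 0) return 0;
    return Math.max(0, Math.ceil((recent[0] + this.windowMs - this.now()) / 1000));
  }
}
```

Filtering timestamps on every request keeps the window truly sliding rather than fixed. A Redis-backed version typically does the same with a sorted set (`ZREMRANGEBYSCORE` plus `ZCARD`), using Redis server time to sidestep the clock-skew edge case.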

Reusable templates keep specs consistent. A feature template (templates/feature.md):

```markdown
# [Feature Name]

## Purpose
[Why this feature exists — business context]

## User Stories
- As a [role], I want [capability] so that [benefit]

## Behavior
[Detailed description of how the feature works]

## Interface
[Types, function signatures, API contracts]

## Data Model
[Database changes, if any]

## Edge Cases
[Boundary conditions and error scenarios]

## Test Scenarios
[Numbered list of specific test cases]

## Dependencies
[Other features or systems this depends on]

## Non-Goals
[Explicitly what this feature does NOT do]
```
A bug-fix template (templates/bugfix.md):

```markdown
# Bug: [Title]

## Symptom
[What the user observes]

## Root Cause
[Why it happens — from research phase]

## Fix
[What needs to change]

## Verification
[How to confirm the fix works]

## Regression Test
[Test to prevent this bug from recurring]
```
Specs live in a dedicated directory, organized by status:

```
.sdlc/
├── specs/
│   ├── active/         # Currently being implemented
│   │   ├── rate-limiting.md
│   │   └── oauth-flow.md
│   ├── completed/      # Implemented and verified
│   │   ├── user-auth.md
│   │   └── pagination.md
│   └── templates/      # Reusable spec templates
│       ├── feature.md
│       └── bugfix.md
├── plans/
│   ├── rate-limiting-plan.md
│   └── oauth-plan.md
└── research/
    ├── auth-system-analysis.md
    └── redis-patterns.md
```
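Because every active spec follows a template, its structure can be checked mechanically before human review. A minimal sketch, assuming a `missingSections` helper and a hard-coded subset of required headings (a real checker would read each file under the active specs directory):

```typescript
// Section headings the feature template requires (a subset, for illustration).
const REQUIRED_SECTIONS = ["## Purpose", "## Behavior", "## Interface", "## Test Scenarios"];

// Return the required headings a spec draft does not yet contain.
function missingSections(spec: string): string[] {
  return REQUIRED_SECTIONS.filter((heading) => !spec.includes(heading));
}

const draft = [
  "# Rate Limiting Middleware",
  "## Purpose",
  "Protect API endpoints from abuse.",
  "## Behavior",
  "Track requests per API key using Redis.",
].join("\n");

// Reports the headings the draft still lacks before it is ready for review.
console.log(missingSections(draft));
```

A check like this makes step 2, the spec review, start from a structurally complete document rather than a partial draft.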
This approach pays off in several ways:

  1. Compaction resilience — Specs are files that survive any context event
  2. Multi-agent coordination — Multiple agents reference the same spec
  3. Human leverage — Review specs instead of code for maximum impact
  4. Reproducibility — Same spec, different agent, consistent results
  5. Documentation — Specs become feature documentation automatically
  6. History — Git history of spec changes tells the “why” story