# Review Patterns
Traditional code review doesn’t scale to agentic output. If agents produce 10x the code, line-by-line review becomes a bottleneck. The solution is shifting review upstream and leveraging agents for review.
## The Pyramid of Influence

```
         /\
        /  \
       / ×× \        Research   1 bad line = 1000s of bad code lines
      /______\
     /        \
    /  ×××××   \     Plan       1 bad line = 100s of bad code lines
   /____________\
  /              \
 /  ××××××××××    \  Code       1 bad line = 1 bad line
/__________________\
```

Focus review effort at the highest leverage point: research and planning, not code.
## Review Strategy by Phase

### Research Review (Highest Leverage)

Before the agent plans anything, validate the research:
- Are the findings accurate?
- Are key dependencies identified?
- Are there alternatives the research missed?
- Is the scope appropriate?
**Time investment:** 5-10 minutes

**Impact:** Prevents fundamentally wrong approaches
### Plan Review (High Leverage)

Before the agent writes code, validate the plan:
- Does the approach fit the architecture?
- Are the steps ordered correctly?
- Are verification criteria sufficient?
- Are there missing edge cases?
**Time investment:** 10-15 minutes

**Impact:** Prevents structural mistakes across the entire implementation
### Code Review (Standard Leverage)

After implementation, validate the code:
- Does it match the plan?
- Do tests cover the requirements?
- Are there security concerns?
- Does it follow existing patterns?
**Time investment:** Varies by size

**Impact:** Catches implementation bugs
## Agent-Assisted Review

### Writer/Reviewer Pattern

Use separate agents for writing and reviewing:
```
# Session A: Writer
Implement the rate limiter following the plan.
```

```
# Session B: Reviewer (fresh context, unbiased)
Review the rate limiter implementation at src/middleware/rateLimit.ts.
Check for: race conditions, edge cases, security issues,
consistency with existing middleware patterns.
Provide specific file:line references for each finding.
```

The reviewer runs in a fresh context — it’s not biased by the implementation decisions.
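The session split can be sketched in code. `runAgent` below is a hypothetical helper (not a real API) standing in for whatever your agent runner provides to start a fresh session:

```typescript
// Sketch of the writer/reviewer split. `runAgent` is a hypothetical
// helper that starts a *fresh* agent session per call; substitute
// your actual agent runner.
type AgentRun = (prompt: string) => Promise<string>;

async function writeThenReview(runAgent: AgentRun): Promise<string> {
  // Session A: writer. Its conversation context is discarded
  // when the call returns.
  await runAgent("Implement the rate limiter following the plan.");

  // Session B: reviewer. A brand-new context that only sees the
  // code on disk, so it is not anchored to the writer's reasoning.
  return runAgent(
    "Review the rate limiter implementation at src/middleware/rateLimit.ts.\n" +
      "Check for: race conditions, edge cases, security issues,\n" +
      "consistency with existing middleware patterns.\n" +
      "Provide specific file:line references for each finding."
  );
}
```

The key property is that session B shares no conversation state with session A; only the artifacts on disk carry over.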
### Custom Review Agents

Define specialized reviewers in your agent’s agents/ folder (see Tool Configuration Reference for exact paths):
```
---
name: security-reviewer
description: OWASP-focused security review
tools: Read, Grep, Glob
# use a cost-efficient model
---
Check for OWASP Top 10 vulnerabilities.
```

```
---
name: perf-reviewer
description: Performance-focused code review
tools: Read, Grep, Glob, Bash
# use a cost-efficient model
---
Check for: N+1 queries, missing indexes, unbounded loops,
memory leaks, unnecessary allocations.
```

### The Multi-Reviewer Pattern

For critical code, use multiple specialized reviewers:
```
Run these reviews in parallel:
1. Use the security-reviewer agent to check for vulnerabilities
2. Use the perf-reviewer agent to check for performance issues
3. Use a sub-agent to verify all test scenarios from the spec are covered

Synthesize the findings into a single review summary.
```

## Review Checklist for Agent Code
Section titled “Review Checklist for Agent Code”| Category | Check |
|---|---|
| Correctness | Does it match the spec/plan? |
| Edge cases | Are boundary conditions handled? |
| Tests | Do tests cover all specified scenarios? |
| Security | Any injection, auth, or data exposure risks? |
| Performance | Any N+1 queries, unbounded operations, or leaks? |
| Patterns | Does it follow existing codebase conventions? |
| Dependencies | Any unnecessary new dependencies added? |
| Scope | Does it only change what was specified? |
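As a concrete sketch, the multi-reviewer fan-out and this checklist can be combined programmatically. `runSubAgent` and the `general-purpose` agent name are hypothetical placeholders (illustrative only, not a real API):

```typescript
// Fan out specialized reviewers in parallel, then synthesize.
// `runSubAgent(name, prompt)` is a hypothetical helper that invokes
// a named sub-agent in its own fresh context and resolves with its
// findings.
type SubAgentRun = (name: string, prompt: string) => Promise<string>;

// Checklist from the table above, as category -> question.
const CHECKLIST: Record<string, string> = {
  Correctness: "Does it match the spec/plan?",
  "Edge cases": "Are boundary conditions handled?",
  Tests: "Do tests cover all specified scenarios?",
  Security: "Any injection, auth, or data exposure risks?",
  Performance: "Any N+1 queries, unbounded operations, or leaks?",
  Patterns: "Does it follow existing codebase conventions?",
  Dependencies: "Any unnecessary new dependencies added?",
  Scope: "Does it only change what was specified?",
};

async function multiReview(run: SubAgentRun, target: string): Promise<string> {
  const checklist = Object.entries(CHECKLIST)
    .map(([category, question]) => `- ${category}: ${question}`)
    .join("\n");

  // The three reviews run concurrently, each in a fresh context.
  const [security, perf, coverage] = await Promise.all([
    run("security-reviewer", `Check ${target} for vulnerabilities.`),
    run("perf-reviewer", `Check ${target} for performance issues.`),
    run("general-purpose", `Review ${target} against this checklist:\n${checklist}`),
  ]);

  // Synthesize the findings into a single review summary.
  return [
    "## Review summary",
    `### Security\n${security}`,
    `### Performance\n${perf}`,
    `### Checklist\n${coverage}`,
  ].join("\n\n");
}
```

The parallel fan-out keeps wall-clock review time close to the slowest single reviewer rather than the sum of all three.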