# Prompting Fundamentals
Agentic AI prompting differs fundamentally from chatbot prompting. You’re not asking questions — you’re defining objectives, providing context, and setting constraints for an autonomous agent.
## The Prompt Quality Spectrum

Our experiments compared three prompting approaches on identical tasks:
| Approach | Completeness | Edge Cases | Test Coverage | Code Quality |
|---|---|---|---|---|
| Minimal (“build X”) | 4/10 | 2/10 | 1/10 | 5/10 |
| Context-rich | 7/10 | 6/10 | 5/10 | 7/10 |
| Spec-driven + TDD | 9/10 | 9/10 | 9/10 | 8/10 |
The gap is dramatic. Minimal prompts produce superficially correct but brittle code. Spec-driven + TDD prompts produce production-ready implementations.
## The Five Elements of Effective Prompts
### 1. Business Context (The Why)

A common mistake is issuing command-style prompts without explaining why. Agents that understand the business context make better architectural decisions.
```
Add a rate limiter to the API.
```

The agent might choose a simple in-memory counter — works for one server, fails at scale.
```
Add rate limiting to protect our API from abuse.
We run 3 API servers behind a load balancer,
so limits must be shared across instances (use Redis).
Focus on the auth endpoints — those are the abuse targets.
```

With this context, the agent makes architecture-appropriate decisions.
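To make the architectural difference concrete, here is a minimal sketch (an illustration, not part of the prompts above) of a fixed-window counter. With a plain dict it has exactly the flaw the context-rich prompt warns about: state lives in one process, so three servers each enforce their own limit. The `store` parameter marks where a shared backend such as Redis (`INCR` plus `EXPIRE` per window key) would slot in.

```python
import time

class FixedWindowLimiter:
    """Fixed-window rate limiter sketch (hypothetical, for illustration)."""

    def __init__(self, limit, window_seconds, store=None):
        self.limit = limit
        self.window = window_seconds
        # A plain dict is per-process state: each of the 3 API servers would
        # enforce its own limit, so clients effectively get 3x the intended
        # rate. Swapping `store` for a shared backend (e.g. Redis INCR +
        # EXPIRE on the window key) makes the limit global across instances.
        self.store = store if store is not None else {}

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        # One counter per client per time window.
        key = f"{client_id}:{int(now // self.window)}"
        count = self.store.get(key, 0) + 1
        self.store[key] = count
        return count <= self.limit

limiter = FixedWindowLimiter(limit=100, window_seconds=60)
results = [limiter.allow("client-a", now=0.0) for _ in range(101)]
# Within one window: the first 100 calls pass, the 101st is rejected.
```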
### 2. Specific Scope (The What)

Scope every task precisely. Reference specific files, functions, and scenarios.
| Vague | Specific |
|---|---|
| "add tests for foo.py" | "Write tests for foo.py covering the edge case where the user is logged out. Avoid mocks." |
| "fix the login bug" | "Users report login fails after session timeout. Check src/auth/, especially token refresh. Write a failing test first." |
| "make the dashboard look better" | "[screenshot] Implement this design. Take a screenshot of the result and compare.” |
### 3. Pattern References (The How)

Point agents to existing patterns in your codebase. This is more effective than describing patterns in prose.

```
Look at how existing widgets are implemented on the home page.
HotDogWidget.php is a good example. Follow that pattern to
implement a new CalendarWidget that lets users select a month
and paginate forwards/backwards through years.
Build from scratch — no new libraries.
```
### 4. Success Criteria (The Done)

Define what "done" looks like with verifiable criteria:

```
The feature is done when:
1. All existing tests pass
2. New tests cover: valid input, invalid input, edge cases
3. TypeScript compiles with no errors
4. The rate limiter correctly limits to 100 req/min per client
5. Redis failure doesn't crash the server (fail-open behavior)
```
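Criterion 5 is worth making concrete, since "fail-open" is easy to state and easy to get backwards. A minimal sketch of the behavior, where `check_limit` and `broken_backend` are hypothetical stand-ins for the real Redis-backed check:

```python
def allow_request(client_id, check_limit):
    # `check_limit` is a hypothetical callable returning True while the
    # client is under its limit; it raises if the backend is unreachable.
    try:
        return check_limit(client_id)
    except ConnectionError:
        # Fail open: a Redis outage degrades to "no rate limiting"
        # rather than rejecting every request and taking the API down.
        return True

def broken_backend(client_id):
    # Simulated Redis outage for the sketch.
    raise ConnectionError("redis unreachable")

print(allow_request("client-a", broken_backend))  # True despite the outage
```

The inverse choice (fail-closed) is right for security-critical checks like authentication, which is why the success criteria should name the intended behavior explicitly rather than leave it to the agent.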
### 5. Constraints (The Boundaries)

Explicitly state what the agent should NOT do:

```
Constraints:
- Don't add new dependencies
- Don't modify the database schema
- Don't change the public API contract
- Keep changes within src/middleware/ only
```
## Prompt Templates

### Feature Implementation

```
## Task
Implement [feature description].

## Context
[Why this feature exists, business context]

## Reference
Follow the pattern in [existing file]. Look at [reference file]
for conventions.

## Requirements
1. [Specific requirement with acceptance criteria]
2. [Another requirement]
3. [Edge case handling]

## Verification
- Write tests first, confirm they fail
- Implement to pass tests
- Run full suite: `pnpm test`
- Typecheck: `pnpm tsc --noEmit`

## Constraints
- [What not to do]
- [Scope limitations]
```
### Bug Fix

```
## Bug
[Description of the symptom]

## Error
[Paste the actual error message or stack trace]

## Likely Location
Check [specific files/functions]

## Fix Requirements
1. Write a failing test that reproduces the bug
2. Fix the root cause (don't suppress the error)
3. Verify the fix doesn't break existing tests
4. Add a regression test
```
### Codebase Exploration

```
Use sub-agents to investigate:
1. How [system X] handles [specific behavior]
2. What patterns exist for [capability Y]
3. Which files would need to change for [feature Z]

Report findings in a structured summary. Don't make any changes.
```
## The Interview Pattern

For larger features where you’re uncertain about scope, let the agent interview you:
```
I want to build [brief description]. Interview me in detail
using the AskUserQuestion tool.

Ask about technical implementation, UI/UX, edge cases,
concerns, and tradeoffs. Don't ask obvious questions —
dig into the hard parts I might not have considered.

Keep interviewing until we've covered everything,
then write a complete spec to .sdlc/specs/[feature].md
```
## Anti-Patterns

See the dedicated Anti-Patterns page for common mistakes and how to avoid them.