Quick Start

This guide walks you through configuring your project with the highest-impact practices. Before you begin, make sure you have:

  • A code project (any language)
  • An AI coding agent installed and configured
  • Git initialized in your project
  1. Create a focused agent configuration file

    Create an agent configuration file in your project root (the exact filename depends on your tool — see Tool Configuration Reference). Keep it under 60 lines — every line should answer: “Would removing this cause the AI agent to make mistakes?”

    # Project: [Your Project Name]
    ## Tech Stack
    - Language: TypeScript / Node.js 22
    - Framework: [Your framework]
    - Testing: Vitest
    - Package manager: pnpm
    ## Commands
    - Install: `pnpm install`
    - Dev: `pnpm dev`
    - Test single: `pnpm vitest run path/to/test`
    - Test all: `pnpm test`
    - Typecheck: `pnpm tsc --noEmit`
    - Lint: `pnpm lint`
    ## Architecture
    - src/api/ — REST endpoints, one file per resource
    - src/services/ — Business logic, no direct DB access
    - src/repos/ — Database layer, Drizzle ORM
    - src/types/ — Shared TypeScript interfaces
    ## Conventions
    - Pure functions preferred, no side effects in services
    - All API endpoints must have integration tests
    - Error handling via Result<T, E> pattern, no thrown exceptions
    - Use zod for all external input validation
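The `Result<T, E>` convention above can be sketched in a few lines of TypeScript. This is a minimal illustration, not a library API; the names `Result`, `ok`, and `err` are placeholders you would define (or import from a utility module) in your own project:

```typescript
// Minimal Result<T, E> sketch: functions return a tagged union
// instead of throwing. The names here are illustrative only.
type Result<T, E> =
  | { ok: true; value: T }
  | { ok: false; error: E };

const ok = <T>(value: T): Result<T, never> => ({ ok: true, value });
const err = <E>(error: E): Result<never, E> => ({ ok: false, error });

// Example service function: no thrown exceptions; the caller
// must check `ok` before reading `value`.
function parsePort(input: string): Result<number, string> {
  const n = Number(input);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    return err(`invalid port: ${input}`);
  }
  return ok(n);
}
```

Callers branch on the `ok` discriminant, e.g. `const r = parsePort("8080"); if (r.ok) listen(r.value);` — the compiler prevents reading `value` on the error branch.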
  2. Create essential skills

    Create a skills/ directory in your agent’s configuration folder. For tool-specific paths, see the Tool Configuration Reference.

    Create skills/research.md:

    ---
    name: research
    description: Research a topic in the codebase before making changes
    ---
    Research the following topic: $ARGUMENTS
    1. Use sub-agents to explore relevant files and patterns
    2. Document findings in a structured summary
    3. Identify dependencies and potential impacts
    4. List specific files that would need changes
    5. Do NOT make any code changes

    Create skills/implement.md:

    ---
    name: implement
    description: Implement a feature following the research-plan-implement workflow
    ---
    Implement: $ARGUMENTS
    Follow this workflow:
    1. Read the plan/spec if one exists
    2. Write failing tests first (red phase)
    3. Implement minimum code to pass tests (green phase)
    4. Run the full test suite
    5. Run typecheck and lint
    6. Refactor if needed while keeping tests green
  3. Create a custom sub-agent for code review

    Create agents/reviewer.md in your agent’s configuration folder:

    ---
    name: reviewer
    description: Reviews code changes for quality, security, and correctness
    tools: Read, Grep, Glob, Bash
    # use a cost-efficient model
    ---
    Review the code changes for:
    - Logic errors and edge cases
    - Security vulnerabilities (injection, auth flaws, data exposure)
    - Consistency with existing patterns
    - Missing test coverage
    - Performance concerns
    Provide specific file:line references and suggested fixes.
  4. Set up verification hooks

    In your agent’s settings/permissions file, add a hook that runs typechecking after every file edit. For tool-specific configuration syntax, see the Tool Configuration Reference.

    This catches errors immediately after each file change.
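As one concrete shape, some agents accept a JSON hook configuration roughly like the sketch below. The key names (`hooks`, `PostToolUse`, `matcher`) are assumptions modeled on one tool's format and vary between tools, so check your tool's documentation for the exact syntax:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "pnpm tsc --noEmit" }
        ]
      }
    ]
  }
}
```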

  5. Configure permissions for flow

    In your AI coding agent’s settings/permissions file, allow commonly used, safe commands to run without prompting:

    • pnpm test, pnpm lint, pnpm tsc
    • git status, git diff, git log, git add, git commit
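In a tool that uses a JSON allow-list, that might look roughly like the following. The `permissions.allow` shape and the `Bash(...)` entry syntax are assumptions based on one tool's format, not a universal standard:

```json
{
  "permissions": {
    "allow": [
      "Bash(pnpm test)",
      "Bash(pnpm lint)",
      "Bash(pnpm tsc --noEmit)",
      "Bash(git status)",
      "Bash(git diff)",
      "Bash(git log)",
      "Bash(git add:*)",
      "Bash(git commit:*)"
    ]
  }
}
```

Keep the allow-list to read-only and locally reversible commands; anything that pushes, deploys, or deletes should still prompt.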

Now test the setup with the Research → Plan → Implement pattern:

Use sub-agents to research how authentication works in this project.

Your agent uses sub-agents to explore without polluting your context.

After setup, verify these work:

  • Your agent configuration file is under 60 lines and loads every session
  • Skills activate when invoked by name (e.g., research, implement)
  • Sub-agent reviewer runs in isolated context
  • Hooks trigger typecheck after file edits
  • Permissions allow common commands without prompting