Many teams write detailed specifications, implement features, then try to "maintain" those specifications forever. This creates specification debt - outdated docs that drift from reality.
The correct mental model:
What persists:
- 📋 PRD - Business requirements and rationale (lives in docs/prds/)
- 🏗️ CLAUDE.md - Project constitution
- 📐 ADRs - Architecture decisions (immutable records)
- 📖 API Schema - API contracts
- 💻 Code + Tests - Executable implementation
What's temporary:
- 📝 Specification - Implementation guide (archived after use)
- 🔧 Technical Plan - Implementation approach
- ✅ Task List - Execution checklist
A specification guides implementation RIGHT NOW. Once implemented + tested → spec's job is done:
- Behavior captured in code
- API contract in API schema
- Business context in PRD
Archive after implementation:
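One way to archive is to move the implemented spec into an archive folder; a minimal sketch, where the docs/specs/ layout is an assumption, not a fixed convention:

```python
from pathlib import Path

def archive_spec(spec: Path, archive_dir: Path) -> Path:
    """Move an implemented spec into the archive folder.

    Paths like docs/specs/ and docs/specs/archive/ are illustrative;
    use whatever layout your repo already has.
    """
    archive_dir.mkdir(parents=True, exist_ok=True)
    target = archive_dir / spec.name
    spec.rename(target)
    return target

# e.g. archive_spec(Path("docs/specs/add-pagination.md"),
#                   Path("docs/specs/archive"))
```

In a Git repo you would typically use `git mv` instead, so the file's history follows it into the archive.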
If feature needs modification later:
- Start from PRD (business requirements still current?)
- Generate NEW specification (fresh guide)
- Don't try to "update" old specification
A Product Requirements Document defines WHAT and WHY at the business level.
PRD contains:
- Problem statement (what problem does this solve?)
- Users and personas
- Functional requirements
- Constraints (technical, business, performance)
- Success metrics (post-deployment)
- Out of scope
PRD is written for: Product managers, stakeholders, developers, future team members
PRD is NOT: as detailed as a specification, an implementation guide, or runtime documentation for agents
Critical distinction: PRDs are historical records for human reference, NOT documentation that agents browse during development.
Agents read PRDs only when explicitly directed to by a human.
Typical workflow (PRD not involved):
- Human: "Add pagination to task list endpoint"
- Agent reads: OpenAPI, CLAUDE.md, code (current state)
- Agent generates specification and implements
- Agent never browsed PRD folder
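The reading list in this workflow can be made explicit; a sketch of the context an agent loads for a routine change (the file and folder names are illustrative assumptions):

```python
# Illustrative context list for a routine change; names are assumptions,
# not a fixed convention.
AGENT_CONTEXT = [
    "openapi.yaml",  # current API contract
    "CLAUDE.md",     # project constitution
    "src/",          # current code and tests
    "docs/adr/",     # immutable architecture decisions
]

# docs/prds/ is deliberately absent: agents read PRDs only when a human
# explicitly points them there.
assert "docs/prds/" not in AGENT_CONTEXT
```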
Think of PRDs as blueprints: Once the house is built, you tour the house (code), not old blueprints.
PRD says (high-level): e.g., "Users can prioritize tasks so urgent work surfaces first."
Specification says (precise): e.g., "priority is an enum (low | medium | high), defaults to medium, and is set via PATCH /tasks/{id}; invalid values return 422."
Key differences: the PRD states intent and rationale; the specification states exact contracts an agent can implement and verify before merge.
This distinction is critical for AI-assisted development.
Verification criteria: the agent can verify them BEFORE merging code.
Success metrics: measured AFTER deployment, based on real user behavior.
❌ BAD (in Specification - agent can't verify): e.g., "Users will find tagging intuitive (70% adoption within 30 days)."
✅ GOOD (in Specification): e.g., "A tag failing validation returns HTTP 422 with a descriptive error body."
✅ GOOD (in PRD): e.g., "(Post-deployment metric) 70% of active users apply at least one tag within 30 days."
Template: header metadata (including a Status field), problem statement, users and personas, functional requirements, constraints, post-deployment success metrics, and out of scope.
Key Principles:
✅ DO include in PRD:
- Business problem (past/neutral tense)
- Functional requirements (what system must do)
- Post-deployment success metrics (clearly marked)
✅ DO include in Specification (not PRD):
- Exact API contracts
- Validation rules with regex
- Test cases agent can run
- Performance benchmarks to hit before merge
❌ DON'T include (or mark as post-deployment):
- User adoption rates (can't verify during dev)
- Long-term engagement metrics (need time + real users)
- Business KPIs dependent on user behavior
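The contrast above can be made concrete as a test the agent can actually run before merge; a minimal sketch assuming a hypothetical tag-name rule (the regex is an invented example, not a real project contract):

```python
import re

# Hypothetical spec-level rule: tag names are 1-32 chars of [a-z0-9-].
TAG_RE = re.compile(r"^[a-z0-9-]{1,32}$")

def is_valid_tag(tag: str) -> bool:
    """Return True if the tag satisfies the spec's validation rule."""
    return TAG_RE.fullmatch(tag) is not None

# Agent-verifiable before merge (belongs in the specification):
assert is_valid_tag("backend")
assert not is_valid_tag("Has Spaces!")
assert not is_valid_tag("x" * 33)

# "70% of users adopt tags within 30 days" cannot be asserted here:
# it needs real users over time, so it belongs in the PRD as a
# post-deployment success metric.
```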
PRDs change only when fundamental business requirements evolve.
Example:
v1.0: Tasks have priority 1-5 (numeric)
Problem discovered: 68% of users confused by numeric scale
v2.0 PRD: revises the requirement (e.g., replacing the numeric scale with named priority levels) and records why the change was made.
The Process:
- Human provides informal requirements (1-2 paragraphs)
- Claude analyzes codebase (models, API patterns, CLAUDE.md)
- Claude generates architecture-aware PRD (references actual code)
- Human reviews for business accuracy (right problem? feasible constraints?)
- Claude refines based on feedback
- Approved PRD → input for specification generation
Important: Once approved and implemented, PRD lives in Git as historical record. Agents won't browse it unless human explicitly directs them to.
Let's examine a real PRD section by section to understand how each part serves its purpose.
Header and Metadata:
Status field immediately shows this PRD's lifecycle state. "Implemented" means we can reference it for historical context.
Problem Statement:
Notice: Past/neutral tense ("needed"), not "currently broken." The PRD documents what problem existed, not current system state.
Functional Requirements:
Business requirements at a high level. The specification would detail the exact regex, API endpoints, and error codes.
Constraints:
References actual codebase structure. This is what makes AI-assisted PRDs powerful - they understand existing architecture.
Success Metrics (Post-Deployment):
Clearly marked as analytics goals. The agent can't verify these during development; they require real users over time.
Scope Boundaries:
Prevents scope creep. Documents what we're deliberately NOT building.
Integration Notes:
Key Concepts:
- Specification lifecycle: PRD (persistent) → Spec (temporary) → Implementation → Archive Spec
- PRDs are for humans: Historical records, not agent discovery docs
- What agents read: API Schema, CLAUDE.md, Code, ADRs (PRDs only when human directs)
- PRD structure: Status field, past-tense problems, post-deployment metrics clearly marked
- Verification vs Success: Agent verifies before merge → Verification. Need real users → Success metric
- PRD versioning: Rare - only when business requirements fundamentally change
Key Mental Models:
- PRDs are blueprints: Tour the house (code), not blueprints
- Specifications are scaffolding: Removed after construction
- Verification vs Success: Can agent test now? → Verification. Need users? → Success metric
Next: Practice tasks where you'll:
- Analyze PRD vs Specification differences
- Generate architecture-aware PRDs
- Distinguish verification criteria from success metrics
- Review PRDs for business accuracy
This establishes the persistent documentation layer that feeds specification work!
