The Specification Lifecycle Problem

Many teams write detailed specifications, implement features, then try to "maintain" those specifications forever. This creates specification debt - outdated docs that drift from reality.

The correct mental model:

What persists:

  • 📋 PRD - Business requirements and rationale (lives in docs/prds/)
  • 🏗️ CLAUDE.md - Project constitution
  • 📐 ADRs - Architecture decisions (immutable records)
  • 📖 API Schema - API contracts
  • 💻 Code + Tests - Executable implementation

What's temporary:

  • 📝 Specification - Implementation guide (archived after use)
  • 🔧 Technical Plan - Implementation approach
  • Task List - Execution checklist

Why Specifications Are Temporary

A specification guides implementation RIGHT NOW. Once implemented + tested → spec's job is done:

  • Behavior captured in code
  • API contract in API schema
  • Business context in PRD

Archive after implementation:
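
A minimal sketch of the archive step, assuming specs live under docs/specs/ with an archive/ subfolder (both paths are assumptions, not mandated by this lesson):

```shell
# Sketch only: directory layout and file names are assumptions.
mkdir -p docs/specs/archive
touch docs/specs/task-tags-spec.md        # stand-in for a completed spec
mv docs/specs/task-tags-spec.md docs/specs/archive/task-tags-spec.md
ls docs/specs/archive                     # the spec now lives in the archive
```

In a real repository you would use `git mv` instead of `mv` so the move is tracked in history.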

If feature needs modification later:

  1. Start from PRD (business requirements still current?)
  2. Generate NEW specification (fresh guide)
  3. Don't try to "update" old specification

What Is a PRD?

A Product Requirements Document defines WHAT and WHY at the business level.

PRD contains:

  • Problem statement (what problem does this solve?)
  • Users and personas
  • Functional requirements
  • Constraints (technical, business, performance)
  • Success metrics (post-deployment)
  • Out of scope

PRD is written for: Product managers, stakeholders, developers, future team members

PRD is NOT: as detailed as a specification, an implementation guide, or runtime documentation for agents

PRDs Are For Humans, Not Agent Discovery

Critical distinction: PRDs are historical records for human reference, NOT documentation that agents browse during development.

What Agents Read to Understand the System
| Document | When Agent Reads |
| --- | --- |
| API Schema | Every API change |
| CLAUDE.md | Start of every session |
| Code + Tests | When modifying features |
| ADRs | When working on related code |
| PRDs | Only when human explicitly references it |

When Agents DO Read PRDs

Only when explicitly directed:
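
One hypothetical prompt that does send the agent to a PRD (the file path is an assumption):

```
Human: "Read docs/prds/task-tags-prd.md for the original business context,
then draft a new specification for renaming tags."
```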

Typical workflow (PRD not involved):

  1. Human: "Add pagination to task list endpoint"
  2. Agent reads: OpenAPI, CLAUDE.md, code (current state)
  3. Agent generates specification and implements
  4. Agent never browsed PRD folder

Think of PRDs as blueprints: Once the house is built, you tour the house (code), not old blueprints.

PRD vs Specification

PRD says (high-level):
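
An illustrative PRD-level statement for a tagging feature (wording is hypothetical):

```
Users need a way to categorize tasks so they can filter and group related work.
Tags should be quick to add and require no upfront setup.
```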

Specification says (precise):
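
An illustrative specification-level statement for the same kind of feature (endpoint, regex, and limits are assumptions, not from this lesson):

```
POST /tasks/{task_id}/tags
- Tag names must match ^[a-z0-9-]{1,30}$ (lowercase, digits, hyphens)
- Maximum 10 tags per task; adding an 11th returns 422
- Duplicate tag on the same task returns 409
```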

Key differences:

| Aspect | PRD | Specification |
| --- | --- | --- |
| Purpose | Business requirements | Implementation guide |
| Detail | High-level (WHAT/WHY) | Precise (HOW) |
| Lifetime | Persistent | Temporary |
| Metrics | Post-deployment analytics | Pre-merge verification |
| Agent Access | Only when human directs | Read during implementation |

Verification Criteria vs Success Metrics

This distinction is critical for AI-assisted development.

Verification Criteria (Specification)

Definition: Agent can verify BEFORE merging code.

Success Metrics (PRD)

Definition: Measured AFTER deployment based on user behavior.

Examples

❌ BAD (in Specification - agent can't verify):
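
A hypothetical criterion that does not belong in a specification, because no agent can check it before merge:

```
- 80% of active users adopt tags within 30 days of launch
```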

✅ GOOD (in Specification):
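
Illustrative criteria an agent can verify before merging (endpoint, status code, and threshold are assumptions):

```
- POST /tasks/{task_id}/tags with an invalid name returns 422 with a validation message
- All tag endpoints covered by passing integration tests
- p95 latency for tag filtering under 200ms in the local benchmark suite
```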

✅ GOOD (in PRD):
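
Illustrative post-deployment metrics for the PRD (the numbers are assumptions):

```
Success Metrics (post-deployment analytics):
- 60% of active users apply at least one tag within 30 days
- Tag-based filtering used in 25% of task-list views after 90 days
```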

PRD Structure Best Practices

Template:
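
A hedged sketch of a PRD template matching the sections discussed in this lesson (the field names are one reasonable layout, not a mandated format):

```markdown
# PRD: <Feature Name>

Status: Draft | Approved | Implemented
Version: 1.0

## Problem Statement
<What problem existed, in past/neutral tense>

## Users and Personas
## Functional Requirements
## Constraints
## Success Metrics (Post-Deployment)
## Out of Scope
```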

Key Principles:

✅ DO include in PRD:

  • Business problem (past/neutral tense)
  • Functional requirements (what system must do)
  • Post-deployment success metrics (clearly marked)

✅ DO include in Specification (not PRD):

  • Exact API contracts
  • Validation rules with regex
  • Test cases agent can run
  • Performance benchmarks to hit before merge

❌ DON'T include (or mark as post-deployment):

  • User adoption rates (can't verify during dev)
  • Long-term engagement metrics (need time + real users)
  • Business KPIs dependent on user behavior

PRD Versioning (Rare)

PRDs change only when fundamental business requirements evolve.

Example:

v1.0: Tasks have priority 1-5 (numeric)
Problem discovered: 68% of users confused by numeric scale

v2.0 PRD:
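
What the v2.0 revision might record (illustrative wording; the migration mapping is an assumption):

```
v2.0: Tasks have priority "low" | "medium" | "high" (named levels).
Supersedes v1.0 numeric scale; migration maps 1-2 → low, 3 → medium, 4-5 → high.
```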

AI-Assisted PRD Generation

The Process:

  1. Human provides informal requirements (1-2 paragraphs)
  2. Claude analyzes codebase (models, API patterns, CLAUDE.md)
  3. Claude generates architecture-aware PRD (references actual code)
  4. Human reviews for business accuracy (right problem? feasible constraints?)
  5. Claude refines based on feedback
  6. Approved PRD → input for specification generation

Important: Once approved and implemented, PRD lives in Git as historical record. Agents won't browse it unless human explicitly directs them to.

Example: Task Tags PRD (Structured)

Let's examine a real PRD section by section to understand how each part serves its purpose.

Header and Metadata:
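
A hypothetical header for this PRD (field values are assumptions consistent with the commentary in this walkthrough):

```markdown
# PRD: Task Tags

Status: Implemented
Version: 1.0
Location: docs/prds/task-tags-prd.md
```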

Status field immediately shows this PRD's lifecycle state. "Implemented" means we can reference it for historical context.

Problem Statement:
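
For example (hypothetical wording):

```
Users needed a way to group related tasks. With no categorization,
lists beyond ~20 tasks became hard to scan, and users resorted to
prefixing task titles with ad-hoc labels.
```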

Notice: Past/neutral tense ("needed"), not "currently broken." The PRD documents what problem existed, not current system state.

Functional Requirements:
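
An illustrative slice of the requirements list:

```
- Users can add one or more tags to a task
- Users can filter the task list by tag
- Tags are free-form; no predefined taxonomy required
```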

Business requirements at high level. Specification would detail exact regex, API endpoints, error codes.

Constraints:
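
Hypothetical constraints that reference an existing codebase (the model path and stack details are assumptions):

```
- Must extend the existing Task model (app/models/task.py) without breaking current clients
- Tag storage must fit the current database schema and migration tooling
- API changes must follow the conventions recorded in CLAUDE.md
```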

References actual codebase structure. This is what makes AI-assisted PRDs powerful - they understand existing architecture.

Success Metrics (Post-Deployment):
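
Illustrative metrics of this kind:

```
Success Metrics (post-deployment - NOT pre-merge checks):
- Majority of weekly active users apply at least one tag within the first month
- Support requests about "finding tasks" decline quarter over quarter
```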

Clearly marked as analytics goals. Agent can't verify these during development - needs real users over time.

Scope Boundaries:
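
A hypothetical out-of-scope list:

```
Out of Scope (v1):
- Tag colors or icons
- Shared, team-wide tag taxonomies
- Automatic tag suggestions
```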

Prevents scope creep. Documents what we're deliberately NOT building.

Integration Notes:
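
Hypothetical notes tying the PRD back to the persistent documentation set described earlier:

```
- The specification generated from this PRD was archived after implementation
- The API contract now lives in the OpenAPI schema; behavior lives in code + tests
- No ADR required: no new architectural decision was introduced
```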

Summary

Key Concepts:

  • Specification lifecycle: PRD (persistent) → Spec (temporary) → Implementation → Archive Spec
  • PRDs are for humans: Historical records, not agent discovery docs
  • What agents read: API Schema, CLAUDE.md, Code, ADRs (PRDs only when human directs)
  • PRD structure: Status field, past-tense problems, post-deployment metrics clearly marked
  • Verification vs Success: Agent verifies before merge → Verification. Need real users → Success metric
  • PRD versioning: Rare - only when business requirements fundamentally change

Key Mental Models:

  1. PRDs are blueprints: Tour the house (code), not blueprints
  2. Specifications are scaffolding: Removed after construction
  3. Verification vs Success: Can agent test now? → Verification. Need users? → Success metric

Next: Practice tasks where you'll:

  1. Analyze PRD vs Specification differences
  2. Generate architecture-aware PRDs
  3. Distinguish verification criteria from success metrics
  4. Review PRDs for business accuracy

This establishes the persistent documentation layer that feeds specification work!
