Welcome back! In Lesson 2, you established your project-level constitution with CLAUDE.md, which defines how your entire project works.
This lesson teaches feature-level specifications — detailed blueprints for individual features that reference your constitution.
The hierarchy:
- 🏛️ CLAUDE.md (Lesson 2) → Project-wide rules
- 📋 Feature Spec (this lesson) → Rules for one feature
Now let's learn what goes into a complete feature specification. A complete specification is like a detailed blueprint for your code. It tells you exactly what needs to be built, how it should behave, and what to watch out for. This is especially important when working with APIs, where clear communication between different parts of a system is critical.
By the end of this lesson, you will know how to break down a feature into a full specification, use AI to help generate specs, and review real examples to see what a good spec looks like.
Let's quickly remind ourselves what a specification is. A specification, or spec, is a document that describes what a feature or component should do. It does not describe how to implement it, but rather what the expected behavior is.
In the last lesson, you saw a simple spec for an API endpoint. Here's a quick reminder of what a basic spec might look like:
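As a refresher, a minimal endpoint spec in that spirit might look like this (the endpoint and field names here are illustrative, not the previous lesson's exact example):

```markdown
## Spec: GET /users/{id}

**Purpose:** Return the public profile for a single user.

**Input:** `id` (integer, required): the user's unique ID.

**Output:** JSON object with `username` and `email`; 404 if the user doesn't exist.
```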
This is a simple example, but as features get more complex, specs need to be more detailed. That's what we'll focus on next.
A complete specification has several key sections. Let's go through each one, using the User Profile feature (with avatar, bio, and location) as our running example.
The first section is the **Purpose**: a single sentence explaining what the component or feature is for.
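For the User Profile running example, the purpose might read like this (an illustrative sketch):

```markdown
**Purpose:** Let users view and update their profile: avatar, bio, and location.
```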
Explanation:
The purpose should be clear and concise. It helps everyone understand why this feature exists.
Next is the **Interface**. Here, you list the inputs and outputs, along with their types and any constraints.
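An interface section for the User Profile feature might look like this (the 160-character bio limit comes from this lesson; the other limits are illustrative placeholders):

```markdown
**Interface**

Inputs:
- `avatar` (image file, optional): JPEG or PNG, max 2 MB
- `bio` (string, optional): max 160 characters
- `location` (string, optional): max 100 characters

Output:
- The updated profile object containing `avatar_url`, `bio`, and `location`
```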
Explanation:
- Inputs are the data the feature receives.
- Outputs are what it returns or produces.
- Constraints (like "max 160 characters") help prevent errors and keep data clean.
The **Behavior** section describes what happens when the feature is used, but not how it's implemented.
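A behavior section for User Profile might read (illustrative sketch):

```markdown
**Behavior**
- When a user submits new profile data, each field is validated and valid changes are saved.
- The updated profile is returned so the UI can refresh immediately.
- Fields the user omits are left unchanged.
```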
Explanation:
Focus on what the system should do, not the code or logic behind it.
Under **Constraints**, list any rules that must always be true.
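For User Profile, the constraints might include (illustrative):

```markdown
**Constraints**
- `bio` never exceeds 160 characters.
- `avatar` is always a supported image type.
- Every profile belongs to exactly one existing user.
```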
Explanation:
Constraints help ensure the data is always valid.
For **Edge Cases**, think about unusual or boundary situations.
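Some edge cases for User Profile (illustrative):

```markdown
**Edge Cases**
- A bio of exactly 160 characters (the boundary) is accepted.
- An empty string for `location` clears the field instead of erroring.
- Re-uploading the same avatar does not create duplicates or fail.
```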
Explanation:
Edge cases help you think about what could go wrong or be unexpected.
**Error Conditions** describe when the feature should fail, and how.
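Error conditions for User Profile might include (illustrative):

```markdown
**Error Conditions**
- Bio longer than 160 characters: reject with a validation error.
- Unsupported avatar file type: reject and keep the existing avatar.
- Update requested for a non-existent user: return a "not found" error.
```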
Explanation:
This section helps you plan for problems and how to handle them.
Finally, **Examples** provide concrete input/output pairs.
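For User Profile, the examples could look like this (illustrative):

```markdown
**Examples**

Input: `{ "bio": "Coffee enthusiast", "location": "Lisbon" }`
Output: `200 OK` with the updated profile object

Input: `{ "bio": "<a 170-character string>" }`
Output: `400 Bad Request`: "bio must be 160 characters or fewer"
```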
Explanation:
Examples make the spec concrete and easy to understand. They show exactly what should happen in real situations.
Now that you understand each section, here's the complete template you can use for any feature:
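The template assembles the seven sections covered above; a sketch of its shape (your workspace version may word it slightly differently):

```markdown
# Spec: [Feature Name]

## Purpose
One sentence: what this feature is for.

## Interface
Inputs and outputs, with types and constraints.

## Behavior
What happens when the feature is used (not how it's implemented).

## Constraints
Rules that must always be true.

## Edge Cases
Unusual or boundary situations and how they're handled.

## Error Conditions
When the feature should fail, and how.

## Examples
Concrete input/output pairs.
```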
For API endpoints, also include the HTTP method and path, the status codes for each possible response, and concrete request/response examples in JSON.
When to use this template:
- When designing a new feature or component
- When documenting an API endpoint
- When clarifying requirements before implementation
- When you need to communicate feature details to AI or team members
Key differences for API specs:
- Include HTTP status codes for all possible responses
- Provide error response examples with actual JSON structure
- Show both success and failure cases in examples
Now, let's see how you can use AI to help generate a complete specification. On CodeSignal, you can use a prompt pattern to ask the AI to create a spec for a feature.
Here's the prompt pattern:
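The exact wording on CodeSignal may differ; a pattern in this spirit works:

```text
Create a complete specification for [feature description].
Use this structure: Purpose, Interface, Behavior, Constraints,
Edge Cases, Error Conditions, Examples.
Describe only the expected behavior, not the implementation.
```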
Let's try it for the User Profile feature:
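Filling in the pattern for our running example (illustrative wording):

```text
Create a complete specification for a User Profile feature that lets
users set an avatar, a bio (max 160 characters), and a location.
Use this structure: Purpose, Interface, Behavior, Constraints,
Edge Cases, Error Conditions, Examples.
Describe only the expected behavior, not the implementation.
```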
Sample Output:
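A condensed sketch of the kind of response you might get (your AI's actual output will differ):

```markdown
# Spec: User Profile

## Purpose
Let users view and update their avatar, bio, and location.

## Interface
Inputs: `avatar` (image, optional), `bio` (string, max 160 chars, optional),
`location` (string, optional).
Output: the updated profile object.

## Behavior
Each submitted field is validated; valid changes are saved and the updated
profile is returned. Omitted fields are left unchanged.

## Constraints
Bio never exceeds 160 characters; avatar is always a supported image type.

## Edge Cases
A bio of exactly 160 characters is accepted; an empty `location` clears the field.

## Error Conditions
Oversized bio or unsupported avatar type: reject with a validation error.

## Examples
Input: `{ "bio": "Hello!", "location": "Lisbon" }` → updated profile returned.
```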
Explanation:
The AI follows the template and fills in each section. You can then review and adjust the output as needed.
Now that you understand the template, let's look at two real specifications from the TaskMaster project: specs/user-model-v1.0.md and specs/auth-api-v1.0.md. These files are in your workspace and demonstrate complete specifications for real features.
Here's the complete User Model spec:
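The full file lives in your workspace; the sketch below only suggests its shape, based on the strengths reviewed next (open specs/user-model-v1.0.md for the actual content):

```markdown
# Spec: User Model v1.0

## Purpose
Represents a TaskMaster user account with login credentials and profile data.

## Interface
Fields such as `username` (string, alphanumeric + underscore only),
`email` (string, valid email format), and a password hash that is never exposed.

## Behavior
Methods for creating a user and verifying credentials, described in terms of
outcomes rather than implementation.

## Constraints
Usernames and emails are unique; all fields must pass validation before saving.

## Error Conditions
Invalid or duplicate values cause the operation to fail with a clear error.
```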
What Makes This Spec Good:
- ✅ Clear purpose: One sentence explains what this model does
- ✅ Detailed interface: All fields listed with types and constraints
- ✅ Behavior described: Methods explained without implementation details
- ✅ Constraints explicit: Rules like "alphanumeric + underscore only"
- ✅ Error conditions: Specifies when errors occur
What Could Be Improved:
- ⚠️ Missing examples showing actual usage
- ⚠️ No edge cases listed (e.g., what happens with whitespace in email?)
Here's a portion of the Auth API spec (open specs/auth-api-v1.0.md to see the complete version):
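A sketch of the kind of content that file contains (endpoint names and JSON fields here are illustrative; open the file for the actual spec):

```markdown
## POST /auth/register

Request:
{ "username": "sam_dev", "email": "sam@example.com", "password": "..." }

Response: 201 Created
{ "id": 42, "username": "sam_dev", "email": "sam@example.com" }

## POST /auth/login

Request:
{ "email": "sam@example.com", "password": "..." }

Response: 200 OK
{ "token": "<session token>" }
```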
What Makes This Spec Good:
- ✅ Concrete examples: Shows exact JSON for requests and responses
- ✅ HTTP status codes: Indicates expected response codes (201, 200)
- ✅ Security considerations: Lists important security details
- ✅ Complete interface: Every endpoint is documented
What Could Be Improved:
- ⚠️ Missing error response examples (What does 409 Conflict look like?)
- ⚠️ No edge cases (What if password is empty string?)
- ⚠️ Could specify error conditions more explicitly
Good specifications make implementation straightforward because there's no ambiguity. When reviewing specs, ask yourself: "Could someone implement this feature without asking me any questions?" If not, the spec needs more detail. In many cases, these specifications act as the primary API schema for the AI agent during the build phase.
In this lesson, you learned how to break down a feature into a complete specification using a clear template. You saw how each section — Purpose, Interface, Behavior, Constraints, Edge Cases, Error Conditions, and Examples — helps make your specs more useful and reliable. You also learned how to use AI to generate specs and how to review real-world examples for quality.
Next, you'll get hands-on practice: you'll use AI to generate a specification for a User Profile feature, then review and validate its completeness. This will help you build the skills to write and review specs for any feature you work on.
