Up to this point, we have focused on building features using smart AI agents. We've learned how to organize tasks and run development tracks in parallel. However, in the professional world, writing code that just works is only half the battle. To make an API "production-ready," we must ensure it can handle mistakes, block hackers, and stay fast when many people use it at once.
We do this using a Quality Pipeline. This is a series of checks that every piece of code must pass before it reaches our users. Think of it like a safety inspection for a car. It doesn't matter how fast the car is if the brakes don't work or the doors don't lock.
The Quality Pipeline focuses on four main areas:
- Coverage: Do our tests check every single line of code, including the parts where things go wrong?
- Security: Can a user access or delete data that belongs to someone else?
- Performance: Does the API stay fast when 50 people use it at the same time?
- Documentation: Is the instruction manual (OpenAPI) up to date?
In this lesson, we will move through each of these stages to finish our Task Comments feature.
Test coverage tells us what percentage of our code is actually executed during our tests. If you have 90% coverage, it means 10% of your code has never been tested. Usually, that 10% contains the error paths — the code that runs when a user makes a mistake. Our goal for production is usually 95% or higher.
First, we check our current status using a tool called pytest-cov. On CodeSignal, this is already set up for you. You can run this command in your terminal:
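A typical invocation looks like this (assuming the application code lives in an `app/` package — adjust the path to match your project):

```shell
pytest --cov=app --cov-report=term-missing
```

The `--cov-report=term-missing` option makes the report list the exact line numbers that no test touched.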
The output might look like this:
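An illustrative report (the file name and numbers here are hypothetical; yours will differ):

```
Name                           Stmts   Miss  Cover   Missing
------------------------------------------------------------
app/services/comments.py          40      5    88%   41-45
------------------------------------------------------------
TOTAL                             40      5    88%
```

The `Missing` column points at the untested lines.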
This tells us we are missing 5 lines. To fix this, we need to add tests for edge cases. Let's start by testing if our service correctly rejects a comment that is too long.
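Here is a sketch of such a test. The `create_comment` function and its 500-character limit are stand-ins for illustration; in the real project you would import your actual service:

```python
import pytest

MAX_COMMENT_LENGTH = 500  # assumed limit, for illustration


def create_comment(content: str) -> dict:
    """Minimal stand-in for the real comment service."""
    if len(content) > MAX_COMMENT_LENGTH:
        raise ValueError("Comment exceeds maximum length")
    return {"content": content}


def test_rejects_comment_that_is_too_long():
    # One character over the limit should raise a ValueError
    with pytest.raises(ValueError):
        create_comment("x" * (MAX_COMMENT_LENGTH + 1))
```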
In this snippet, we use pytest.raises(ValueError) to tell our test that we expect an error. If the code doesn't crash, the test fails. This checks the boundary of our input limits.
Next, we can add a test for a race condition — what happens if two comments are created at the exact same time? While we won't write the full complex logic here, we add tests that try to trigger these specific scenarios. After adding these missing pieces (like empty content or unauthorized users), we run our coverage again.
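As a sketch, such a test can fire two creations at the same time using a thread pool. The in-memory `CommentStore` below is a hypothetical stand-in for the real service:

```python
import threading
from concurrent.futures import ThreadPoolExecutor


class CommentStore:
    """Hypothetical in-memory store; a lock guards against lost updates."""

    def __init__(self):
        self._comments = []
        self._lock = threading.Lock()

    def add(self, content: str) -> int:
        with self._lock:
            self._comments.append(content)
            return len(self._comments)


def test_concurrent_comment_creation():
    store = CommentStore()
    # Submit two creations at the same time and check neither is lost
    with ThreadPoolExecutor(max_workers=2) as pool:
        positions = list(pool.map(store.add, ["first", "second"]))
    assert sorted(positions) == [1, 2]
```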
A security review is a systematic check to find vulnerabilities. Even if a user is logged in, they shouldn't be allowed to do everything. A common mistake is forgetting to check if a user actually owns the data they are trying to change.
Let's look at a typical API route for deleting a comment:
In the code above, Depends(get_current_user) ensures the person is logged in. However, any logged-in user could delete any comment just by knowing the comment_id. This is a high-priority security flaw.
To fix this, we must add an ownership check:
By adding that if statement, we've protected the data. A professional security review involves going through every endpoint and asking:
- Is the user logged in?
- Does the user own this specific resource?
- Is the input (like the comment text) safe and within length limits?
Performance testing ensures that your API doesn't slow down when many people use it. We often measure p95 latency. This means that 95% of requests are faster than this value — in other words, only the slowest 5% of users experience longer wait times. We want our p95 to be under 500ms (half a second) so that almost everyone has a fast experience.
We can build a simple script to test this. First, we need a way to simulate a single user's actions.
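Here is one possible sketch using `asyncio`. A real script would issue HTTP requests (for example with an async HTTP client); to keep this self-contained, `fake_request` just sleeps to simulate network and server time, and the paths are illustrative:

```python
import asyncio
import random
import time


async def fake_request(path: str) -> None:
    # Stand-in for a real HTTP call; sleeps to simulate network + server time
    await asyncio.sleep(random.uniform(0.01, 0.05))


async def run_user_session() -> list[float]:
    """Simulate one user: post a comment, then read the comment list."""
    latencies = []
    for path in ("/tasks/1/comments", "/tasks/1/comments"):
        start = time.perf_counter()
        await fake_request(path)
        latencies.append((time.perf_counter() - start) * 1000)  # milliseconds
    return latencies
```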
Now, we need to run many of these sessions at the same time and calculate the results.
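Continuing the sketch, we launch many sessions at once with `asyncio.gather` and pick the 95th-percentile value from the sorted latencies. The small `run_user_session` stub below stands in for whatever per-user simulation you built in the previous step:

```python
import asyncio
import random


async def run_user_session() -> list[float]:
    # Stub standing in for the per-user simulation; returns latencies in ms
    await asyncio.sleep(random.uniform(0.01, 0.05))
    return [random.uniform(50, 400) for _ in range(2)]


async def run_load_test(num_users: int = 50) -> float:
    """Run sessions concurrently and return the p95 latency in milliseconds."""
    sessions = await asyncio.gather(*(run_user_session() for _ in range(num_users)))
    latencies = sorted(l for session in sessions for l in session)
    # p95: the value that 95% of requests are faster than
    index = min(int(len(latencies) * 0.95), len(latencies) - 1)
    return latencies[index]


if __name__ == "__main__":
    print(f"p95 latency: {asyncio.run(run_load_test()):.0f}ms")
```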
If the result is 420ms, we pass! If it is 2000ms (2 seconds), we know something is wrong. Usually, slowness is caused by the database. If we find a bottleneck, we might add an "index" to the database or fix a loop that is making too many requests.
In this lesson, we learned that a feature isn't finished just because the code runs. We followed a systematic process to ensure quality:
- Coverage: We used `pytest --cov` to find untested lines and added tests for edge cases to reach 95% coverage.
- Security: We audited our routes to ensure users can only access their own data, fixing a major vulnerability in the delete endpoint.
- Performance: We wrote a script to simulate concurrent users and verified that our `p95` latency stays under 500ms.
- Checklist: We combined all these into a reusable checklist to ensure every future feature is built to the same high standard.
Everything you've learned in this course — from using AI agents to manage complex tasks, to merging parallel features, and finally passing the Quality Pipeline — has prepared you to build real-world software.
You're nearly at the end! In the next unit, we'll cover production documentation with ADRs.
