Introduction: From Building Features to Testing Them

Welcome back! In our last lesson, we built a complete Product Reviews feature with a database model, API endpoints, and validation. Now, it is time to ensure that the code actually works and does not break in the future. We will apply the same workflow loop from the previous lesson, this time to Testing.

In the professional world, achieving at least 80% test coverage matters because it helps ensure that the vast majority of your application's logic is being tested before it reaches real users. However, coverage measures executed code, not the quality of your tests—good assertions and thoughtful edge cases are equally important.

Planning Test Cases with OpenCode

Test Planning is the process of writing down exactly what scenarios need to be checked before you write any test code. This matters because it ensures you cover all possible ways a user might interact with your feature, rather than just testing the perfect scenario. We can ask OpenCode to help us outline this plan. We might ask it: "What test cases do we need for the reviews feature? List them organized by category."
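OpenCode's answer might look something like the outline below. This is an illustrative sketch of such a plan, not the tool's verbatim output; the specific cases are drawn from the categories discussed in this lesson:

```text
Happy Path
  - POST a valid review and receive 201 Created
  - GET reviews for a product and receive the stored data
Validation
  - Reject a rating outside the 1-5 range
  - Reject a review with missing required fields
Auth
  - Reject a request with no login token
  - Reject a request with an invalid token
Edge Cases
  - Reject a second review from the same user for the same product
```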

The resulting plan gives us a clear roadmap. The Happy Path covers what happens when everything goes right. Validation ensures our rules (like keeping a rating between 1 and 5) work properly. Auth checks that our security middleware is doing its job and blocking unauthorized users. Finally, Edge Cases test the tricky situations, like a user trying to review a single product twice. With this clear plan in place, we know exactly what we need to build.

Setting Up Test Fixtures in conftest.py

Before we write the tests, we need to set up our environment. A Pytest Fixture is a reusable piece of code that provides a fixed baseline for your tests to run on, like creating a fake database or a test user. This matters because it keeps your test files clean and prevents you from writing the exact same setup code in every single test function. We place these fixtures in a special file called conftest.py. Let's create an application fixture and a test client.
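The sketch below shows the shape of such a fixture file. It is deliberately simplified so it can run on its own: the "app" is a stand-in dict and the database is an in-memory SQLite connection, whereas the course's version builds the real web application and its real tables. The create/yield/teardown flow is the part to focus on:

```python
# tests/conftest.py -- simplified, self-contained sketch of the pattern.
import sqlite3

import pytest


def make_app():
    """Stand-in for an application factory configured for testing."""
    db = sqlite3.connect(":memory:")
    db.execute(
        "CREATE TABLE reviews (id INTEGER PRIMARY KEY, rating INTEGER NOT NULL)"
    )
    return {"db": db, "testing": True}


@pytest.fixture
def app():
    app = make_app()   # fresh app and database for every test
    yield app          # hand the app to the test...
    app["db"].close()  # ...then tear it down once the test finishes


@pytest.fixture
def client(app):
    """Stand-in for the real test client (e.g. app.test_client())."""
    return app["db"]  # simplified: tests talk to the database directly
```

In the real project, the `app` fixture would create and later drop the actual database tables, and `client` would return the framework's test client object.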

In conftest.py, we use the @pytest.fixture decorator. The app fixture creates a fresh testing version of our application and sets up the database. The yield keyword hands the app to our tests; once a test finishes, the fixture cleans up by dropping the database tables. The client fixture lets us send fake API requests to this test app without running a real server. Next, we can create fixtures for our data.

We create fixtures for our database models next. The sample_user fixture creates a real user record in our test database. The auth_headers fixture is especially useful: it generates a real login token for our test user and formats it so we can easily attach it to our test API requests. Note that in the CodeSignal environment, testing libraries like pytest come pre-installed, so we can focus purely on writing these fixtures instead of installing packages.
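A simplified sketch of these data fixtures is below. The real sample_user writes a row to the test database and auth_headers issues a real token through an auth library; here the user is a plain dict and the "token" is a fake built with the standard library, so only the shape of the fixtures is shown:

```python
# Simplified sketch -- stand-ins, not the course's actual auth code.
import base64
import json

import pytest


def make_token(user_id):
    """Stand-in for a real token generator (e.g. a JWT library)."""
    payload = json.dumps({"sub": user_id}).encode()
    return base64.urlsafe_b64encode(payload).decode()


@pytest.fixture
def sample_user():
    """Stand-in for a real user record created in the test database."""
    return {"id": 1, "username": "tester", "email": "tester@example.com"}


@pytest.fixture
def auth_headers(sample_user):
    """Format the token exactly the way the API expects it."""
    token = make_token(sample_user["id"])
    return {"Authorization": f"Bearer {token}"}
```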

Writing Comprehensive Test Cases

Now we move to the Implement step. Test Cases are individual functions that verify one specific behavior of your code. This matters because if a test fails, you want to know exactly which rule was broken. We group related tests into classes and give them highly descriptive names. Let's write our first happy path test in a new file called tests/test_api/test_reviews.py.

We pass our client, auth_headers, and sample_product fixtures directly into the function parameters, and pytest automatically injects them. We send a POST request with valid JSON data and headers. Then, we use the assert keyword to verify that the status code is 201 (Created) and that the returned data matches what we sent. Let's look at how we test validation and authentication next.

Forcing Hard-to-Reach Error Paths with unittest.mock.patch

Sometimes a coverage report shows that your main logic is well tested, but your error handlers never run. This usually happens with code paths that are difficult to trigger naturally, such as a database failure during commit(). A Mock is a fake replacement for a real function during a test. This matters because it lets us verify our application's error handling without needing to actually break the database.

Python includes a built-in tool for this: unittest.mock.patch.

The idea is simple: patch() temporarily replaces a real function with a fake one inside a with block. When the block ends, the original function is restored automatically. We can also use side_effect to tell the fake function to raise an exception when it is called.

Here is how we can force the review creation route to hit its database error handler:
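The course's actual route is not reproduced here, so the sketch below uses stand-ins: a db object with a commit method in place of the real session, and FakeDBError in place of SQLAlchemyError. The patching mechanics are the point, shown with patch.object so the example runs on its own:

```python
from types import SimpleNamespace
from unittest.mock import patch


class FakeDBError(Exception):
    """Stand-in for sqlalchemy.exc.SQLAlchemyError."""


# Stand-in for the real database session.
db = SimpleNamespace(commit=lambda: None)


def create_review(data):
    """Stand-in route: try to save, answer 500 if the commit blows up."""
    try:
        db.commit()
        return {"status": 201, "review": data}
    except FakeDBError:
        return {"status": 500, "error": "Database error while saving review"}


# Temporarily replace commit with a fake that raises, forcing the error path.
with patch.object(db, "commit", side_effect=FakeDBError("commit failed")):
    failed = create_review({"rating": 5})

# Outside the with block, the original commit is restored automatically.
succeeded = create_review({"rating": 5})
```

In the real test, the patch target would be the application's actual session object, and the assertion would check the route's 500 response rather than a returned dict.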

We are not testing SQLAlchemy itself here. We are testing that our route responds correctly when commit() fails. This is exactly the kind of blind spot coverage reports help us find.

We can use the same pattern for the delete route’s fallback exception handler:
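A minimal sketch of the same idea for the delete path, again with stand-ins rather than the course's real route:

```python
from types import SimpleNamespace
from unittest.mock import patch

# Stand-in session with a delete operation.
db = SimpleNamespace(delete=lambda review_id: None)


def delete_review(review_id):
    """Stand-in route with a broad fallback handler."""
    try:
        db.delete(review_id)
        return {"status": 204}
    except Exception:  # fallback for anything unexpected
        return {"status": 500, "error": "Unexpected error while deleting"}


# Any exception type works here, since the fallback catches everything.
with patch.object(db, "delete", side_effect=RuntimeError("disk full")):
    delete_failed = delete_review(42)
```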

Measuring Coverage and Filling Gaps

With our initial tests written, we move to the Verify and Edge Cases steps. Test Coverage is a metric that tells you exactly what percentage of your actual application code was executed during your tests. This matters because it highlights "blind spots" — code that has never been tested and might contain hidden bugs. We check coverage by running pytest with the --cov flag in our terminal.
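The invocation and report might look like the transcript below. The package path, file name, and statement counts are illustrative; the 87% figure and the 67-69 line range come from this lesson:

```text
$ pytest --cov=app --cov-report=term-missing

Name                  Stmts   Miss  Cover   Missing
---------------------------------------------------
app/api/reviews.py       54      7    87%   67-69, ...
```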

This report shows that our initial tests executed 87% of the reviews.py file but still missed several branches, including lines 67-69, which handle the SQLAlchemyError database failure. After adding an Edge Case test that deliberately triggers a database error, we can run coverage again and confirm that we improved our results.

Now our tests execute 92% of the file, which shows that the new edge-case test closed one of the important gaps in our coverage. The remaining uncovered lines represent other branches we have not tested yet, giving us a clear roadmap for what to add next.

Cleaning Up Test Code

The final step in our workflow loop is Cleanup. Code Formatting and Linting involve running tools to automatically fix styling issues and catch syntax problems. This matters because test code is just as important as production code; if tests are messy and hard to read, other developers will struggle to maintain them in the future.

We use a tool called black to automatically format our code so that spacing, quotes, and indentation match standard Python style. We use flake8 to check for unused imports or lines that are too long. Additionally, notice that every test function we wrote earlier includes a docstring (e.g., """Test successful review creation with valid data."""). This explains exactly what the test does in plain English, making the test file serve as living, readable documentation for our feature.
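The cleanup commands themselves are short; the tests/ path below is illustrative:

```text
$ black tests/     # rewrite test files in place to standard formatting
$ flake8 tests/    # report unused imports, overly long lines, and the like
```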

Summary and Practice Preparation

Great job! In this lesson, we successfully applied our workflow loop to testing. We planned our test cases by categorizing them into happy paths, validation, authentication, and edge cases. We used Pytest fixtures to keep our setup code clean and reusable. Then, we wrote comprehensive tests, measured our code coverage to identify hidden gaps, and cleaned up our test files using formatting tools.

Testing is what separates fragile code from professional software. Now, it is time for you to put this into practice. In the upcoming exercises, you will use the CodeSignal IDE to write and run these exact tests for the Product Reviews feature. Let's get to testing!
