Welcome back! In our previous lessons, we built and tested a brand-new Product Reviews feature. Now, we are going to shift our focus from creating new features to improving existing ones. This process is called Refactoring, which means restructuring existing computer code without changing its external behavior.
Refactoring matters because, as projects grow, older code can become disorganized, difficult to read, or insecure. In real-world projects, you will spend a lot of time cleaning up legacy code to ensure it meets modern standards. Today, we will look at an older file in our project called cart.py. It has some common legacy issues, such as missing security checks and inconsistent error handling. We will fix it using the same workflow loop we have applied throughout this course.
Before we change any code, we need to know what is wrong with it. In software development, we look for Code Smells. A code smell is a surface indication that usually corresponds to a deeper problem within the system. Identifying code smells matters because it helps us find hidden bugs, security flaws, and maintenance headaches before they cause major application crashes. We can use OpenCode to analyze our legacy cart.py file against the project rules we defined in our AGENTS.md file.
The AI assistant easily spots our code smells. The most critical issue is the missing authentication, which means anyone could potentially modify another person's shopping cart! It also highlights that the error responses are inconsistent and business logic validation is missing. With this analysis, we now have a prioritized checklist. We will tackle the security and error handling first, add our business logic validation next, and finally clean up the code formatting.
Now, we move to the implementation step. We need to apply our Canonical Error-Handling Pattern. This is a standardized way of writing try/except blocks to catch and format errors consistently across the entire application. This matters because, if every API endpoint returns errors differently, frontend developers will have a difficult time trying to display helpful messages to the user. Let's look at how we refactor the add to cart endpoint to meet these standards.
In this refactored code, we first add the @require_auth decorator directly below the cart_bp.route definition. This fixes our major security bug by ensuring only logged-in users can access the endpoint. Next, we wrap our logic in a standard try/except block. If the request succeeds, we return a standardized {"data": ...} dictionary with a 201 HTTP status code, which stands for "Created." If an unexpected error occurs, we catch it, roll back any broken database changes, and return a safe, standardized {"error": "Internal server error"} response with a 500 status code. This follows the AGENTS.md rule to avoid exposing internal error details to the client.
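The pattern described above can be sketched without Flask at all. In this minimal, framework-free sketch, `require_auth`, `session`, and `add_to_cart` are simplified stand-ins for the project's actual decorator, login state, and endpoint, not the course's exact code; a real endpoint would also roll back the database session in the except branch.

```python
from functools import wraps

session = {"user": None}  # illustrative stand-in for the login state


def require_auth(func):
    """Reject the call with a 401 response unless a user is logged in."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        if session["user"] is None:
            return {"error": "Authentication required"}, 401
        return func(*args, **kwargs)
    return wrapper


@require_auth
def add_to_cart(payload):
    """Canonical pattern: try/except with standardized response envelopes."""
    try:
        item = {"product_id": payload["product_id"],
                "quantity": payload["quantity"]}
        return {"data": item}, 201  # standardized success envelope, 201 Created
    except Exception:
        # In the real endpoint this branch also rolls back any broken
        # database changes, then returns a safe, generic message --
        # never internal error details.
        return {"error": "Internal server error"}, 500
```

Notice that the decorator short-circuits before the try block ever runs, so unauthenticated requests never touch any business logic.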
A secure API does not just check if a user is logged in; it also verifies that they are allowed to perform the specific action. Business Logic Validation is the process of enforcing real-world rules in your code. This matters because it prevents users from doing impossible or unauthorized things, such as purchasing an item that is sold out or modifying someone else's shopping cart. Let's add these checks directly into our try block.
First, we query the database for the requested product using Product.query.get(data["product_id"]). We check if the product exists and if the stock is greater than or equal to the requested quantity. If it is not, we immediately return a 400 Bad Request error. Next, we handle duplicate cart items gracefully. Instead of creating a second row in the database for the same product, we look for an existing_item belonging to the current_user.id using CartItem.query.filter_by.
If we find one, we simply update the existing_item.quantity. Notice how we explicitly use current_user.id — this is our ownership check, ensuring users can only modify their own carts. Finally, we commit the transaction safely with db.session.commit().
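The validation logic in the two paragraphs above can be sketched with plain dictionaries standing in for the SQLAlchemy models; `products`, `cart`, and `add_item` here are illustrative stand-ins for Product, CartItem, and the real endpoint.

```python
# In-memory stand-ins for the database rows.
products = {1: {"stock": 5}}
cart = {}  # keyed by (user_id, product_id), so ownership is built in


def add_item(user_id, product_id, quantity):
    # Mimics Product.query.get(data["product_id"])
    product = products.get(product_id)
    if product is None or product["stock"] < quantity:
        return {"error": "Insufficient stock"}, 400

    # Mimics CartItem.query.filter_by(user_id=current_user.id, ...):
    # merge into the existing row instead of creating a duplicate.
    key = (user_id, product_id)
    if key in cart:
        cart[key] += quantity  # update existing_item.quantity
    else:
        cart[key] = quantity

    return {"data": {"product_id": product_id, "quantity": cart[key]}}, 201
```

Because the cart is keyed by user_id, one user's request can never read or modify another user's rows, which is exactly what the current_user.id filter guarantees in the real query.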
After refactoring, we must verify that our changes did not break anything. Regression Testing is the practice of running your automated tests after changing code to ensure old features still work. This matters because refactoring is meant to improve code structure, not change how the application behaves from the outside. We will run our tests and check our changes using standard terminal commands.
Running pytest confirms that all our existing tests still pass, which means our refactoring was successful. We can also run git diff src/api/routes/cart.py in the terminal to see a line-by-line comparison of exactly what we changed. While reviewing the diff, we should make sure we followed our standard error-handling pattern closely. For example, a best practice we established in earlier lessons is to catch SQLAlchemyError separately from a generic Exception. This allows us to handle database connection issues differently from simple logic bugs, providing more accurate error logs for developers.
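Here is a sketch of that layered except structure, using the standard library's sqlite3.Error as a stand-in for SQLAlchemyError so the example runs on its own; `run_query` and the messages are illustrative, not the lesson's exact code.

```python
import logging
import sqlite3

logger = logging.getLogger(__name__)


def run_query(sql):
    try:
        conn = sqlite3.connect(":memory:")
        return {"data": conn.execute(sql).fetchall()}, 200
    except sqlite3.Error:  # SQLAlchemyError in the lesson's real code
        # Database-specific log entry: connection issues, bad SQL, etc.
        logger.exception("Database error")
        return {"error": "Internal server error"}, 500
    except Exception:
        # Anything else is a plain logic bug and gets its own log entry.
        logger.exception("Unexpected error")
        return {"error": "Internal server error"}, 500
```

The client sees the same safe 500 response either way; only the developer-facing logs differ, which is the whole point of catching the database error class first.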
Even with passing tests, we must think about Edge Cases. Edge cases are unusual or extreme situations that occur outside normal user behavior. Handling edge cases matters because users will inevitably click the wrong buttons or send invalid data, and your application needs to handle it gracefully instead of crashing. We can ask OpenCode to help us brainstorm these scenarios for our refactored cart code.
By asking OpenCode to review our logic, we confirmed that our stock and ownership checks are working perfectly. However, the AI pointed out a new edge case we missed: a user could add an item with a quantity of zero! To fix this, we would simply add a quick check at the top of our endpoint to ensure the requested quantity is greater than zero, returning a specific 400 error message if it is not. Specific error messages are crucial because they tell the frontend exactly what went wrong, rather than just returning a generic error.
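That quantity guard might look something like the sketch below; the function name and error message are illustrative, but the shape — a specific 400 returned at the top of the endpoint — is the fix described above.

```python
def validate_quantity(quantity):
    """Return a (body, status) error tuple for invalid quantities, else None."""
    if not isinstance(quantity, int) or quantity <= 0:
        # A specific message tells the frontend exactly what went wrong,
        # instead of a generic "bad request".
        return {"error": "Quantity must be a positive integer"}, 400
    return None
```

The endpoint calls this first and returns immediately if it gets a non-None result, so zero and negative quantities never reach the stock check or the database.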
The final step in our workflow loop is cleanup. We need to add Type Hints and Docstrings to our code. Type hints explicitly state what type of data a variable should be, while docstrings provide a human-readable summary of what a function does. This matters because it makes your code much easier for other developers to read and understand. Let's add these to our refactored function.
We added Python type hints (current_user: User and -> Tuple[Response, int]) to define our inputs and outputs clearly. We also added a Google-style docstring that explains the function's purpose, arguments, and return values. Finally, we run deterministic tools such as black to format the spacing and flake8 to check for unused imports. It is important to remember the split in our tools: our AGENTS.md file guides the AI on how to write the docstrings and error handling, but our continuous integration (CI) hooks, such as black, flake8, and pytest, forcefully ensure the code is actually correct before it can be merged.
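The shape of that cleanup looks roughly like the sketch below. The real code annotates with the project's User model and Flask's Response type; here a small dataclass and a type alias stand in for them so the example is self-contained.

```python
from dataclasses import dataclass
from typing import Dict, Tuple


@dataclass
class User:
    id: int


Response = Dict[str, object]  # stand-in for flask.Response


def add_to_cart(current_user: User, product_id: int,
                quantity: int) -> Tuple[Response, int]:
    """Add a product to the current user's cart.

    Args:
        current_user: The authenticated user making the request.
        product_id: ID of the product to add.
        quantity: Number of units to add; must be positive.

    Returns:
        A tuple of (response body, HTTP status code).
    """
    item = {"user_id": current_user.id,
            "product_id": product_id,
            "quantity": quantity}
    return {"data": item}, 201
```

With the hints in place, type checkers and editors can flag mistakes such as passing a raw user ID where a User object is expected, and the Google-style docstring renders cleanly in documentation tools.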
Great work! In this lesson, we successfully refactored a legacy cart.py file to meet modern project standards. We applied our familiar workflow loop, proving that this process works just as well for old code as it does for new code.
We planned our attack by identifying code smells, applied canonical error handling, added strict security and ownership checks, and finalized the file with professional type hints and docstrings. Now, it is your turn to take the wheel. In the upcoming CodeSignal exercises, you will practice identifying these code smells and writing the refactored code yourself. Let's jump into the practice!
