Welcome to the fourth and final lesson in the Functional Patterns & Pattern Matching in Python course! You've made tremendous progress throughout this journey. In the first lesson, you built production-ready decorators with retry logic and exponential backoff. In the second lesson, you explored single-dispatch generic functions to create type-aware JSON serializers. In the third lesson, you mastered structural pattern matching to build a declarative command router.
Today, we're exploring composable error handling, a functional approach that minimizes exception handling noise while making error paths explicit and composable. Traditional try-except blocks scatter error handling throughout your code, making it difficult to chain operations cleanly. We'll implement a lightweight Result type that encapsulates success or failure, along with helpers that let you transform and chain operations without breaking the flow. We'll also build a configurable decorator that logs specific exceptions before re-raising them. By combining both approaches, you'll be able to write robust pipelines where errors are handled consistently and predictably. This lesson completes your toolkit for writing expressive, maintainable Python code using functional patterns.
Exception handling with try-except blocks is Python's standard error mechanism, but it has some drawbacks when building complex data pipelines. Consider a scenario where you need to parse a string into an integer, then use that integer as a divisor, and finally round the result. Each step can fail: parsing might encounter invalid input, division might hit zero, and rounding requires a valid number.
With traditional exceptions, you'd wrap each operation in a try-except block or wrap the entire sequence and handle all possible exceptions together. The first approach leads to deeply nested code with repetitive error handling. The second approach makes it hard to distinguish which step failed and why. Additionally, exceptions break the normal control flow: when an exception occurs, execution jumps immediately to the handler, making it difficult to compose operations in a functional style where each step receives input and produces output.
Functional programming offers an alternative: represent potential failure as a value rather than an exceptional event. This approach makes error handling explicit in function signatures and allows operations to chain naturally, even when any step might fail. The error becomes part of the return value, flowing through the pipeline like any other data.
The Result type is a container that explicitly represents either success with a value or failure with an error message. This pattern, common in functional languages like Rust and Haskell, provides a structured way to handle operations that might fail without throwing exceptions:
Result is a frozen dataclass with two fields: success indicates whether the operation succeeded, and value holds either the successful result of type T or an error message string. Generic[T] makes Result work with any value type, maintaining type safety through the pipeline. We use frozen=True to make Result immutable, preventing accidental modification, and slots=True for memory efficiency.
The value field has type T | str because it holds different types depending on success: when success is True, it contains a value of type T; when success is False, it contains an error message string. This dual nature requires careful handling but makes the error path explicit in the type system.
Rather than creating Result instances directly, we provide factory methods that make the success or failure intent clear:
The ok method wraps a successful value, setting success to True. The err method wraps an error message, setting success to False. These static methods serve as smart constructors: they encode the intention clearly and ensure consistency. When you see Result.ok(42), you immediately know it represents success with the value 42. When you see Result.err("invalid_input"), you know it represents failure.
The err method returns Result[Any] because error results don't carry a typed value, just an error message. This allows error Results to be compatible with any Result[T] type, which is crucial for chaining operations where different steps might produce different value types.
The first helper function transforms the value inside a successful Result without handling failures explicitly:
The map_result function takes a Result and a transformation function. If the input Result is already a failure, it passes the error through unchanged using cast to adjust the type. If the Result is successful, it applies the function to the value, wraps the result in Result.ok, and handles any exceptions by converting them to error Results. This means even if fn raises an exception, the error stays contained within the Result type rather than propagating as an exception.
The cast calls are necessary because the type system doesn't track the correlation between success and value types. When we know success is False, we can safely cast to the expected Result type because the value field contains an error string, not a typed value. When success is True, we cast value from T | str to T because we know it holds the successful value.
While map_result works for functions that always succeed, we often need to chain operations where each step can fail and returns its own Result. This is where bind_result comes in:
The structure mirrors map_result, but the key difference is that fn returns Result[U] rather than plain U. When the input is successful, we call fn and return its Result directly without wrapping it again. This prevents nested Results like Result[Result[U]], which would be awkward to work with. The function name comes from the monadic operation "bind" or "flatMap" in functional programming, which chains computations that produce wrapped values.
This pattern allows chaining multiple fallible operations: parse a string to an integer, then use that integer for division, then validate the result. Each step receives a plain value (not wrapped in Result) and returns a Result. The bind_result function handles propagating errors automatically: if any step fails, subsequent steps are skipped, and the error flows through to the final result.
Let's implement a function that parses strings into integers using the Result pattern:
The function first strips whitespace and validates that the string contains only valid integer characters (digits, optional sign). If validation fails, it returns an error Result immediately. If validation passes, it attempts conversion with int(), catching any exceptions and converting them to error Results. This approach makes the error cases explicit: we either get a successful integer or a descriptive error message.
Notice that the function signature parse_int(s: str) -> Result[int] clearly communicates that parsing might fail. Callers must handle both success and failure cases. This is different from a function that returns int and might raise ValueError, where the possibility of failure is implicit and easy to overlook.
Division by zero is a classic error case. Let's implement a safe division function using Result:
This function explicitly checks for zero divisors before performing division. Instead of letting Python raise ZeroDivisionError, we return a descriptive error Result. For successful cases, we wrap the division result in Result.ok. This makes division a pure function: given the same inputs, it always returns the same Result, with no side effects or exceptions.
The explicit error checking might seem verbose compared to catching exceptions, but it makes the function's behavior predictable and composable. We can now chain this with other operations using bind_result, and errors will propagate automatically without try-except blocks.
Now we can combine these operations to build a multi-step pipeline. Let's parse a string, divide 100 by the result, and see how errors propagate:
The first pipeline parses "10" successfully, then divides 100 by 10, yielding 10.0. The second pipeline fails at the parsing step because "x42" is not a valid integer. The lambda function never executes because bind_result detects the parse failure and propagates the error. This demonstrates the key benefit: we write the happy path naturally, and error propagation happens automatically.
We can extend the pipeline further by chaining more operations. Let's add a rounding step to the successful parse:
This nested chain parses "5", divides 50 by 5 to get 10.0, then rounds to two decimal places. Each bind_result chains to the next operation, and any failure at any step would short-circuit the rest. The final Result.ok(round(q, 2)) wraps the rounded result because we're in a bind_result chain and must return a Result.
To visualize the results, let's create a helper that serializes Results to JSON:
This helper converts any Result into a JSON string with two fields: success and value. The compact separators remove unnecessary whitespace. Now we can test our pipelines and see the results clearly:
The first result shows successful parsing and division yielding 10.0. The second shows failure at the parsing step with the error message "not_an_int". The third shows successful parsing, division, and rounding. Notice how the error message from parse_int flows through unchanged; we never needed a try-except block to handle it.
While Result types handle predictable failure cases, sometimes we want to log exceptions before re-raising them, especially for debugging production issues. Let's build a configurable decorator for this:
The decorator takes keyword-only arguments: exceptions specifies which exception types to log, prefix adds context to log messages, and logger allows using a custom logger. If no logger is provided, we use a default named "errors." This configuration flexibility lets you tune logging per function without changing the decorator itself.
The signature is complex because the decorator itself is a higher-order function: it takes configuration and returns a decorator, which takes a function and returns a wrapped function. This pattern allows passing arguments to decorators using the @log_and_reraise(exceptions=(...)) syntax.
The decorator implementation wraps the target function and intercepts specific exceptions:
The wrapper calls the original function inside a try block. If one of the specified exception types is raised, we log it with the prefix and exception details, then re-raise it using bare raise. This preserves the original exception and traceback, ensuring that callers see the same exception they would without the decorator. The @wraps(fn) decorator preserves the original function's metadata (name, docstring, annotations).
The log message includes the prefix, exception type name, and exception message. This provides context about where and why the exception occurred. The decorator doesn't suppress exceptions; it only observes them, making it safe to add for debugging without changing behavior.
Let's apply the decorator to a function that parses and adds numbers from a dictionary:
The decorator is configured to log ValueError (from invalid int conversions) and KeyError (from missing dictionary keys) with the prefix "CRITICAL: ". The function attempts to parse "count" as an integer, then parses "offset" with a default of "0," and returns their sum. Each step can fail, and the decorator will log the failure before letting the exception propagate.
The function signature doesn't change; it still returns int and can raise exceptions. The decorator is purely observational: it adds logging without altering the function's contract. This is useful when you need exception-based control flow but want visibility into failures for monitoring or debugging.
Let's test the decorated function with both valid and invalid inputs:
We configure logging to the CRITICAL level so only errors appear. The first call succeeds with valid string numbers, printing the sum. The second call passes an invalid offset, causing int("x") to raise ValueError, which the decorator logs before re-raising:
The output shows the successful sum followed by the exception type name from the caught error. The decorator logged the exception, which would appear in the logs as:
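With Python's default basicConfig format (levelname:logger:message) and the message layout described above, the log line would plausibly look like this (the exact format depends on the logging configuration, so treat this as an assumption):

```
CRITICAL:errors:CRITICAL: ValueError: invalid literal for int() with base 10: 'x'
```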
Let's test other failure modes to verify the decorator catches different exception types:
The first call passes an empty dictionary, causing data["count"] to raise KeyError. The second call passes None instead of a dictionary, causing None["count"] to raise TypeError. The KeyError is logged by the decorator before being re-raised, but the TypeError is not:
Notice that TypeError was not in our configured exception tuple (ValueError, KeyError), so the decorator didn't intercept or log it; it simply propagated. This demonstrates selective logging: you can choose which exceptions warrant logging versus which should propagate silently. In other words, the logs would contain only the KeyError:
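Again assuming the default basicConfig format, a single line like this (the KeyError's message is the repr of the missing key):

```
CRITICAL:errors:CRITICAL: KeyError: 'count'
```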
In production, you might log unexpected errors while letting expected validation errors pass through quietly.
The Result type and decorator patterns serve different purposes and can coexist in the same codebase. Use Result types for operations where failure is expected and should be handled explicitly: parsing user input, validating data, querying optional resources. Results make error handling part of the return value, forcing callers to consider both success and failure paths.
Use decorators for operations where exceptions are exceptional: database connections failing, file system errors, network timeouts. These errors typically can't be handled locally and need to propagate to higher-level handlers, but you want to log them for debugging. Decorators provide observability without changing the function's interface or requiring callers to unwrap Results.
In our code, parse_int and safe_div return Results because parsing and division failures are predictable and should be handled explicitly. The critical_counter function raises exceptions because it's processing supposedly valid data, and failures indicate bugs or data corruption that should be investigated. The decorator ensures those failures are logged even if the exception gets caught and handled further up the call stack.
You've now completed the fourth and final lesson of the Functional Patterns & Pattern Matching in Python course! This has been an intensive journey through advanced Python patterns, and you should be proud of reaching this milestone. We started with production-ready decorators featuring retry logic and exponential backoff. We explored single-dispatch generic functions for type-aware behavior. We mastered structural pattern matching for declarative data handling. And today, we built composable error handling with Result types and logging decorators.
Throughout this lesson, you implemented a lightweight Result type that encapsulates success or failure, created map_result for transforming successful values, and built bind_result for chaining fallible operations. You saw how these helpers enable building clean data pipelines where errors propagate automatically without try-except noise. You also created the configurable log_and_reraise decorator that logs specific exceptions before re-raising them, providing observability while preserving exception semantics. By combining both approaches, you can handle predictable failures with Results while logging exceptional conditions with decorators.
These functional patterns work together to create expressive, maintainable code: decorators add behavior, single dispatch adapts implementations, pattern matching routes logic, and Results handle errors compositionally. You now have a comprehensive toolkit for writing robust Python applications that leverage functional programming principles while staying idiomatic to Python's design.
As you move forward, the next course in this learning path is Concurrency & Async I/O, where you'll explore Python's concurrency models and learn to write high-performance asynchronous code. You'll discover when to use threads versus processes, master the event loop, and build resilient async pipelines with backpressure and retries. But before that journey begins, dive into the upcoming practice exercises to solidify your understanding of composable error handling and make these patterns your own!
