In this lesson, we'll learn how to batch multiple commands using Boost.Redis. Instead of sending commands one at a time and waiting for each response, we'll group several commands together and send them in a single request. This is one of the most powerful techniques for improving Redis performance.
We'll build a practical example that demonstrates:
- Batching `INCR` and `SET` commands in one request
- Receiving and printing the result of each batched command
- Batching two `GET` commands in a second request to read back the values
- Using focused completion handlers to process each batch's results
Why batch commands? When you send commands one at a time, each command requires a separate network round trip to the Redis server:
- Send command 1 → Wait for response 1
- Send command 2 → Wait for response 2
- Send command 3 → Wait for response 3
With batching, you send all commands together in one network round trip:
- Send commands 1, 2, 3 → Wait for responses 1, 2, 3
This dramatically reduces network latency and improves performance.
How Boost.Redis handles batching:
- Create a `request` object
- Push multiple commands into that request using `push()`
- Execute the entire batch with a single `async_exec()` call
- Receive all responses together in a typed `response` object
First, we include the necessary headers and create namespace aliases for convenience:
The key types here are:
- `request`: A container where you push multiple commands to batch them together
- `response<Types...>`: A typed container that holds the replies for all commands in your batch
- `connection`: The Redis client connection that executes your batched requests
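As a sketch, the includes and aliases might look like the following. The header paths and names assume the Boost.Redis that ships with Boost 1.84 and later; adjust them to match your version.

```cpp
// Sketch of the includes and namespace aliases (Boost.Redis as shipped
// with Boost 1.84+; paths may differ in other versions).
#include <boost/redis/connection.hpp>
#include <boost/redis/request.hpp>
#include <boost/redis/response.hpp>
#include <boost/asio/consign.hpp>
#include <boost/asio/detached.hpp>

#include <cstdint>
#include <iostream>
#include <memory>

namespace net = boost::asio;
using boost::redis::config;
using boost::redis::connection;
using boost::redis::request;
using boost::redis::response;
```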
We'll use small helper functions as completion handlers. This keeps `main` readable and clearly shows what happens after each batch executes.
The first handler processes the results of our second batch (two GET commands):
This batch contains two `GET` commands, so the response type is `response<std::string, std::string>` (both `GET` commands return strings). We access each command's reply using `std::get<index>`:
- `std::get<0>(get_resp)`: Result of the first `GET` command
- `std::get<1>(get_resp)`: Result of the second `GET` command
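A sketch of such a handler follows. The function name `on_get_batch` and the `shared_ptr` ownership scheme are illustrative choices, not part of the Boost.Redis API; note that `async_exec` itself completes with `(error_code, std::size_t)`, so the response object is passed to `async_exec` separately and must be kept alive until the handler runs.

```cpp
// Illustrative handler for the second batch (two GET commands).
// In recent Boost.Redis each reply is wrapped in a result<T>;
// value() unwraps the reply (and throws if the command errored).
void on_get_batch(boost::system::error_code ec,
                  std::shared_ptr<response<std::string, std::string>> get_resp)
{
    if (ec) {
        std::cerr << "GET batch failed: " << ec.message() << "\n";
        return;
    }
    std::cout << "first GET:  " << std::get<0>(*get_resp).value() << "\n"
              << "second GET: " << std::get<1>(*get_resp).value() << "\n";
}
```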
Next, we handle the results of our first batch, which contains `INCR` and `SET` commands:
Here's what happens with batching:
- We use a typed response: `response<std::int64_t, std::string>` because:
  - The first command (`INCR`) returns an integer
  - The second command (`SET`) returns a status string (usually `"OK"`)
- We access each batched command's result using `std::get<index>`
- After processing the first batch, we create a second batch with two `GET` commands
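The steps above could be sketched as follows. The handler and key names (`on_incr_set_batch`, `counter`, `message`) are illustrative assumptions, as is the `shared_ptr`-based lifetime management; `on_get_batch` is the GET-batch handler described earlier.

```cpp
// Forward declaration of the GET-batch handler sketched earlier (illustrative).
void on_get_batch(boost::system::error_code,
                  std::shared_ptr<response<std::string, std::string>>);

// Illustrative handler for the first batch (INCR + SET): print both
// results, then issue the second batch with two GET commands.
void on_incr_set_batch(std::shared_ptr<connection> conn,
                       std::shared_ptr<response<std::int64_t, std::string>> resp,
                       boost::system::error_code ec)
{
    if (ec) {
        std::cerr << "INCR/SET batch failed: " << ec.message() << "\n";
        conn->cancel();
        return;
    }

    std::cout << "INCR result: " << std::get<0>(*resp).value() << "\n"
              << "SET result:  " << std::get<1>(*resp).value() << "\n";

    // Second batch: two GETs pushed into one request.
    auto req = std::make_shared<request>();
    req->push("GET", "counter");
    req->push("GET", "message");

    auto get_resp = std::make_shared<response<std::string, std::string>>();
    conn->async_exec(*req, *get_resp,
        [conn, req, get_resp](boost::system::error_code ec2, std::size_t) {
            on_get_batch(ec2, get_resp);
            conn->cancel();  // all done: stop the connection's run loop
        });
}
```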
Now we can put everything together in `main` to demonstrate command batching.
connection::async_run(...) starts the connection’s internal asynchronous loop (connecting, reading replies, writing requests, handling reconnects, etc.). Like any async operation in Asio, it takes a completion token that decides how the completion handler is represented and invoked.
- `net::detached` means: start the async operation, but don't provide a user handler to be called on completion. In other words, we're saying "run the connection in the background; I don't care about the final completion callback."
- Problem: when you use `detached`, there is no user handler object that naturally "owns" anything. That matters because asynchronous operations must not outlive the objects they use. If the `connection` were destroyed while `async_run` is still active, the program could crash (use-after-free) or behave unpredictably.
- `net::consign(token, value...)` solves this by attaching extra objects to the completion handler, ensuring they stay alive until the operation completes. So `net::consign(net::detached, conn)` effectively means: "Run this operation detached, but keep `conn` alive until `async_run` finishes."

In short: `detached` removes the callback; `consign` keeps the attached objects alive until the operation completes.
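A `main` along these lines ties the pieces together. The default host (`127.0.0.1:6379`), key names, and the `on_incr_set_batch` handler sketched earlier are assumptions; the `async_run(cfg, {}, token)` call shape follows the Boost 1.84-era API.

```cpp
// Illustrative main: run the connection detached (kept alive via consign),
// then execute the first batch of commands.
void on_incr_set_batch(std::shared_ptr<connection>,
                       std::shared_ptr<response<std::int64_t, std::string>>,
                       boost::system::error_code);  // sketched earlier

int main()
{
    net::io_context ioc;
    auto conn = std::make_shared<connection>(ioc);

    config cfg;  // defaults to 127.0.0.1:6379
    // Run the connection's internal loop in the background; consign
    // keeps conn alive until async_run finishes.
    conn->async_run(cfg, {}, net::consign(net::detached, conn));

    // First batch: INCR and SET pushed into a single request.
    auto req = std::make_shared<request>();
    req->push("INCR", "counter");
    req->push("SET", "message", "hello");

    auto resp = std::make_shared<response<std::int64_t, std::string>>();
    conn->async_exec(*req, *resp,
        [conn, req, resp](boost::system::error_code ec, std::size_t) {
            on_incr_set_batch(conn, resp, ec);
        });

    ioc.run();
}
```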
What Batching Does:
- Reduces Network Round Trips: Batching bundles multiple commands into a single network request, eliminating the wait time between individual commands.
- Improves Throughput: By sending commands in batches, you increase overall throughput, especially when dealing with high-latency connections.
- Preserves Command Order: Redis processes batched commands in the order they were pushed, and responses come back in the same order.
- Simplifies Code: With typed responses, you get all results at once with clear types for each command.
What Batching Doesn't Do:
- No Atomicity: Batching does not guarantee that all commands execute as a single atomic unit. Other clients can interleave their commands between yours.
- No Conditional Execution: If one command in a batch fails, subsequent commands still execute. There's no rollback mechanism.
- No Dependency Handling: A batched command cannot use the result of a previous command within the same batch. Each command is independent.
Batching is excellent for performance optimization when you need to send multiple independent commands quickly. However, if you require atomicity or need commands to depend on each other's results, you would need Redis transactions (using MULTI/EXEC commands), which we'll explore in a later lesson.
Batching commands is one of the most important performance optimizations you can make when working with Redis. Every network round trip adds latency—typically 1-50ms depending on your network. By batching commands, you can turn 10 separate round trips (10-500ms) into a single round trip (1-50ms), a potential 10x performance improvement.
With Boost.Redis, batching is straightforward:
- Push multiple commands into a `request`
- Execute them all with one `async_exec()` call
- Receive typed responses for each command's result
