Welcome back! You've mastered sequential workflows with prompt chaining and conditional workflows with intelligent routing. Now it's time to unlock dramatic performance improvements by learning parallel processing — executing multiple independent Claude API calls simultaneously instead of waiting for each one to complete.
In this lesson, you'll discover how to transform workflows that take minutes into operations that complete in seconds. You'll learn the difference between synchronous and asynchronous programming, master TypeScript's native async/await patterns, and build a system that asks multiple questions to Claude at the same time.
Before diving into the technical details, let's understand the high-level pattern we'll be implementing. This workflow has two distinct phases that work together to provide both speed and comprehensive results:
Phase 1: Parallel Research Gathering
- Launch multiple independent Claude API calls simultaneously.
- Each call researches a different aspect of your topic (attractions, transportation, culture).
- All questions run concurrently, completing in roughly the time of the slowest individual request.
- Results are collected and preserved in their original order.
Phase 2: Sequential Result Synthesis
- Combine all parallel research into a single comprehensive dataset.
- Send the aggregated information to Claude with instructions for synthesis.
- Generate a unified, actionable final result (like a complete travel guide).
- This sequential step ensures all information is properly integrated.
This two-phase approach maximizes both efficiency and quality: you get the speed benefits of parallel processing for data gathering while maintaining coherent analysis through sequential aggregation. It's particularly powerful for research tasks, analysis workflows, and any scenario where you need to quickly gather diverse information and synthesize it into actionable insights.
When working with the Anthropic API in TypeScript, you use a single unified Anthropic client class. Unlike some other languages, TypeScript doesn't require separate client types for synchronous versus asynchronous operations. Instead, the distinction is made at the method call level using the await keyword.
Every API call to Claude returns a Promise — TypeScript's built-in mechanism for handling asynchronous operations. You can choose to wait for each Promise to complete before moving on (synchronous style), or you can start multiple Promises and let them run concurrently (asynchronous style).
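In the synchronous style, that looks like the following sketch. It uses a simulated `askClaude` stand-in (not a real SDK call) so the timing behavior is easy to observe anywhere:

```typescript
// Stand-in for a real Claude call: resolves after a short simulated delay.
async function askClaude(question: string): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 100));
  return `Answer to: ${question}`;
}

// Sequential style: each call finishes completely before the next one starts.
async function runSequentially(): Promise<string[]> {
  const attractions = await askClaude("What are the top attractions in Tokyo?");
  const transit = await askClaude("How does public transportation work in Tokyo?");
  return [attractions, transit]; // total time is roughly the SUM of both delays
}
```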
In contrast, when you want to run multiple operations in parallel, you start them without immediately awaiting them, then use Promise.all() to wait for all of them to complete:
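As a sketch, again with a simulated stand-in for the API call so the pattern is runnable without a key:

```typescript
// Stand-in for a real Claude call: resolves after a short simulated delay.
async function askClaude(question: string): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 100));
  return `Answer to: ${question}`;
}

// Parallel style: both calls are started first, then awaited together.
async function runInParallel(): Promise<string[]> {
  const attractionsPromise = askClaude("What are the top attractions in Tokyo?");
  const transitPromise = askClaude("How does public transportation work in Tokyo?");
  // Both requests are already in flight; Promise.all waits for both to finish,
  // so the total time is roughly that of the SLOWEST call.
  return Promise.all([attractionsPromise, transitPromise]);
}
```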
In summary:
- Use sequential `await` calls for simple workflows where each step depends on the previous one.
- Use `Promise.all()` when you want to launch multiple independent Claude API calls at once, dramatically improving performance for batch or parallel tasks.
TypeScript has built-in support for asynchronous programming through Promises and the async/await syntax. Unlike some languages that require external libraries, TypeScript's async model is native to the language and works seamlessly with the Anthropic SDK.
The async keyword transforms a regular function into one that returns a Promise, while await pauses execution until a Promise resolves. This makes asynchronous code read like synchronous code while maintaining the performance benefits of non-blocking operations:
This approach is particularly effective for I/O-bound operations like API calls, where most of the time is spent waiting for network responses. The JavaScript runtime manages the event loop automatically, allowing your code to handle many concurrent operations efficiently.
To execute async functions in TypeScript, you simply call them. Since async functions return Promises, you need to use await when calling them from another async context:
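A sketch of the calling pattern, using a stand-in `askQuestion` so it runs without an API key:

```typescript
// Stand-in for the API-backed version; resolves immediately.
async function askQuestion(question: string): Promise<string> {
  return `Answer to: ${question}`;
}

// main() gives you an async context, so await is legal inside it.
async function main(): Promise<void> {
  const answer = await askQuestion("What are the top attractions in Tokyo?");
  console.log(answer);
}

// Kick off the program and surface any rejected Promise as an error.
main().catch(console.error);
```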
This pattern of wrapping your async code in a main() function is a common way to organize async programs. The runtime manages the Promises for you, making asynchronous operations straightforward to work with.
The real power of async programming comes from running multiple operations concurrently. Promise.all() starts multiple Promises simultaneously and waits for all of them to complete, returning results in the original order:
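A small self-contained demonstration, using timed tasks in place of API calls so the concurrency is observable:

```typescript
// Each task resolves after a different delay, so completion order
// differs from start order.
function task(name: string, ms: number): Promise<string> {
  return new Promise((resolve) => setTimeout(() => resolve(name), ms));
}

async function demo(): Promise<string[]> {
  const start = Date.now();
  const results = await Promise.all([
    task("slow", 300),
    task("fast", 100),
    task("medium", 200),
  ]);
  const elapsed = Date.now() - start; // roughly 300ms, not 600ms
  console.log(`Finished in ~${elapsed}ms:`, results);
  return results; // always ["slow", "fast", "medium"]: input order, not finish order
}

demo();
```

Even though "fast" finishes first, `Promise.all()` still returns the results in the order the Promises were passed in.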
The key insight: while one API call waits for Claude's response, the runtime can initiate or continue processing other API calls. This transforms sequential waiting time into concurrent execution time.
Now that you understand the fundamentals, let's build the foundation of our parallel workflow by creating an async function specifically designed for Claude API calls. This function will handle individual questions while being optimized for concurrent execution.
The `console.log` statements help visualize when each question starts and completes, while returning a `[question, answer]` tuple makes it easy to match responses back to their original questions when processing parallel results. The system prompt ensures consistent, focused responses from Claude. Note the return type annotation `Promise<[string, string]>`, which indicates this function returns a Promise that resolves to a tuple of two strings.
With our async function ready, let's define the independent research questions that will form the parallel component of our workflow. Parallel processing shines when you have independent problems that don't rely on each other's answers:
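For example (the exact wording of the questions is illustrative):

```typescript
// Three independent research questions: none depends on another's answer.
const questions: string[] = [
  "What are the top attractions to visit in Tokyo?",
  "How does public transportation work in Tokyo?",
  "What cultural etiquette should visitors know when in Tokyo?",
];
```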
These questions cover different aspects of travel planning (attractions, transportation, culture) and are completely independent of each other, making them perfect candidates for parallel execution.
Now let's put Promise.all() to work by creating multiple tasks that execute simultaneously. This is where the parallel magic happens:
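A runnable sketch of the pattern; `askClaude` here is simulated so the code runs without an API key, but the real version would make the SDK call described above:

```typescript
// Simulated askClaude so the parallel pattern runs anywhere; the real
// version calls the Anthropic API and returns [question, answer].
async function askClaude(question: string): Promise<[string, string]> {
  await new Promise((resolve) => setTimeout(resolve, 100));
  return [question, `Answer to: ${question}`];
}

const questions = [
  "What are the top attractions to visit in Tokyo?",
  "How does public transportation work in Tokyo?",
  "What cultural etiquette should visitors know when in Tokyo?",
];

async function gatherResearch(): Promise<[string, string][]> {
  // .map() starts every call immediately; nothing is awaited yet.
  const tasks = questions.map((question) => askClaude(question));
  // Promise.all waits for all of them, returning results in input order.
  return Promise.all(tasks);
}
```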
The .map() method creates an array of Promises representing work to be done, while Promise.all() starts all Promises simultaneously and returns results in the original order regardless of completion sequence. Each result is a tuple containing the question and its corresponding answer.
With all our parallel research complete, let's build the aggregation phase that synthesizes everything into a comprehensive result. This sequential step ensures all information is properly integrated:
Note the type annotations: `results: [string, string][]` indicates an array of tuples, and `Promise<string>` shows the function returns a Promise that resolves to a string. The function type-checks the response content block before reading its text, ensuring type safety.
Let's bring it all together into a complete workflow that demonstrates the full power of parallel processing followed by intelligent aggregation:
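A sketch of the full two-phase workflow; the stand-in functions below exist only so the shape is runnable anywhere, and would be swapped for the API-backed `askClaude` and `aggregateResults` built in the earlier steps:

```typescript
// Simulated stand-ins for the API-backed functions from the earlier steps.
async function askClaude(question: string): Promise<[string, string]> {
  await new Promise((resolve) => setTimeout(resolve, 100));
  return [question, `Answer to: ${question}`];
}

async function aggregateResults(results: [string, string][]): Promise<string> {
  return results.map(([q, a]) => `${q}\n${a}`).join("\n\n");
}

async function runWorkflow(): Promise<string> {
  const questions = [
    "What are the top attractions to visit in Tokyo?",
    "How does public transportation work in Tokyo?",
    "What cultural etiquette should visitors know when in Tokyo?",
  ];

  // Phase 1: launch all research questions in parallel.
  const results = await Promise.all(questions.map((q) => askClaude(q)));

  // Phase 2: synthesize the collected answers sequentially.
  return aggregateResults(results);
}

runWorkflow()
  .then((guide) => console.log("📋 Final Travel Guide:\n" + guide))
  .catch(console.error);
```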
When you run this workflow, you'll see the power of parallel execution unfold in three distinct stages:
- Instant Launch: All three "🔄 Asking" messages appear immediately as the API calls fire off simultaneously.
- Concurrent Completion: The "✅ Answered" messages arrive as Claude finishes each response — often in a different order than they were asked, proving your requests are truly running in parallel.
- Intelligent Synthesis: All this concurrent research gets woven together into a comprehensive travel guide that combines the speed benefits of parallel processing with thoughtful analysis.
This visual progression clearly demonstrates how your requests execute concurrently rather than waiting for each other, transforming what could be a slow sequential process into a fast, efficient workflow that delivers both speed and quality.
This two-phase approach provides significant performance benefits while maintaining result quality. The parallel research phase completes in roughly the time of the slowest individual question, while the aggregation phase ensures all information is properly synthesized into a usable travel plan.
This pattern works well for any scenario where you need to:
- Research multiple independent topics quickly
- Aggregate diverse information into a unified result
- Balance speed with comprehensive analysis
The performance benefits are most significant when you have many independent research topics or when individual API calls have high latency.
You've mastered parallel processing patterns that transform slow sequential workflows into lightning-fast concurrent operations. The combination of parallel research gathering and sequential result synthesis provides both speed and quality, making it ideal for complex analysis tasks like travel planning, market research, or technical evaluations.
In the upcoming exercises, you'll apply these patterns to real-world scenarios and learn to handle the nuances of concurrent Claude workflows. Remember: use parallel processing for independent research tasks, then aggregate results sequentially for comprehensive final analysis.
