Introduction & Context

Welcome back! In the previous lessons, you built a fully concurrent agent system that can handle multiple conversations in parallel and execute tools concurrently using Ruby threads. Now, we're ready to take this parallelization to the next level by building an orchestrator agent that can delegate work to specialized agents.

In this lesson, you'll discover how to wrap agents as tools using the create_agent_tool helper from your codebase. You'll understand how the orchestrator pattern enables complex problem-solving through specialization and parallel delegation, and you'll see how Ruby threads make concurrent agent execution possible without any explicit async/await syntax.

Understanding Agent Orchestration

Agent orchestration is a pattern in which one coordinator agent manages and delegates work to multiple specialized agents. Think of it as a project manager who receives a complex task and breaks it down into smaller pieces, assigning each piece to a team member with the right expertise. The orchestrator doesn't need to know how to do every task itself; it just needs to understand the problem well enough to delegate effectively.

In our case, we'll build an orchestrator that handles complex research requests by delegating independent research tasks to specialized researcher agents. When you ask the orchestrator to compare the economic outlook of two different industries, it recognizes that each industry's research is independent and can happen in parallel. Instead of researching both industries sequentially, the orchestrator delegates each one to a separate researcher agent call.

These agent calls execute concurrently via Ruby threads, just like the parallel tool execution you implemented in the previous lesson. The beauty of this pattern is that it combines the strengths of specialization and parallelization, creating a scalable system in which adding more specialized agents or handling more complex problems doesn't require rewriting your core logic.

Building the Researcher Agent

Before we can create the orchestrator, we need to build the specialized agent that will handle the actual research. This researcher agent will be equipped with a search tool, making it an expert at gathering and synthesizing information:
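A sketch of what this looks like, assuming the `Agent` interface from the earlier lessons (`name:`, `system_prompt:`, `tools:`). The `Struct` below is a minimal stand-in so the sketch runs on its own; in the lesson's codebase, `Agent` wraps the Claude API and the tool-execution loop, and `mock_search` stands in for a real search backend:

```ruby
# Minimal stand-in for the Agent class from earlier lessons; the real
# one talks to Claude and runs the tool loop.
Agent = Struct.new(:name, :system_prompt, :tools, keyword_init: true)

# Simulated search tool -- returns canned text instead of real results.
mock_search = lambda do |query:|
  "Result for #{query}: Data point XYZ"
end

# The tool schema, defined inline as a Ruby hash, describes the search
# tool's interface to Claude.
search_schema = {
  name: "search",
  description: "Look up information on a topic and return raw results.",
  input_schema: {
    type: "object",
    properties: {
      query: { type: "string", description: "The topic to search for" }
    },
    required: ["query"]
  }
}

researcher = Agent.new(
  name: "researcher",
  system_prompt: "You are a research assistant. Use the search tool to " \
                 "gather information on the topic you are given, then " \
                 "synthesize it into a concise summary.",
  tools: { "search" => { function: mock_search, schema: search_schema } }
)
```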

The researcher agent is a straightforward agent with a focused purpose. Its system_prompt clearly defines its role as a research assistant that uses the search tool to gather information and produce summaries.

We provide it with the search tool, which simulates looking up information on a topic. Notice that we define the tool schema inline as a Ruby hash, describing the search tool's interface to Claude. The researcher doesn't need to know anything about orchestration or delegation; it only needs to be effective at researching the specific topics it receives, which makes the system easier to understand and maintain.

Wrapping Agents as Tools

To enable our orchestrator to delegate work to the researcher, we need to wrap the researcher agent as a tool. The create_agent_tool helper function does exactly this:
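A sketch of the helper. It only assumes the wrapped agent responds to `#name` and `#run(message)`; the delegation message and schema wording here are illustrative:

```ruby
# Wrap an agent as a tool: returns a callable lambda plus a schema hash
# describing the tool's interface to Claude.
def create_agent_tool(agent, tool_name:, description:)
  tool_function = lambda do |message:|
    puts "🦾 Delegating to #{agent.name}..."
    agent.run(message)  # a full agent run: may itself make many tool calls
  end

  tool_schema = {
    name: tool_name,
    description: description,
    input_schema: {
      type: "object",
      properties: {
        message: { type: "string",
                   description: "The task to delegate to this agent" }
      },
      required: ["message"]
    }
  }

  [tool_function, tool_schema]
end
```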

This helper function creates a tool from an agent by wrapping it in a lambda. When the tool is called, it prints a delegation message, runs the agent with the provided message, and returns the agent's response. The function returns both a callable tool_function (a lambda that accepts keyword arguments) and a tool_schema (a hash describing the tool's interface to Claude).

The key insight is that from Claude's perspective, this agent tool looks just like any other tool — it has a name, description, and input_schema. But under the hood, calling this tool actually triggers a complete agent run, which may itself involve multiple tool calls and reasoning steps. This creates a powerful abstraction: agents can use other agents as tools, enabling hierarchical orchestration patterns.

Creating the Orchestrator

Now, let's create our orchestrator using the agent wrapper to demonstrate the orchestration pattern:
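A sketch of the setup. The stand-ins at the top replace the `Agent` class, researcher agent, and `create_agent_tool` helper from the sections above so the snippet runs on its own; the system prompt wording is illustrative, not the lesson's exact text:

```ruby
# Stand-ins so this sketch runs alone; in the lesson they come from the
# sections above.
Agent = Struct.new(:name, :system_prompt, :tools, keyword_init: true)
researcher = Agent.new(name: "researcher", system_prompt: "...", tools: {})

def create_agent_tool(agent, tool_name:, description:)
  fn = lambda do |message:|
    puts "🦾 Delegating to #{agent.name}..."
    agent.run(message)
  end
  schema = { name: tool_name, description: description,
             input_schema: { type: "object",
                             properties: { message: { type: "string" } },
                             required: ["message"] } }
  [fn, schema]
end

# Wrap the researcher as a tool the manager can call.
research_fn, research_schema = create_agent_tool(
  researcher,
  tool_name: "researcher_tool",
  description: "Delegate an independent research task to a researcher agent."
)

manager = Agent.new(
  name: "manager",
  system_prompt: "You are a project manager. Break complex requests into " \
                 "independent research tasks and call researcher_tool for " \
                 "each one. Issue independent calls in the same turn so " \
                 "they run in parallel, then synthesize the results.",
  tools: { "researcher_tool" => { function: research_fn,
                                  schema: research_schema } }
)
```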

We use create_agent_tool to wrap our researcher agent, producing a research_fn lambda and a research_schema hash. The manager's system_prompt instructs it to break down complex problems and make multiple researcher_tool calls in parallel.

When the manager makes multiple tool calls in a single turn, the Agent#run method will execute them concurrently using Ruby threads — this is the same parallel tool execution mechanism you implemented in the previous lesson.

The manager doesn't need to know anything about threads or concurrency primitives. It simply decides which tool calls to make, and the Agent class handles executing them in parallel. This separation of concerns keeps the code clean: the manager focuses on orchestration logic, while the Agent class handles execution mechanics.
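The threaded step inside Agent#run can be pictured like this — a sketch in the spirit of the previous lesson's implementation (details may differ): spawn one thread per requested tool call, then join them all and collect the results in order.

```ruby
# Execute every tool call from one assistant turn concurrently.
def execute_tool_calls(tool_calls, tools)
  threads = tool_calls.map do |call|
    Thread.new do
      fn = tools.fetch(call[:name])[:function]
      { id: call[:id], result: fn.call(**call[:input]) }
    end
  end
  threads.map(&:value)  # Thread#value joins, then returns the block's result
end
```

Because a delegation is just another tool call, this same loop transparently parallelizes full researcher agent runs.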

Direct Tool Calls vs. Agent Delegation

At this point, it's worth understanding what we gain by using agent delegation instead of just giving the manager direct access to the search tool. Let's compare the two approaches:

Direct Tool Access

  • ✅ Simpler setup — just pass search directly to the manager
  • ✅ Fewer layers of abstraction
  • ❌ manager must handle low-level tool calls and synthesis together
  • ❌ No separation between data gathering and summarization
  • ❌ Harder to reuse research logic across different orchestrators

Agent Delegation (via create_agent_tool)

  • ✅ Clean separation of concerns — researcher handles search + synthesis
  • ✅ researcher agent can be reused by multiple orchestrators
  • ✅ researcher can make multiple search calls and reason about results
  • ✅ manager focuses purely on high-level coordination
  • ❌ One extra layer of indirection

For simple cases, direct tool access works fine. But as your system grows, agent delegation provides better modularity and reusability. The researcher agent becomes a reusable component that encapsulates both the search capability and the logic for synthesizing search results into useful summaries. Multiple orchestrators can delegate to the same researcher without duplicating this logic.

Sequential vs. Parallel Delegation

Another key distinction is how the orchestrator issues its delegations. When Claude decides to call multiple tools in a single turn, our Agent#run implementation executes them in parallel using Ruby threads.

Sequential Delegation (Hypothetical)

  • Turn 1: Call researcher_tool for tech industry.
  • Turn 2: Call researcher_tool for manufacturing industry.
  • Turn 3: Synthesize results.

Parallel Delegation (What actually happens)

  • Turn 1: Call researcher_tool for tech AND manufacturing (in parallel).
  • Turn 2: Synthesize results.

The parallel approach is significantly faster because both research tasks execute concurrently via Ruby threads. The orchestrator doesn't need to wait for the first research task to complete before starting the second one. This is the same threaded tool execution mechanism you built in the previous lesson — it works seamlessly whether you're executing low-level tool calls or delegating to entire agent runs.
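A minimal, self-contained timing comparison makes the difference concrete (not from the lesson's codebase — `sleep` stands in for a full researcher agent run):

```ruby
# A slow "research" task: sleeping simulates waiting on API calls.
slow_task = lambda do |topic|
  sleep 0.2
  "summary of #{topic}"
end

# Sequential: the waits add up (at least 0.4s total).
t0 = Time.now
sequential_results = ["tech", "manufacturing"].map { |t| slow_task.call(t) }
sequential_time = Time.now - t0

# Parallel: both waits overlap (roughly 0.2s total).
t0 = Time.now
threads = ["tech", "manufacturing"].map { |t| Thread.new { slow_task.call(t) } }
parallel_results = threads.map(&:value)
parallel_time = Time.now - t0
```

Threads help here because the work is I/O-bound waiting — exactly the situation agent delegations are in while they wait on the Claude API.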

Running the Orchestrator

Let's see the orchestrator in action with a complex problem that requires multiple independent research tasks:
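A sketch of kicking off the run. The prompt wording is illustrative, and `manager` is stubbed here so the snippet runs on its own — in the lesson it is the orchestrator agent built above:

```ruby
# Stand-in for the orchestrator agent built above.
manager = Struct.new(:name) do
  def run(message)
    "(final comparison report for: #{message})"
  end
end.new("manager")

prompt = "Compare the economic outlook of the tech industry and the " \
         "manufacturing industry, then produce a comparison report."

report = manager.run(prompt)
puts report
```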

We create a prompt asking for a comparison between the economic outlooks of two industries. The manager will recognize that each industry's research is independent and delegate them to separate researcher agent calls running in parallel.

Because our Agent#run method executes tool calls using Ruby threads, both researcher agents will run concurrently. Let's examine the output to see this efficient parallel execution in action.

Observing Parallel Agent Delegation

When we run the orchestrator, the output shows parallel delegation in action:
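An abbreviated, illustrative transcript — the exact log format and search queries depend on your tool logging and on the choices Claude makes:

```text
🦾 Delegating to researcher...
🦾 Delegating to researcher...
[researcher] calling search: "tech industry economic outlook"
[researcher] calling search: "manufacturing industry economic outlook"
[researcher] calling search: "tech industry growth trends"
[researcher] calling search: "manufacturing industry growth trends"
...
```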

Notice how the manager makes two researcher_tool calls right at the start. Because our Agent#run method executes tools in parallel using Ruby threads, both researcher agents start working concurrently.

You can see the interleaved tool calls from both researcher agents: they are both calling search with different queries at roughly the same time, demonstrating true parallel execution at both the orchestrator and researcher levels.

The "🦾 Delegating to researcher..." messages appear together, confirming that both delegations started concurrently. Then, you see search queries from both researchers interleaved, showing that both researcher agents are actively working in parallel. This concurrency is achieved through Ruby threads — when Agent#run encounters multiple tool uses in a single assistant message, it spawns a thread for each tool call and executes them concurrently.

Examining the Final Response

After the two researcher agents return their results, the manager synthesizes them into a simple comparison report. Because mock_search returns placeholder text ("Result for #{query}: Data point XYZ"), the final output should also look like a synthesized mock result rather than a detailed factual industry analysis:
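The shape (not the wording) of such a run might look like this — purely illustrative, since the actual text varies from run to run:

```text
Comparison Report: Tech vs. Manufacturing

| Industry      | Key finding (from mock search)                      |
| ------------- | --------------------------------------------------- |
| Tech          | "Result for tech industry outlook: Data point XYZ"  |
| Manufacturing | "Result for manufacturing outlook: Data point XYZ"  |

Both summaries are built from placeholder data, so the synthesis is
structural rather than factual.
```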

The manager successfully broke down the problem, delegated the independent research tasks to run in parallel via Ruby threads, and then combined the results into a well-formatted, comprehensive report with tables, analysis, and synthesis.

This demonstrates the power of the orchestrator pattern: complex problems get solved efficiently through parallel delegation to specialized agents, while maintaining a clean separation of concerns between coordination (manager) and execution (researchers).

Summary

You've just completed the third lesson in this course on building concurrent Claude agent systems with Ruby! Building on the agent foundations established in earlier courses, you first learned how to run conversations concurrently with Ruby threads, then how to execute tools in parallel, and now you've built a complete orchestrator system that can coordinate multiple specialized agents running concurrently.

In this lesson, you learned how to wrap agents as tools using the create_agent_tool helper, enabling hierarchical orchestration patterns. You explored the trade-offs between direct tool access and agent delegation, understanding when each approach makes sense. You discovered how sequential and parallel delegation differ, and you saw how Ruby threads enable concurrent execution of multiple agent calls without requiring explicit async/await syntax.

The orchestrator pattern you just implemented represents a powerful approach to agentic system design. You now understand how to break down complex problems, delegate work to specialized agents, and leverage Ruby's threading capabilities to execute multiple agent calls concurrently. You've seen how the Agent#run method handles parallel tool execution transparently, whether those tools are simple functions or complete agent runs.

The key insight is that all the concurrency happens automatically through Ruby threads. When Claude decides to call multiple tools in a single turn, Agent#run spawns a thread for each tool call and executes them in parallel. This works seamlessly whether you're calling low-level tools like search or delegating to entire agent runs via researcher_tool.
