In the previous lesson, you learned how to securely inject sensitive data into your agent workflows using the `RunContextWrapper`. Now it's time to take your agent control skills to the next level by learning how to monitor and control the entire lifecycle of your agent workflows using lifecycle hooks.
When you build real-world AI applications, you need visibility into what your agents are doing. You might want to know when agents start and stop, which tools they're using, when handoffs occur between agents, and how long different operations take. This kind of observability is crucial for debugging, performance monitoring, compliance logging, and understanding how your AI system behaves in production.
By the end of this lesson, you will be able to create and attach both types of hooks to an agent system, giving you comprehensive control over your agent workflows.
Hooks are callback functions that get triggered automatically when specific events happen during your program's execution. Think of them as "event listeners" that allow you to tap into important moments in your application's lifecycle.
In the context of AI agents, hooks let you monitor and control what happens during agent workflows. For example, you might want to know when an agent starts working, when it uses a tool, or when control passes from one agent to another. Instead of manually checking for these events, you can create hook functions that the SDK calls automatically at the right moments.
Hooks are particularly valuable for:
- Logging and monitoring: Track what your agents are doing in real-time
- Performance measurement: Time how long different operations take
- Dynamic configuration: Inject data or modify behavior based on runtime conditions
- Error handling: Detect and respond to issues as they occur
- Compliance: Maintain detailed audit trails for regulatory requirements
Now let's explore how the OpenAI Agents SDK implements this concept with two specialized hook types.
The OpenAI Agents SDK provides two main types of hooks: `RunHooks` for monitoring the entire workflow across all agents, and `AgentHooks` for controlling specific agent behaviors. Understanding when and how to use each type is essential for building robust agent systems.
`RunHooks` are global lifecycle callbacks that monitor events across your entire agent workflow. When you attach `RunHooks` to a run, they receive notifications about everything that happens during that run, regardless of which specific agent is active. This makes them perfect for system-wide monitoring and compliance logging.
`AgentHooks`, on the other hand, are per-agent callbacks that focus on events specific to a particular agent. When you attach `AgentHooks` to an agent, those hooks only receive notifications about events involving that specific agent. This makes them ideal for agent-specific customization and behavior modification.
These two hook types work together seamlessly. You might use `RunHooks` to maintain a global log of all system activities while simultaneously using `AgentHooks` to perform specialized setup tasks for specific agents.
`RunHooks` provide system-wide visibility into your agent workflow. When you subclass `RunHooks`, you can override any of these methods to monitor events across all agents in your workflow:
These methods are perfect for logging and auditing, performance monitoring, error handling, and compliance reporting. Each method receives a `RunContextWrapper` containing your custom context object, plus parameters specific to the event being monitored.
`AgentHooks` provide fine-grained control over individual agent behavior. When you subclass `AgentHooks` and assign it to `agent.hooks`, you get these callbacks limited to that specific agent:
The key difference is scope: these methods only trigger for the specific agent they're attached to. Notice how the `on_handoff` method receives the `source` agent (the one handing off) rather than both agents, since the hook is already attached to the target agent.
`AgentHooks` are ideal for dynamic context injection, agent-specific setup and teardown, and customizing behavior for particular agents. Like `RunHooks`, they receive the same `RunContextWrapper[T]` that allows you to access shared state and dependencies across your workflow.
When you use hooks in the OpenAI Agents SDK, whether global `RunHooks` or per-agent `AgentHooks`, each callback method receives the exact same `RunContextWrapper` instance that was originally passed into `Runner.run()`. If you didn't pass a context to `Runner.run()`, the hooks will receive a `RunContextWrapper` with `None` as the wrapped context object. This means all hooks, tools, agents, and handoff events within a single run share access to the same context object. Any changes you make to `context.context` in one hook (such as adding new fields, updating values, or attaching user-specific data) will be immediately visible to all subsequent hooks and components throughout that workflow. This shared, mutable state makes it easy to coordinate complex workflows, log important events, or inject new data as needed during execution.
In practice, hooks often use entry points like `on_start` (for `AgentHooks`) or `on_agent_start` (for `RunHooks`) to inject or refresh data right before an agent or tool runs. For example, you might use a hook to fetch user profile information from a database and store it in `context.context` so that tools and other hooks can access it later in the same run. Because the same context instance is passed throughout the workflow, anything you store or modify remains available for downstream agents, tools, or hook methods. This pattern gives you robust and flexible control over shared state, without leaking local details to the language model.
Let's implement a practical example of `RunHooks` that provides comprehensive monitoring across your entire agent workflow. You'll create a `GlobalHooks` class that extends `RunHooks` and implements several key monitoring methods.
This `GlobalHooks` class demonstrates three essential monitoring capabilities:
- Agent tracking: The `on_agent_start` method logs whenever any agent in your system becomes active using the `agent.name` property
- Tool monitoring: The `on_tool_end` method captures tool execution results by accessing the `tool.name` and the actual `result` returned by the tool
- Handoff visibility: The `on_handoff` method tracks transitions between agents using both the `from_agent.name` and `to_agent.name` properties
Each method receives a `context` parameter (the same `RunContextWrapper` you learned about in the previous lesson) plus parameters specific to the event being monitored, giving you complete visibility into your agent system's behavior.
While `RunHooks` provide excellent global monitoring, `AgentHooks` give you fine-grained control over individual agent behavior. Let's implement a `TravelGenieHooks` class that demonstrates dynamic context injection.
Imagine you have a function that returns up-to-date data:
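For instance, a hypothetical `fetch_user_data` helper might look like this (the field names and values are made up for illustration):

```python
def fetch_user_data() -> dict:
    # In a real application this would query a database or user-profile API;
    # here it returns hard-coded illustrative values.
    return {
        "name": "Ada",
        "loyalty_tier": "gold",
        "home_airport": "SFO",
    }
```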
Now you can create agent hooks that inject this fresh data dynamically:
This `TravelGenieHooks` class showcases two key agent-specific capabilities:
- Dynamic context injection: The `on_start` method injects fresh user data into the context just before the agent starts processing, ensuring you always have the most up-to-date data
- Agent completion tracking: The `on_end` method captures and logs what the agent accomplished, providing visibility into the agent's results at the individual agent level
Instead of passing sensitive data to `Runner.run(context=your_data)`, you can inject context dynamically when a specific agent becomes active. The `on_start` method is called every time the Travel Genie agent becomes active, and by setting `context.context = fetch_user_data()`, you're providing agent-specific context injection that ensures data freshness and separation of concerns.
Now let's see how to properly attach both types of hooks to your workflow. For `AgentHooks`, you attach them directly to individual agents during creation:
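For example, assuming `Agent` is imported from the `agents` package and the agent's name and instructions are illustrative:

```python
travel_genie = Agent(
    name="Travel Genie",
    instructions="Help the user plan and book trips.",
    hooks=TravelGenieHooks(),  # per-agent hooks attached at creation time
)
```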
For RunHooks
, you attach them to the entire run by passing them to the Runner.run()
method:
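A sketch of that call, assuming a `triage_agent` defined elsewhere and executed inside an `async` function (the input string is illustrative):

```python
result = await Runner.run(
    starting_agent=triage_agent,
    input="Find me a beach destination for next weekend",
    hooks=GlobalHooks(),  # run-level hooks observe every agent in this run
)
```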
The SDK automatically coordinates both types of hooks during execution, ensuring that your hooks are called at the right times with the right parameters.
When you run a workflow with both types of hooks attached, you'll see comprehensive monitoring output that shows the complete lifecycle of your agent system:
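Assuming each hook simply prints a log line, the output for a run that starts at a triage agent and hands off to Travel Genie might resemble the following (all names, tool results, and wording are illustrative):

```text
[run] Agent started: Triage Agent
[run] Handoff: Triage Agent -> Travel Genie
[run] Agent started: Travel Genie
[Travel Genie] fresh user data injected into context
[run] Tool get_destination_weather returned: Sunny, 24°C
[Travel Genie] finished with output: Here's your weekend itinerary...
```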
This output demonstrates how the hooks capture the complete flow: the triage agent starting, the handoff to Travel Genie, Travel Genie starting (with context injection happening automatically), the tool execution result, and finally the agent's completion output. This gives you real-time visibility into your agent system's behavior, which is essential for production deployments, debugging complex workflows, and understanding how your agents interact with each other and external tools.
In this lesson, you've mastered the OpenAI Agents SDK's powerful hook system for gaining comprehensive control and visibility into your agent workflows. You explored how `RunHooks` provide global monitoring across all agents while `AgentHooks` offer fine-grained control for specific agents, learning to implement practical examples that monitor agent starts, tool executions, handoffs, and dynamic context injection.
Now that you understand these lifecycle hooks fundamentals, you're ready to experiment with creating your own custom monitoring and control solutions in the following practice exercises. These hands-on activities will deepen your expertise in agent workflow control and help you build more sophisticated AI systems with robust observability.
