Welcome back! In the previous lesson, you learned how to define and run your first OpenAI agent using the Agents SDK. You also saw how to extract the agent’s final output from the `result` object. In this lesson, we will take a closer look at the `result` object that is returned after running an agent. Understanding the structure and properties of this object is key to building more advanced applications, debugging your agent’s behavior, and making the most of the SDK’s features.
By the end of this lesson, you will know how to inspect and interpret the different attributes of the `result` object, including the final output, the original input, the last agent that ran, new items generated during the run, and the raw responses from the language model. You will also see how these properties can help you understand what happened during the agent’s run and how to use this information in your own projects.
When you run an agent synchronously (using `Runner.run_sync`) or asynchronously (using `Runner.run`), you receive a `RunResult` object. If you run an agent in streamed mode (using `Runner.run_streamed`), you receive a `RunResultStreaming` object instead. Both types of result objects provide detailed information about the agent’s execution.
Some of the most relevant properties include:
- `final_output`: The final output produced by the last agent that ran. Its type can vary depending on the agent’s configuration: it may be a string, or a more complex object if the agent specifies an `output_type`.
- `input`: The original input or prompt provided to the agent at the start of the run.
- `last_agent`: The agent instance that produced the final output. This is especially useful in workflows involving multiple agents or handoffs.
- `new_items`: A list of items generated during the run, such as messages, tool calls, or handoffs. These items provide a step-by-step record of the agent’s reasoning and actions.
- `raw_responses`: The raw outputs from the language model for each step in the agent loop, including generated text, tool calls, and finish reasons.
By inspecting these properties, you gain a comprehensive view of the agent’s execution, including the input, the output, and all intermediate steps. This structure is designed to support both simple and advanced use cases, from basic logging to complex multi-agent workflows.
For the examples in this lesson, we'll use a simple travel assistant agent:
We'll run this agent with a sample query and then explore the different properties of the result object:
As a reminder from the previous lesson, the most important property in the `result` object is usually `final_output`. This is the agent’s final answer after completing its reasoning, tool use, or any handoffs to other agents. You can access it directly with `result.final_output`.
For example, after running our Travel Genie agent, you can print the final output and see the agent’s complete answer:
The `final_output` property contains the complete, formatted response from the agent. In more complex scenarios, the raw responses inside the `result` object also record a finish reason for each model call, which explains why the model stopped generating (for example, because it produced a complete answer, hit a token limit, or triggered a content filter). This information can be useful for debugging or for handling special cases in your application.
Beyond the final output, the result object provides several other useful attributes. The `input` property preserves the exact prompt or question you provided to the agent at the start of the run. This is especially useful for logging, debugging, or tracing how the agent responded to specific user queries. For instance, if you had asked, "What's your top recommendation for adventure seekers?", printing `result.input` after the run would echo that original prompt:
This makes it easy to correlate the agent’s output with the user’s request, which is essential for both transparency and troubleshooting.
In scenarios where multiple agents collaborate or hand off tasks, it becomes important to know which agent produced the final output. The `last_agent` property provides this information, allowing you to identify the agent responsible for the answer. Suppose your workflow involves a travel assistant agent named "Travel Genie"; if this agent produced the final response, accessing `result.last_agent.name` would show it directly:
This detail is particularly valuable in multi-agent systems, where understanding the flow of responsibility helps with both debugging and auditability.
As the agent processes your input, it may generate a sequence of intermediate items, such as messages, tool calls, or handoffs to other agents. These are collected in the `new_items` property, which provides a step-by-step record of the agent’s reasoning and actions during the run. To review this sequence, you can iterate through the list; for a simple run, you might observe a single message output item generated by the "Travel Genie" agent:
By examining these items, you gain insight into the agent’s internal process, making it easier to understand how the final answer was constructed and to diagnose any unexpected behavior.
The `raw_responses` property is another valuable part of the `result` object. It contains the raw outputs from the language model for each step in the agent loop. These responses include detailed information such as the generated text, the role (assistant or tool), the status, and sometimes the finish reason (for example, "stop" or "length").
Inspecting `raw_responses` is especially useful for debugging or for understanding exactly how the language model responded at each step. You can also see if the agent made any tool calls, what the tool outputs were, and how the agent processed those results.
Here is how you might print the raw responses and inspect what each step contains:
By reviewing `raw_responses`, you can see the full details of each step, including the generated text, the number of tokens used, and the finish reason. This level of detail is helpful for troubleshooting, optimizing your agent’s instructions, or simply understanding how the agent thinks.
In this lesson, you learned how to inspect and interpret the different properties of the `result` object returned by the OpenAI Agents SDK. You saw how to access the final output, review the original input and last agent, examine the sequence of new items, and analyze the raw responses from the language model. Understanding these properties will help you debug your agents, optimize their behavior, and build more advanced applications.
In the next part of the course, you will get hands-on practice with these concepts through interactive exercises. You will have the chance to experiment with different agent configurations, inspect result properties, and deepen your understanding of how agents work under the hood. When you are ready, move on to the practice exercises to apply what you have learned!
