Introduction & Lesson Overview

Welcome to a new course in your learning path! In the previous course, you learned how to connect your OpenAI agents to external tools and data sources using the Model Context Protocol (MCP). You saw how to safely manage connections and extend your agent's abilities by integrating with MCP servers. Now, you are ready to take on a new challenge: handling sensitive data securely within your agent workflows.

In this lesson, you will learn how to inject sensitive information—such as user names, passport numbers, or other private details—into your agent's runtime in a way that keeps this data hidden from the language model (LLM) itself. This is a crucial skill for building real-world applications, where privacy and security are top priorities. You will see how to use the RunContextWrapper class from the OpenAI Agents SDK to wrap and manage sensitive context, ensuring that only your trusted code and tools can access it, while the LLM remains unaware of any private details.

By the end of this lesson, you will be able to securely pass sensitive data to your agent's tools and keep it out of the LLM's reach.

Understanding the Risks of Exposing Sensitive Data

Before we dive into the technical details, let's remind ourselves why handling sensitive data with care is so important. When you work with LLMs, any data you send to the model could potentially be exposed in its outputs. This means that if you pass private information—like a user's passport number or personal address—directly to the LLM, there is a risk that this data could leak out in a response, be logged, or even be accessed by someone who shouldn't see it.

These risks are not just theoretical. Data leakage can lead to privacy breaches, security vulnerabilities, and even legal trouble if you violate regulations like GDPR or CCPA. For example, if an LLM is "jailbroken" or manipulated, it might reveal information it was never supposed to share. That's why it's critical to keep sensitive data out of the LLM's input and output streams whenever possible. Instead, you want to keep this data local—only accessible to your own code and trusted tools.

Managing Sensitive Data with RunContextWrapper

To help you manage sensitive data securely, the OpenAI Agents SDK provides the RunContextWrapper class. This class acts as a secure container for any context you want to pass into your agent's runtime. When you use RunContextWrapper, your sensitive data is kept local to your application and is never sent to the LLM. Instead, it is only available to your function tools, lifecycle hooks, or other trusted code that you control.

Here's how the SDK enables secure context injection in simple terms:

  1. You create your sensitive data object - This can be any Python object (like a dataclass or Pydantic model) containing private information you want to keep secure.

  2. You pass it to Runner.run() as context - When you call Runner.run(context=your_data), the SDK automatically wraps your data in a RunContextWrapper behind the scenes.

  3. The LLM sees your tool function description, not your data - When you define a tool function like book_hotel(context: RunContextWrapper[UserData], hotel_name: str), the LLM only sees a simplified description like "book_hotel(hotel_name: str)". The context parameter is completely hidden from the LLM's view.

  4. The LLM calls the function normally - Based on the user's request ("book me a room at Grand Plaza Hotel"), the LLM decides to call book_hotel(hotel_name="Grand Plaza Hotel"). It doesn't know about or need to provide the sensitive context.

  5. The SDK automatically injects your sensitive data - When the LLM calls the function, the SDK intercepts this call and automatically adds your sensitive context as the first parameter before executing your function.

  6. Your function receives both the LLM's parameters and your sensitive data - Your function gets called with both the context (injected by the SDK) and the hotel name (provided by the LLM).

  7. The LLM only sees the function's return value - After your function completes, the LLM receives only the return value (like "Booking confirmed for Alice Smith..."), never the sensitive data itself.

The RunContextWrapper is automatically created when you pass a context object to Runner.run(). However, it's important to understand that this wrapper is not magically injected into all functions. Only functions that explicitly declare a parameter of type RunContextWrapper[YourContextType] as their first parameter will receive the context. This design ensures that only functions requiring access to the run context receive it, while others remain unaffected. The wrapper also provides a usage attribute, which tracks things like token usage for the current run, but the most important feature for this lesson is its ability to keep your context private and secure.

Defining a Sensitive Data Context with a Model Class

To make your code clear and secure, it's a good idea to define a dedicated model for your sensitive data. In Python, you can use either a Pydantic model or a standard dataclass for this purpose—both are supported by the OpenAI Agents SDK when used with RunContextWrapper.

Pydantic models are a popular choice because they provide automatic validation and type checking. For example, to store a user's name and passport number, you can define a Pydantic model like this:
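A minimal sketch (the class name UserData is the one the tool signature later in this lesson expects):

```python
from pydantic import BaseModel

class UserData(BaseModel):
    name: str
    passport_number: str
```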

Alternatively, you can use a standard Python dataclass if you don't need Pydantic's validation features:
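An equivalent sketch with a dataclass:

```python
from dataclasses import dataclass

@dataclass
class UserData:
    name: str
    passport_number: str
```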

Both approaches are valid. Using a model class makes it clear what data is considered sensitive, and helps prevent mistakes by providing structure and type safety. When you pass an instance of this model as the context to your agent, you know exactly what data is being handled, and you can be confident that it will stay secure inside the RunContextWrapper.

Creating a Tool Function That Accesses Sensitive Context

To access the sensitive context within your tool functions, you must explicitly declare a parameter of type RunContextWrapper[YourContextType] as the first parameter. This tells the SDK that this function needs access to the run context. Here's how to create a tool function that can securely access sensitive data:
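A sketch of such a tool, building on the UserData model above (the docstring and the debug print are illustrative additions):

```python
from agents import RunContextWrapper, function_tool

@function_tool
def book_hotel(context: RunContextWrapper[UserData], hotel_name: str) -> str:
    """Book a hotel room for the user."""
    # context.context is the UserData object you pass to Runner.run();
    # the SDK injects it here and never sends it to the LLM.
    user = context.context
    print(f"[debug] Booking for {user.name} (passport {user.passport_number}) at {hotel_name}")
    return f"Booking confirmed for {user.name} at {hotel_name}."
```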

Notice that context comes first in the parameter list and is typed as RunContextWrapper[UserData]. This explicit declaration is what allows the SDK to inject the context into this function. Functions without this parameter will not receive the context, ensuring that only designated functions have access to sensitive data.

Defining the Agent with the Tool Function

Next, define your agent and register the tool function that will use the sensitive context. The agent is configured with instructions and a list of available tools, just as you would in any OpenAI Agents SDK workflow:
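A sketch of the agent definition (the agent name and instructions text are illustrative):

```python
from agents import Agent

travel_genie = Agent(
    name="Travel Genie",
    instructions=(
        "You are a helpful travel assistant. "
        "Use the available tools to book hotel rooms for the user."
    ),
    tools=[book_hotel],
)
```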

Here, the travel_genie agent is set up to use the book_hotel tool. The agent will rely on this tool to handle booking requests, and the tool will have access to any sensitive context you provide.

How the Agent Sees Your Tool Function

It's important to understand what information the LLM actually receives about your tool functions. Even though your book_hotel function has a RunContextWrapper[UserData] parameter, the LLM never sees this sensitive parameter. Let's examine what the agent actually knows about the tool:
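One way to inspect this is to print each registered tool's metadata (this sketch assumes the name, description, and params_json_schema fields that the SDK attaches to function tools):

```python
import json

for tool in travel_genie.tools:
    print(f"Name: {tool.name}")
    print(f"Description: {tool.description}")
    print("Parameters schema:")
    print(json.dumps(tool.params_json_schema, indent=2))
```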

When you run this code, you'll see output like this:
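(Exact field names and ordering may vary with the SDK version.)

```
Name: book_hotel
Description: Book a hotel room for the user.
Parameters schema:
{
  "properties": {
    "hotel_name": {
      "title": "Hotel Name",
      "type": "string"
    }
  },
  "required": [
    "hotel_name"
  ],
  "title": "book_hotel_args",
  "type": "object",
  "additionalProperties": false
}
```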

Notice what's missing: there's no mention of the context parameter anywhere! The LLM only sees that the book_hotel function exists and requires a hotel_name parameter. The RunContextWrapper[UserData] parameter is completely hidden from the LLM's view. This is how the SDK keeps your sensitive data secure—the LLM doesn't even know that sensitive context exists, so it can never accidentally expose or request it.

When the LLM decides to call this function, it will only provide the hotel_name parameter. The SDK then automatically injects your sensitive context as the first parameter before your function executes, giving you access to both the LLM's input and your secure data.

Creating an Object with Sensitive Data

Now that you understand how the LLM sees your tools (without any knowledge of sensitive context), create an instance of your sensitive data model. This object holds the private information that will be securely injected into your tool functions and kept away from the LLM:
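(Using the example values referenced throughout this lesson.)

```python
user_data = UserData(
    name="Alice Smith",
    passport_number="P123456789",
)
```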

This object matches the structure expected by your tool function, ensuring that the sensitive data can be accessed seamlessly and safely within your business logic.

Running the Agent with Secure Context Injection

Next, run the agent and pass the sensitive data object as the context argument to Runner.run(). The OpenAI Agents SDK will automatically wrap this context in a RunContextWrapper and inject it directly into your tool function when the agent calls it. Importantly, this sensitive data never passes through the LLM—it is only accessible to your trusted code:
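A sketch of the call (the await assumes you are inside an async function or an environment with top-level await, such as a notebook):

```python
from agents import Runner

result = await Runner.run(
    travel_genie,
    "Book me a room at Grand Plaza Hotel",
    context=user_data,  # wrapped in a RunContextWrapper by the SDK, never sent to the LLM
)
```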

When you execute this code, the agent will process the user's request and, when it needs to use the sensitive data, the SDK will inject it directly into the book_hotel tool function via the RunContextWrapper. At no point does the LLM have access to the sensitive context—only your tool function can see and use this information. This approach keeps private data secure while still allowing your agent to perform complex, real-world tasks.

Here's an example of the debug output produced by your tool function. This confirms that the sensitive data is being accessed securely inside your trusted code, and not exposed to the LLM:
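With the tool sketch above, it would look roughly like this:

```
[debug] Booking for Alice Smith (passport P123456789) at Grand Plaza Hotel
```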

This output demonstrates that the sensitive context is available exclusively within your tool function, even though it was never included in the user's message input.

Printing the Final Output

After the agent completes its run, you can print the final output returned by the agent. This output is generated by the agent using the results from your tool function, but without ever exposing the sensitive data to the LLM:
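Assuming the run call above stored its result in result:

```python
print(result.final_output)
```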

The final output might look like this:
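```
Your room at Grand Plaza Hotel has been booked for Alice Smith. Enjoy your stay!
```

The exact wording varies from run to run; what matters is that the response contains only information your tool chose to return.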

This shows that your agent can use sensitive data to complete real-world tasks, while keeping that data secure and hidden from the LLM at all times.

Examining the Complete Conversation Flow

After the agent completes its run, you can also examine the complete conversation flow to see exactly what data was exchanged between the user, the LLM, and your tools. This helps you verify that sensitive data never appears in the conversation history:
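A sketch using the run result's to_input_list() helper, which returns the items exchanged during the run:

```python
import json

for item in result.to_input_list():
    print(json.dumps(item, indent=2))
```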

When you run this code, you'll see output like this:
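The output below is trimmed and simplified for readability; the real items include extra fields such as ids and status, and the call id shown is a placeholder:

```
{
  "content": "Book me a room at Grand Plaza Hotel",
  "role": "user"
}
{
  "arguments": "{\"hotel_name\":\"Grand Plaza Hotel\"}",
  "call_id": "call_abc123",
  "name": "book_hotel",
  "type": "function_call"
}
{
  "call_id": "call_abc123",
  "output": "Booking confirmed for Alice Smith at Grand Plaza Hotel.",
  "type": "function_call_output"
}
{
  "content": [
    {
      "text": "Your room at Grand Plaza Hotel has been booked for Alice Smith. Enjoy your stay!",
      "type": "output_text"
    }
  ],
  "role": "assistant",
  "type": "message"
}
```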

This conversation flow reveals several important security aspects:

  1. The user's original request contains no sensitive data—just the hotel name request.

  2. The function call arguments show only {"hotel_name":"Grand Plaza Hotel"}. Notice that the sensitive passport number (P123456789) is completely absent from the function call arguments that the LLM generated.

  3. The function output contains the user's name (Alice Smith) because your tool function chose to include it in the return value, but the sensitive passport number remains hidden.

  4. The final assistant response uses the information returned by your function, but again, no sensitive data that wasn't explicitly returned appears in the conversation.

This demonstrates that your sensitive context data (like the passport number) never enters the conversation flow between the user and the LLM. It exists only within your secure tool function execution, exactly as intended.

Summary & Preparation for Practice Exercises

In this lesson, you learned how to securely inject sensitive data into your agent's runtime using the RunContextWrapper in the OpenAI Agents SDK. You saw why it is important to keep sensitive data hidden from the LLM, and how to use a dedicated context model to structure and manage private information. By explicitly declaring RunContextWrapper[YourContextType] as the first parameter in your tool functions, you can ensure that only designated functions have access to sensitive context, while keeping this data completely separate from the LLM.

In the upcoming practice exercises, you will get hands-on experience defining your own context models, building tools that use sensitive data, and running agents with secure context injection. This is an important step toward building real-world AI applications that respect user privacy and comply with security best practices. Well done for reaching this advanced topic—your skills are growing, and you are on your way to becoming an expert in secure AI agent development!
