Introduction & Lesson Overview

Welcome to a new course in your learning path! In the previous course, you learned how to connect your OpenAI agents to external tools and data sources. You saw how to safely manage connections and extend your agent's abilities by integrating with various tools. Now, you are ready to take on a new challenge: handling sensitive data securely within your agent workflows.

In this lesson, you will learn how to inject sensitive information—such as user names, passport numbers, or other private details—into your agent's runtime in a way that keeps this data hidden from the language model (LLM) itself. This is a crucial skill for building real-world applications, where privacy and security are top priorities. You will see how to use the context parameter in the OpenAI Agents SDK for TypeScript to manage sensitive data, ensuring that only your trusted code and tools can access it, while the LLM remains unaware of any private details.

By the end of this lesson, you will be able to securely pass sensitive data to your agent's tools and keep it out of the LLM's reach.

Understanding the Risks of Exposing Sensitive Data

Before we dive into the technical details, let's remind ourselves why handling sensitive data with care is so important. When you work with LLMs, any data you send to the model could potentially be exposed in its outputs. This means that if you pass private information—like a user's passport number or personal address—directly to the LLM, there is a risk that this data could leak out in a response, be logged, or even be accessed by someone who shouldn't see it.

These risks are not just theoretical. Data leakage can lead to privacy breaches, security vulnerabilities, and even legal trouble if you violate regulations like GDPR or CCPA. For example, if an LLM is "jailbroken" or manipulated, it might reveal information it was never supposed to share. That's why it's critical to keep sensitive data out of the LLM's input and output streams whenever possible. Instead, you want to keep this data local—only accessible to your own code and trusted tools.

Managing Sensitive Data with Context

To help you manage sensitive data securely, the OpenAI Agents SDK for TypeScript provides a context mechanism. When you run an agent, you can pass a context object that contains any data you want to make available to your tools, but this context is never sent to the LLM. Instead, it's only available to your function tools during execution.

Here's how secure context injection works in TypeScript:

  1. You create your sensitive data object - This can be any TypeScript object (typically typed with an interface) containing the private information you want to keep secure.

  2. You pass it to run() as context - When you call run(agent, input, { context: yourData }), the SDK makes your data available to tools but keeps it hidden from the LLM.

  3. The LLM sees your tool function description, not your data - When you define a tool like bookHotel that only takes hotelName as a parameter, the LLM only sees that simplified interface. The context data is completely hidden from the LLM's view.

  4. The LLM calls the function normally - Based on the user's request ("book me a room at Grand Plaza Hotel"), the LLM decides to call bookHotel with { hotelName: "Grand Plaza Hotel" }. It doesn't know about or need to provide the sensitive context.

  5. Your function receives both the LLM's parameters and your context - When your tool's execute function runs, it receives the parameters from the LLM as its first argument and the run context as its second.

  6. The LLM only sees the function's return value - After your function completes, the LLM receives only the return value (like "Booking confirmed for Alice Smith..."), never the sensitive data itself.

Defining a Sensitive Data Interface

To make your code clear and type-safe, it's a good practice to define an interface for your sensitive data. In TypeScript, you can use an interface or type to structure your data:
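
A minimal sketch of such an interface, using field names that match the examples later in this lesson (the name UserInfo is illustrative; adapt it to your application):

```typescript
// Shape of the private data we will pass as context.
// The interface name and fields are illustrative.
interface UserInfo {
  name: string;
  passportNumber: string;
}
```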

This interface makes it clear what data is considered sensitive and provides type safety when accessing the context in your tools. When you pass an object matching this interface as context to your agent, you know exactly what data is being handled, and TypeScript will help prevent mistakes with its type checking.

Creating a Tool Function That Accesses Sensitive Context

To access the sensitive context within your tool functions, you use the second parameter of the execute function. Here's how to create a tool function that can securely access sensitive data:
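
Here is a minimal sketch of such a tool, assuming the UserInfo interface defined above and the bookHotel name used throughout this lesson (the description text and log format are illustrative):

```typescript
import { tool, RunContext } from '@openai/agents';
import { z } from 'zod';

// Only hotelName is exposed to the LLM; the traveler's private details
// arrive through the run context instead.
const bookHotel = tool({
  name: 'bookHotel',
  description: 'Book a hotel room for the current user.',
  parameters: z.object({
    hotelName: z.string().describe('Name of the hotel to book'),
  }),
  // The second argument is the run context; context.context holds the
  // object you pass to run() and is never sent to the LLM.
  execute: async ({ hotelName }, context?: RunContext<UserInfo>) => {
    const user = context?.context;
    // Debug log: this runs only in your trusted code, never reaches the LLM.
    console.log(`[bookHotel] Booking for ${user?.name}, passport ${user?.passportNumber}`);
    return `Booking confirmed for ${user?.name} at ${hotelName}.`;
  },
});
```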

Notice that the tool only declares hotelName in its parameters schema. The sensitive data comes through the context parameter, which is automatically provided by the SDK when the tool is executed. The context.context contains the actual data you passed to run().

Defining the Agent with the Tool Function

Next, define your agent and register the tool function that will use the sensitive context. The agent is configured with instructions and a list of available tools:
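
A sketch of the agent definition, assuming the bookHotel tool from the previous section (the instructions text is illustrative):

```typescript
import { Agent } from '@openai/agents';

// Typing the agent with UserInfo documents which context its tools expect.
const travelGenie = new Agent<UserInfo>({
  name: 'Travel Genie',
  instructions:
    'You are a travel assistant. Use the bookHotel tool to book hotel rooms for the user.',
  tools: [bookHotel],
});
```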

Here, the travelGenie agent is set up to use the bookHotel tool. The agent will rely on this tool to handle booking requests, and the tool will have access to any sensitive context you provide.

How the Agent Sees Your Tool Function

It's important to understand what information the LLM actually receives about your tool functions. Even though your bookHotel function receives sensitive context data, the LLM never sees this. Let's examine what the agent actually knows about the tool:
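
One way to inspect this is to print the tool definition itself. This sketch assumes the object returned by tool() exposes the name, description, and JSON-schema parameters that the SDK sends to the model:

```typescript
// Print the tool definition roughly as the model will see it: a name,
// a description, and a parameters schema, but no context anywhere.
console.log(
  JSON.stringify(
    {
      name: bookHotel.name,
      description: bookHotel.description,
      parameters: bookHotel.parameters,
    },
    null,
    2,
  ),
);
```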

When you run this code, you'll see output along these lines (the exact schema representation may vary slightly between SDK versions):
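
```json
{
  "name": "bookHotel",
  "description": "Book a hotel room for the current user.",
  "parameters": {
    "type": "object",
    "properties": {
      "hotelName": {
        "type": "string",
        "description": "Name of the hotel to book"
      }
    },
    "required": ["hotelName"],
    "additionalProperties": false
  }
}
```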

Notice what's missing: there's no mention of the context parameter anywhere! The LLM only sees that the bookHotel function exists and requires a hotelName parameter. The sensitive context data is completely hidden from the LLM's view. This is how the SDK keeps your sensitive data secure—the LLM doesn't even know that sensitive context exists, so it can never accidentally expose or request it.

When the LLM decides to call this function, it will only provide the hotelName parameter. The SDK then automatically provides your sensitive context to the function during execution, giving you access to both the LLM's input and your secure data.

Creating an Object with Sensitive Data

Create an object that contains the sensitive data you want to make available to your tools:
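
For example, using the values referenced later in this lesson (in a real application these would come from your own user database or session, never from the conversation):

```typescript
// Illustrative values; load real data from a trusted source at runtime.
const userInfo: UserInfo = {
  name: 'Alice Smith',
  passportNumber: 'P123456789',
};
```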

This object will be passed as context when running the agent, making it available to all tools during that run.

Running the Agent with Secure Context Injection

Run the agent and pass the sensitive data object as the context option to run(). The SDK will make this context available to your tool functions while keeping it hidden from the LLM:
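
A sketch of the run call, assuming the travelGenie agent and userInfo object defined above:

```typescript
import { run } from '@openai/agents';

// The context option is delivered to your tools only; it is never
// serialized into the model's prompt.
const result = await run(
  travelGenie,
  'Book me a room at Grand Plaza Hotel',
  { context: userInfo },
);
```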

When you execute this code, the agent will process the user's request and, when it needs to use the sensitive data, the SDK will inject it directly into the bookHotel tool function via the context parameter. At no point does the LLM have access to the sensitive context—only your tool function can see and use this information. This approach keeps private data secure while still allowing your agent to perform complex, real-world tasks.

Here's an example of the debug output produced by the console.log call in your tool function (using the log format from the sketch above). This confirms that the sensitive data is being accessed securely inside your trusted code, and not exposed to the LLM:
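
```
[bookHotel] Booking for Alice Smith, passport P123456789
```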

This output demonstrates that the sensitive context is available exclusively within your tool function, even though it was never included in the user's message input.

Printing the Final Output

After the agent completes its run, you can print the final output returned by the agent. This output is generated by the agent using the results from your tool function, but without ever exposing the sensitive data to the LLM:
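
Continuing the sketch above, where result is the value returned by run():

```typescript
// finalOutput holds the agent's final response once the run completes.
console.log(result.finalOutput);
```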

The final output might look like this:
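
```
Your room at Grand Plaza Hotel has been booked. The booking is confirmed for Alice Smith. Enjoy your stay!
```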

This shows that your agent can use sensitive data to complete real-world tasks, while keeping that data secure and hidden from the LLM at all times.

Examining the Conversation Flow

After the agent completes its run, you can also examine the complete conversation flow to see exactly what data was exchanged between the user, the LLM, and your tools. This helps you verify that sensitive data never appears in the conversation history:
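
A sketch of how to dump the run's items, assuming the run result exposes them on a history property (check your SDK version's RunResult type if it does not):

```typescript
// history contains the items exchanged during the run: the user's message,
// the model's function call, the tool result, and the final reply.
console.log(JSON.stringify(result.history, null, 2));
```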

When you run this code, you'll see output along these lines (abridged and simplified; the exact item fields depend on the SDK version):
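
```json
[
  {
    "type": "message",
    "role": "user",
    "content": "Book me a room at Grand Plaza Hotel"
  },
  {
    "type": "function_call",
    "name": "bookHotel",
    "arguments": "{\"hotelName\":\"Grand Plaza Hotel\"}"
  },
  {
    "type": "function_call_result",
    "name": "bookHotel",
    "output": "Booking confirmed for Alice Smith at Grand Plaza Hotel."
  },
  {
    "type": "message",
    "role": "assistant",
    "content": "Your room at Grand Plaza Hotel has been booked. The booking is confirmed for Alice Smith. Enjoy your stay!"
  }
]
```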

This conversation flow reveals several important security aspects:

  1. The user's original request contains no sensitive data—just the hotel name request.

  2. The function call arguments show only {"hotelName":"Grand Plaza Hotel"}. Notice that the sensitive passport number (P123456789) is completely absent from the function call arguments that the LLM generated.

  3. The function output contains the user's name (Alice Smith) because your tool function chose to include it in the return value, but the sensitive passport number remains hidden.

Summary & Preparation for Practice Exercises

In this lesson, you learned how to securely inject sensitive data into your agent's runtime using the context mechanism in the OpenAI Agents SDK for TypeScript. You saw why it is important to keep sensitive data hidden from the LLM, and how to use interfaces to structure private information. By passing context to the run() function and accessing it in your tool's execute method, you can ensure that sensitive data remains completely separate from the LLM while still being available for trusted code.

In the upcoming practice exercises, you will get hands-on experience defining your own context interfaces, building tools that use sensitive data, and running agents with secure context injection. This is an important step toward building real-world AI applications that respect user privacy and comply with security best practices. Well done for reaching this advanced topic—your skills are growing, and you are on your way to becoming an expert in secure AI agent development!
