Welcome back! In the previous lessons, you learned how to make your OpenAI agents more powerful by integrating hosted tools, creating your own custom function tools, and even turning agents themselves into callable tools for modular workflows. Each of these steps has helped you build agents that are more flexible, maintainable, and capable of handling complex tasks.
Today, you will take another important step: connecting your agent to external tools and data sources using the Model Context Protocol (MCP). This lesson will show you how to link your agent to an MCP server, discover its available tools, and use them in your workflows. By the end of this lesson, you will know how to connect to both local and remote MCP servers using different transport mechanisms, and how to enable tool list caching for better performance. This will open up even more possibilities for your agents, allowing them to interact with a wide range of external systems.
MCP stands for Model Context Protocol. It is an open standard developed to make it simple and efficient for AI agents—like the ones you’re building—to connect with external tools, data sources, and services. Instead of having to write custom integration code for every new tool or service, MCP provides a universal way for agents to discover, access, and use any tool that is made available by an MCP-compatible server.
An MCP server acts as a hub that “advertises” the tools it offers—these could be anything from booking tickets, checking the weather, sending emails, searching databases, or even running custom business logic. When your agent connects to an MCP server, it can automatically retrieve a list of all available tools and interact with them as needed to fulfill user requests. This means you can easily extend your agent’s capabilities by simply connecting it to different MCP servers.
MCP is designed to be flexible and secure, supporting both local and remote connections, and is already being adopted by major AI platforms and tool providers. By using MCP, you can build agents that are modular, maintainable, and ready to interact with a growing ecosystem of external services.
If you want to dive deeper—understanding how MCP works, creating your own MCP server, and integrating it with OpenAI Agents—explore our dedicated MCP learning path.
MCP supports two main ways for your agent to connect to a server: Stdio and SSE:
- Stdio (Standard Input/Output) is used when your MCP server is running on the same machine as your agent. It’s fast and simple, making it a great choice for local development or when you need low latency.
- SSE (Server-Sent Events) is used for connecting to remote MCP servers over the network. It allows your agent to receive real-time updates from the server, which is useful for cloud-based services or distributed systems.
In short, use Stdio for local, low-latency connections and SSE for remote, real-time streaming. The OpenAI Agents SDK makes it easy to use either method, depending on your needs.

A new transport mechanism called Streamable HTTP is also emerging in the MCP ecosystem. Streamable HTTP offers unified, bidirectional, and resumable communication over a single HTTP endpoint, addressing some of the limitations of SSE and providing more flexibility for advanced use cases. However, since SSE and Stdio are currently the most widely supported and stable options in both the OpenAI Agents SDK and most MCP servers, this lesson will focus on these two mechanisms. As Streamable HTTP matures and gains broader adoption, it will likely become a standard choice for future integrations.
Suppose you have written an MCP server in Python called `mcp_server.py`, and it provides tools to book museum visits. You want your agent to be able to use these tools by running the server as a subprocess and connecting to it locally. The easiest way to do this is with the `MCPServerStdio` class from the OpenAI Agents SDK.
You should use an `async with` block to manage the connection. This makes sure the connection to your MCP server is opened and closed safely, and that everything is cleaned up when you’re done, even if something goes wrong.
Here’s how you can set it up:
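Here is a minimal sketch of that setup. It assumes the `MCPServerStdio` class from the SDK's `agents.mcp` module and a local `mcp_server.py` script; the server label is an arbitrary name of your choosing, and the snippet requires a working MCP server to actually run:

```python
import asyncio

from agents.mcp import MCPServerStdio


async def main():
    # Launch mcp_server.py as a subprocess and talk to it over stdin/stdout.
    async with MCPServerStdio(
        params={"command": "python", "args": ["mcp_server.py"]},
        name="Museum Booking Server",  # a label for this connection
        cache_tools_list=True,         # cache the tool list between lookups
    ) as mcp_server:
        # The connection is open only inside this block: create and run
        # your agent here so it can use the server's tools.
        ...


asyncio.run(main())
```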
What do these parameters mean?

- `params`: tells Python how to start your MCP server. In this example, it runs `python mcp_server.py` as a subprocess.
- `name`: a label for your connection. It helps you identify which server you’re talking to, especially if you connect to more than one.
- `cache_tools_list`: if you set this to `True`, your agent will remember the list of tools the server provides. This makes repeated lookups much faster, because it doesn’t have to ask the server for the list every time.
Why use `async with`?

Using `async with` is the recommended way to manage connections in modern Python. It makes sure the connection to your MCP server is properly opened and closed, even if your code hits an error or you stop it early. This helps prevent bugs and keeps your program running smoothly.
Let’s say your MCP server for booking museum visits is running on another machine or in the cloud, and you want your agent to connect to it over the network. In this case, you can use the `MCPServerSse` class from the OpenAI Agents SDK, which connects using Server-Sent Events (SSE).
Just like with the local connection, you should use an `async with` block to manage the connection. This ensures the connection is safely opened and closed, and resources are cleaned up automatically.
Here’s how you can set it up:
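A sketch of the remote setup, assuming the `MCPServerSse` class from `agents.mcp`; the URL below is a placeholder, so substitute your server's real SSE endpoint:

```python
import asyncio

from agents.mcp import MCPServerSse


async def main():
    # Connect to a remote MCP server over Server-Sent Events.
    async with MCPServerSse(
        params={"url": "https://example.com/sse"},  # placeholder endpoint
        name="Museum Booking Server",
        cache_tools_list=True,
    ) as mcp_server:
        # Use the server's tools here, exactly as with the Stdio version.
        ...


asyncio.run(main())
```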
What’s different here?
- Instead of launching a subprocess, you provide the URL of the remote MCP server’s SSE endpoint in the `params` dictionary.
- The rest of the setup is the same: you give your connection a name and can enable tool list caching for better performance.
This approach lets your agent use tools from an MCP server running anywhere on your network or in the cloud, just as easily as if it were running locally.
Once you have established a connection to your MCP server (for example, using an `async with` block), you can interact with it directly before involving your agent. One useful thing you can do is check which tools the server provides by calling the `list_tools()` method on your connection object. This helps you see what capabilities are available and verify that your server is set up correctly.
Here’s how you can list the available tools:
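A sketch of tool discovery, assuming the same local `mcp_server.py` as before; the `name`, `description`, and `inputSchema` fields come from the MCP tool definition:

```python
import asyncio

from agents.mcp import MCPServerStdio


async def main():
    async with MCPServerStdio(
        params={"command": "python", "args": ["mcp_server.py"]},
        name="Museum Booking Server",
        cache_tools_list=True,
    ) as mcp_server:
        # Ask the server which tools it advertises.
        tools = await mcp_server.list_tools()
        for tool in tools:
            print(f"Name: {tool.name}")
            print(f"Description: {tool.description}")
            print(f"Input schema: {tool.inputSchema}")


asyncio.run(main())
```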
When you run this code, you’ll see output similar to the following, showing the names, descriptions, and input schemas for each tool provided by your MCP server:
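The exact output depends on the tools your server defines. For a booking server like the one described here, it might look something like this (illustrative only; the parameter names are hypothetical):

```
Name: book_museum_visit
Description: Book a museum visit for a given museum, number of visitors, and date.
Input schema: {'type': 'object', 'properties': {'museum_name': {'type': 'string'},
'num_visitors': {'type': 'integer'}, 'visit_date': {'type': 'string'}},
'required': ['museum_name', 'num_visitors', 'visit_date']}
```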
These tool details, such as the name, description, and input schema, are exactly what your agent will be able to see and use once you connect the MCP server to it, just like any other tool you add to the agent (such as function tools or hosted tools). For example, the `book_museum_visit` tool allows the agent to book a museum visit by providing the museum name, number of visitors, and the date of the visit. This information helps the agent understand what actions it can perform and how to use each tool correctly.
After you’ve confirmed that your MCP server is accessible and you know which tools it provides, you can connect your agent to the server. To do this, pass the connection object (such as `mcp_server`) to your agent by setting the `mcp_servers` parameter to a list containing your connection object when you create the agent. This tells the agent which MCP server (and its tools) it should use.
Here’s how you can provide the MCP server to your agent:
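A sketch of wiring the server into an agent; the agent name and instructions below are arbitrary examples, not required values:

```python
import asyncio

from agents import Agent
from agents.mcp import MCPServerStdio


async def main():
    async with MCPServerStdio(
        params={"command": "python", "args": ["mcp_server.py"]},
        name="Museum Booking Server",
        cache_tools_list=True,
    ) as mcp_server:
        agent = Agent(
            name="Museum Assistant",
            instructions="You help users book museum visits.",
            mcp_servers=[mcp_server],  # the agent discovers this server's tools
        )
        # Run the agent here, while the connection is still open.
        ...


asyncio.run(main())
```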
With this setup, your agent will be able to automatically discover and use any tools available from the connected MCP server, as well as any other tools you provide directly to the agent. This makes it easy to extend your agent’s capabilities and combine multiple sources of functionality as your needs evolve.
After configuring your agent with the MCP server, you can run it and see how it uses the tools provided by the server to complete a user request. For example, to book 3 tickets to the Louvre for June 10, 2025, you can use the following code:
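Putting it all together, a sketch of running the agent with `Runner.run` inside the connection block (again assuming the local `mcp_server.py` and illustrative agent instructions):

```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio


async def main():
    async with MCPServerStdio(
        params={"command": "python", "args": ["mcp_server.py"]},
        name="Museum Booking Server",
        cache_tools_list=True,
    ) as mcp_server:
        agent = Agent(
            name="Museum Assistant",
            instructions="You help users book museum visits.",
            mcp_servers=[mcp_server],
        )
        # The agent can call the server's booking tool to satisfy this request.
        result = await Runner.run(
            agent,
            "Book 3 tickets to the Louvre for June 10, 2025.",
        )
        print(result.final_output)


asyncio.run(main())
```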
When you run this code, you’ll see output like the following:
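The exact wording varies from run to run since the model composes the reply, but the confirmation might look something like this (illustrative only):

```
Your visit to the Louvre for 3 visitors on June 10, 2025 has been booked. Enjoy your trip!
```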
This shows that the agent successfully discovered and used the `book_museum_visit` tool from the MCP server to fulfill the user’s request, providing a clear and user-friendly confirmation message.
It’s important to note that all of this happens within the `async with` block. Performing all interactions with the MCP server, including creating and running your agent, inside the `async with` block ensures that the connection is properly established before use and safely closed afterward. If you try to use the MCP server or the agent outside of this block, the connection will no longer be active and you may encounter errors. Always keep your agent’s operations and any direct MCP server interactions within the `async with` context to guarantee reliable, safe, and bug-free execution.
The `async with` block is the recommended way to manage your connection to an MCP server, regardless of whether you are using SSE or Stdio as the transport mechanism. It automatically handles connecting to the server when you enter the block and disconnecting (cleaning up resources) when you exit, even if an error occurs. This makes your code safer and less error-prone, as you don't have to remember to manually connect or disconnect.
However, if you prefer not to use an `async with` block, you must handle connecting and disconnecting yourself using a try-except-finally pattern. This ensures proper cleanup even when errors occur:
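A sketch of the manual pattern, assuming the server classes expose `connect()` and `cleanup()` methods as in the SDK's MCP support; the agent details are the same illustrative placeholders as before:

```python
import asyncio

from agents import Agent, Runner
from agents.mcp import MCPServerStdio


async def main():
    server = MCPServerStdio(
        params={"command": "python", "args": ["mcp_server.py"]},
        name="Museum Booking Server",
        cache_tools_list=True,
    )
    try:
        await server.connect()  # open the connection explicitly
        agent = Agent(
            name="Museum Assistant",
            instructions="You help users book museum visits.",
            mcp_servers=[server],
        )
        result = await Runner.run(
            agent,
            "Book 3 tickets to the Louvre for June 10, 2025.",
        )
        print(result.final_output)
    except Exception as exc:
        # Handle connection or agent errors as appropriate for your app.
        print(f"Something went wrong: {exc}")
    finally:
        await server.cleanup()  # always release the connection


asyncio.run(main())
```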
The try-except-finally pattern ensures that:

- You attempt to connect to the server and run your agent code.
- If any errors occur, you can handle them appropriately.
- Most importantly, the `cleanup()` method is always called in the `finally` block, guaranteeing that resources are released properly.
If you forget to call `connect()`, your agent won't be able to use the MCP server's tools. If you forget to implement the `finally` block with `cleanup()`, you may leave open connections or other resources hanging, which can lead to bugs or resource leaks.
In summary:

- Using `async with` is the safest and most convenient way to manage MCP server connections, as it handles setup and cleanup for you automatically.
- If you don't use `async with`, implement a try-except-finally pattern to ensure proper resource management, especially for error cases.
In this lesson, you learned how to connect your OpenAI agent to external MCP servers using both Stdio and SSE transport mechanisms. You saw how MCP provides a standardized, secure way to integrate external tools and data sources, and how the OpenAI Agents SDK makes it easy to discover and use these tools in your agent workflows. You also learned how to enable tool list caching for better performance and how to inspect your agent’s execution flow to understand its decision-making process.
You are now ready to practice these skills in the hands-on exercises that follow. Keep experimenting with different MCP servers and transport options to see how they can extend your agent’s abilities. You are making great progress — each lesson brings you closer to building truly powerful and flexible AI systems!
