Welcome back! In the previous lessons, you learned how to make your OpenAI agents in TypeScript more powerful by integrating hosted tools, creating your own custom function tools, and even turning agents themselves into callable tools for modular workflows. Each of these steps has helped you build agents that are more flexible, maintainable, and capable of handling complex tasks.
Today, you'll take another important step: connecting your agent to external tools and data sources using the Model Context Protocol (MCP). This lesson will show you how to link your agent to an MCP server, discover its available tools, and use them in your workflows. By the end of this lesson, you'll know how to connect to both local and remote MCP servers using different transport mechanisms, and how to enable tool list caching for better performance. This will open up even more possibilities for your agents, allowing them to interact with a wide range of external systems.
MCP stands for Model Context Protocol. It's an open standard designed to make it simple and efficient for AI agents — like the ones you're building — to connect with external tools, data sources, and services. Instead of having to write custom integration code for every new tool or service, MCP provides a universal way for agents to discover, access, and use any tool that's made available by an MCP-compatible server.
An MCP server acts as a hub that "advertises" the tools it offers — these could be anything from booking tickets and checking the weather to sending emails, searching databases, or running custom business logic. When your agent connects to an MCP server, it can automatically retrieve a list of all available tools and interact with them as needed to fulfill user requests. This means you can easily extend your agent's capabilities by simply connecting it to different MCP servers.
MCP is designed to be flexible and secure, supporting both local and remote connections, and is already being adopted by major AI platforms and tool providers. By using MCP, you can build agents that are modular, maintainable, and ready to interact with a growing ecosystem of external services.
MCP supports two main ways for your agent to connect to a server: Stdio and Streamable HTTP.
- Stdio (Standard Input/Output) is used when your MCP server is running on the same machine as your agent. It's fast and simple, making it a great choice for local development or when you need low latency. In TypeScript, you can use the MCPServerStdio class to connect to a local MCP server by launching it as a subprocess.
- Streamable HTTP is used for connecting to remote MCP servers over the network. It provides unified, bidirectional, and resumable communication over a single HTTP endpoint, making it suitable for cloud-based services or distributed systems. In TypeScript, you can use the MCPServerStreamableHttp class to connect to a remote MCP server via HTTP.
In short, use MCPServerStdio for local, low-latency connections and MCPServerStreamableHttp for remote, real-time streaming over HTTP. The OpenAI Agents SDK makes it easy to use either method, depending on your needs.
Suppose you have written an MCP server in TypeScript called mcp-server.ts, and it provides tools to book museum visits. You want your agent to be able to use these tools by running the server as a subprocess and connecting to it locally. The easiest way to do this is with the MCPServerStdio class from the OpenAI Agents SDK.
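For context, a museum-booking server like mcp-server.ts might look roughly like the sketch below, built with the official @modelcontextprotocol/sdk package. The book_museum_visit tool name and its parameters are illustrative assumptions, not the lesson's exact server:

```typescript
// mcp-server.ts — a rough, illustrative sketch of a museum-booking MCP server
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

const server = new McpServer({ name: 'museum-booking', version: '1.0.0' });

// Advertise a single tool that books museum tickets (hypothetical example tool).
server.tool(
  'book_museum_visit',
  'Book tickets for a museum visit',
  {
    museum: z.string().describe('Name of the museum'),
    date: z.string().describe('Visit date, e.g. 2025-06-10'),
    tickets: z.number().int().positive().describe('Number of tickets'),
  },
  async ({ museum, date, tickets }) => ({
    content: [
      { type: 'text', text: `Booked ${tickets} ticket(s) to the ${museum} on ${date}.` },
    ],
  })
);

// Serve the tool over standard input/output so the agent can run it as a subprocess.
await server.connect(new StdioServerTransport());
```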
You should use a try/finally block to manage the connection. This ensures the connection to your MCP server is opened and closed safely, and that everything is cleaned up when you're done — even if something goes wrong.
Here's how you can set it up:
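The snippet below is a minimal sketch using the MCPServerStdio class from the @openai/agents package; the server label and the cacheToolsList option are assumptions based on the current SDK, so adapt them to your own setup:

```typescript
import { MCPServerStdio } from '@openai/agents';

// Launch the local MCP server as a subprocess and connect to it over stdio.
const mcpServer = new MCPServerStdio({
  name: 'Museum Booking Server (stdio)', // a label for this connection
  fullCommand: 'npx tsx mcp-server.ts', // how the SDK should start the server
  cacheToolsList: true, // cache the tool list so it isn't re-fetched on every run
});

await mcpServer.connect();

try {
  // Create and run your agent here, while the connection is active
  // (see the following sections for the agent setup and run call).
} finally {
  // Always close the connection, even if an error occurs above.
  await mcpServer.close();
}
```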
What do these parameters mean?
- name: This is just a label for your connection. It helps you identify which server you're talking to, especially if you connect to more than one.
- fullCommand: This tells the SDK how to start your MCP server. In this example, it runs npx tsx mcp-server.ts as a subprocess.
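- cacheToolsList: Shown in the sketch above, this optional setting asks the SDK to cache the server's tool list instead of re-fetching it on every run — the tool list caching mentioned earlier. The exact option name here follows the current @openai/agents SDK, so check your version's documentation.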
Let's say your MCP server for booking museum visits is running on another machine or in the cloud, and you want your agent to connect to it over the network. In this case, you can use the MCPServerStreamableHttp class from the OpenAI Agents SDK, which connects using a streamable HTTP endpoint.
Just like with the local connection, you should use a try/finally block to manage the connection. This ensures the connection is safely opened and closed, and resources are cleaned up automatically.
Here's how you can set it up:
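Here is a minimal sketch using MCPServerStreamableHttp; the URL is a placeholder for wherever your museum-booking server is actually hosted:

```typescript
import { MCPServerStreamableHttp } from '@openai/agents';

// Connect to a remote MCP server over its streamable HTTP endpoint.
const mcpServer = new MCPServerStreamableHttp({
  name: 'Museum Booking Server (remote)', // a label for this connection
  url: 'https://example.com/mcp', // placeholder endpoint for the remote server
  cacheToolsList: true, // cache the tool list for better performance
});

await mcpServer.connect();

try {
  // Create and run your agent here, while the connection is active.
} finally {
  // Always close the connection, even if an error occurs above.
  await mcpServer.close();
}
```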
What's different here?
- Instead of launching a subprocess, you provide the URL of the remote MCP server's HTTP endpoint in the url property.
- The rest of the setup is the same: you give your connection a name and can enable tool list caching for better performance.
This approach lets your agent use tools from an MCP server running anywhere on your network or in the cloud, just as easily as if it were running locally.
After you've confirmed that your MCP server is accessible and you know which tools it provides, you can connect your agent to the server. To do this, pass the connection object (such as mcpServer) to your agent by setting the mcpServers property to an array containing your connection object when you create the agent. This tells the agent which MCP server (and its tools) it should use.
Here's how you can provide the MCP server to your agent:
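A minimal sketch, assuming the mcpServer connection object from one of the earlier snippets is already connected; the agent's name and instructions are illustrative placeholders:

```typescript
import { Agent } from '@openai/agents';

// Inside the try block, while the MCP connection is still active:
const agent = new Agent({
  name: 'Museum Booking Agent',
  instructions:
    'Help users plan museum visits and use the available tools to book tickets on their behalf.',
  mcpServers: [mcpServer], // the agent discovers this server's tools automatically
});
```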
With this setup, your agent will be able to automatically discover and use any tools available from the connected MCP server, as well as any other tools you provide directly to the agent. This makes it easy to extend your agent's capabilities and combine multiple sources of functionality as your needs evolve.
After configuring your agent with the MCP server, you can run it and see how it uses the tools provided by the server to complete a user request. For example, to book 3 tickets to the Louvre for June 10, 2025, you can use the following code:
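A minimal sketch using the SDK's run helper; it assumes the agent and mcpServer objects from the previous snippets and, like them, belongs inside the try block:

```typescript
import { run } from '@openai/agents';

// Ask the agent to fulfill the booking request; it decides which MCP tool to call.
const result = await run(
  agent,
  'Please book 3 tickets to the Louvre for June 10, 2025.'
);

// Print the agent's final, user-facing answer.
console.log(result.finalOutput);
```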
When you run this code, you'll see output like the following:
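The exact wording varies from run to run, but the final output will be a confirmation along these lines (illustrative only):

```text
Your visit is booked! I've reserved 3 tickets to the Louvre for June 10, 2025. Enjoy the museum!
```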
This shows that the agent successfully discovered and used the book_museum_visit tool from the MCP server to fulfill the user's request, providing a clear and user-friendly confirmation message.
It's important to note that all of this happens within the try/finally block. Performing all interactions with the MCP server — including creating and running your agent — inside this block ensures that the connection is properly established before use and safely closed afterward. If you try to use the MCP server or the agent outside of this block, the connection will no longer be active and you may encounter errors. Always keep your agent's operations and any direct server interactions within the managed context to guarantee reliable, safe, and bug-free execution.
In this lesson, you learned how to connect your OpenAI agent to external MCP servers using both Stdio and Streamable HTTP transport mechanisms in TypeScript. You saw how MCP provides a standardized, secure way to integrate external tools and data sources, and how the OpenAI Agents SDK makes it easy to discover and use these tools in your agent workflows. You also learned how to enable tool list caching for better performance and how to inspect your agent's execution flow to understand its decision-making process.
You are now ready to practice these skills in the hands-on exercises that follow. Keep experimenting with different MCP servers and transport options to see how they can extend your agent's abilities. You are making great progress — each lesson brings you closer to building truly powerful and flexible AI systems!
