Welcome back! Now that you've built your MCP server and exposed your shopping list tools, it's time to make them available to an OpenAI agent. The OpenAI Agents SDK has built-in support for MCP tools, allowing you to seamlessly integrate any MCP server with your agents. In this lesson, you'll learn how the SDK uses MCP clients to connect to servers, and how to set up connections using both stdio and HTTP streaming transport.
By the end of this lesson, you'll be able to:
- Understand how the OpenAI Agents SDK integrates with MCP servers through clients
- Connect an OpenAI agent to your MCP server using both stdio and HTTP streaming transport
- Provide MCP servers to agents so they can discover and use your tools
- Run and test the integration, verifying that the agent can answer queries using your shopping list service
Let's walk through each step in detail.
The OpenAI Agents SDK includes built-in support for the Model Context Protocol (MCP). It achieves this by using MCP clients that connect to your MCP servers. These clients handle all the communication details, allowing your agent to:
- Discover available tools from connected MCP servers
- Read tool documentation and input schemas
- Execute tools in response to user queries
- Aggregate tools from multiple MCP servers
The SDK provides different client implementations depending on how your MCP server is running:
- `MCPServerStdio` for local processes communicating via standard input/output
- `MCPServerStreamableHttp` for servers accessible over HTTP
These clients abstract away the complexity of the MCP protocol, making it simple to integrate any MCP server with your agents.
When your MCP server is a local TypeScript file or executable, you can use the `MCPServerStdio` class. This approach spawns your server as a subprocess and communicates through standard input/output streams.
Here's how to connect using stdio:
Key parameters:
- `name`: A friendly name for debugging and logging
- `command`: The command to execute (in this case, `npx`)
- `args`: Arguments to pass to the command (`tsx server.ts` runs your TypeScript server)
This approach is perfect for development and testing, as it automatically manages the server lifecycle alongside your agent.
When your MCP server is running as a standalone HTTP service — whether locally or remotely — you'll use the `MCPServerStreamableHttp` class. This allows the agent to communicate with your server over HTTP, making it suitable for distributed or production environments.
Here's how to connect using HTTP streaming:
Key parameters:
- `name`: A friendly name for debugging and logging
- `url`: The HTTP endpoint where your MCP server is listening
This approach gives you more flexibility in deployment, as your MCP server can run anywhere accessible via HTTP.
Once you have an MCP client connected to your server, you can provide it to your agent through the `mcpServers` property. The agent will automatically discover all tools exposed by your connected servers.
Here's a complete example using HTTP streaming:
When you run this code, you'll see an output like:
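The model's exact wording varies between runs; an illustrative response (not captured from a real run) might look like:

```text
I've added all the ingredients for chocolate chip cookies to your shopping list:

1. All-purpose flour (2 1/4 cups) - not purchased
2. Baking soda (1 tsp) - not purchased
3. Butter (1 cup) - not purchased
4. Sugar (3/4 cup) - not purchased
5. Brown sugar (3/4 cup) - not purchased
6. Eggs (2) - not purchased
7. Vanilla extract (1 tsp) - not purchased
8. Chocolate chips (2 cups) - not purchased
```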
Notice how the agent understood your request and automatically made multiple tool calls — it added each ingredient individually using `add_item`, then called `get_items` to show the complete list. The agent knew which ingredients were needed for chocolate chip cookies without being explicitly told, and formatted everything into a helpful response showing quantities and purchase status. This seamless interaction demonstrates the power of MCP integration: your agent discovered and used your tools automatically based on their schemas and documentation.
When you provide MCP servers to the agent, it automatically:
- Connects to each server and requests the list of available tools
- Reads the documentation and input schemas for each tool
- Aggregates all tools into a single tool set
- Uses these tools to respond to user queries
For example, when asked to "Add ingredients for chocolate chip cookies to my shopping list", the agent:
- Recognizes it needs to use the `add_item` tool multiple times
- Calls the tool with appropriate parameters for each ingredient
- Uses `get_items` to retrieve the full list
- Formats a helpful response showing all items
This automatic discovery and tool usage is what makes MCP integration so powerful — your agent can flexibly use any tool you expose without additional programming.
To understand how your agent uses MCP tools, we'll use a helper function that explores the execution history. The `printToolHistory()` function parses through the agent's history and displays each tool call in a readable format:
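The SDK's real history item types are richer than what's shown here; the sketch below assumes a minimal shape (`function_call` items carrying a `callId`, `name`, and JSON-encoded `arguments`, matched to `function_call_result` items by that id) and prints each completed call alongside its result.

```typescript
// Assumed, simplified shapes for items in the agent's run history.
// The field names (callId, arguments, output) are assumptions --
// adjust them to match the items your SDK version actually emits.
interface FunctionCallItem {
  type: 'function_call';
  status?: string;
  name: string;
  callId: string;
  arguments: string; // JSON-encoded tool arguments
}

interface FunctionResultItem {
  type: 'function_call_result';
  callId: string;
  output: { type: string; text?: string };
}

type HistoryItem = FunctionCallItem | FunctionResultItem | { type: string };

function printToolHistory(history: HistoryItem[]): void {
  for (const item of history) {
    // Only completed tool calls are of interest.
    if (item.type !== 'function_call') continue;
    const call = item as FunctionCallItem;
    if (call.status && call.status !== 'completed') continue;

    // Pretty-print the JSON-encoded arguments.
    const args = JSON.stringify(JSON.parse(call.arguments), null, 2);
    console.log(`Tool: ${call.name}`);
    console.log(`Arguments: ${args}`);

    // Match the call with its corresponding result via the shared call id.
    const result = history.find(
      (h): h is FunctionResultItem =>
        h.type === 'function_call_result' &&
        (h as FunctionResultItem).callId === call.callId,
    );
    console.log(`Result: ${result?.output.text ?? '(no result found)'}`);
    console.log('---');
  }
}
```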
This helper function:
- Iterates through the agent's execution history
- Identifies completed function calls (tool uses)
- Formats the arguments passed to each tool
- Matches each call with its corresponding result
- Displays everything in a clean, readable format
Calling `printToolHistory(result.history)` after running your agent produces output like:
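The precise formatting depends on the helper and on the run itself; an illustrative trace might look like:

```text
Tool: add_item
Arguments: { "name": "All-purpose flour", "quantity": "2 1/4 cups" }
Result: Added All-purpose flour to the shopping list
---
Tool: add_item
Arguments: { "name": "Chocolate chips", "quantity": "2 cups" }
Result: Added Chocolate chips to the shopping list
---
Tool: get_items
Arguments: {}
Result: [{"name":"All-purpose flour","purchased":false}, ...]
---
```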
In this lesson, you learned how the OpenAI Agents SDK supports MCP tools through built-in clients. You saw how to connect to MCP servers using both stdio (for local TypeScript files) and HTTP streaming (for standalone services). You learned how to provide these connections to your agent, enabling automatic tool discovery and usage.
You explored real examples of how an agent uses your tools — making multiple `add_item` calls to add ingredients, then using `get_items` to retrieve and display the full shopping list. You also learned how to use the `printToolHistory()` helper function to explore the agent's execution history, giving you deep insight into how your tools are being used.
You're now ready to practice these skills by building and testing your own agent-server integrations. This is a major milestone — your tools are now available to intelligent agents that can use them in flexible, conversational ways. In the next exercises, you'll get hands-on experience with these integrations and deepen your understanding of TypeScript-based MCP development.
