Welcome! Now that you’ve built your MCP server and exposed your shopping list tools, it’s time to make them available to an OpenAI agent. In this lesson, you’ll learn how to connect your MCP server to an agent using both local (`stdio`) and remote (`SSE`) transports, how to provide the server to the agent, and how to test that the integration works as expected.
By the end of this lesson, you’ll be able to:
- Connect an OpenAI agent to your MCP server using both `stdio` and `SSE` transports.
- Provide the MCP server to the agent so it can discover and use your tools.
- Run and test the integration, verifying that the agent can answer queries using your shopping list service.
Let’s walk through each step in detail.
The simplest way to connect your MCP server to an agent is by using the `stdio` transport. This is ideal for local development, where your server runs as a subprocess on the same machine as your agent. Communication happens over standard input and output, making it fast and easy to set up.
Here’s how you can launch your MCP server and connect to it via `stdio` using the OpenAI Agents SDK:
- The `command` and `args` specify how to launch your MCP server script.
- The `MCPServerStdio` context manager handles starting and stopping the server process for you.
This setup is perfect for development and testing on your own machine.
If your MCP server is running remotely—perhaps on another machine or in the cloud—you’ll want to use the `SSE` (Server-Sent Events) transport. This allows the agent to communicate with your server over HTTP, making it suitable for distributed or production environments.
Here’s how to connect using SSE:
- Replace the URL with the address of your running MCP server.
- The `MCPServerSse` context manager manages the HTTP connection for you.
This approach is great for connecting to servers that are not running on your local machine.
Once you have an `mcp_server` object—whether from `MCPServerStdio` or `MCPServerSse`—you can provide it to your agent. The agent will automatically discover all the tools your server exposes, read their documentation and input schemas, and use them to answer user queries.
Here’s a complete example using the `stdio` transport:
- The `mcp_servers` argument is a list, so you can provide one or more MCP server connections.
- The agent will aggregate all available tools from the connected servers.
- When you run the script, the agent connects to your MCP server, discovers the tools, and uses them to answer the query.
You can use the same approach with `MCPServerSse` by swapping out the context manager.
When you provide the MCP server to the agent, the agent automatically connects and fetches all available tools. It reads the documentation and input schemas you defined with the `@mcp.tool()` decorator. This means the agent knows what each tool does and how to use it—no extra programming required.
For example:
- If you ask, “Give me my shopping list”, the agent will recognize it can use the `fetch_items` tool.
- If you say, “Add 3 bananas to my shopping list”, the agent will use the `add_item` tool with the correct parameters.
This automatic discovery and aggregation of tools is what makes MCP integration so powerful. The agent can flexibly use any tool you expose, based on the user’s request.
In this lesson, you learned how to connect an OpenAI agent to your MCP server using both `stdio` and `SSE` transports. You saw how to provide the MCP server to the agent, allowing it to automatically discover and use your shopping list tools in response to natural language queries. You also learned how to run and test the integration, verifying that your tools are accessible to the agent.
You’re now ready to practice these skills by building and testing your own agent-server integrations. This is a major step forward—your tools are now available to intelligent agents that can use them in flexible, conversational ways. In the next exercises, you’ll get hands-on experience with these integrations and deepen your understanding even further.
