Introduction & Context

Welcome back! In the previous lesson, you learned how to build a basic MCP server and client using the stdio transport. This allowed you to connect two processes on the same machine and see how MCP enables communication between AI agents and external tools. As a quick reminder, stdio is great for local development and testing, but it is limited to one-to-one connections on the same device.

In this lesson, you will take the next step: exposing your MCP server over the network using the Streamable HTTP transport. This is a major milestone because it allows your server to be accessed remotely by clients running on different machines or even in the cloud. By the end of this lesson, you will know how to set up an MCP server that listens for HTTP requests and how to connect to it using an MCP client. You will also learn how to interpret the output and troubleshoot common issues. This will prepare you for real-world scenarios where remote connectivity is essential.

Streamable HTTP Transport

Let’s break down what Streamable HTTP transport means in simple terms.

In the last lesson, you used stdio transport, which lets two programs on the same computer talk to each other by sending messages back and forth through their input and output streams. This is great for testing, but it only works if both programs are running on the same machine, and only one client can connect to the server at a time.

Streamable HTTP transport is a way for your MCP server and client to talk to each other over the internet or a local network, using the same technology that powers websites: HTTP. With HTTP, your server can listen for requests from any computer, not just the one it’s running on. This means you can have clients and servers running on different machines, or even in the cloud.

The “streamable” part means that, in addition to sending a single response, the server can also send a stream of messages back to the client if needed (for example, to provide progress updates or results as they become available). This is similar to how some websites can update information in real time without you having to refresh the page.

Here’s a simple comparison:

  • stdio transport: Only works locally, one client at a time, good for testing.
  • Streamable HTTP transport: Works over the network, supports many clients, and can send multiple messages in a single connection if needed.

Most real-world MCP servers use HTTP transport because it’s flexible, widely supported, and allows remote access. That’s why learning how to use Streamable HTTP is an important step in building practical MCP applications.

Why We Need a Web Server Framework

Now that you understand the benefits of Streamable HTTP transport, you might be wondering: "How exactly do we implement this? Can the McpServer or StreamableHTTPServerTransport classes handle HTTP requests directly?"

The answer is no, and here's why: The McpServer class is designed to process MCP messages and manage the protocol logic, while the StreamableHTTPServerTransport class handles the transport layer, converting MCP messages to and from HTTP format. However, neither of these classes can actually listen for incoming HTTP requests or run a web server on their own.

Think of it this way: the MCP classes know how to speak the MCP protocol, but they need something else to handle the underlying HTTP communication. This is where a web server framework comes in. A web server framework provides the infrastructure to:

  • Listen for incoming HTTP requests on a specific port
  • Route requests to the appropriate endpoints (like /mcp)
  • Handle the HTTP request/response cycle
  • Manage multiple concurrent connections

The StreamableHTTPServerTransport is designed to work with an existing HTTP server, not replace it. It expects to receive HTTP request and response objects that have already been processed by a web server framework.

For this lesson, we'll use Express.js as our web server framework because it's lightweight, well-documented, and perfect for this use case. Express will handle the HTTP server functionality, while our MCP classes will handle the MCP protocol logic.

Setting Up Express as the Web Server

Let's start by creating the foundation for our HTTP-based MCP server. We'll use Express to create a web server that can listen for HTTP requests. Here's the basic setup:
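Below is a minimal sketch of what this setup could look like, assuming the express package is installed (the port and startup log message are the ones referenced later in this lesson):

```typescript
import express from "express";

const app = express();

// Parse JSON request bodies; MCP messages arrive as JSON-RPC payloads
app.use(express.json());

const PORT = 3000;
app.listen(PORT, () => {
  console.log(`MCP server running at http://localhost:${PORT}/mcp`);
});
```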

This code creates an Express web server that listens on port 3000. The express.json() middleware is important because MCP messages are sent as JSON, and this tells Express to automatically parse JSON data from incoming requests.

Creating the MCP Endpoint Handler

Now we need to add a route handler that will process MCP requests. We'll create an endpoint at /mcp that accepts POST requests:
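Here is a sketch of the route skeleton; the handler body will be filled in over the next few steps:

```typescript
// MCP endpoint: clients POST their JSON-RPC messages to /mcp
app.post("/mcp", async (req, res) => {
  // MCP server and transport handling is added in the next steps
});
```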

This creates a route that listens for POST requests to the /mcp path. We use POST requests because MCP clients send JSON-RPC messages (like tool calls, resource requests, and other method invocations) in the request body, which is the standard way to send structured data to a server. When a client sends an MCP message, it will arrive as a POST request to this endpoint.

Creating MCP Server and Transport (Stateless Mode)

Now comes the key part: integrating the MCP server and transport within our Express route handler. In this lesson, we'll use stateless mode, which means we create a new server and transport for each request. This ensures complete isolation between different client requests:
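Here is a sketch of how that could look inside the route handler. The import paths follow the TypeScript MCP SDK's layout and may differ slightly between SDK versions; the server name and version are illustrative:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

app.post("/mcp", async (req, res) => {
  // Stateless mode: create a fresh MCP server and transport for every request
  const server = new McpServer({
    name: "example-http-server", // illustrative name
    version: "1.0.0",
  });

  const transport = new StreamableHTTPServerTransport({
    // undefined generator = stateless mode (no session IDs, no shared state)
    sessionIdGenerator: undefined,
  });

  // ...connecting the server and transport comes next
});
```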

The sessionIdGenerator: undefined parameter tells the transport to operate in stateless mode, where each request is handled independently without maintaining session state between requests.

Connecting Server and Transport

Next, we need to connect the MCP server and transport so they can work together to process MCP messages:
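These lines go inside the /mcp handler, right after the server and transport are created (a sketch; the log format mirrors the server output shown later in this lesson):

```typescript
  // Log which MCP method this request carries so we can trace the flow
  console.log(`MCP server ready to handle: ${req.body?.method}`);

  // Link the MCP server to the transport, then let the transport process
  // the HTTP request and write the response
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
```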

The server.connect(transport) call links the server and transport together. Then transport.handleRequest() processes the actual MCP message from the HTTP request and sends the response back to the client. The logging statement helps us understand what's happening when we run our client and server together: we'll see exactly which methods are being processed.

Adding Error Handling and Cleanup

Finally, we need to handle errors and clean up resources when the connection closes:
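One way to add this, wrapping the connect/handle calls from the previous step (a sketch; -32603 is the generic JSON-RPC internal-error code):

```typescript
  // Clean up the per-request server and transport when the connection ends
  res.on("close", () => {
    console.log("Closing transport and server");
    transport.close();
    server.close();
  });

  try {
    await server.connect(transport);
    await transport.handleRequest(req, res, req.body);
  } catch (error) {
    console.error("Error handling MCP request:", error);
    if (!res.headersSent) {
      // Reply with a JSON-RPC error object so the client can interpret it
      res.status(500).json({
        jsonrpc: "2.0",
        error: { code: -32603, message: "Internal server error" },
        id: null,
      });
    }
  }
```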

The res.on("close") event handler ensures that we clean up the server and transport when the HTTP connection ends. The error handling sends a proper JSON-RPC error response if something goes wrong, following the MCP protocol standards.

Complete Server Code

Here's how all the pieces fit together in the complete server.ts file:
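Putting the previous snippets together, server.ts could look roughly like this (a sketch, using the import paths and names assumed above):

```typescript
// server.ts: stateless MCP server over Streamable HTTP (sketch)
import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = express();
app.use(express.json()); // MCP messages arrive as JSON-RPC in the request body

app.post("/mcp", async (req, res) => {
  // Stateless mode: fresh server and transport for every request
  const server = new McpServer({ name: "example-http-server", version: "1.0.0" });
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined, // no session state between requests
  });

  console.log(`MCP server ready to handle: ${req.body?.method}`);

  // Release per-request resources once the HTTP connection closes
  res.on("close", () => {
    console.log("Closing transport and server");
    transport.close();
    server.close();
  });

  try {
    await server.connect(transport);
    await transport.handleRequest(req, res, req.body);
  } catch (error) {
    console.error("Error handling MCP request:", error);
    if (!res.headersSent) {
      res.status(500).json({
        jsonrpc: "2.0",
        error: { code: -32603, message: "Internal server error" },
        id: null,
      });
    }
  }
});

const PORT = 3000;
app.listen(PORT, () => {
  console.log(`MCP server running at http://localhost:${PORT}/mcp`);
});
```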

When you run this server, you will see output like:
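Assuming the startup log from the sketch above, the terminal shows something like:

```
MCP server running at http://localhost:3000/mcp
```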

This means your MCP server is now accessible over HTTP and ready to handle remote connections from MCP clients.

Understanding MCP Streamable HTTP Communication

The MCP Streamable HTTP transport uses standard HTTP methods to handle communication between clients and servers. Here's how it works:

Client-to-Server Communication (POST Requests): When an MCP client wants to send a message to the server—such as calling a tool, requesting a resource, or sending a ping—it sends an HTTP POST request to the server's MCP endpoint (like http://localhost:3000/mcp). The request body contains a JSON-RPC message, which may be a single object or an array for batched requests. This is the primary way clients initiate communication with servers.

Server-to-Client Communication (Responses and Streaming): When the server receives a POST request, it has two response options:

  1. For most simple cases, it responds with a standard JSON response (Content-Type: application/json).
  2. If the request or session requires sending multiple messages (like progress updates or streamed results), the server can respond with a Server-Sent Events (SSE) stream (Content-Type: text/event-stream). This allows the server to send multiple messages over the same HTTP connection.

Additionally, clients may open a persistent GET request to the MCP endpoint to receive asynchronous SSE messages from the server, enabling the server to push notifications or updates independently of any particular POST.

What This Means for Our Code: In our example, we're only handling POST requests. The client sends a POST with an MCP message, the server processes it, and responds with a single JSON object. We do not support SSE streaming or handle GET requests for server-initiated messages in this basic setup.

If you want your server to clearly communicate that streaming (GET/SSE) is not supported, you can add a GET handler at the /mcp endpoint that returns a “Method Not Allowed” error:
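A sketch of such a handler (405 is the HTTP status for Method Not Allowed; the JSON-RPC error code shown is a generic server-error value):

```typescript
// Reject GET requests: this stateless server does not expose an SSE stream
app.get("/mcp", (req, res) => {
  res.status(405).json({
    jsonrpc: "2.0",
    error: { code: -32000, message: "Method Not Allowed" },
    id: null,
  });
});
```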

Building and Connecting the MCP Client for HTTP

Next, let's see how to build a client that connects to your MCP server over HTTP. Here is the code for client.ts:
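Here is a sketch of the client, with the same caveat that SDK import paths may vary by version; the client name is illustrative, and the log messages match the output discussed below:

```typescript
// client.ts: connects to the MCP server over Streamable HTTP (sketch)
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

async function main() {
  const client = new Client({ name: "example-http-client", version: "1.0.0" });

  // Point the transport at the server's MCP endpoint
  const transport = new StreamableHTTPClientTransport(
    new URL("http://localhost:3000/mcp")
  );

  await client.connect(transport);
  console.log("Connected to MCP server using Streamable HTTP transport");

  // A successful ping resolves to an empty object
  const result = await client.ping();
  console.log("Server ping result:", result);

  await client.close();
  console.log("Disconnected from the MCP server");
}

main().catch((error) => {
  console.error("Client error:", error);
  process.exit(1);
});
```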

In this example, the client is configured to connect to the MCP server at http://localhost:3000/mcp. The StreamableHTTPClientTransport handles all the details of sending and receiving MCP messages over HTTP. When you call client.connect(transport), the client establishes a connection to the server. You can then send requests, such as client.ping(), and receive responses. The client logs each step, so you can see when the connection is established, when the ping result is received, and when the connection is closed.

Analyzing the Communication Flow and Output

When you run the server and then the client, you will see output in your terminal that demonstrates the complete MCP communication cycle. Let's examine the expected output and understand what each message represents.

When you run the client, you'll see the following messages in your terminal:
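With the logging calls from the client sketch above, the output looks like:

```
Connected to MCP server using Streamable HTTP transport
Server ping result: {}
Disconnected from the MCP server
```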

What the client messages mean:

  1. Connection establishment: "Connected to MCP server using Streamable HTTP transport" - The client successfully establishes an HTTP connection to the server
  2. Ping execution: "Server ping result: {}" - The client receives a successful ping response (empty object is the expected format)
  3. Session cleanup: "Disconnected from the MCP server" - The client properly closes its connection

Meanwhile, the server will display these messages as it processes the requests:
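With the server sketch's logging in place, the server terminal shows something like:

```
MCP server running at http://localhost:3000/mcp
MCP server ready to handle: initialize
Closing transport and server
MCP server ready to handle: notifications/initialized
Closing transport and server
MCP server ready to handle: ping
Closing transport and server
```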

What the server messages mean:

  1. Server initialization: "MCP server running at http://localhost:3000/mcp" - The Express server starts and begins listening for HTTP requests
  2. Client introduction: "MCP server ready to handle: initialize" - The server receives the client's introduction message (like saying "Hello, I'm a client and here's my information")
  3. Confirmation received: "MCP server ready to handle: notifications/initialized" - The server receives the client's confirmation that it's ready to start working together (like saying "Got it, I'm ready!")
  4. Actual work: "MCP server ready to handle: ping" - The server processes the actual ping request from the client

You'll also notice "Closing transport and server" appears after each request. This happens because our server runs in stateless mode, creating fresh server and transport instances for each HTTP request and then cleaning up those resources when the request is complete.

Why Multiple Server Messages?

The server displays three distinct request-handling cycles, with resource cleanup happening after each one. This occurs because we configured our server to operate in stateless mode, meaning it spins up fresh server and transport instances for every incoming HTTP request.

Before the client and server can work together, they need to go through an introduction process (like two people meeting for the first time). The client first introduces itself ("initialize"), then confirms it's ready to work ("initialized notification"), and only then can it make actual requests like "ping". Since each request is handled independently in our setup, each message in this introduction sequence arrives as its own HTTP request, with resource cleanup after each one.

This output confirms that your client and server are communicating correctly over HTTP and following the proper MCP introduction process with appropriate resource management.

Summary & Next Steps

In this lesson, you learned how to extend your MCP server to support remote connectivity using Streamable HTTP transport. You saw how to set up an Express-based MCP server that listens for HTTP requests and how to build a client that connects to it over the network. You also learned how to interpret the output from both the client and server, and how to troubleshoot common connectivity issues.

This is a significant step forward from the previous lesson, where you worked with local stdio transport. Now, your MCP server can be accessed remotely, opening up many new possibilities for integration and deployment.

In the next section, you will get hands-on practice with these concepts. You will run your own MCP server and client using HTTP transport, experiment with sending requests, and explore how remote connectivity works in practice. Congratulations on reaching this important milestone in your MCP learning journey!
