Welcome to the first lesson of our course on developing Model Context Protocol (MCP) servers in TypeScript. In this course, you will learn how to expose tools and capabilities to the outside world using the Model Context Protocol, enabling AI models to access and utilize powerful external functionalities.
MCP is quickly becoming the standard for connecting AI models to real-world applications, and major AI labs are adopting it to streamline their workflows. In this lesson, you will learn how to set up and run a basic MCP server using the official TypeScript SDK.
While MCP supports multiple transport mechanisms, we will start with the most fundamental approach: standard input/output (stdio) transport. This method is perfect for local development as it allows processes on the same machine to communicate directly without network complexity. By the end of this lesson, you will be able to launch your own MCP server and create clients that can connect to and interact with it using stdio transport.
The Model Context Protocol (MCP) is an open standard developed by Anthropic in late 2024. Its main goal is to make it easy for LLMs to interact with external systems, tools, and data sources in a consistent and reliable way. Before MCP, developers had to create custom solutions for every integration, which were time-consuming and often fragile. MCP solves this by providing a universal framework for exchanging context and commands between AI models and software environments.
With MCP, AI models can easily connect to external tools to perform tasks like solving math problems, searching the web, querying databases, or managing files — without needing custom code for each integration. This means you can give your AI agent new abilities simply by plugging in the right MCP-compatible tool, making it much easier to extend what your AI can do in real-world applications.
MCP has quickly gained traction in the AI community. OpenAI, for example, added MCP support to its Agents SDK and ChatGPT desktop apps in early 2025. Google DeepMind and other major labs have also adopted MCP to improve how their models connect with external tools. This widespread adoption shows that MCP is not just a passing trend — it is becoming the backbone of modern AI integrations.
Before you start building your own MCP server, it's helpful to understand the three main roles in the MCP ecosystem: host, client, and server. These terms come up often when working with MCP, and knowing what each one does will make it much easier to design and debug your integrations.
- **Host:** The host is the main application that brings everything together. Think of it as the "home base" where AI models run and interact with external tools. Examples of hosts include AI-powered apps like ChatGPT Desktop, Claude Desktop, or custom agent frameworks. The host is responsible for managing connections to different tools and making sure everything works smoothly and securely.
- **Client:** A client acts as a bridge between the host and a single external server. Each client manages exactly one connection to one MCP server — this is a one-to-one relationship. The client's job is to send requests from the host to its specific server and to deliver responses back. If a host needs to connect to multiple servers, it must create multiple clients, one for each server connection. This design helps keep things organized and secure by isolating each server connection so that tools don't interfere with each other.
- **Server:** The server is the external tool or service that provides new capabilities to the host. For example, an MCP server might let the AI model access a database, call a web API, or control a smart device. Servers can run locally on your computer or remotely on another machine. Each server can handle connections from multiple clients simultaneously when using network-based transports like HTTP, but each client connects to only one server. When you build an MCP server in TypeScript, you are creating one of these external tools that hosts (via their clients) can connect to and use.
The diagram below gives a visual overview of how a host application with clients and multiple servers interacts using the MCP protocol. Notice how each client connects to exactly one server, while servers can accept connections from multiple clients.
In short:
- The host is the main AI app or environment.
- The client is the connector that links the host to exactly one specific server (1:1 relationship).
- The server is the external tool or service that provides capabilities, and it can serve multiple clients when the transport allows it.
In the Model Context Protocol, tools are the primary primitives that enable AI agents to perform actions and computations beyond their inherent capabilities. By integrating tools, agents can interact with external systems, execute functions, and retrieve real-time data, thereby enhancing their autonomy and functionality.
MCP utilizes JSON-RPC 2.0, a stateless, lightweight remote procedure call protocol that employs JSON for message formatting, to facilitate seamless interaction between AI agents and external tools. This standardized approach ensures consistent and reliable communication, allowing agents to dynamically discover and invoke tools as needed.
The interaction process between agents and tools via MCP involves several key steps:
**Tool Discovery:** The agent queries the MCP server to retrieve a list of available tools, each accompanied by a name, description, and the parameters it accepts.
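For example, a discovery exchange might look like the following (the `add` tool shown here is purely illustrative):

Example request:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}
```

Example response:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "add",
        "description": "Adds two numbers together",
        "inputSchema": {
          "type": "object",
          "properties": {
            "a": { "type": "number" },
            "b": { "type": "number" }
          },
          "required": ["a", "b"]
        }
      }
    ]
  }
}
```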
**Tool Invocation:** When the agent determines that a specific tool is needed to fulfill a task, it sends a request to the server, specifying the tool's name and providing the necessary parameters.
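Continuing with the hypothetical `add` tool, an invocation request might look like this:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "add",
    "arguments": { "a": 2, "b": 3 }
  }
}
```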
**Receiving Results:** The server processes the request, executes the tool, and returns the results to the agent, which can then incorporate this information into its response or further processing.
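The corresponding response for the hypothetical `add` call could look like this:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      { "type": "text", "text": "5" }
    ]
  }
}
```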
By adhering to the JSON-RPC 2.0 standard, MCP ensures consistent and reliable communication between AI agents and external tools. This standardized communication allows AI agents to seamlessly integrate external functionalities, such as querying databases, sending emails, or performing calculations, thereby extending their utility beyond their built-in capabilities.
You might hear people say that MCP is like the "USB-C port for AI." This comparison helps highlight what makes MCP so powerful and important.
Before USB-C, connecting devices to your computer was messy — different gadgets needed different cables and ports, and not everything worked together. USB-C changed that by providing a single, universal connector that works for charging, data transfer, displays, and more. Now, you can plug almost anything into a USB-C port and expect it to just work.
MCP does something similar for AI agents and external tools. In the past, every time you wanted your AI model to use a new tool or data source, you had to build a custom integration — often with lots of tricky, one-off code. With MCP, there's now a universal "port" or protocol that any tool or service can use to connect to any AI agent that supports MCP. This means:
- Plug-and-play: You can add new tools to your AI agent as easily as plugging in a USB-C device — no custom wiring required.
- Interoperability: Tools and agents from different companies can work together, as long as they speak MCP.
- Future-proofing: As new tools and capabilities are developed, they can be added to your AI environment without major rewrites.
In short, MCP is to AI integrations what USB-C is to hardware: a single, flexible, and reliable way to connect everything together.
To make it easy for developers to build and integrate MCP-compatible servers in TypeScript, Anthropic — through the ModelContextProtocol GitHub organization — provides an official MCP TypeScript SDK. This SDK abstracts away the details of the MCP specification and transport mechanisms, so you can focus on your server's core functionality instead of protocol implementation. The SDK is open source and available on GitHub.
The SDK provides both server and client implementations, making it easy to create MCP servers that expose tools and resources, as well as clients that can connect to and interact with those servers. The package includes TypeScript definitions and modern ES modules support, making it a natural fit for Node.js applications.
To install the MCP TypeScript SDK in your own projects, you can use `npm`:
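```bash
npm install @modelcontextprotocol/sdk
```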
Or, if you prefer using yarn:
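```bash
yarn add @modelcontextprotocol/sdk
```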
You'll also need a TypeScript runtime like `tsx` to execute TypeScript files directly:
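```bash
npm install --save-dev tsx
```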
However, for this course, you will be working in the CodeSignal coding environment, where all the necessary installation steps have already been taken care of. You can start using the MCP SDK right away without any setup.
With the SDK ready to use in your environment, you're all set to start building your own MCP server and connecting it to clients.
When you run an MCP server, you need to decide how it will communicate with clients (other programs or applications that want to use your server). In this lesson, we'll focus on the stdio transport mechanism.
Stdio (Standard Input/Output) is the most basic and traditional way for programs on the same computer to talk to each other. With stdio, one program sends messages by writing text to its output, and the other program reads those messages as input. This is how many command-line tools work together. In MCP, stdio creates a direct one-to-one communication channel between a single client and a single server process. This is a specific limitation of stdio transport — each server process using stdio can only communicate with one client at a time. This differs from other transport mechanisms like HTTP, where a single server can handle multiple clients simultaneously.
This transport is particularly useful for MCP because it allows for simple, reliable communication between processes without needing to manage network connections or ports. This makes it ideal for local development and testing scenarios. When using stdio transport specifically, each client-server pair communicates through their own dedicated input/output streams, maintaining the strict one-to-one relationship. If multiple clients need to connect to the same server functionality via stdio, each would need to launch its own separate server process.
To create your own MCP server, you'll use the `McpServer` class from the official MCP TypeScript SDK. When you set up your server, it's important to give it a clear name, version, and description. The name helps clients identify your server, the version tracks compatibility, and the description explains what your server can do.

Here's how to define and run a basic MCP server using the `stdio` transport:
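The metadata values below are placeholders; use a name, version, and description that fit your own server:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// Create the server and describe it to connecting clients
const server = new McpServer({
  name: "my-first-mcp-server",
  version: "1.0.0",
  // Optional extra metadata explaining what this server does
  description: "A basic example MCP server that communicates over stdio",
});

// stdio transport: messages flow through standard input/output
const transport = new StdioServerTransport();

// Connect the server to the transport and start listening for requests
await server.connect(transport);
```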
In this code:
- You import the `McpServer` class and `StdioServerTransport` from the SDK.
- You create a server instance with metadata including name, version, and description.
- You create a stdio transport instance that handles communication via standard input/output.
- You connect the server to the transport using `await server.connect(transport)`.
When you use `stdio` as the transport, your server is designed to be run as a background process by another program, such as an AI agent or a command-line tool. The server will wait for requests from the client and respond through standard input and output streams.
To run this server, you can use:
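```bash
npx tsx server.ts
```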
The server will start and wait for incoming connections via stdio. However, if you run this command directly, you won't see any output or activity because stdio servers are not meant to be run by themselves. They are designed to be launched and controlled by a client process that connects to their standard input and output streams.
Now that we have our MCP server code ready, let's create a client that can connect to it and see the communication in action. Remember, our stdio server isn't meant to run by itself — it needs a client to launch it and communicate with it.
When we create an MCP client using stdio transport, something interesting happens behind the scenes. The client doesn't just connect to an already-running server. Instead, the client actually launches the server process itself and then communicates with it through standard input and output streams.
The first step is to create a transport that tells the client how to launch and communicate with our server:
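A minimal sketch of that setup (the client-side stdio transport lives under the SDK's `client` subpath):

```typescript
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Tell the client how to launch the server process it will communicate with
const transport = new StdioClientTransport({
  command: "npx",
  args: ["tsx", "server.ts"],
});
```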
Here, we create a `StdioClientTransport` and tell it exactly how to launch our server. The `command: "npx"` and `args: ["tsx", "server.ts"]` mean "run the command `npx tsx server.ts`" — the same command we would use to start the server manually.
Next, we create the client itself with its own metadata:
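The client name and version below are placeholders; choose values that identify your own client:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Metadata that identifies this client to the server during the handshake
const client = new Client({
  name: "my-first-mcp-client",
  version: "1.0.0",
});
```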
We create a `Client` instance with its own metadata (name and version). This helps identify our client when it communicates with the server.
Now we can connect to the server and send our first request:
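A minimal sketch, using `try`/`finally` so the connection is always closed even if something goes wrong:

```typescript
try {
  // Launch the server process and perform the initial MCP handshake
  await client.connect(transport);
  console.log("Connected to the server!");

  // Send a simple ping to confirm the server is responding
  const result = await client.ping();
  console.log("Ping result:", result);
} finally {
  // Shut down the connection (and the server process) gracefully
  await client.close();
  console.log("Connection closed.");
}
```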
When we call `await client.connect(transport)`, several things happen:
- The client launches a new server process using the command we specified
- The client connects to the server's standard input and output streams
- Both client and server exchange initial handshake messages to establish the connection
Once connected, we can send requests to the server. We start with a simple `ping()` request, which is like saying "hello" to make sure the server is responding properly.
We always close the connection in the `finally` block. This is important because it tells the server process to shut down gracefully and allows our client program to exit.
When we run this client with:
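Assuming the client code is saved as `client.ts`:

```bash
npx tsx client.ts
```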
We'll see output similar to:
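```text
Connected to the server!
Ping result: {}
Connection closed.
```

The exact messages come from the `console.log` calls in the client sketch above; the key detail is the empty `{}` result returned by the ping.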
This output shows us that:
- The client successfully launched and connected to the server
- The server responded to our ping request (the empty `{}` object is a valid response)
- The connection was properly closed
What's happening behind the scenes: While our client is running, there are actually two processes: our client process and the server process that the client launched. They're communicating through pipes (the stdio streams), sending JSON-RPC messages back and forth. When our client finishes and closes the connection, the server process also terminates.
This demonstrates the fundamental stdio transport pattern: the client controls the server's lifecycle and communicates with it through standard input/output streams, maintaining that crucial one-to-one relationship we discussed earlier.
In this lesson, you learned about the Model Context Protocol (MCP) and why it is becoming the standard for connecting AI agents to external tools and data. We introduced the MCP TypeScript SDK and explored how to create both servers and clients using the `stdio` transport mechanism for local communication.
You saw how to set up an MCP server using the `McpServer` class and `StdioServerTransport`, and how to create a client using the `Client` class and `StdioClientTransport`. The stdio transport provides a simple, reliable way for processes on the same machine to communicate, making it ideal for development and testing scenarios.
Key concepts covered:
- The three main roles in MCP: host, client, and server
- How MCP uses JSON-RPC 2.0 for standardized communication
- Setting up a basic MCP server with TypeScript
- Creating a client that can connect to and ping the server
- Proper connection management and cleanup
In the next section, you will get hands-on practice by setting up and running your own MCP server and client. You'll experiment with the connection process and explore how the stdio transport enables communication between the two processes. This will help solidify your understanding and prepare you for more advanced topics where you'll add actual tools and functionality to your MCP server.
