Introduction & Lesson Overview

Welcome back! In the last lesson, you learned how to manage multi-turn conversations with your agent, enabling your TypeScript applications to remember and build on previous exchanges. This is essential for creating natural, interactive experiences. Now, you’re ready to take the next step: extending your agent’s capabilities beyond just generating text.

In real-world applications, agents often need to do more than just chat — they might need to search the web, perform calculations, or interact with external systems. The OpenAI Agents SDK for TypeScript makes this possible by allowing you to add “tools” to your agents. Tools can be built-in (like web search) or custom function tools that you define yourself using the tool helper and Zod schemas for parameter validation. By the end of this lesson, you’ll know how to add both types of tools to your agent, making it much more powerful and useful.

Overview Of Built-In Tools

The OpenAI Agents SDK for TypeScript comes with several built-in tools that you can add to your agent with just a few lines of code. One of the most useful is the webSearchTool(), which allows your agent to search the web for up-to-date information. This is especially helpful when your agent needs to answer questions about recent events or look up facts that are not part of its training data.

Let’s look at a simple example. Suppose you want to create an agent that can answer questions by searching the web. You can do this by adding the webSearchTool() to the tools array of your agent:

With this setup, your agent can now use the web search tool whenever it needs to find information. This is a big step up from a basic text-only agent, as it can now provide answers based on the latest information available online.

Creating Custom Function Tools

While built-in tools are powerful, sometimes you need your agent to perform a specific task that is unique to your application. This is where custom function tools come in. The OpenAI Agents SDK for TypeScript allows you to turn any function into a tool that your agent can use by defining it with the tool helper and specifying the expected input parameters using a Zod schema.

For example, suppose you want your agent to calculate the number of years between two historical events. You can define a custom function tool like this:

Here, the Zod schema defines that the function expects an object with two numeric fields: year1 and year2. The tool helper registers the function as a tool, and the description provides a human-readable explanation that the agent can use to decide when and how to use the tool. By using Zod schemas, you ensure that the agent understands the required input structure, which is essential for reliable and accurate tool usage.

Integrating Tools With An Agent

Now that you have both a built-in tool (webSearchTool()) and a custom function tool (calculateYearsBetween), you can combine them in a single agent. This allows your agent to use both tools as needed to answer user questions.

Here’s how you can create an agent with multiple tools:

In this example, the agent is given clear instructions on how to use its tools: first, search the web to find the years of the events, and then use the calculation tool to find the difference. By listing both tools in the tools array, you make them available for the agent to use during its reasoning process.

Verifying Your Custom Function Tools

If you want to verify that the custom function tool you created is correctly set up for your agent to use, you can inspect the agent's list of tools and print out details for each one. This is especially useful for checking that your tool's name, description, and parameter schema are registered as expected.

To do this, you can iterate through the agent's tools array and print information for each tool:

This will output details for each tool. For our example agent with both the web search tool and the custom calculation tool, you'll see output like this:
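As a rough sketch (the exact tool names, fields, and formatting depend on your print statements and the SDK version):

```
type: hosted_tool
name: web_search
---
type: function
name: calculate_years_between
description: Calculate the number of years between two given years.
parameters: {
  "type": "object",
  "properties": {
    "year1": { "type": "number" },
    "year2": { "type": "number" }
  },
  "required": ["year1", "year2"]
}
---
```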

Notice how the agent sees each tool differently. The built-in web search tool appears as a hosted_tool with provider-specific configuration, while your custom function tool appears as a function type with the exact name and description you provided. Most importantly, the parameters field shows the JSON schema that was automatically generated from your Zod schema — this is what the agent uses to understand what arguments it needs to provide when calling your tool. The agent can read the description to understand when to use the tool and examine the parameters schema to know exactly what data structure to pass when invoking it.

Running The Agent With Tools

Once your agent has its tools, you can run it just like before. The difference is that now, the agent can decide when to use each tool to answer the user's question. For example, if you ask, "How many years are there between Benjamin Franklin's discovery of electricity and ChatGPT?", the agent will first use the web search tool to find the years of each event, then use the calculation tool to compute the difference.

Here's how you can run the agent and see the result:

Understanding The User Message

By inspecting the result.history, you can view a detailed, step-by-step breakdown of how the agent used its tools — including each tool invocation, the arguments passed, and the outputs returned. The history starts with the user's question:
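One way to sketch this inspection is a small helper that dumps each history item as raw JSON; since the item shapes vary by SDK version, the helper makes no assumptions about specific fields (pass it result.history from a completed run() call):

```typescript
// Prints each item of a run's history as formatted JSON.
function printHistory(history: unknown[]): void {
  for (const [i, item] of history.entries()) {
    console.log(`--- history item ${i} ---`);
    console.log(JSON.stringify(item, null, 2));
  }
}
```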

Web Search Tool In Action

Next, you'll see how the agent uses the web search tool to find information about the historical events:

Agent's Research Response

After the web search, the agent provides its findings with citations:

Custom Function Tool Usage

Finally, you can see how the agent uses your custom calculation tool to compute the exact difference:

This shows how the agent used both tools to answer the question, providing a clear and accurate response.

The Challenge of Tool Integration and the Promise of MCP

Integrating tools with different agentic systems can be difficult and time-consuming. Each platform often has its own way of defining and connecting tools, leading to duplicated work, inconsistent APIs, and maintenance headaches. As you add more tools or want to support multiple agent frameworks, these challenges multiply.

The Model Context Protocol (MCP) is a game changer. MCP is an open standard that lets you develop a tool once and make it available to any agentic system that supports the protocol. It standardizes how tools are described, discovered, and invoked, making integration much simpler and more reliable. With MCP, you can build tools that work across platforms, reduce integration effort, and improve security and governance.

In the next course, you’ll learn what MCP is, how it works, and how to develop and integrate your own MCP server — unlocking even more powerful and flexible agentic applications.

Summary & Preparing For Practice Exercises

In this lesson, you learned how to extend your agent’s capabilities by adding both built-in and custom function tools using the OpenAI Agents SDK for TypeScript. You saw how to use the webSearchTool() for real-time information and how to create your own function tools with the tool helper and Zod schemas for clear input validation. You also learned how to combine multiple tools in a single agent, verify tool registration, and observe how agents use these tools step by step.

You also explored the broader challenges of integrating tools across different agentic systems and how the Model Context Protocol (MCP) is emerging as a solution to these problems. In the next course, you’ll dive deeper into MCP — learning what it is, how it works, and how to develop and integrate your own MCP server to make your tools universally accessible to any agentic platform.

Now, get ready to put your knowledge into practice with a set of hands-on exercises. These will help you reinforce what you’ve learned by building and using agents with both built-in and custom tools.
