Introduction to LangChain and Large Language Models

Welcome to the first lesson of the LangChain Chat Essentials in TypeScript course. In this course, we will embark on an exciting journey into the world of conversational AI using LangChain in the TypeScript ecosystem.

LangChain is a powerful framework that simplifies the process of interacting with large language models (LLMs). It provides developers with a set of tools and interfaces to effectively utilize AI capabilities for a wide range of applications, such as chatbots and content generation. LangChain abstracts the complexities involved in model communication, allowing developers to focus on building innovative solutions. Beyond basic interactions, LangChain offers advanced features like conversation history management, context handling, and customizable model parameters. These features make it an excellent choice for developing sophisticated AI-driven applications.

In this lesson, we will concentrate on the essential skills needed to send messages to AI models using LangChain. While LangChain supports a variety of models and providers, we will specifically focus on working with OpenAI, laying the groundwork for more advanced topics in future lessons.

Setting Up the Environment

Before we dive into the code, it's important to ensure that your TypeScript environment is set up correctly. For this lesson, you’ll need to install two packages: langchain and @langchain/openai.

  • The langchain package provides the core framework and essential abstractions for building with LangChain; it builds on @langchain/core, which defines message schemas like AIMessage.
  • The @langchain/openai package gives you everything you need to connect LangChain with OpenAI’s models.

To install both packages, use the following npm command:
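
```bash
npm install langchain @langchain/openai
```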

This will make sure you have all the necessary tools to work with OpenAI models through LangChain.

If you’re working in the CodeSignal environment, everything is already set up for you—no need to install anything. You can jump straight into writing and running your code. If you’re working locally, just make sure to run the install command above before you get started.

Setting the OpenAI API Key as an Environment Variable

In this course, you'll be using the CodeSignal coding environment, where we've already set up everything you need to start working with OpenAI models. This means you don't need to worry about setting up an API key or configuring environment variables — it's all taken care of for you.

However, it's still useful to understand how this process works in case you want to set it up on your own computer in the future. To work with OpenAI models outside of CodeSignal, you need to set up a payment method and obtain an API key from their website. This API key is essential for accessing OpenAI's services and making requests to their API.

To keep your API key secure, store it in an environment variable. An environment variable is like a special note that your computer can read to find out important details, such as your OpenAI API key, without you having to write it directly in your code. This keeps the key out of your source files, where it could accidentally be shared or committed to version control.

If you were setting this up on your own system, here's how you would do it:

  • On macOS and Linux, open your terminal and use the export command.

  • On Windows, set the environment variable with the set command in the Command Prompt.

  • If you are using PowerShell, use the $env: syntax.

All three commands are shown below; replace your-api-key-here with your actual key.
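
```bash
# macOS and Linux
export OPENAI_API_KEY="your-api-key-here"

# Windows Command Prompt
set OPENAI_API_KEY=your-api-key-here

# Windows PowerShell
$env:OPENAI_API_KEY="your-api-key-here"
```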

These commands will set the environment variable for the current session. But remember, while using CodeSignal, you can skip these steps and jump straight into experimenting with OpenAI models.

Understanding the ChatOpenAI Class

With your OpenAI API key securely set as an environment variable, you can now utilize LangChain to communicate with OpenAI models. The ChatOpenAI class is a crucial part of LangChain that enables communication with OpenAI's chat-based models like GPT-4o and GPT-4o-mini. It acts as a bridge, allowing you to send messages to the AI and receive responses in a conversational format.

In TypeScript, you can import the ChatOpenAI class from the @langchain/openai package and create an instance as follows:
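
```typescript
import { ChatOpenAI } from "@langchain/openai";

// Create a ChatOpenAI instance; the API key is read automatically
// from the OPENAI_API_KEY environment variable
const chat: ChatOpenAI = new ChatOpenAI();
```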

Here, we use TypeScript's type annotations to specify that chat is of type ChatOpenAI. This provides type safety and a better development experience, as your editor can offer autocompletion and catch type errors early. When you create a ChatOpenAI instance, it automatically captures the API key from the OPENAI_API_KEY environment variable, so you don't need to pass it explicitly. By default, the ChatOpenAI object uses OpenAI's default settings and model. While it offers customization options, we'll concentrate on basic usage for now.

Sending a Message

To communicate with the OpenAI model, you can send a message using the invoke method of the ChatOpenAI instance. This method takes a message and returns the AI's response.

In TypeScript, you can annotate the response with the AIMessage type, which is imported from @langchain/core/messages:
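
```typescript
import { AIMessage } from "@langchain/core/messages";

// Send a single message to the model and wait for its reply
const response: AIMessage = await chat.invoke("Hello, how are you?");
```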

In this example, we send a single message, "Hello, how are you?", to the model. The invoke method processes this message and returns a response object of type AIMessage, containing the AI's reply. Note that we're using await to handle the asynchronous nature of the method call, which is a common practice in TypeScript for dealing with promises.

In our coding environment, you can use await at the top level without wrapping it in an async function. In other environments, you might need to wrap this code in an async function or use .then() chains.
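
For instance, a minimal wrapper (reusing the chat instance from above) might look like this:

```typescript
// A simple wrapper for environments without top-level await
async function main(): Promise<void> {
  const response: AIMessage = await chat.invoke("Hello, how are you?");
  console.log(response.content);
}

main();
```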

Extracting the Response

Once you have the response from the AI model, you need to extract the content to understand the model's reply. In TypeScript, the response is strongly typed as an AIMessage, which means you get type safety and autocompletion when accessing its properties.
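
```typescript
// Print the text of the AI's reply
console.log(response.content);
```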

Here, we print the AI's response to the console. By accessing response.content, we retrieve the text of the AI's reply. Thanks to TypeScript's type system, you can be confident that content exists on the AIMessage object, reducing the risk of runtime errors.

For example, the output might look like:
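
```
Hello! I'm doing well, thank you. How can I assist you today?
```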

Understanding how to extract and interpret the AI's response is essential for building applications that effectively interact with AI models. As you experiment with different messages, observe how the AI responds and think about how you can use this information in your projects.

Complete Code Example

Let's put everything together to see a complete example of sending a message to an OpenAI model using LangChain in TypeScript. Save this code in a file named solution.ts:
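
```typescript
import { ChatOpenAI } from "@langchain/openai";
import { AIMessage } from "@langchain/core/messages";

// Create a ChatOpenAI instance; the API key is read automatically
// from the OPENAI_API_KEY environment variable
const chat: ChatOpenAI = new ChatOpenAI();

// Send a message to the model and wait for its reply
const response: AIMessage = await chat.invoke("Hello, how are you?");

// Print the text of the AI's response
console.log(response.content);
```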

This script demonstrates the entire process: importing the necessary classes, creating a ChatOpenAI instance, sending a message to the model, and extracting the response. When you run this code, you'll see the AI's reply printed to the console. TypeScript's type annotations help ensure that your code is correct and easy to maintain.

Working with Other AI Providers in LangChain

One of the powerful features of LangChain is its ability to work with various language models beyond just OpenAI. The interface remains consistent across different model providers, making it easy to switch between them or even compare responses from multiple models for the same prompt. This flexibility allows you to choose the model that best suits your specific needs, budget, or performance requirements.

For instance, LangChain provides seamless integration with Anthropic's Claude models. To use Claude with LangChain in TypeScript, you first need to install the appropriate package:
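
```bash
npm install @langchain/anthropic
```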

You'll also need to set up your Anthropic API key as an environment variable:
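
```bash
# macOS and Linux; on Windows, use set or $env: as shown earlier
export ANTHROPIC_API_KEY="your-api-key-here"
```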

Then, you can use Claude in a similar way to how we used OpenAI, but with one key difference—you need to specify which Claude model you want to use:
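
```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { AIMessage } from "@langchain/core/messages";

// ChatAnthropic requires an explicit model name
const chat: ChatAnthropic = new ChatAnthropic({
  model: "claude-3-7-sonnet-latest",
});

const response: AIMessage = await chat.invoke("Hello, how are you?");
console.log(response.content);
```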

Unlike ChatOpenAI, which uses a default model if none is specified, ChatAnthropic requires you to explicitly select a model like "claude-3-7-sonnet-latest".

This model-agnostic approach is one of LangChain's greatest strengths, allowing you to experiment with different models without significantly changing your code structure. In future lessons, we'll explore more advanced techniques for working with various models and customizing their parameters.

Working with Local Models in LangChain

LangChain also supports integration with local language models, which can be beneficial when you need to work offline, have privacy concerns, or want to reduce API costs. Local models run directly on your machine, eliminating the need for internet connectivity and external API calls.

To use a local model with LangChain, you'll need to install the appropriate packages. For example, to work with Ollama, a tool for running local models like Llama 2:
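
```bash
# LangChain's Ollama integration package
npm install @langchain/ollama
```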

Once installed, you can use local models in a similar way to cloud-based ones:
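
```typescript
import { ChatOllama } from "@langchain/ollama";
import { AIMessage } from "@langchain/core/messages";

// Connect to a locally running Ollama server
// (defaults to http://localhost:11434)
const chat: ChatOllama = new ChatOllama({
  model: "llama2", // the Ollama tag for Llama 2
});

const response: AIMessage = await chat.invoke("Hello, how are you?");
console.log(response.content);
```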

This code connects to a locally running Ollama server and uses the Llama 2 model to generate a response. The interface remains consistent with what we've seen for cloud-based models, making it easy to switch between different model providers based on your specific requirements.

Summary and Next Steps

In this lesson, you learned how to use TypeScript with LangChain to send a message to an OpenAI model. You set up your environment, initialized the ChatOpenAI object, sent a simple message, and extracted the AI's response using type-safe code. As you move on to the practice exercises, experiment with different messages and observe how the AI responds. This foundational skill will be built upon in future lessons, where we will explore more advanced topics such as customizing model parameters and managing conversation history.

Congratulations on completing the first step in your journey into conversational AI with LangChain and TypeScript!
