Building a Chat Engine with Conversation History

Welcome to the second lesson of our course on building a retrieval-augmented generation (RAG) chatbot with Go! In the previous lesson, we built a document processor that forms the retrieval component of our RAG system. Today, we'll focus on the conversational aspect by creating a chat engine that can maintain conversation history and interact with language models.

While our document processor is excellent at finding relevant information, a complete RAG system needs a way to interact with users in a natural, conversational manner. This is where our chat engine comes in. The chat engine is responsible for managing the conversation flow, formatting prompts with relevant context, and maintaining a history of the interaction.

Understanding the Chat Engine

The chat engine we'll build today will:

  1. Manage interactions with the language model using LangChain Go
  2. Maintain a history of the conversation for display or logging
  3. Format prompts with relevant context from our document processor
  4. Provide methods to reset the conversation history when needed

By the end of this lesson, you'll have a fully functional chat engine that can be integrated with the document processor we built previously to create a complete RAG system.

Creating the ChatEngine Struct

Let's begin by setting up the basic structure of our ChatEngine using a Go struct. This struct will encapsulate all the functionality needed for managing conversations with the language model.

Key points in this initialization:

  1. LLM Model: We initialize an OpenAI LLM client configured for chat with the gpt-3.5-turbo model, which is specifically designed for conversational interactions.

  2. System Prompt: We define strict instructions that guide the AI's behavior, telling it to answer questions only based on provided context and to politely decline if no relevant context is available.

  3. Conversation History: We initialize an empty slice to keep track of the conversation for display or logging purposes. This history is maintained locally but not necessarily sent to the model in typical RAG implementations.

  4. Message Struct: We use a Message struct to represent each message in the conversation, with a role (system, user, or assistant) and content.

This structure ensures our chat engine can properly communicate with the language model while maintaining a record of the conversation.

Building the Message Handling System

Now that we have our basic struct in place, let's implement the core functionality: sending messages and receiving responses. We'll create a SendMessage method that formats the prompt with context and interacts with the language model.

The SendMessage method takes three parameters: a context.Context for managing the request lifecycle, userMessage (the question from the user), and context (optional relevant information from our document processor).

Here's what happens in this method:

  1. Template Creation: We use Go's text/template package to create a prompt template that combines the system prompt, context, and question. This is similar to how we formatted prompts in the previous lesson on asking questions with retrieved context.

  2. Model Call: We send the formatted prompt to the language model with llms.GenerateFromSinglePrompt and receive the generated answer.

  3. History Update: We append both the user's message and the assistant's response to the conversation history for later display or logging.

Implementing Conversation Management

An important aspect of any chat system is the ability to manage the conversation state. Let's implement methods to access and reset the conversation history:
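A minimal sketch of these two methods follows; only the history field is needed here, and the method names match those used elsewhere in the lesson.

```go
package main

// Message and ChatEngine as described earlier; only the history field
// is relevant to these two methods.
type Message struct{ Role, Content string }

type ChatEngine struct {
	history []Message
}

// GetConversationHistory returns a copy of the recorded conversation,
// so callers can display or log it without mutating engine state.
func (ce *ChatEngine) GetConversationHistory() []Message {
	out := make([]Message, len(ce.history))
	copy(out, ce.history)
	return out
}

// ResetConversation clears the history so a fresh conversation starts.
func (ce *ChatEngine) ResetConversation() {
	ce.history = nil
}
```

Returning a copy rather than the underlying slice is a design choice: it keeps callers from accidentally modifying the engine's internal state.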

The GetConversationHistory method returns the current conversation history, which can be useful for displaying the chat to users or for logging purposes.

The ResetConversation method clears the conversation history. This is useful when users want to start a new conversation or when testing different scenarios.

Testing Our Chat Engine Without Context

Let's see how our chat engine behaves when we don't provide any context. This is important because, in a RAG system, the assistant should not "hallucinate" answers — it should respond only based on the information it has.

Here's how you can test this scenario:
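The sketch below assumes the NewChatEngine constructor and SendMessage method described above, plus a valid OpenAI API key in the OPENAI_API_KEY environment variable (the `context`, `fmt`, `log`, and `os` imports are omitted for brevity).

```go
func main() {
	ctx := context.Background()

	// Requires a valid API key and network access to run.
	engine, err := NewChatEngine(os.Getenv("OPENAI_API_KEY"))
	if err != nil {
		log.Fatal(err)
	}

	// Pass an empty context string: the system prompt instructs the
	// model to decline rather than answer from general knowledge.
	reply, err := engine.SendMessage(ctx, "What is the capital of France?", "")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Assistant:", reply)
}
```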

When you run this code, the assistant should politely decline to answer because no context was provided, demonstrating that our system prompt is working as intended.

Testing With Context

Now, let's test the chat engine with some relevant context. This simulates the scenario where our document processor has retrieved useful information, and we want the assistant to answer using only that context.
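The snippet below (which would sit inside main, assuming an engine created as before) passes a made-up context string standing in for a retrieval result:

```go
// Simulated retrieval result; in the full RAG system this text would
// come from the document processor.
retrieved := "The Eiffel Tower is 330 metres tall and was completed in 1889."

// With relevant context supplied, the assistant should answer from it.
reply, err := engine.SendMessage(ctx, "How tall is the Eiffel Tower?", retrieved)
if err != nil {
	log.Fatal(err)
}
fmt.Println("Assistant:", reply)
```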

The assistant's answer should be drawn entirely from the supplied context, answering the question without introducing outside information.

Resetting the Conversation

Finally, let's see how to reset the conversation history. This is useful if you want to clear the previous exchanges and start fresh.
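The self-contained example below demonstrates the reset behaviour in isolation, using a minimal stand-in for the engine's history handling:

```go
package main

import "fmt"

// Minimal stand-in for the ChatEngine's history handling.
type Message struct{ Role, Content string }

type ChatEngine struct{ history []Message }

func (ce *ChatEngine) ResetConversation() { ce.history = nil }

func main() {
	// Simulate a conversation that already has two recorded turns.
	ce := &ChatEngine{history: []Message{
		{"user", "How tall is the Eiffel Tower?"},
		{"assistant", "It is 330 metres tall."},
	}}
	fmt.Println("before reset:", len(ce.history)) // before reset: 2
	ce.ResetConversation()
	fmt.Println("after reset:", len(ce.history)) // after reset: 0
}
```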

After calling ResetConversation(), the conversation history should be empty.

This confirms that the conversation history is cleared and ready for a new interaction.

Summary and Practice Preview

In this lesson, we've built a chat engine for our RAG chatbot using LangChain Go and proper integration with OpenAI's chat models. We've learned how to:

  1. Create a ChatEngine struct that manages conversations with a language model
  2. Initialize an OpenAI LLM client configured for chat interactions
  3. Define system prompts to guide the AI's behavior
  4. Format prompts with context and questions using Go's text/template package
  5. Use llms.GenerateFromSinglePrompt to interact with the language model
  6. Maintain conversation history for display or logging purposes
  7. Implement methods to access and reset conversation history
  8. Test our chat engine with various scenarios

Our chat engine complements the document processor we built in the previous lesson. While the document processor handles the retrieval of relevant information, the chat engine manages the conversation and presents this information to the user in a natural way. In the next unit, we'll integrate the document processor and chat engine to create a complete RAG system. This integration will allow our chatbot to automatically retrieve relevant context from documents based on user queries, creating a seamless experience where users can ask questions about their documents and receive informed, contextual responses.

Get ready to practice what you've learned and take your RAG chatbot to the next level!
