Lesson 3
Handling AI Interactions with the Chat Service Layer

In the previous lesson, we explored the ChatManager class, which plays a crucial role in managing chat data within our application. Now, we will take the next step in our journey by building the Chat Service Layer. This layer is essential for integrating the language model with chat sessions, allowing us to process user messages and generate AI responses. By the end of this lesson, you will understand how to set up the ChatService class, create chat sessions, and process messages using OpenAI's API.

The service layer acts as a bridge between the model layer, where data is managed, and the AI model, which generates responses. It is responsible for orchestrating the flow of data and ensuring that user interactions are handled smoothly. Let's dive into the details of setting up this important component.

Setting Up the ChatService Class

The ChatService class is the heart of our service layer. It is responsible for managing chat sessions and interacting with the OpenAI client to generate AI responses. To begin, we need to set up the class and its components.

First, we import the necessary modules, including the ChatManager from our previous lesson and the OpenAI client. We also use the uuid module to generate unique chat IDs. Here's how the class is initialized:

Python
import uuid
from openai import OpenAI
from models.chat import ChatManager

class ChatService:
    def __init__(self):
        self.chat_manager = ChatManager()
        self.openai_client = OpenAI()
        self.system_prompt = self.load_system_prompt('data/system_prompt.txt')

In this setup, we instantiate ChatManager to manage chat data, initialize the OpenAI client, and load the system_prompt using the load_system_prompt method, which we'll discuss next.

Loading the System Prompt

The system prompt is a crucial component that guides the AI's responses. It provides context and instructions for the AI, ensuring that it behaves in a manner consistent with our application's goals. In this section, we'll implement the load_system_prompt method to load the prompt from a file.

Python
def load_system_prompt(self, file_path: str) -> str:
    """Load the system prompt from file."""
    try:
        with open(file_path, 'r') as f:
            return f.read()
    except Exception as e:
        print(f"Error loading system prompt: {e}")
        return "You are a helpful assistant."

This method attempts to read the system prompt from a specified file path. If successful, it returns the prompt as a string. In case of an error, it prints an error message and returns a default prompt. This ensures that the application can continue functioning even if the file is missing or corrupted.
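To see this fallback in action outside the class, here is a minimal standalone sketch of the same loader; the file contents and paths below are made up for the demonstration.

```python
import os
import tempfile

def load_system_prompt(file_path: str) -> str:
    """Load a prompt from disk, falling back to a safe default."""
    try:
        with open(file_path, 'r') as f:
            return f.read()
    except Exception as e:
        print(f"Error loading system prompt: {e}")
        return "You are a helpful assistant."

# Case 1: the file exists, so its contents are returned verbatim.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write("You are a customer service agent for an IT company.")
    prompt_path = f.name
print(load_system_prompt(prompt_path))
os.remove(prompt_path)

# Case 2: the file is missing, so the default prompt is used instead.
print(load_system_prompt('data/no_such_prompt.txt'))
```

Because the method never raises, a missing or unreadable prompt file degrades the assistant's persona rather than crashing the service.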

Creating a New Chat Session

Creating a new chat session is a fundamental task of the ChatService. The create_chat method is responsible for generating a unique chat ID and initializing a chat session using the ChatManager.

Python
def create_chat(self, user_id: str) -> str:
    """Create a new chat session."""
    chat_id = str(uuid.uuid4())
    self.chat_manager.create_chat(user_id, chat_id, self.system_prompt)
    return chat_id

In this method, we generate a unique chat_id using the uuid module. We then call the create_chat method of ChatManager, passing the user_id, chat_id, and system_prompt. This initializes a new chat session, which is ready to receive messages.
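You can check the guarantees uuid4 gives us with a quick experiment: each call draws a fresh random identifier, so two chats will not collide on the same ID.

```python
import uuid

# Each call to uuid4() yields a fresh random identifier.
chat_id_a = str(uuid.uuid4())
chat_id_b = str(uuid.uuid4())

print(chat_id_a)               # e.g. '01a17870-8a4f-4b6f-a3ce-f04e1136d597'
print(chat_id_a != chat_id_b)  # True: the two IDs differ
print(len(chat_id_a))          # 36: 32 hex digits plus 4 hyphens
```

This is why the service never needs to check whether an ID is already in use before creating a chat.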

Processing User Messages

The process_message method is where the magic happens. It processes user messages, interacts with the OpenAI client to generate AI responses, and updates the chat history. Below, we outline the steps involved in this process, followed by the corresponding code implementation:

  1. Retrieve the chat using get_chat, and raise an error if the chat is not found.
  2. Add the user's message to the chat history.
  3. Send the conversation, including the system prompt and all messages, to the OpenAI client to generate a response.
  4. Add the AI's response to the chat history and return it to the user.
  5. Handle any errors with the AI client gracefully.
Python
def process_message(self, user_id: str, chat_id: str, message: str) -> str:
    """Process a user message and get AI response."""

    # Step 1: Retrieve the chat
    chat = self.chat_manager.get_chat(user_id, chat_id)
    if not chat:
        raise ValueError("Chat not found")

    # Step 2: Add user message to chat history
    self.chat_manager.add_message(user_id, chat_id, "user", message)

    try:
        # Step 3: Get AI response
        conversation = self.chat_manager.get_conversation(user_id, chat_id)

        response = self.openai_client.chat.completions.create(
            model="gpt-4",
            messages=conversation,
            temperature=0.7,
            max_tokens=500
        )

        ai_message = response.choices[0].message.content

        # Step 4: Add AI response to chat history
        self.chat_manager.add_message(user_id, chat_id, "assistant", ai_message)

        return ai_message

    except Exception as e:
        # Step 5: Handle errors
        raise RuntimeError(f"Error getting AI response: {str(e)}")
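For reference, the conversation returned by get_conversation is a list of role/content dictionaries in the shape the Chat Completions API expects. The entries below are a hypothetical snapshot after one user message, not actual output from our ChatManager:

```python
# Hypothetical snapshot of a conversation after one user message.
conversation = [
    {"role": "system", "content": "You are a customer service agent for an IT company."},
    {"role": "user", "content": "Hello, how are you?"},
]

# Every entry carries exactly a "role" and a "content" key, and the
# system prompt comes first so it frames the whole exchange.
assert all(entry.keys() == {"role", "content"} for entry in conversation)
assert conversation[0]["role"] == "system"
print("conversation shape is valid")
```

Because the full history is resent on every call, the model sees all prior turns each time it generates a reply.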

In the context of a customer service agent, we configure our model with specific parameters to optimize its performance. The temperature is set to 0.7, which balances creativity and coherence in the AI's responses, ensuring they are both engaging and relevant. The max_tokens is set to 500, allowing the model to provide detailed and informative answers without overwhelming the user, thus maintaining a smooth and effective customer service experience.
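As a rough illustration of how these knobs trade off, you might keep a few presets for different use cases. The values below are illustrative assumptions for this lesson, not official recommendations:

```python
# Illustrative parameter presets; the specific values are assumptions,
# not recommendations from the OpenAI documentation.
PRESETS = {
    # Lower temperature -> more deterministic, repeatable answers.
    "factual_support": {"temperature": 0.2, "max_tokens": 300},
    # The lesson's balance of coherence and creativity.
    "conversational": {"temperature": 0.7, "max_tokens": 500},
    # Higher temperature -> more varied, exploratory phrasing.
    "brainstorming": {"temperature": 1.0, "max_tokens": 800},
}

for name, params in PRESETS.items():
    print(f"{name}: {params}")
```

Passing one of these dictionaries to chat.completions.create via `**params` would let the service switch behavior per chat without changing the rest of process_message.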

Example: Simulating a Chat Session

Let's see the ChatService in action by simulating a chat session. We'll use the main.py file to create a chat session and process a user message.

Python
from services.chat_service import ChatService

# Initialize the chat service
chat_service = ChatService()

# Simulate a user ID
user_id = "user123"

# Create a new chat session
chat_id = chat_service.create_chat(user_id)
print(f"Chat session created with ID: {chat_id}")

# Simulate sending a message
user_message = "Hello, how are you?"

try:
    ai_response = chat_service.process_message(user_id, chat_id, user_message)
    print(f"AI Response: {ai_response}")
except Exception as e:
    print(f"Error: {e}")

In this example, we initialize the ChatService, simulate a user ID, and create a new chat session, printing the chat ID. We then simulate sending a message and print the AI's response, demonstrating the flow from user input to AI response and showcasing the functionality of the ChatService.

Plain text
Chat session created with ID: 01a17870-8a4f-4b6f-a3ce-f04e1136d597
AI Response: Hello! I'm here to help with any questions or concerns you might have regarding our IT services. How can I assist you today?

This output illustrates a successful interaction where a new chat session is created, and the AI responds to the user's greeting with a helpful message. The AI's response is tailored to assist with IT services, showcasing the system's ability to provide relevant and context-aware assistance.

Summary and Next Steps

In this lesson, we explored the ChatService class and its role in integrating the language model with chat sessions. We learned how to set up the class, load the system prompt, create chat sessions, and process user messages. The service layer is a vital component of our chatbot application, ensuring that user interactions are handled smoothly and efficiently.

As you move on to the practice exercises, take the opportunity to experiment with the ChatService functionality. This hands-on practice will reinforce the concepts covered in this lesson and prepare you for the next steps in our course. Keep up the great work, and I look forward to seeing your progress!
