Welcome back! In the previous lessons, you learned how to generate and select daily prompts for the LLM Prediction Game. Now, we are ready to take the next step: getting a response from the Large Language Model (LLM) itself.
The LLM is the "brain" of our game. It takes the prompt and user question, then generates a response. In our game, players will try to guess the next word the LLM will produce. To make this possible, we need to know exactly what the LLM would say in response to our prompt. This lesson will show you how to send a prompt to the LLM, get its response, and prepare that response for use in the game.
Just as a reminder, you have already learned how to:
- Create and store prompt data for the game (`prompt_generator.py` and `data.json`).
- Select the correct daily prompt using the current date (`game.py`).
Let's also recall the structure of the project. Based on the files we've worked with so far, it looks like this:
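```
prompt_generator.py   # creates and stores the prompt data
data.json             # the stored daily prompts
game.py               # selects the daily prompt by date
llm.py                # gets the LLM's response (this lesson)
```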
Now, with a daily prompt ready, our next task is to send this prompt to the LLM and get its response. This response will be used as the "answer" for the day's game. All of this functionality will be contained in the `llm.py` file.
To communicate with the LLM, we use the OpenAI API. This requires an API key, which is a secret code that lets you access the service.
First, let's see how to import the necessary library and set up the client. A minimal sketch, combining the pieces explained below, looks like this:
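```python
import os

from openai import OpenAI

# Create a client; the secret API key is read from an environment variable
client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))
```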
- `import os` allows us to access environment variables, which is where we store sensitive information like API keys.
- `from openai import OpenAI` imports the OpenAI client library.
- `client = OpenAI(...)` creates a client object that we will use to send requests to the LLM.
- `os.environ.get("OPENAI_API_KEY")` fetches your API key from the environment variables.
On CodeSignal, the `openai` library is already installed, and the API key is set up for you. However, it's good practice to know how to do this for your own projects.
Now that we have the client set up, let's see how to send a prompt and get a response from the LLM.
Suppose we have a system prompt and a user question. We want to send both to the LLM and get its reply.
Here's how you can do it step by step (the system prompt and user question below are just example values):
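```python
# Example values; in the game, these come from the daily prompt data
system_prompt = "You are a helpful assistant. Answer in one short sentence."
user_question = "What is the capital of France?"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ],
)
```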
- `system_prompt` sets the behavior or context for the LLM.
- `user_question` is the actual question or input from the user.
- `client.chat.completions.create(...)` sends the request to the LLM.
- `model="gpt-4o"` specifies which LLM model to use.
- `messages` is a list of message objects. Each message has a `role` and `content`:
  - The first message is from the "system" (system prompt).
  - The second message is from the "user" (user question).
The LLM will process these messages and generate a response.
To get the actual text of the response, you can do:
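```python
# The reply text lives in the first choice of the response
answer = response.choices[0].message.content
print(answer)
```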
If you run this code, the output will be the LLM's answer to the question. For our example question, it might look like this:
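```
The capital of France is Paris.
```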
For our game, we need to split the LLM's response into individual words. This allows us to compare the player's guess to the LLM's next word.
Let's see how to do this using a regular expression:
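```python
import re

text = answer  # the response text we extracted above
# \b\w+\b matches whole words; [^\w\s] matches single punctuation marks
words = re.findall(r"\b\w+\b|[^\w\s]", text)
```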
- `import re` brings in Python's regular expression module.
- `re.findall(r"\b\w+\b|[^\w\s]", text)` finds all words and punctuation in the text.
- `\b\w+\b` matches words.
- `[^\w\s]` matches any punctuation.
Let's try it with our previous response:
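```python
text = "The capital of France is Paris."  # the example response from above
print(re.findall(r"\b\w+\b|[^\w\s]", text))
```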
The output will be:
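```
['The', 'capital', 'of', 'France', 'is', 'Paris', '.']
```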
Now, each word and punctuation mark is a separate item in the list. This is exactly what we need for the game logic.
Sometimes, things can go wrong when calling the LLM. For example, the API might be down, or your API key might be missing. It's important to handle these errors so your game doesn't crash.
Here's how you can do it, wrapping everything in a single function (the function name below is just a suggestion):
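```python
def get_llm_response_words(system_prompt, user_question):
    try:
        # Ask the LLM and split its reply into words and punctuation
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_question},
            ],
        )
        text = response.choices[0].message.content
        return re.findall(r"\b\w+\b|[^\w\s]", text)
    except Exception as e:
        # Anything from a missing API key to a network failure lands here
        print(f"Error getting the LLM response: {e}")
        return []
```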
- The `try` block attempts to get a response from the LLM and split it into words.
- If something goes wrong, the `except` block catches the error, prints a message, and returns an empty list.
This way, your game can handle problems smoothly and let you know what went wrong.
In this lesson, you learned how to:
- Set up the OpenAI client to communicate with the LLM.
- Send a prompt and user question to the LLM and receive its response.
- Split the LLM's response into individual words for use in the game.
- Handle errors gracefully to keep your game running smoothly.
You are now ready to practice these steps in the upcoming exercises. In the next section, you will get hands-on experience sending prompts to the LLM and processing its responses, just as we did here. Good luck!
