Welcome back! In the previous lessons, you learned how to generate and select daily prompts for the LLM Prediction Game. Now, we are ready to take the next step: getting a response from the Large Language Model (LLM) itself.
The LLM is the "brain" of our game. It takes the prompt and user question, then generates a response. In our game, players will try to guess the next word the LLM will produce. To make this possible, we need to know exactly what the LLM would say in response to our prompt. This lesson will show you how to send a prompt to the LLM, get its response, and prepare that response for use in the game.
Just as a reminder, you have already learned how to:

- Create and store prompt data for the game (`prompt_generator.js` and `data.json`).
- Select the correct daily prompt using the current date (`game.js`).
Let's also remember the structure of the project: so far it contains `prompt_generator.js`, `data.json`, and `game.js`, and in this lesson we will add `llm.js`.
Now, with a daily prompt ready, our next task is to send this prompt to the LLM and get its response. This response will be used as the "answer" for the day's game. All of this functionality will be contained in the `llm.js` file.
To communicate with the LLM, we use the OpenAI API. This requires an API key, which is a secret code that lets you access the service.
First, let's see how to import the necessary library and set up the client in JavaScript:
- `require('openai')` imports the OpenAI client library for Node.js.
- `process.env.OPENAI_API_KEY` fetches your API key from the environment variables.
- `new OpenAI({ apiKey: ... })` creates a client object that we will use to send requests to the LLM.

In our environment, the `openai` library is already installed, and the API key is set up for you. However, it's good practice to know how to do this for your own projects.
Now that we have the client set up, let's see how to send a prompt and get a response from the LLM.
Suppose we have a system prompt and a user question. We want to send both to the LLM and get its reply.
Here's how you can do it step by step in JavaScript:
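As a sketch, the request can be wrapped in a small async function. The function name `askLLM` and the idea of passing the client in as a parameter are illustrative choices, not from the original project:

```javascript
// Illustrative sketch: send a system prompt and a user question to the LLM
// and return the reply text.
async function askLLM(client, systemPrompt, userQuestion) {
  const completion = await client.chat.completions.create({
    model: "gpt-4o",                               // which LLM model to use
    messages: [
      { role: "system", content: systemPrompt },   // sets the LLM's behavior
      { role: "user", content: userQuestion },     // the actual user input
    ],
  });
  // Optional chaining guards against a missing or empty choices array
  return completion.choices?.[0]?.message?.content ?? "";
}
```

With a configured client, you would call it along the lines of `const reply = await askLLM(client, "You are a helpful assistant.", "What is the capital of France?");`.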
- `systemPrompt` sets the behavior or context for the LLM.
- `userQuestion` is the actual question or input from the user.
- `client.chat.completions.create({...})` sends the request to the LLM.
- `model: "gpt-4o"` specifies which LLM model to use.
- `messages` is an array of message objects. Each message has a `role` and `content`.
- The first message is from the "system" (system prompt).
- The second message is from the "user" (user question).
- The LLM will process these messages and generate a response.
- The response text is accessed with `completion.choices?.[0]?.message?.content`.
If you run this code, the LLM's reply is printed to the console — for example, a short answer to the user's question. Keep in mind that the exact wording can vary between runs, since model output is not fully deterministic.
For our game, we need to split the LLM's response into individual words. This allows us to compare the player's guess to the LLM's next word.
Let's see how to do this using a regular expression in JavaScript:
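The splitting step fits in a small helper function (the name `splitIntoWords` is an assumption):

```javascript
// Split text into an array of words and punctuation marks.
function splitIntoWords(text) {
  // \b\w+\b matches whole words; [^\w\s] matches single punctuation characters
  return Array.from(text.matchAll(/\b\w+\b|[^\w\s]/g)).map(m => m[0]);
}

console.log(splitIntoWords("Hello, world!")); // → [ 'Hello', ',', 'world', '!' ]
```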
- `text.matchAll(/\b\w+\b|[^\w\s]/g)` finds all words and punctuation in the text.
- `\b\w+\b` matches whole words, and `[^\w\s]` matches any single punctuation character.
- `Array.from(...).map(m => m[0])` collects the matched words and punctuation into an array.
Let's try it with a short sample sentence such as "Hello, world!". The result is `["Hello", ",", "world", "!"]`: each word and punctuation mark is a separate item in the array. This is exactly what we need for the game logic.
Sometimes, things can go wrong when calling the LLM. For example, the API might be down, or your API key might be missing. It's important to handle these errors so your game doesn't crash.
Here's how you can do it in JavaScript:
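Here is a sketch combining the request, the splitting, and the error handling. The function name `getLLMWords` is illustrative, and the client is again passed in as a parameter:

```javascript
// Illustrative: fetch the LLM's reply and split it into words,
// falling back to an empty array if the call fails.
async function getLLMWords(client, systemPrompt, userQuestion) {
  try {
    const completion = await client.chat.completions.create({
      model: "gpt-4o",
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: userQuestion },
      ],
    });
    const text = completion.choices?.[0]?.message?.content ?? "";
    return Array.from(text.matchAll(/\b\w+\b|[^\w\s]/g)).map(m => m[0]);
  } catch (error) {
    // e.g. the API is down or the key is missing — log it and keep the game alive
    console.error("Failed to get LLM response:", error.message);
    return [];
  }
}
```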
- The `try` block attempts to get a response from the LLM and split it into words.
- If something goes wrong, the `catch` block catches the error, prints a message, and returns an empty array.
This way, your game can handle problems smoothly and let you know what went wrong.
In this lesson, you learned how to:
- Set up the OpenAI client to communicate with the LLM in JavaScript.
- Send a prompt and user question to the LLM and receive its response.
- Split the LLM's response into individual words for use in the game.
- Handle errors gracefully to keep your game running smoothly.
You are now ready to practice these steps in the upcoming exercises. In the next section, you will get hands-on experience sending prompts to the LLM and processing its responses, just as we did here. Good luck!
