Welcome to the second lesson of our course on building your own Deep Researcher. In this lesson, we will explore the concept of making basic LLM (Large Language Model) calls. LLMs, such as OpenAI's models, are powerful tools that can generate human-like text responses. They are integral to AI applications, enabling them to understand and respond to user inputs naturally. By the end of this lesson, you will understand how to make a basic LLM call and interpret its output.
Let's start by understanding the structure of the code used to make an LLM call. We'll build this step-by-step.
First, we need to import the necessary libraries and set up the OpenAI client. This client will allow us to interact with the OpenAI API.
Here, we import the `os` module to access environment variables and the `OpenAI` class from the `openai` library. We then create an `OpenAI` client using the API key and base URL stored in environment variables. This client is essential for making requests to the OpenAI API.
Next, we need to define the prompts that will guide the model's behavior. There are two types of prompts: system prompts and user prompts.
- System Prompt: This sets the context for the model. In our example, the system prompt instructs the model to respond like a pirate.
- User Prompt: This is the input from the user. Here, the user is asking how to check if a Python object is an instance of a class.
These prompts are crucial as they shape the model's responses, ensuring they are relevant and contextually appropriate.
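As a sketch, the two prompts and the message list passed to the API might be defined like this (the exact wording is illustrative):

```python
# The system prompt sets the persona; the user prompt carries the question.
system_prompt = "You are a helpful assistant who always responds like a pirate."
user_prompt = "How can I check if a Python object is an instance of a class?"

# Chat models take a list of role-tagged messages.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]
```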
To control the model's output, we configure certain parameters, such as the `temperature` and the model to use.
- Temperature: This parameter controls the randomness of the model's output. A lower temperature (e.g., 0.2) makes the output more deterministic, while a higher temperature (e.g., 0.8) introduces more randomness and creativity. In our example, a temperature of 0.7 strikes a balance between creativity and coherence.
- Model: This parameter selects the model used to generate the response. You can find a list of available models on the OpenAI website. For this course we will use `gpt-4o-mini`, but you are free to change this parameter to your preferred model.
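These two settings can be kept as simple constants; the values below mirror the lesson's choices:

```python
MODEL = "gpt-4o-mini"  # any chat-capable model name works here
TEMPERATURE = 0.7      # lower = more deterministic, higher = more varied
```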
Now, let's execute the LLM call using the `client.chat.completions.create` method.
- Model: We specify the model to use, in this case `"gpt-4o-mini"`.
- Messages: This is a list of messages that includes both the system and user prompts.
- Temperature: We pass the temperature parameter to control the output's randomness.
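Putting the pieces together, the call can be sketched as a small helper function. The function name `ask_llm` is ours, and `client` is assumed to be an `OpenAI` client configured as described earlier:

```python
def ask_llm(client, model="gpt-4o-mini", temperature=0.7):
    """Send the pirate system prompt plus the user's question; return the reply."""
    completion = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "You are a helpful assistant who always responds like a pirate."},
            {"role": "user",
             "content": "How can I check if a Python object is an instance of a class?"},
        ],
        temperature=temperature,
    )
    # The first (and here only) choice holds the generated message.
    return completion.choices[0].message.content.strip()
```

With a configured client, `print(ask_llm(client))` sends the request and prints the model's answer; at temperature 0.7, repeated calls may phrase the reply differently.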
The `create` method sends the request to the OpenAI API, and the response is stored in the `completion` variable. We then print the model's response, which is accessed through `completion.choices[0].message.content.strip()`.
In this lesson, we explored how to make basic LLM calls using OpenAI's API. We covered the setup of the OpenAI client, the creation of system and user prompts, the configuration of model parameters, and the execution of the LLM call. Understanding these components is essential for leveraging LLMs in your projects.
As you move on to the practice exercises, experiment with different prompts and temperature settings to see how they affect the model's output. This hands-on experience will deepen your understanding and prepare you for more advanced applications in future lessons.
