Lesson 1
Sending a Simple Message to OpenAI Using PHP

Welcome to the first lesson of our course on creating a chatbot with OpenAI. In this lesson, we will explore the basics of interacting with OpenAI's API, which is a powerful tool for building chatbots. OpenAI provides advanced language models that can understand and generate human-like text, making it an excellent choice for chatbot development. Our goal in this lesson is to send a simple message to OpenAI's language model and receive a response. This foundational step will set the stage for more complex interactions in future lessons.

Setting Up Your Environment

Before we can send a message to OpenAI, we need to set up our development environment. This involves installing the necessary tools and libraries. For this course, you will need the openai-php/client library, which allows us to interact with OpenAI's API.

To install this library, you can use the following command in your terminal:

Bash
composer require openai-php/client

If you are following the course on CodeSignal, the library is already installed, so you can focus on writing and running your code without worrying about installation.

Setting the OpenAI API Key as an Environment Variable

In this course, you'll be using a coding environment where we've already set up everything you need to start working with OpenAI models. This means you don't need to worry about setting up an API key or configuring environment variables — it's all taken care of for you.

However, it's still useful to understand how this process works in case you want to set it up on your own server in the future. To work with OpenAI models outside of a pre-configured environment, you need to set up a payment method and obtain an API key from their website. This API key is essential for accessing OpenAI's services and making requests to their API.

To keep your API key secure, you can use a .env file or server configuration. Here's how you can do it using a .env file:

  1. Create a file named .env in the root of your project.

  2. Add the following line to the .env file:

    OPENAI_API_KEY=your_api_key_here
  3. Use a library like vlucas/phpdotenv to load the environment variables in your PHP script:

    php
    require 'vendor/autoload.php';

    $dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
    $dotenv->load();

This approach keeps your API key out of your source code, which helps keep it secure.
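
Once the variables are loaded, it is also worth failing fast if the key is missing so you get a clear error instead of a confusing API failure later. Here is a minimal sketch of such a check (not part of the course environment, just an illustration):

php
// Sketch: verify the API key was actually loaded before using it.
$apiKey = $_ENV['OPENAI_API_KEY'] ?? getenv('OPENAI_API_KEY');

if (empty($apiKey)) {
    throw new RuntimeException('OPENAI_API_KEY is not set. Check your .env file or server configuration.');
}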

Initializing the OpenAI Client

Once the environment variable is set, you can initialize the OpenAI client in your script. This is done by calling the factory provided by the openai-php/client library to build a configured client instance. Here's how you do it:

php
require 'vendor/autoload.php';

$apiKey = $_ENV['OPENAI_API_KEY'] ?? getenv('OPENAI_API_KEY');
$baseUrl = $_ENV['OPENAI_BASE_URI'] ?? getenv('OPENAI_BASE_URI');

// Initialize the OpenAI client
$client = \OpenAI::factory()
    ->withApiKey($apiKey)
    ->withBaseUri($baseUrl)
    ->make();

By initializing the client in this manner, you ensure that your script is ready to authenticate requests to OpenAI's API securely.
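
This course's environment routes requests through a custom base URI, which is why the factory is used above. If you were calling OpenAI's public API directly with only an API key, the library also offers a shorter construction helper; here is a minimal sketch, assuming the default endpoint:

php
require 'vendor/autoload.php';

// Sketch: shorthand client construction when no custom base URI is needed.
// Assumes you are calling OpenAI's default public endpoint.
$apiKey = $_ENV['OPENAI_API_KEY'] ?? getenv('OPENAI_API_KEY');
$client = \OpenAI::client($apiKey);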

Sending Your First Message to OpenAI

Now that your environment is set up and your API client is configured, it's time to send your first message to OpenAI. We'll start by defining a simple user prompt and then use the chat method to send this message to the AI model.

Here's the code to accomplish this:

php
// Define a simple user message to test the API
$prompt = "Hi, can you tell me a joke?";

// Create a chat completion request to get the AI response
$response = $client->chat()->create([
    'model' => 'gpt-4',
    'messages' => [
        ['role' => 'user', 'content' => $prompt]
    ]
]);

In this code, we define a user prompt asking the AI to tell a joke. The chat()->create() call sends the message to the AI model and returns a response. It takes a few basic parameters:

  • The model parameter specifies which AI model to use for generating the response. In this example, we use "gpt-4", which is a version of OpenAI's language model known for its advanced text understanding and generation capabilities.

  • The messages parameter is an array of associative arrays, where each array represents a message in the conversation. Each array must include a "role", which indicates the role of the message sender, such as "user" for the person interacting with the AI, and "content", which contains the actual text of the message. A short sketch after this list shows a conversation that also includes a system message.
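
For illustration only (not required for this lesson), here is how the same request might look if you also included a system message to steer the assistant's behavior. The system prompt text below is an assumption chosen for the example:

php
// Sketch: a conversation with a system message plus the user prompt.
// The system content here is illustrative, not part of the lesson's example.
$response = $client->chat()->create([
    'model' => 'gpt-4',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a friendly assistant who loves telling short jokes.'],
        ['role' => 'user', 'content' => $prompt]
    ]
]);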

Understanding the OpenAI Response

When you send a request to OpenAI's API, it returns a structured JSON response. Below is an example:

JSON
{
  "id": "chatcmpl-12345",
  "object": "chat.completion",
  "created": 1677652284,
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Why don't scientists trust atoms? Because they make up everything!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}

Key Fields
  1. choices: Contains the AI's response.

    • message: Holds the AI-generated message.
      • role: Indicates the sender (assistant).
      • content: The AI's response text.
    • finish_reason: Explains why the response ended (e.g., stop means the AI completed its reply naturally).
  2. usage: Tracks token consumption (a short sketch of reading these fields in PHP follows this list).

    • prompt_tokens: Number of tokens used in the input message.
    • completion_tokens: Number of tokens in the AI's response.
    • total_tokens: Sum of both, useful for monitoring API usage and costs.
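
If you want to monitor costs programmatically, you can read these fields from the same response. Here is a minimal sketch, using the same array-style access as the rest of this lesson:

php
// Sketch: inspecting token usage from the chat completion response.
$usage = $response['usage'];

echo "Prompt tokens: {$usage['prompt_tokens']}\n";
echo "Completion tokens: {$usage['completion_tokens']}\n";
echo "Total tokens: {$usage['total_tokens']}\n";
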
Extracting and Displaying the AI's Reply

After sending the message to OpenAI, the next step is to extract the AI's reply from the API response and display it. Here's how you can do that:

php
// Extract the AI's response from the API result
$reply = trim($response['choices'][0]['message']['content']);

// Show both sides of the conversation
echo "Prompt: $prompt\n";
echo "Response: $reply\n";

Once the create method is called, it returns a response that can be read like an array. To extract the AI's reply, you access the choices array from the response, which contains the possible responses generated by the AI. You then select the first choice with choices[0] and retrieve the message content using message['content'], trimming any extra spaces or newlines.

Finally, we print both the prompt and the AI's reply to see the interaction. This helps verify that the message was successfully sent and received. When you run this code, you should see an output similar to the following:

Plain text
Prompt: Hi, can you tell me a joke?
Response: Why don't scientists trust atoms? Because they make up everything!

This output demonstrates a successful interaction with the AI, where it responds to the user's prompt with a joke.
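
In a real application, you may also want to guard against network or API errors when sending the request. Here is a minimal sketch using a generic try/catch; the library's specific exception classes are not covered in this lesson, so catching \Throwable is a broad, illustrative choice:

php
// Sketch: basic error handling around the chat request.
// Catching \Throwable is intentionally broad for illustration;
// production code may prefer the library's specific exceptions.
try {
    $response = $client->chat()->create([
        'model' => 'gpt-4',
        'messages' => [
            ['role' => 'user', 'content' => $prompt]
        ]
    ]);
    echo "Response: " . trim($response['choices'][0]['message']['content']) . "\n";
} catch (\Throwable $e) {
    echo "Request failed: " . $e->getMessage() . "\n";
}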

Example: Full Code Implementation

Let's look at the complete code example for sending a message to OpenAI. This example includes all the steps we've discussed so far:

php
require 'vendor/autoload.php';

use Dotenv\Dotenv;

// Load environment variables
$dotenv = Dotenv::createImmutable(__DIR__);
$dotenv->load();

// Fetch the API key and base URI
$apiKey = $_ENV['OPENAI_API_KEY'] ?? getenv('OPENAI_API_KEY');
$baseUrl = $_ENV['OPENAI_BASE_URI'] ?? getenv('OPENAI_BASE_URI');

// Initialize the OpenAI client
$client = \OpenAI::factory()
    ->withApiKey($apiKey)
    ->withBaseUri($baseUrl)
    ->make();

// Define a simple user message to test the API
$prompt = "Hi, can you tell me a joke?";

// Create a chat request to get the AI response
$response = $client->chat()->create([
    'model' => 'gpt-4',
    'messages' => [
        ['role' => 'user', 'content' => $prompt]
    ]
]);

// Extract the AI's response from the API result
$reply = trim($response['choices'][0]['message']['content']);

// Show both sides of the conversation
echo "Prompt: $prompt\n";
echo "Response: $reply\n";

Summary and Next Steps

In this lesson, we covered the essential steps to send a simple message to OpenAI's language model. We set up our environment, configured API access, and sent a message to receive a response. This foundational knowledge is crucial as we move forward in building more complex chatbot interactions.

As you proceed to the practice exercises, I encourage you to experiment with different prompts and explore the AI's responses. This hands-on practice will reinforce what you've learned and prepare you for the next unit, where we'll delve deeper into handling API parameters. Keep up the great work, and enjoy the journey of creating your chatbot with OpenAI!
