Sending a Simple Message to OpenAI

Welcome to the first lesson of our course on creating a chatbot with OpenAI. In this lesson, we will explore the basics of interacting with OpenAI's API. OpenAI provides advanced language models that can understand and generate human-like text, making them an excellent foundation for chatbot development. Our goal in this lesson is to send a simple message to OpenAI's language model and receive a response. This foundational step will set the stage for more complex interactions in future lessons.

Setting Up Your Environment

Before we can send a message to OpenAI, we need to set up our development environment. This involves installing the necessary tools and libraries. For this course, you will need the openai-php/client package, which allows us to interact with OpenAI's API.

To install this package, you can use the following command in your terminal with Composer:
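```shell
composer require openai-php/client
```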

On the CodeSignal platform this library is pre-installed, so you can focus on writing and running your code without worrying about installation.

Setting the OpenAI API Key as an Environment Variable

In this course, you'll be using a coding environment where we've already set up everything you need to start working with OpenAI models. This means you don't need to worry about setting up an API key or configuring environment variables — it's all taken care of for you.

However, it's still useful to understand how this process works in case you want to set it up on your own computer in the future. To work with OpenAI models outside of a pre-configured environment, you need to set up a payment method and obtain an API key from their website. This API key is essential for accessing OpenAI's services and making requests to their API.

To keep your API key secure, you can use a .env file with a library like vlucas/phpdotenv. This file acts like a private note that your application reads at startup for sensitive details, such as your OpenAI API key, so you never have to write the key directly in your code.

Here's how you would set it up:

  1. Install vlucas/phpdotenv using Composer:
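```shell
composer require vlucas/phpdotenv
```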

  2. Create a .env file in the root of your project and add your API key:
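The value shown here is a placeholder; replace it with the key from your OpenAI account:

```
OPENAI_API_KEY=your-api-key-here
```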

  3. Load the environment variables in your PHP script:
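A minimal sketch using the vlucas/phpdotenv API (`createImmutable` populates `$_ENV` from the .env file in the given directory):

```php
<?php

require __DIR__ . '/vendor/autoload.php';

// Load variables from the .env file in the project root into $_ENV.
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();

// The API key is now available without being hard-coded.
$apiKey = $_ENV['OPENAI_API_KEY'];
```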

Initializing the OpenAI Client

In our coding environment, you don't need to use vlucas/phpdotenv to load environment variables, as OPENAI_API_KEY and OPENAI_BASE_URL are already available and configured for you. This allows you to focus on writing and testing your code without worrying about setting up these environment variables manually.

Once the environment variable is set, you can initialize the OpenAI client in your script. This is done by creating an instance of the OpenAI client using the openai-php/client package.
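A sketch of that initialization; the environment-variable names match those above, and `str_ends_with` requires PHP 8:

```php
<?php

require __DIR__ . '/vendor/autoload.php';

// Read the key and base URL provided by the environment.
$apiKey  = getenv('OPENAI_API_KEY');
$baseUrl = getenv('OPENAI_BASE_URL') ?: 'https://api.openai.com/v1';

// The chat completion endpoint lives under the /v1 version path,
// so append it if the configured base URL does not already end with it.
if (!str_ends_with($baseUrl, '/v1')) {
    $baseUrl = rtrim($baseUrl, '/') . '/v1';
}

// Build the client with the openai-php/client factory.
$client = OpenAI::factory()
    ->withApiKey($apiKey)
    ->withBaseUri($baseUrl)
    ->make();
```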

By initializing the client in this manner, you ensure that your script is ready to authenticate requests to OpenAI's API securely. The if block also makes sure the base URL ends with /v1, because the chat completion endpoint lives under that API version path.

Sending Your First Message to OpenAI

Now that your environment is set up and your API client is configured, it's time to send your first message to OpenAI. We'll start by defining a simple user prompt and then use the chat method to send this message to the AI model.

Here's the code to accomplish this:
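The snippet below is a sketch that assumes the `$client` instance created in the previous section:

```php
// Define a simple user prompt.
$prompt = "Tell me a joke.";

// Send the prompt to the chat completions endpoint.
$response = $client->chat()->create([
    'model' => 'gpt-4',
    'messages' => [
        ['role' => 'user', 'content' => $prompt],
    ],
]);
```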

In this code, we define a user prompt asking the AI to tell a joke. The client's chat()->create() call sends the message to the AI model and returns a response. It takes a few key parameters:

  • The model parameter specifies which AI model to use for generating the response. In this example, we use "gpt-4" as the course default for chat-based examples.

  • The messages parameter is an array of associative arrays where each array represents a message in the conversation. Each array must include a "role", which indicates the role of the message sender, such as "user" for the person interacting with the AI, and "content", which contains the actual text of the message.

Understanding OpenAI Response Structure

When you send a request to OpenAI's API, it returns a structured JSON response. Understanding this structure is essential for extracting the information you need and for debugging your application. Let's examine a typical response:
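A representative (abridged) response body is shown below; the identifier, timestamps, token counts, and joke text are all illustrative:

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Why don't scientists trust atoms? Because they make up everything!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 15,
    "total_tokens": 27
  }
}
```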

The OpenAI API response contains several important fields:

  • choices: This array contains the AI's responses. For most simple requests, you'll only have one item in this array (at index 0).

  • message: Within each choice, this object holds the AI-generated message.

  • role: Indicates who sent the message. In responses, this will be "assistant" to show it's from the AI.

  • content: The actual text of the AI's response, which is what we extract in our code.

  • finish_reason: Explains why the response ended. A value of "stop" means the AI completed its reply naturally.

  • usage: This object tracks token consumption, which is important for monitoring API usage and costs.

    • prompt_tokens: Number of tokens used in your input message.

    • completion_tokens: Number of tokens in the AI's response.

    • total_tokens: The sum of prompt and completion tokens.

Understanding this structure helps you properly extract the AI's response and handle any potential errors or edge cases in your application.

Extracting and Displaying the AI's Reply

After sending the message to OpenAI, the next step is to extract the AI's reply from the API response and display it. Here's how you can do that:
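A sketch, assuming the `$prompt` and `$response` variables from the previous section:

```php
// Pull the first choice's message content out of the response
// and strip any surrounding whitespace.
$reply = trim($response->choices[0]->message->content);

// Display the exchange.
echo "User: " . $prompt . PHP_EOL;
echo "AI: " . $reply . PHP_EOL;
```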

The chat()->create() call returns a typed response object. To extract the AI's reply, read $response->choices[0]->message->content — the first entry of the choices array holds the assistant's message — and trim() any extra spaces or newlines.

Finally, we print both the prompt and the AI's reply to see the interaction. This helps verify that the message was successfully sent and received. When you run this code, you should see an output similar to the following:
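The exact joke will vary from run to run, but the shape of the output looks like this:

```
User: Tell me a joke.
AI: Why don't scientists trust atoms? Because they make up everything!
```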

This output demonstrates a successful interaction with the AI, where it responds to the user's prompt with a joke.

Minimal Error Handling for API Calls

Real applications should not assume every request succeeds. Network errors, rate limits, and invalid parameters can all cause API calls to fail. A small try/catch block makes debugging much easier:
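A sketch using the exception types exposed by openai-php/client (worth verifying against the version you have installed), again assuming an initialized `$client`:

```php
use OpenAI\Exceptions\ErrorException;
use OpenAI\Exceptions\TransporterException;

try {
    $response = $client->chat()->create([
        'model' => 'gpt-4',
        'messages' => [
            ['role' => 'user', 'content' => 'Tell me a joke.'],
        ],
    ]);
    echo trim($response->choices[0]->message->content) . PHP_EOL;
} catch (ErrorException $e) {
    // The API returned an error payload: bad parameters,
    // authentication failure, rate limiting, and so on.
    echo "OpenAI API error: " . $e->getMessage() . PHP_EOL;
} catch (TransporterException $e) {
    // A network-level failure: DNS, timeout, connection refused.
    echo "Network error: " . $e->getMessage() . PHP_EOL;
}
```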

We keep many course tasks focused on one concept at a time, so not every exercise repeats this block, but this is the pattern you should use in production code.

Example: Full Code Implementation

Let's look at the complete code example for sending a message to OpenAI. This example includes all the steps we've discussed so far:
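Putting the pieces together, a complete script might look like this — a sketch for the pre-configured course environment, where OPENAI_API_KEY and OPENAI_BASE_URL are already set:

```php
<?php

require __DIR__ . '/vendor/autoload.php';

// Read credentials from the environment.
$apiKey  = getenv('OPENAI_API_KEY');
$baseUrl = getenv('OPENAI_BASE_URL') ?: 'https://api.openai.com/v1';

// Ensure the base URL ends with the /v1 version path.
if (!str_ends_with($baseUrl, '/v1')) {
    $baseUrl = rtrim($baseUrl, '/') . '/v1';
}

// Initialize the OpenAI client.
$client = OpenAI::factory()
    ->withApiKey($apiKey)
    ->withBaseUri($baseUrl)
    ->make();

// Define a simple user prompt.
$prompt = "Tell me a joke.";

try {
    // Send the prompt to the chat completions endpoint.
    $response = $client->chat()->create([
        'model' => 'gpt-4',
        'messages' => [
            ['role' => 'user', 'content' => $prompt],
        ],
    ]);

    // Extract and display the AI's reply.
    $reply = trim($response->choices[0]->message->content);
    echo "User: " . $prompt . PHP_EOL;
    echo "AI: " . $reply . PHP_EOL;
} catch (\OpenAI\Exceptions\ErrorException $e) {
    echo "OpenAI API error: " . $e->getMessage() . PHP_EOL;
} catch (\OpenAI\Exceptions\TransporterException $e) {
    echo "Network error: " . $e->getMessage() . PHP_EOL;
}
```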

Summary and Next Steps

In this lesson, we covered the essential steps to send a simple message to OpenAI's language model. We set up our environment, configured API access, and sent a message to receive a response. This foundational knowledge is crucial as we move forward in building more complex chatbot interactions.

As you proceed to the practice exercises, I encourage you to experiment with different prompts and explore the AI's responses. This hands-on practice will reinforce what you've learned and prepare you for the next unit, where we'll delve deeper into handling API parameters. Keep up the great work, and enjoy the journey of creating your chatbot with OpenAI!
