Introduction: The Power of Examples in Prompting

Welcome back! In the last lesson, you learned how to use constraints and requirements to make your prompts more effective. Today, I want to show you another powerful tool: using examples within your prompts.

When you include a clear example, you give the language model a template to follow. This makes your instructions easier to understand and helps the model produce more consistent and accurate answers. In this lesson, you will see how examples can simplify your prompts and reduce the need for complex constraints.

What Happens Without Examples?

Let's start by looking at what happens when you give a prompt without examples. Imagine you want to create a set of quiz cards about large language models (LLMs) for your students.

You might write a prompt like this:

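```
Create 5 quiz questions about large language models (LLMs) for my students. Include multiple answer options for each question and mark the correct answer.
```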
If you give this prompt to an LLM, you might get answers that look like this:

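```
Sure! Here are some quiz questions about large language models:

1. What does "LLM" stand for?
a) Large Language Model
b) Long Learning Machine
c) Linear Language Method
Answer: a

Question 2: Which of the following is an example of an LLM?
A. GPT-4
B. HTTP
C. SQL
(Correct answer: A)
```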
At first glance, this looks fine. But if you try this prompt several times, you might notice:

  • The format of the questions and answers can change each time.
  • Sometimes, the correct answer is not clearly marked.
  • The numbering or lettering might be inconsistent.
  • The model might include an introduction or extra text you didn't ask for.

This happens because the model is making its best guess about what you want, but you haven't given it enough guidance.

How Examples Improve LLM Responses

One of the easiest ways to help the model understand your desired format is to provide an example. Let's add a clear example to our prompt:

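```
Create 5 quiz questions about large language models (LLMs) for my students. Each question should have four answer options with exactly one correct answer. Follow the exact format of this example:

Question 1: What does "LLM" stand for?
A) Large Language Model
B) Long Learning Machine
C) Linear Language Method
D) Local Logic Module
Correct answer: A
```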
By including this example, you are:

  • Showing the exact format you want for each question and answer.
  • Making it clear how to mark the correct answer.
  • Setting the tone and style for the rest of the questions.

When you give this prompt to the LLM, the output is much more likely to match your expectations. The model will follow your provided structure, making the results more consistent and easier to use.
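For instance, the response might now look like this (an illustrative output; actual responses will vary):

```
Question 2: Which of these is a common use of LLMs?
A) Compiling C code
B) Generating and summarizing text
C) Rendering 3D graphics
D) Managing network hardware
Correct answer: B

Question 3: What kind of data are LLMs primarily trained on?
A) Audio recordings
B) Large collections of text
C) 3D models
D) Spreadsheets
Correct answer: B
```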

Pro Tip

Note: Most modern large language models (LLMs) generate responses using Markdown, a lightweight markup language for formatting text. When you interact with these models through a chat interface, the platform automatically renders the Markdown, so formatting like headings, lists, bold, italics, and code blocks appear as intended.

For example:

  • If the model outputs `# Heading`, the chat interface displays it as a large, bold heading.

  • Writing `**bold text**` will display it as bold text.

  • A list written as `- Item 1` and `- Item 2` on separate lines will be shown as:

    • Item 1
    • Item 2

Because of this, when you provide examples or prompts in Markdown, you are explicitly showing the model the exact formatting you want in its response. This helps ensure the output matches your expectations in structure and appearance.

Combining Examples with Constraints

In previous lessons, you learned how to use constraints to control the LLM's output. Sometimes, even with an example, the model might include things you don't want, like repeating your example question or adding an introduction or conclusion. This is where constraints come in.

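For example, you might extend the prompt with constraints like these:

```
Constraints:
- Do not repeat the example question in your output.
- Do not include any introduction, conclusion, or extra commentary.
- Output only the new quiz questions, following the format shown above.
```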
With these constraints, your output will be cleaner and more focused, matching your needs.

Summary and Next Steps

In this lesson, you learned how providing strong examples in your prompts can make communication with LLMs much easier. Examples help the model understand your expectations and reduce the need for detailed constraints.

As you move on to the practice exercises, try adding examples to your prompts and see how it changes the LLM's responses. This hands-on practice will help you master the skill of using examples to get the results you want.
