Introduction & Overview

Welcome! In our previous lesson, you learned the fundamentals of communicating with GPT-5 through the OpenAI Responses API. You mastered sending single requests, understanding response structures, and managing multi-turn conversations within a single interaction. Now, you're ready to take the next step and unlock even more powerful ways to work with GPT-5.

In this lesson, you'll discover how to design and implement multi-step workflows using GPT-5, with a special focus on a technique called prompt chaining. By the end, you'll know how to break down sophisticated tasks into manageable, reliable steps and connect them together for robust AI-powered solutions.

What is a Workflow?

A workflow is a structured sequence of steps or actions designed to accomplish a specific goal. In the context of AI systems, workflows help you organize and coordinate tasks so that each step has a clear purpose, defined inputs and outputs, and measurable success criteria. There are many types of workflows — some involve a single interaction, while others may require multiple steps, validation, or branching logic. Well-designed workflows make complex processes more predictable, easier to debug, and simpler to maintain.

Prompt Chaining and Why It Matters

Prompt chaining is one specific workflow pattern where you connect multiple separate GPT-5 calls together, with each call building upon the output of the previous one. Unlike multi-turn conversations that happen within a single session, prompt chaining involves distinct API calls that work together to solve complex problems step by step.

The power of prompt chaining lies in its reliability and modularity. Instead of asking GPT-5 to perform multiple complex tasks in a single prompt (which can lead to inconsistent results), you break the work into focused steps where you can validate and control the output at each stage. This approach makes your AI workflows more predictable and easier to debug.

Design the Workflow Before Coding

Before writing any code, it's important to break your task into clear, manageable steps. For our example, we'll build a simple three-step workflow:

  1. Generate a summary about AI in healthcare, with a strict character limit (around 300 characters).
  2. Validate that the summary meets the character requirement.
  3. Translate the validated summary into Spanish, returning only the translated text.

Each step will have its own focused prompt and clear input/output, making the workflow easy to follow and debug. This approach helps ensure each part works as expected before moving to the next.
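The three steps above can be sketched as a single pipeline function. This is a minimal sketch: `generate`, `validate`, and `translate` are hypothetical callables standing in for the step implementations developed in this lesson.

```python
def run_chain(generate, validate, translate):
    """Run the three-step chain; each argument is a callable for one step."""
    summary = generate()        # Step 1: produce the constrained summary
    validate(summary)           # Step 2: guardrail; raises if the check fails
    return translate(summary)   # Step 3: previous output becomes new input
```

Passing the steps in as callables keeps the orchestration easy to test: each stage can be swapped for a stub while you debug the others.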

Step 1: Generate a Constrained Summary

Let's start building our chain by creating the first step: generating a summary with specific character constraints. This step demonstrates how to use instructions and input messages effectively to get predictable output from GPT-5.

Notice how we separate the instructions from the input messages. The instructions parameter establishes GPT-5's role as a summary writer, while the input list contains the specific task and constraints. This separation makes our prompts more maintainable and allows us to reuse the same instructions for different summary tasks.

We've added the reasoning parameter with "effort": "minimal" to optimize for faster response times. Since summarization is a straightforward task that doesn't require complex logical analysis, minimal reasoning effort is sufficient while keeping the workflow efficient. This balance between quality and speed is particularly important when building multi-step chains.

The user message is explicit about the character requirement. Instead of saying "write a short summary," we specify exactly "300 characters" to make the constraint testable and clear. This precision is essential in prompt chaining because the output of this step becomes the input for the next step.
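A sketch of step one, assuming the `openai` Python package and an `OPENAI_API_KEY` environment variable; the instruction wording and the `build_summary_request` helper are illustrative choices, not part of the official API.

```python
import os

def build_summary_request(char_limit: int = 300) -> dict:
    """Assemble the arguments for client.responses.create (step 1)."""
    return {
        "model": "gpt-5",
        # Role-setting instructions, kept separate from the task itself
        "instructions": "You are a summary writer. Follow any character "
                        "limits the user specifies as closely as possible.",
        # Summarization is straightforward, so minimal reasoning keeps it fast
        "reasoning": {"effort": "minimal"},
        "input": [
            {
                "role": "user",
                # The character limit is stated explicitly so it is testable
                "content": f"Write a summary about AI in healthcare "
                           f"in about {char_limit} characters.",
            }
        ],
    }

if os.environ.get("OPENAI_API_KEY"):  # only call the API when a key is set
    from openai import OpenAI

    client = OpenAI()
    summary_text = client.responses.create(**build_summary_request()).output_text
    print(summary_text)
```

Separating request construction from the API call also makes the prompt easy to inspect and unit-test before spending tokens on it.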

Step 2: Validate and Guardrail the Output

The second step in our chain adds a crucial validation layer that ensures our summary meets the character requirements before proceeding to translation. This validation step demonstrates how to build reliable guardrails into your prompt chains.

This validation step uses a programmatic check rather than asking GPT-5 to validate its own output. We define a reasonable range (250-350 characters) instead of requiring exactly 300 characters, which gives GPT-5 some flexibility while still meeting our needs.

When the validation fails, we raise a ValueError with a descriptive message that includes both the expected range and the actual character count. This makes debugging easier when your chain encounters problems. In production systems, you might want to implement retry logic here, perhaps asking GPT-5 to revise the summary with tighter constraints.
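The guardrail itself needs no model call; a plain Python check is enough. A minimal sketch, using the 250-350 range discussed above (the function name is illustrative):

```python
def validate_summary(summary: str, low: int = 250, high: int = 350) -> str:
    """Step 2: programmatic guardrail on summary length; no GPT-5 call needed."""
    length = len(summary)
    if not low <= length <= high:
        # Include both the expected range and the actual count for debugging
        raise ValueError(
            f"Summary length out of range: expected {low}-{high} characters, "
            f"got {length}."
        )
    return summary
```

In production you might catch this ValueError and retry step one with tighter constraints instead of stopping the chain.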

When the validation passes, the summary is returned unchanged and the chain moves on to the translation step.

Without validation, a summary that's too long or too short could cause problems in subsequent steps. By catching and handling constraint violations early, you make your entire workflow more robust.

Step 3: Feed the Output into Translation

The third step demonstrates the core concept of prompt chaining: using the output from one GPT-5 call as input to another. This step takes our validated summary and translates it into Spanish using focused instructions.

The instructions for this step are focused specifically on translation rather than general assistance. This specialization helps GPT-5 understand its role in this step of the chain and produces more consistent results.

Like the summary step, we use "effort": "minimal" for reasoning since translation is a well-defined task that doesn't require complex problem-solving. This keeps the chain executing quickly while maintaining translation quality.

The key insight here is how we safely pass the summary_text variable from step one into the user message for step three. We use an f-string to embed the summary directly into the prompt, creating a clear separation between our instruction ("Return me just the Spanish translation") and the content to be translated.
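A sketch of step three, under the same assumptions as step one (`openai` package, `OPENAI_API_KEY` set); the `build_translation_request` helper and instruction wording are illustrative.

```python
import os

def build_translation_request(summary_text: str) -> dict:
    """Assemble the arguments for client.responses.create (step 3)."""
    return {
        "model": "gpt-5",
        # Narrow, translation-only role for this step of the chain
        "instructions": "You are a translator. Translate the user's text "
                        "into Spanish accurately and naturally.",
        # Translation is well-defined, so minimal reasoning keeps the chain fast
        "reasoning": {"effort": "minimal"},
        "input": [
            {
                "role": "user",
                # f-string separates the instruction from the content to translate
                "content": f"Return me just the Spanish translation of this "
                           f"text:\n\n{summary_text}",
            }
        ],
    }

if os.environ.get("OPENAI_API_KEY"):  # only call the API when a key is set
    from openai import OpenAI

    client = OpenAI()
    summary_text = "AI is transforming healthcare."  # stand-in for step 1's output
    translation = client.responses.create(
        **build_translation_request(summary_text)
    ).output_text
    print(translation)
```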

When you run this final step, GPT-5 returns just the Spanish translation of your validated summary.

Each step builds naturally on the previous one, creating a smooth workflow from English summary generation through validation to Spanish translation.

Summary & Prep for Practice

You've successfully designed and implemented a three-step prompt chain that demonstrates the core concepts of sequential AI workflows. Your chain writes a constrained summary, validates that it meets requirements, then uses that validated output as input for translation. The key patterns you've learned include decomposing complex tasks into focused steps, using validation and guardrails between steps, safely passing output from one GPT-5 call as input to the next, and optimizing each step with appropriate reasoning effort levels.

The workflow pattern you've built uses client.responses.create() with separate instructions and input parameters for each step, includes reasoning configuration for optimal performance, and extracts results using the straightforward output_text property. This clean separation of concerns makes your chains easier to understand, test, and maintain.

In the upcoming practice exercises, you'll implement this code yourself and extend it with additional features. The foundation you've built with prompt chaining opens up possibilities for much more sophisticated AI workflows. As you continue through this course, you'll see how these basic chaining concepts extend to tool usage, dynamic workflows, and complex agent behaviors that can handle real-world business problems.
