Welcome! In our previous lesson, you learned the fundamentals of communicating with GPT-5 through the OpenAI Responses API. You mastered sending single requests, understanding response structures, and managing multi-turn conversations within a single interaction. Now, you're ready to take the next step and unlock even more powerful ways to work with GPT-5.
In this lesson, you'll discover how to design and implement multi-step workflows using GPT-5, with a special focus on a technique called prompt chaining. By the end, you'll know how to break down sophisticated tasks into manageable, reliable steps and connect them together for robust AI-powered solutions.
A workflow is a structured sequence of steps or actions designed to accomplish a specific goal. In the context of AI systems, workflows help you organize and coordinate tasks so that each step has a clear purpose, defined inputs and outputs, and measurable success criteria. There are many types of workflows — some involve a single interaction, while others may require multiple steps, validation, or branching logic. Well-designed workflows make complex processes more predictable, easier to debug, and simpler to maintain.
Prompt chaining is one specific workflow pattern where you connect multiple separate GPT-5 calls together, with each call building upon the output of the previous one. Unlike multi-turn conversations that happen within a single session, prompt chaining involves distinct API calls that work together to solve complex problems step by step.
The power of prompt chaining lies in its reliability and modularity. Instead of asking GPT-5 to perform multiple complex tasks in a single prompt (which can lead to inconsistent results), you break the work into focused steps where you can validate and control the output at each stage. This approach makes your AI workflows more predictable and easier to debug.
Before writing any code, it's important to break your task into clear, manageable steps. For our example, we'll build a simple three-step workflow:
- Generate a summary about AI in healthcare, with a strict character limit (around 300 characters).
- Validate that the summary meets the character requirement.
- Translate the validated summary into Spanish, returning only the translated text.
Each step will have its own focused prompt and clear input/output, making the workflow easy to follow and debug. This approach helps ensure each part works as expected before moving to the next.
Let's start building our chain by creating the first step: generating a summary with specific character constraints. This step demonstrates how to use instructions and input messages effectively to get predictable output from GPT-5.
Notice how we separate the instructions from the input messages. The instructions parameter establishes GPT-5's role as a summary writer, while the input array contains the specific task and constraints. This separation makes our prompts more maintainable and allows us to reuse the same instructions for different summary tasks.
In TypeScript, we declare variables using const for values that won't be reassigned. We can optionally add type annotations like Array<{ role: "user"; content: string }> to make our code more explicit and catch potential errors at compile time. The await keyword is required because the API call is asynchronous, and we need to wait for the response before proceeding.
We've added the reasoning parameter with effort: "minimal" to optimize for faster response times. Since summarization is a straightforward task that doesn't require complex logical analysis, minimal reasoning effort is sufficient while keeping the workflow efficient. This balance between quality and speed is particularly important when building multi-step chains.
The second step in our chain adds a crucial validation layer that ensures our summary meets the character requirements before proceeding to translation. This validation step demonstrates how to build reliable guardrails into your prompt chains.
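The guardrail itself can be a small, plain function; no API call is needed. The function name validateSummaryLength and the logged message below are illustrative, but the length check mirrors the range discussed in this lesson.

```typescript
// Programmatic guardrail: verify the summary's length before translation.
// Accepts a range around the 300-character target rather than an exact hit.
function validateSummaryLength(summaryText: string): string {
  if (summaryText.length < 250 || summaryText.length > 350) {
    throw new Error(
      `Summary length out of range (expected 250-350 characters, got ${summaryText.length})`
    );
  }
  console.log(`Validation passed: ${summaryText.length} characters`);
  return summaryText;
}
```

Returning the validated string lets the next step in the chain consume the function's output directly, so an unvalidated summary never reaches the translation call.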
This validation step uses a programmatic check rather than asking GPT-5 to validate its own output. We define a reasonable range (250-350 characters) instead of requiring exactly 300 characters, which gives GPT-5 some flexibility while still meeting our needs.
In TypeScript, we access the string length using the .length property rather than a function call. Since TypeScript doesn't support chained comparisons, we write the condition as summaryText.length < 250 || summaryText.length > 350 to check if the length falls outside our acceptable range.
When the validation fails, we throw a new Error with a descriptive message that includes both the expected range and the actual character count using template literal syntax with ${}. This makes debugging easier when your chain encounters problems. In production systems, you might want to implement retry logic here, perhaps asking GPT-5 to revise the summary with tighter constraints.
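One way to sketch such retry logic is a small wrapper that re-requests until validation passes or attempts run out. Everything here is hypothetical: generate stands in for the step-one GPT-5 call (optionally taking the previous failure message as feedback for a revised prompt), and validate stands in for the length check.

```typescript
// Hedged sketch of retry logic: call `generate` until `validate` accepts
// the result, feeding the last validation error back as revision feedback.
async function generateWithRetry(
  generate: (feedback?: string) => Promise<string>,
  validate: (text: string) => string,
  maxAttempts = 3
): Promise<string> {
  let lastError: Error | undefined;

  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const candidate = await generate(lastError?.message);
    try {
      return validate(candidate); // success: pass the validated text onward
    } catch (err) {
      lastError = err as Error; // failure: remember why, then try again
    }
  }

  throw new Error(
    `Validation failed after ${maxAttempts} attempts: ${lastError?.message}`
  );
}
```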
When the validation passes, the summary is confirmed to be within range and the chain can safely proceed to the next step.
Without validation, a summary that's too long or too short could cause problems in subsequent steps. By catching and handling constraint violations early, you make your entire workflow more robust.
The third step demonstrates the core concept of prompt chaining: using the output from one GPT-5 call as input to another. This step takes our validated summary and translates it into Spanish using focused instructions.
The instructions for this step are focused specifically on translation rather than general assistance. This specialization helps GPT-5 understand its role in this step of the chain and produces more consistent results.
Like the summary step, we use effort: "minimal" for reasoning since translation is a well-defined task that doesn't require complex problem-solving. This keeps the chain executing quickly while maintaining translation quality. The await keyword is again required to handle the asynchronous API call.
The key insight here is how we safely pass the summaryText variable from step one into the user message for step three. We use a template literal (enclosed in backticks) to embed the summary directly into the prompt with ${summaryText}, creating a clear separation between our instruction ("Return me just the Spanish translation") and the content to be translated.
When you run this final step, the Spanish translation of your validated summary is printed to the console.
You've successfully designed and implemented a three-step prompt chain that demonstrates the core concepts of sequential AI workflows. Your chain writes a constrained summary, validates that it meets requirements, then uses that validated output as input for translation. The key patterns you've learned include decomposing complex tasks into focused steps, using validation and guardrails between steps, safely passing output from one GPT-5 call as input to the next, and optimizing each step with appropriate reasoning effort levels.
The workflow pattern you've built uses await client.responses.create() with separate instructions and input parameters for each step, includes reasoning configuration for optimal performance, and extracts results using the straightforward output_text property. TypeScript's async/await syntax makes handling these asynchronous operations clean and readable, while optional type annotations help catch errors early. This clean separation of concerns makes your chains easier to understand, test, and maintain.
Key TypeScript patterns you've applied include declaring variables with const, using template literals with backticks for string interpolation (`text ${variable}`), accessing string length with the .length property, throwing errors with throw new Error(), and handling asynchronous operations with await.
In the upcoming practice exercises, you'll implement this code yourself and extend it with additional features. The foundation you've built with prompt chaining opens up possibilities for much more sophisticated AI workflows. As you continue through this course, you'll see how these basic chaining concepts extend to tool usage, dynamic workflows, and complex agent behaviors that can handle real-world business problems.
