Welcome to your first lesson in building effective agents with Claude! Whether you're completely new to the Anthropic API or have some experience with it, this lesson will ensure you have a solid foundation for the advanced agent-building techniques we'll cover later in the path.
In this lesson, you'll learn how to send messages to Claude using the Anthropic API and understand the complete response structure. By the end, you'll be able to create a TypeScript script that communicates with Claude and examine the full JSON response. This understanding is crucial because, throughout this course, we'll be working with different parts of Claude's responses — from basic text content to tool usage metadata and conversation flow control.
This foundation is essential because later lessons will extend this same pattern to develop more complex workflows with Claude.
To communicate with Claude, you'll need two things: the @anthropic-ai/sdk package and an API key from Anthropic. The SDK handles all the technical details of making API requests, and you'd normally install it using npm install @anthropic-ai/sdk or yarn add @anthropic-ai/sdk. The API key authenticates your requests, and the Anthropic client automatically looks for it in the ANTHROPIC_API_KEY environment variable.
In CodeSignal, we've already configured everything for you — the SDK is pre-installed and your API key is set up, so you can focus on learning the core concepts without worrying about setup details.
Every interaction with Claude follows a structured conversation pattern. Understanding this structure is key to building effective agents, as you'll need to manage conversation state and interpret various response components throughout this course.
Claude works with three conversational roles. The system prompt sets the context and instructions for Claude's behavior: essentially Claude's job description for the conversation. In the Anthropic API, it's passed as a separate system parameter rather than as a message. The user role represents messages from you or your application's users, and the assistant role represents Claude's responses; these two alternate within the messages array. This role-based structure helps Claude maintain context and understand conversation flow, which becomes critical when building multi-step agent workflows.
When you send a request to Claude, you package several pieces of information: the model you want to use, a system prompt that defines Claude's behavior, an array of messages representing the conversation history, and a max_tokens limit for the response length. Tokens are small chunks of text; on average one token is about three-quarters of an English word, so max_tokens: 2000 allows Claude to respond with roughly 1,500 words.
The request flows to Anthropic's servers, where Claude processes your messages and generates a response. That response returns as a structured JSON object containing Claude's message plus metadata about the interaction — information we'll use extensively in later lessons for tool usage tracking, conversation management, and error handling.
Let's build our first Claude interaction by examining each component. We'll start with the basic imports and client initialization, then define our model and system prompt:
The client automatically finds your API key in the environment variables. Notice we use camelCase naming conventions (like systemPrompt), which are standard in TypeScript. The system prompt influences how Claude responds throughout the conversation — it's like setting Claude's personality and expertise for the entire interaction. Understanding system prompts is crucial because, later in the course, we'll use them to define how Claude should use tools and handle complex agent workflows.
Now we'll create the messages array representing our conversation:
Each message is an object with role and content properties. We use the type annotation : Anthropic.MessageParam[] to tell TypeScript what kind of data this array should contain, which helps catch errors during development. Even for a single message, we use an array because conversations can have multiple exchanges.
With our message prepared, we can now send it to Claude:
The client.messages.create() method sends an HTTP request to Anthropic's servers, where Claude processes your message according to the system prompt and returns a structured response. Notice the await keyword: this is JavaScript's async/await pattern for handling asynchronous operations, which TypeScript fully supports. The API call takes time to complete, so await pauses execution until we receive Claude's response.
Note that max_tokens is a required parameter that limits how long Claude's response can be. Think of Claude as having two token limits: a context window (how much total conversation history it can take in) and a response limit (how much it can write back to you). The context window for Claude Sonnet is around 200,000 tokens, enough to hold roughly 150,000 words of conversation history. The max_tokens parameter controls the response limit: setting it to 2000 means Claude can respond with up to about 1,500 words, leaving the rest of the context window available for your conversation history.
To understand what Claude returns, let's examine the complete response structure:
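The inspection itself is a single console.log call. To keep this snippet runnable on its own, a trimmed stand-in object with illustrative values is used in place of the live response returned by client.messages.create():

```typescript
// Stand-in for the object returned by client.messages.create();
// field values are illustrative.
const response = {
  id: "msg_01ABC123",
  type: "message",
  role: "assistant",
  model: "claude-sonnet-4-5",
  content: [{ type: "text", text: "Regular exercise improves heart health..." }],
  stop_reason: "end_turn",
  stop_sequence: null,
  usage: { input_tokens: 27, output_tokens: 120 },
};

// null = no replacer function; 2 = indent by two spaces for readability.
console.log(JSON.stringify(response, null, 2));
```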
The JSON.stringify() method converts Claude's response object into a formatted JSON string. The second parameter (null) is for a replacer function (which we don't need), and the third parameter (2) sets the indentation for readability. You'll see output like this:
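The exact values will differ from run to run, but the shape is along these lines:

```json
{
  "id": "msg_01ABC123",
  "type": "message",
  "role": "assistant",
  "model": "claude-sonnet-4-5",
  "content": [
    {
      "type": "text",
      "text": "Regular exercise improves heart health..."
    }
  ],
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 27,
    "output_tokens": 120
  }
}
```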
This JSON structure contains everything you need to understand how Claude processed your request and what it returned.
Understanding this response structure is essential for the rest of the course. Key fields include:
- id: A unique identifier useful for logging and debugging.
- content: Claude's response as an array of content blocks. Notice it's an array because responses can contain multiple blocks of different types: text blocks like the one here, but also thinking blocks, tool usage blocks, and other content types we'll explore later.
- stop_reason: Why Claude stopped generating text. "end_turn" means Claude naturally concluded its response; you'll encounter other values like "tool_use" in later lessons when Claude decides to call a function.
- usage: Detailed token consumption information, which becomes important for monitoring agent performance and costs.
Pay special attention to the content array structure. Each block has a type field (here it's "text") and the actual content.
Most of the time, you'll want to access just Claude's text response rather than the full JSON structure. Let's see how to extract the clean text content.
When you're certain that the first content block is text (which is common in simple requests without thinking or tool use), you can use a direct approach:
This accesses the first element of the content array directly using [0] and uses a type assertion to tell TypeScript we know it's a TextBlock. This is concise and works well for straightforward cases.
However, as your agent workflows become more complex, you'll encounter responses with multiple content blocks of different types. In those cases, a more defensive approach is safer:
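A sketch of the defensive pattern. Local stand-in types are used here so the snippet runs on its own; in the lesson these are the SDK's content block types:

```typescript
// Stand-in union mirroring the shape of Anthropic content blocks.
type Block =
  | { type: "thinking"; thinking: string }
  | { type: "text"; text: string };

const response: { content: Block[] } = {
  content: [
    { type: "thinking", thinking: "Weighing the main benefits first..." },
    { type: "text", text: "Regular exercise improves heart health." },
  ],
};

// Find the first text block, then verify it with a type guard before use.
const textContent = response.content.find((block) => block.type === "text");
if (textContent && textContent.type === "text") {
  console.log(textContent.text);
}
```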
This code uses the .find() method to search through the content blocks and locate the first one with type === "text". We then use a type guard (if (textContent && textContent.type === "text")) to verify we found a text block before accessing its text property. This defensive programming approach is typical in TypeScript and helps prevent runtime errors when the response structure might vary.
Both approaches produce the clean text output:
This understanding of content block filtering will be crucial as we progress to more complex agent workflows where responses may contain multiple content blocks of different types.
Real conversations don't end after one exchange. To continue our conversation with Claude, we need to maintain the conversation history by adding Claude's response to our messages array, then appending our follow-up question:
Notice how we use .push() to add elements to the array, and we append Claude's entire content array to maintain the conversation structure. This preserves all content blocks and their types, which becomes crucial when working with responses that contain multiple content types. Now let's see Claude's response to our follow-up question:
This produces output like:
The conversation continues naturally because Claude can see the full context of our previous exchange, allowing it to provide a focused answer about training specifically.
Claude can also show its reasoning process through "thinking" — internal deliberation that helps it provide better responses. Let's continue our conversation with thinking enabled to see how Claude works through problems:
When thinking is enabled, Claude's response can contain multiple content blocks of different types.
Let's examine the full response structure to see both Claude's internal reasoning and its final answer:
You'll see output that includes both a thinking block (Claude's internal reasoning) and a text block (the final answer):
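The values here are illustrative (the signature field is truncated), but the two-block structure is what you'll see:

```json
{
  "id": "msg_01DEF456",
  "type": "message",
  "role": "assistant",
  "model": "claude-sonnet-4-5",
  "content": [
    {
      "type": "thinking",
      "thinking": "The user is a beginner, so the plan should start with three days...",
      "signature": "EqQBCgIYAh..."
    },
    {
      "type": "text",
      "text": "Here's a simple three-day weekly plan for a beginner..."
    }
  ],
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 42,
    "output_tokens": 710
  }
}
```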
Notice how the response now contains two content blocks: one showing Claude's internal reasoning process and another with the polished final answer.
When Claude's response contains multiple content blocks (like thinking and text), we need a way to extract just the parts we want. Let's filter the response to get only the text content that we'd show to a user:
This code uses a functional programming style typical in TypeScript. First, .filter() creates a new array containing only blocks where type === "text". Then .map() transforms each block into just its text content. Notice the type assertion (block as Anthropic.TextBlock) — this tells TypeScript to treat the block as a TextBlock type so we can safely access its text property. Finally, we join all the text pieces together with newlines. This produces the clean final output:
This pattern of filtering content blocks by type is essential for building robust agents. Later in the course, you'll encounter responses with tool usage blocks, multiple text blocks, and other content types that require similar filtering and processing techniques.
You've now learned how to interact with Claude using the Anthropic API, examined the complete response structure, built multi-turn conversations, and explored extended thinking capabilities. You understand how to structure conversations with roles, send requests with system prompts, maintain conversation context, and interpret response metadata, including different content block types.
The key concepts you've learned — conversation management, content block filtering, and response structure analysis — form the foundation for the advanced agent-building techniques we'll cover throughout this course.
In the upcoming practices, you'll get hands-on experience building on these concepts and exploring different ways to interact with Claude. This foundation will serve you well as we progress through more advanced topics in the course!
