In the previous lesson, you learned how to use query() to send prompts and handle streaming responses. Every time you called query(), the agent used default settings — a specific Claude model, no special instructions about how to behave, and a set limit on how long it could work on a task. While these defaults work fine for simple experiments, real applications need more control. This is where ClaudeAgentOptions comes in — it's your configuration object that defines how the agent should behave during a query() run.
ClaudeAgentOptions is a comprehensive configuration object that controls every aspect of your agent's behavior and capabilities. While your prompt tells the agent what you want it to do, ClaudeAgentOptions defines how the agent should operate, what tools it can access, and under what constraints. Think of it as the difference between giving someone a task and setting up their complete working environment — the prompt is the task itself, while the options define the working conditions, available resources, permissions, and operational boundaries.
The configuration object gives you control over numerous aspects of agent behavior:
- Model selection — Choose which Claude model powers your agent
- Personality and instructions — Define the agent's role, tone, and behavioral guidelines through system prompts
- Reasoning limits — Control how many thinking cycles the agent can perform
- Tool access — Specify which tools (like file reading, writing, or bash execution) the agent can use
- Permission handling — Configure whether the agent needs approval before executing tools
- Working directory — Set the filesystem location where the agent operates
- Skill loading — Configure sources for loading custom agent skills
- External integrations — Connect Model Context Protocol (MCP) servers for custom tooling
In this lesson, we'll focus on three fundamental settings that shape the agent's basic behavior: model selection, system prompts, and turn limits. These parameters form the foundation of agent configuration and are essential for understanding how to control your agent's core behavior before diving into more advanced capabilities like tool management and permissions.
The model parameter lets you choose which Claude model powers your agent. Different models offer different trade-offs between speed, cost, and capability. Anthropic provides several Claude models, each optimized for different use cases. The model you choose directly impacts how the agent reasons, how quickly it responds, and how much each interaction costs. Here's how you specify a model in your configuration:
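A minimal sketch, assuming the claude_agent_sdk Python package (the import path and the model field follow that SDK; the specific identifier strings are those mentioned below):

```python
from claude_agent_sdk import ClaudeAgentOptions

# Family name: automatically resolves to the latest model in that family
options = ClaudeAgentOptions(model="haiku")

# Pinned identifier: locks behavior to one specific release
pinned_options = ClaudeAgentOptions(model="claude-haiku-4-5-20251001")
```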
You can specify models in two ways:
Using Model Family Names — Use simple family names like "haiku", "sonnet", or "opus" to automatically get the latest version of that model family. This approach is convenient for development and ensures you're always using the latest improvements.
Using Specific Version Identifiers — Use full model identifiers like "claude-haiku-4-5-20251001" to pin to a specific version. This approach is useful for production applications where you want predictable, consistent behavior.
Each Claude model family serves different use cases:
Haiku — The fastest and most cost-effective models, ideal for simple tasks like answering straightforward questions, formatting text, or performing basic analysis.
Sonnet — Balanced models that offer a sweet spot between speed and intelligence, handling more complex reasoning while maintaining good performance.
Opus — The most capable models, providing the deepest reasoning for the most demanding tasks.
When choosing a model, consider the complexity of your task and your budget constraints — you can always start with a faster model and upgrade if the results aren't meeting your needs.
The system_prompt parameter defines the agent's role, tone, and behavior before it even sees your actual prompt. This is your opportunity to give the agent a personality, set expectations about how it should communicate, or provide domain expertise. When working with agents that use tools, the system prompt becomes especially valuable for setting constraints and specifications — you might define code style preferences for a coding assistant, specify output formats for data analysis tools, or establish rules about which files the agent can access. The system prompt acts as a persistent instruction that shapes every response and tool use during the interaction. Here's how you add a system prompt to your configuration:
The system prompt you provide is integrated within the agent's default system prompt, which already contains instructions about available tools and core capabilities. Your custom prompt adds personality, role definitions, and specific constraints on top of these base instructions. Before processing your user prompt, the agent reads both the default system instructions and your custom additions, then adopts the combined behavior. In this example, the agent will respond with enthusiasm, use simple language, and focus on making concepts accessible to beginners — while still maintaining its ability to use tools and perform other agent functions.
The max_turns parameter controls how many reasoning cycles the agent can perform before it must complete the task. A turn represents one complete cycle in which the agent thinks about the task, potentially uses tools, evaluates the results, and decides whether to continue or finish. This parameter serves two important purposes: it prevents runaway costs from tasks that spiral into many reasoning cycles, and it forces the agent to work efficiently within constraints. Here's how you add a turn limit to your configuration:
Setting max_turns=5 means the agent can go through up to five reasoning cycles before the interaction terminates. If the agent reaches the turn limit before completing the task, it stops and returns a ResultMessage with subtype='error_max_turns' and result=None. This message signals that the agent exhausted its allowed turns without finishing the task, but note that is_error=False — the SDK doesn't treat this as a failure, just as a completion condition. The ResultMessage still includes useful metadata like cost information, token usage, and the number of turns completed, which you can use for monitoring and optimization.
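A sketch of branching on that outcome. The `Result` dataclass here is a hypothetical stand-in for the SDK's ResultMessage, using only the fields described above (`subtype`, `is_error`, `result`) plus an assumed `num_turns` count:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Result:
    """Illustrative stand-in for the SDK's ResultMessage."""
    subtype: str
    is_error: bool
    result: Optional[str]
    num_turns: int

def summarize(msg: Result) -> str:
    """Classify a final result message for logging or monitoring."""
    if msg.subtype == "error_max_turns":
        # Not a failure — the agent simply ran out of allowed turns.
        return f"hit turn limit after {msg.num_turns} turns"
    return f"finished in {msg.num_turns} turns"

print(summarize(Result("error_max_turns", False, None, 5)))
```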
If you're building an application that makes many agent calls, setting reasonable turn limits helps you predict and control costs. A simple Q&A bot might use max_turns=2, while a complex code analysis tool might allow max_turns=10 or more. You can adjust this parameter based on your specific use case and budget.
Once you've created your ClaudeAgentOptions object with your desired settings, you pass it to the query() function using the options parameter. This applies your configuration to that specific agent interaction. Here's the complete pattern showing how all the pieces fit together:
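A complete sketch, assuming the claude_agent_sdk Python package; the question text and system prompt wording are illustrative:

```python
import asyncio

from claude_agent_sdk import (
    AssistantMessage,
    ClaudeAgentOptions,
    TextBlock,
    query,
)

async def main():
    options = ClaudeAgentOptions(
        model="haiku",  # fast, cost-effective responses
        system_prompt=(
            "You are an enthusiastic, beginner-friendly tutor. "
            "Use simple language and relatable examples."
        ),
        max_turns=5,  # cap the interaction at five reasoning cycles
    )

    # Pass the options object to apply this configuration to the run
    async for message in query(
        prompt="What is the difference between a list and a tuple in Python?",
        options=options,
    ):
        if isinstance(message, AssistantMessage):
            for block in message.content:
                if isinstance(block, TextBlock):
                    print(block.text)

asyncio.run(main())
```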
The code creates a ClaudeAgentOptions object with three settings: it selects the Haiku model for fast, cost-effective responses, sets a system prompt that makes the agent act as an enthusiastic beginner-friendly tutor, and limits the interaction to five turns maximum. When you pass this options object to query(), the agent adopts these settings for this specific interaction.
The streaming pattern you learned in the previous lesson remains the same — you still use async for to iterate over messages, check for AssistantMessage instances, and extract text from TextBlock objects. Now let's see how the agent responds with these configured settings.
When you run the code with your custom configuration, you'll see the agent respond with the personality and constraints you specified:
Notice how the response reflects the enthusiastic, beginner-friendly tutor personality you configured. The agent uses clear structure with headers and tables, explains concepts in accessible language, includes helpful comparisons, and ends with an encouraging emoji and an offer to explain more. This same question with a different system prompt (like "You are a terse senior engineer") would produce a much more concise, technical response. The model choice (Haiku) ensured this response came back quickly and cost-effectively, while the turn limit (5) capped how many reasoning cycles the agent could spend on it.
You've now learned how to take control of your agent's fundamental behavior through ClaudeAgentOptions. The model parameter lets you choose which Claude model powers your agent, the system_prompt parameter shapes the agent's personality and communication style, and the max_turns parameter limits reasoning cycles to control costs. In the practice exercises ahead, you'll experiment with different configurations to see how each parameter affects the agent's behavior.
