Welcome to the third lesson in our Foundation course. In the previous sessions, you learned how to install OpenCode and manage the daily workflow of creating files, running commands, and handling session history. You have moved from setting up the tool to actively using it for basic coding tasks. However, as your projects become more complex, sticking to the default settings may no longer be efficient.
In this lesson, we focus on customizing how the agent operates to suit your specific needs. We will explore how to monitor your token usage and costs, how to switch between different AI models depending on the difficulty of your task, and how to change the agent's role from a builder to a planner. Finally, we will look at configuring security permissions to control exactly what the agent is allowed to do on your computer.
When you use an AI agent, every message you send and every response you receive consumes tokens. These tokens are the basic units of data that AI models process, and they often translate directly to real-world costs or quota limits. Tracking these metrics is essential for budgeting and understanding how efficiently you work.
Understanding tokens is fundamental to working with AI systems. A token is not exactly a word—it is a chunk of text that the model processes as a single unit. On average, one token equals roughly four characters in English, meaning a 100-word paragraph might consume around 130-150 tokens. This matters because API providers charge based on token consumption, and different operations have different costs. Input tokens (what you send to the model, including file contents) are typically cheaper than output tokens (what the model generates back). Additionally, modern AI systems use caching to reduce costs: if you send similar context repeatedly, the system can reuse previously processed data rather than recomputing everything from scratch.
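To make this arithmetic concrete, here is a small Python sketch that applies the rough four-characters-per-token heuristic and compares input versus output cost. This is an estimate only, not a real tokenizer, and the per-million-token prices used below are made-up placeholders, not actual provider rates.

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English text."""
    return max(1, len(text) // 4)


def estimate_cost(input_text: str, output_text: str,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate the cost of one exchange in dollars.

    Prices are per million tokens; output tokens are typically
    priced higher than input tokens.
    """
    input_tokens = estimate_tokens(input_text)
    output_tokens = estimate_tokens(output_text)
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000


prompt = "Refactor this function to remove the duplicated loop. " * 20
reply = "Here is the refactored version with a single loop. " * 50

# Placeholder prices: $3 per million input tokens, $15 per million output.
print(estimate_tokens(prompt), "input tokens (approx.)")
print(estimate_tokens(reply), "output tokens (approx.)")
print(f"${estimate_cost(prompt, reply, 3.0, 15.0):.6f} estimated cost")
```

Because output tokens carry the higher price, long generated responses dominate the bill even when your prompts are short, which is one reason concise requests and cached context matter.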
OpenCode provides a built-in command to view this data directly from your terminal. You run this command from your standard command line, not from inside the interactive OpenCode session.

The output is divided into two main sections. The Overview shows your activity volume: the number of sessions you have started and the total messages exchanged. The second section breaks down the financial impact and data usage, including the split between input tokens (what you typed and the files the AI read) and output tokens (the code and text the AI generated). The cache write and cache read metrics show how much data was reused versus newly processed; higher cache read values indicate more efficient sessions. Monitoring this data helps you decide whether you need to be more concise or whether you can afford to use more powerful models.
Not every coding task requires the smartest or most expensive AI model available. For simple tasks like writing a small script or fixing a typo, a lighter model is faster and cheaper. For complex system architecture or debugging a difficult error, you need a more capable model. OpenCode allows you to switch between these models instantly while working inside the interactive interface.
AI models exist in tiers because there is an inherent tradeoff between capability and resource consumption. Smaller models have fewer parameters (the internal weights that determine behavior), which means they process requests faster and at lower cost but may struggle with nuanced reasoning or complex multi-step problems. Larger models have billions more parameters, enabling deeper understanding and more sophisticated output, but they require more computational power and time. This tiered approach allows you to match the tool to the task: you would not use a sledgehammer to hang a picture frame, and you would not use a tack hammer to demolish a wall. The same principle applies to AI models. Learning to recognize task complexity and select the appropriate model tier is a skill that develops with experience and directly impacts both your productivity and your budget.
To change the model, use the /models command. This opens a list of the models available to you. In a typical session, you might switch from Haiku, which is optimized for speed, to Sonnet, which offers a balance of intelligence and performance. The list also shows Opus, the most capable model, which is ideal for very challenging problems but is usually slower and more expensive.
OpenCode operates using two distinct modes: the Build Agent and the Plan Agent. The Build Agent is the default; it is eager to write code, run commands, and edit files immediately. However, sometimes you want to discuss a complex idea or design a software architecture before a single line of code is written. For this, you use the Plan Agent.
This separation reflects a fundamental principle in software development: thinking and doing are different activities that benefit from intentional separation. When you jump directly into coding without a clear plan, you often end up rewriting significant portions of your work as you discover requirements you had not considered. The Plan Agent enforces a deliberate pause for strategic thinking. It is particularly valuable when you are uncertain about the best approach, when a task involves multiple interconnected components, or when mistakes would be costly to reverse. By contrast, the Build Agent excels when you know exactly what you want and need rapid execution—for instance, when implementing a well-understood pattern or making a straightforward fix. Professional developers often alternate between these modes naturally; having them as explicit options in OpenCode makes this mental shift tangible and encourages disciplined workflows.
You can toggle between these two modes instantly by pressing the Tab key while the input bar is active. Watch the status bar change from "Build" to "Plan".
When Plan is active, the agent focuses on outlining steps, organizing thoughts, and creating a strategy without modifying your file system.

Once the plan is solid, press Tab again to switch back to Build mode and start executing the plan by creating actual files. Using the Plan Agent prevents the AI from rushing into writing poor code and ensures you have a solid roadmap before implementation begins.
While /models lets you switch models during a session, you can also configure which model OpenCode uses automatically when you start. This is useful if you have a clear preference and want to avoid manually changing the model every time you launch the tool.
To set a default model, add a model field to your opencode.json configuration file:
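A minimal sketch of such a configuration (the `$schema` URL and provider-prefixed model ID follow OpenCode's documented pattern, but verify the exact model string against the list your provider exposes in /models):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4-5"
}
```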
With this setting, every time you launch OpenCode, it will start with claude-sonnet-4-5 instead of the default Haiku model. You can still switch to a different model during your session using /models, but this saves you from having to change the model manually if you have a consistent preference. For example, if you primarily work on moderate-complexity tasks and rarely need the speed of Haiku or the power of Opus, setting Sonnet as your default eliminates a repetitive step from your workflow.
By default, an autonomous agent can be quite powerful: capable of reading files, writing code, and executing terminal commands. To ensure safety and control, OpenCode uses a configuration file to define exactly what the agent is allowed to do. You might want the agent to freely read files but ask for permission before writing anything or running shell commands.
The principle behind permission configuration is least privilege—a security concept stating that any component should have only the minimum access necessary to perform its function. An AI agent that can freely execute arbitrary shell commands without oversight poses significant risk: it could accidentally delete important files, install unwanted software, or make system changes that are difficult to reverse. By configuring permissions thoughtfully, you establish trust boundaries that protect your system while still enabling productive collaboration. The goal is not to hamstring the agent but to create appropriate checkpoints where human judgment intervenes. Over time, as you build confidence in specific workflows, you might loosen certain restrictions. Conversely, when working on sensitive projects or unfamiliar codebases, you might tighten controls to ensure nothing happens without your explicit approval.
These settings are managed in your opencode.json configuration file:
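A sketch of a balanced policy of this kind is shown below. The permission keys supported can vary by OpenCode version, so treat the exact field names as illustrative and check the current configuration schema:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "permission": {
    "read": "allow",
    "write": "ask",
    "edit": "ask",
    "bash": "ask"
  }
}
```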
Each permission can be set to one of three values:

- allow — the agent performs the action automatically, without asking.
- ask — the agent pauses and requests your approval before acting.
- deny — the agent is blocked from performing the action entirely.
The configuration above establishes a balanced security policy. The "read": "allow" line tells OpenCode it can look at any file in your project freely. However, the other settings use "ask", meaning that if the agent wants to create a file, change code, or run a terminal command, it must pause and wait for your approval. This creates a human-in-the-loop workflow where the AI proposes actions while you remain the final authority.
In this lesson, we moved beyond basic commands and took control of the OpenCode environment. You learned how to:

- Monitor your token usage, caching efficiency, and costs from the terminal.
- Switch between AI model tiers with the /models command to match the model to the task.
- Toggle between the Build Agent and the Plan Agent with the Tab key.
- Set a default model and configure security permissions in your opencode.json file.
These configuration skills allow you to tailor the AI's behavior to match your personal workflow and safety requirements. In the upcoming practice exercises, you will experiment with checking your stats, switching between different AI models, and navigating the differences between the Plan Agent and Build Agent.
