Welcome to your first lesson! When you join an existing software project, you have to learn its rules, its structure, and its coding style. OpenCode needs to learn these things too. Proper configuration ensures that OpenCode acts as a helpful assistant rather than a confused teammate.
In this lesson, we will apply the configuration skills you have already learned to a Python e-commerce API. You will see how the same two configuration files you have worked with before fit into a real codebase. opencode.json controls the AI model, its permission levels, and which files to ignore. AGENTS.md teaches the agent about project standards and workflows. By the end of this lesson, you will know exactly how to guide OpenCode to write code that perfectly matches an existing project. Let's get started!
Before we can tell OpenCode how to behave, we need to understand the project ourselves. Project exploration is the process of looking through folders and files to understand how an application is built and organized. This matters because you cannot configure tools or write good code if you do not know where things belong or what frameworks are being used. Let's look at a simplified structure of our e-commerce API, which you might see by running a command like ls -R src/ in your terminal.
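The listing below is an illustrative sketch of what that command might show — the individual file names are hypothetical, but the layout matches the description that follows (an `api/` folder for web routes, a `models/` folder for database setup, and `tests/` outside the source tree):

```text
src/
├── api/
│   ├── __init__.py
│   ├── auth.py
│   └── products.py
├── models/
│   ├── __init__.py
│   └── database.py
├── middleware.py
└── main.py
tests/
├── test_auth.py
└── test_products.py
requirements.txt
requirements-dev.txt
```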
In this structure, we can clearly see that the src/ folder holds the main application. It is thoughtfully separated into an api/ folder for the web routes and a models/ folder for the database setup. The tests/ folder sits completely outside of the source code, which is a common and clean Python testing pattern. We can also peek at the requirements-dev.txt file to see what tools the project relies on.
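Based on the tools named in this lesson, a minimal requirements-dev.txt for this project might look like the following (exact version pins omitted, since they are not shown here):

```text
pytest
black
flake8
```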
Seeing these libraries tells us a lot about the project's ecosystem. We know the team uses pytest for their testing framework, black for formatting, and flake8 for linting. When working inside an OpenCode session, you can ask the AI to do this exploration for you. Asking it to explore the project structure will prompt it to read these files and give you a helpful breakdown of the routing and testing conventions already in place.

As you have already seen, the opencode.json file is the main settings file for your OpenCode environment. It defines which AI model powers the assistant, how it connects to the provider, what permissions it has on your machine, and which files it should ignore. Now let's look at what this file looks like when applied to a real project. On CodeSignal, this environment is already set up for you behind the scenes, but it is important to understand how to build it for your own personal projects.
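A sketch of the first half of such a file might look like this. Treat the exact keys, especially the provider options and the `{env:...}` substitution syntax, as assumptions to verify against the current OpenCode documentation:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4-5",
  "provider": {
    "anthropic": {
      "options": {
        "apiKey": "{env:ANTHROPIC_API_KEY}"
      }
    }
  }
}
```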
In the first half of the configuration file, we specify that we want to use the claude-sonnet-4-5 model provided by Anthropic. We also tell OpenCode to grab the API key securely from our environment variables instead of hardcoding it in plain text. Hardcoding secret keys is a major security risk, so fetching them from the environment is the best practice. Next, we need to add the permission settings to this file.
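The same principle applies in your own Python code: read secrets from the environment at runtime instead of writing them into source files. A minimal sketch (the variable name `ANTHROPIC_API_KEY` is the conventional one for Anthropic keys):

```python
import os

# Fetch the key from the environment; never hardcode it in source or config.
api_key = os.environ.get("ANTHROPIC_API_KEY", "")
if not api_key:
    # Fail loudly in real code; here we just warn for illustration.
    print("Warning: ANTHROPIC_API_KEY is not set")
```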
The permission block acts as your safety net. We set edit to ask so that if OpenCode needs to create a new file or modify existing code, it will pause and ask for your approval first. We also set bash to ask, which means the agent must get explicit permission before running terminal commands on your machine.
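Added to opencode.json, the permission block might look like this (the key names edit and bash are assumptions based on OpenCode's permission schema; both set to ask require your approval before the corresponding action):

```json
{
  "permission": {
    "edit": "ask",
    "bash": "ask"
  }
}
```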
As you have already learned, the AGENTS.md file is a Markdown document written specifically to teach AI agents about your project's unique rules. Every development team does things a little differently, and writing down your coding standards prevents the AI from guessing and making frustrating mistakes. It is also worth recalling that AGENTS.md provides guidance, not strict enforcement. The AI reads it to understand what it should do, but deterministic tools like Git hooks or Continuous Integration (CI) pipelines are what actually enforce those rules. Let's look at how we apply this to define the error-handling standard for our e-commerce API.
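An AGENTS.md section for that standard might read like the sketch below. The Flask-style decorator and helper names are assumptions for illustration, since this lesson does not show the project's actual framework code:

````markdown
## Error Handling

All API routes must follow this pattern:

```python
from sqlalchemy.exc import SQLAlchemyError

@app.route("/products", methods=["POST"])
def create_product():
    try:
        # ... perform the database work ...
        db.session.commit()
        return jsonify({"status": "created"}), 201
    except SQLAlchemyError:
        db.session.rollback()
        return jsonify({"error": "Internal server error"}), 500
```

- Catch `SQLAlchemyError` specifically; never use a bare `except`.
- Always roll back the database session on failure.
- Return errors as JSON and never expose internal details to clients.
````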
This section tells OpenCode exactly how to structure new web routes. Instead of the AI using a generic try/except block, it now knows to specifically watch out for database errors using SQLAlchemyError and to roll back the database session. It also establishes that error messages should be formatted cleanly as JSON. We can also add detailed workflows to guide how the AI should test its code.
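A testing workflow section might look like this sketch, built from the tools listed in the dev requirements (the exact command flags are illustrative assumptions):

```markdown
## Bug-Fix Workflow

Before changing any code:

1. Reproduce the failure first: `pytest tests/ -v`
2. Make the smallest change that fixes the failing test.
3. Re-run the full suite: `pytest tests/`
4. Check style before finishing: `flake8 src/` and `black --check src/`
```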
When OpenCode needs to fix a bug, this workflow tells it the exact terminal commands to run. It forces the AI to check the current state of the application before making any blind changes. Remember, while this file guides the AI to write better code, tools like pytest and flake8 (which we saw in our dev requirements) are the actual enforcers that decide if the final code is acceptable to merge into the project.
You have already seen how OpenCode watches your project's file system to stay aware of changes as you work, and how the watcher.ignore block inside opencode.json controls which files and folders it monitors. Now let's see exactly which patterns make sense for a Python project like this one.
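Here is the first snippet of those ignore patterns as they might appear in opencode.json (the exact watcher.ignore key layout is an assumption to check against the OpenCode docs):

```json
{
  "watcher": {
    "ignore": [
      "venv/**",
      ".venv/**",
      "**/__pycache__/**",
      "*.py[cod]"
    ]
  }
}
```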
Recall that the patterns inside watcher.ignore use glob syntax, the same pattern language used by .gitignore. A pattern like venv/** means "match everything inside the venv/ folder, at any depth," and a pattern like *.py[cod] means "match any file whose extension is .pyc, .pyo, or .pyd." Choosing the right patterns here serves two important goals: focus and security. If OpenCode monitors thousands of files inside a massive virtual environment, it wastes processing power and might get confused by irrelevant code. More critically, we never want the AI tracking changes in local secret files that contain passwords or private keys.
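You can preview how the character-class pattern behaves with Python's standard-library fnmatch module. This is only a rough stand-in: fnmatch matches single path segments and does not implement the recursive ** semantics of .gitignore-style matchers.

```python
from fnmatch import fnmatch

# "*.py[cod]" matches names ending in .pyc, .pyo, or .pyd,
# but not plain .py source files.
for name in ["models.pyc", "app.pyo", "ext.pyd", "app.py"]:
    print(name, "->", fnmatch(name, "*.py[cod]"))
```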
In this first snippet, we block out the virtual environments (venv/** and .venv/**). These folders contain external third-party libraries that OpenCode does not need to monitor to understand your specific application logic. We also block __pycache__/ folders and compiled *.py[cod] files because they are unreadable machine files that simply clutter the workspace.
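A second set of patterns extends the same ignore list to cover local secrets and binary database files (the specific entries below are assumptions drawn from this lesson's summary of what gets excluded):

```json
{
  "watcher": {
    "ignore": [
      ".env",
      "*.db"
    ]
  }
}
```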
Verification is the process of testing whether OpenCode has successfully read and understood your configuration files. This step matters because you want to be completely sure the AI knows the rules before you ask it to write any actual code for your e-commerce project. The easiest way to verify this is simply by opening an OpenCode chat session and asking it direct questions about the project rules we just set up.
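An illustrative exchange might look like the transcript below. The exact wording of OpenCode's reply will vary; what matters is that it reflects the rules from AGENTS.md:

```text
You: What are the error-handling rules for routes in this project?

OpenCode: Based on AGENTS.md:
  1. Wrap database work in try/except, catching SQLAlchemyError specifically.
  2. Roll back the database session on any failure.
  3. Return errors as JSON without exposing internal details.
```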
When we ask this question, OpenCode responds by listing out the exact rules we defined in our AGENTS.md file. It confirms that it knows the required try/except structure for routes, the rule against exposing internal error details, and the step-by-step testing workflow. You can do the exact same thing to verify that it understands the overall project architecture.
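A prompt like the following (worded however you like) works well for the architecture check:

```text
You: Summarize the architecture of this project and how the src/ folder is organized.
```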
This output proves that OpenCode has successfully explored the src/ directory without getting distracted by ignored files. It correctly identified the web framework in use, found the authentication routes, and noticed the middleware. With this verification complete, you can trust OpenCode to be a helpful, rule-abiding assistant.
Great work! In this lesson, we applied our existing OpenCode configuration knowledge to a real-world e-commerce API. We started by exploring the project structure to understand its existing patterns and libraries. Then, we put together two familiar configuration files tailored to this specific project. The opencode.json file securely set up our AI model, established safety permissions, and used watcher.ignore with glob patterns to keep the file watcher focused by excluding virtual environments, caches, secrets, and binary database files. The AGENTS.md file provided clear guidance to the AI regarding our coding standards and testing workflows. We also successfully verified that OpenCode understood these rules by testing it with prompts.
Coming up next, you will have a series of hands-on practice exercises. You will navigate an actual project environment, create these two configuration files yourself, and interact directly with OpenCode to verify your setup. Get ready to jump into the code!
