Introduction: Working Smarter with OpenCode

Welcome back! In Lesson 3, we learned how to make large-scale changes across a multi-file project safely. You mastered using file references and coordinated refactoring. Now, we are shifting our focus to making those massive changes faster and more efficiently. Working smarter, not harder, is the key to getting the most out of AI tools.

In this lesson, we will focus on optimizing OpenCode for speed and performance. We will cover how the AI's memory works, how to batch your operations into single commands, and how to write highly efficient prompts. You will also learn exactly when to clear the AI's memory by starting a fresh session. By the end of this lesson, you will be able to process large codebase updates without slowing down your environment.

Let's get started!

Understanding Context and Performance

Whenever you chat with OpenCode, it needs to remember what you are discussing. The Context Window is the working memory of the AI, containing the current conversation history and any files it has read during the session. This matters because every time you send a message, OpenCode must re-read that entire memory before answering.

If you load too many files or have an extremely long conversation, this memory becomes bloated. A large context window slows responses, consumes more computing power, and can confuse the AI, causing it to mix up details from older, unrelated tasks.

Instead of a code example here, let's look at a conceptual text example of what happens when context becomes too large.
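Here is a hypothetical session history illustrating the problem (the files and topics are invented for illustration):

```text
Session history so far:
- Long discussion comparing three sorting algorithms
- Read styles/button.css, styles/header.css, ... (15 styling files)
- Refactored a navigation component

Your new question: "Why is my database connection timing out?"
```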

In this example, the AI has to sift through sorting algorithm discussions and 15 styling files just to answer a simple database question. This bloated context leads to slower performance and a higher probability of the AI providing a slightly off-target answer. Keeping your sessions lean and focused ensures OpenCode stays lightning-fast and highly accurate.

Batching Operations Across Files

When you have a repetitive task, you might be tempted to ask the AI to do it one file at a time. Batching Operations is the practice of combining multiple file reads and modifications into a single prompt. This matters because every individual request you make requires OpenCode to pause, process, and reply. By asking OpenCode to handle multiple files at once, you save time, reduce the number of requests, and keep your context window much cleaner.

Let's look at an example of how a beginner might ask OpenCode to update their database models.
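The one-at-a-time conversation might look like this (the model filenames are hypothetical):

```text
You: Read models/user.py
You: Read models/product.py
You: Read models/order.py
You: Now add a created_at timestamp field to each of those models.
```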

This approach requires four separate back-and-forth interactions. The AI has to respond to each individual Read command before it even begins the actual work.

Now, let's look at the efficient, batched version.
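The same work as one batched prompt (again assuming the models live in a models/ directory):

```text
You: Read all the model files in the models/ directory and add a
created_at timestamp field to each model.
```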

By batching the command, OpenCode reads the directory, finds the relevant files, and updates them all in just one step. The output will be a single, concise response detailing the changes made to all three files simultaneously. This results in faster execution, fewer delays, and a much cleaner conversation history.

Writing Efficient Single-Command Prompts

Similar to batching files, you should also batch your intent. Single-Command Prompts are instructions that tell OpenCode exactly what to investigate and what to change in the very same message. This matters because asking exploratory questions like "What is in this file?" followed by "Okay, now fix it" doubles the time required to obtain results. Giving the AI the full context and the final goal immediately allows it to plan and execute the change in one fluid motion.

Let's say we want to add type hints to a utility file.
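A single-command prompt for this task might look like the following (utils.py is a hypothetical file):

```text
You: Read utils.py, then add type hints to every function based on how
each function is used.
```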

In this single command, OpenCode knows it needs to read utils.py, figure out what the functions do, and modify the code. It skips the unnecessary step of summarizing the file for you first.
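As a hedged sketch, here is the kind of change such a prompt might produce. The slugify helper is hypothetical, invented only to illustrate a before-and-after:

```python
# Before: no type hints
def slugify(title):
    return title.lower().strip().replace(" ", "-")

# After: the signature is annotated based on how the function is used,
# and a short docstring is added while the behavior stays identical
def slugify(title: str) -> str:
    """Convert a post title into a URL-friendly slug."""
    return title.lower().strip().replace(" ", "-")

print(slugify("Working Smarter with OpenCode"))  # working-smarter-with-opencode
```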

We can apply this same strategy to larger structural changes across multiple directories.
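A project-wide version of the same idea might look like this (the module rename is hypothetical):

```text
You: Scan every file in the tests/ folder and update all imports of the
old helpers module to point to the new utils module instead.
```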

Here, we give OpenCode a clear, project-wide goal. It will scan the tests/ folder and rewrite the import statements in one go. You can also use this for adding standardized logic.
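For instance, to add standardized error handling in a single command (file_handler.py is a hypothetical module):

```text
You: Read file_handler.py and wrap each file operation in a try/except
block that logs the error and re-raises it.
```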

By skipping the "What do the functions do?" prompt, we get straight to the point. OpenCode will immediately read the file and inject the required error handling blocks. Being direct is the best way to get fast results.
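As a hedged sketch, the injected error handling might look something like this. The load_settings function and its JSON settings file are hypothetical, chosen only to show the pattern:

```python
import json
import logging

logger = logging.getLogger(__name__)

def load_settings(path: str) -> dict:
    """Read a JSON settings file, logging and re-raising any failure."""
    try:
        with open(path) as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError) as exc:
        # Log the failure with enough context to debug, then re-raise
        # so callers can still decide how to handle it.
        logger.error("Failed to load settings from %s: %s", path, exc)
        raise
```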

Knowing When to Start Fresh

Even with excellent prompting, your conversation will eventually become long. Session Management is the practice of knowing exactly when to clear the AI's memory and start a new chat. This matters because, as we discussed earlier, an overloaded context window slows the AI and leads to confusion. If you switch from fixing a database bug to building a new user interface component, the AI no longer needs to remember the database bug.

You should start a new session when you switch to an unrelated task, begin a totally new feature, or notice that the AI's responses are becoming slow or confusing. You should keep your current session only if you are continuing the exact same feature, debugging related code, or refining changes you just made.

There are two ways to start a fresh session inside OpenCode: you can type the /new command directly into the chat, or you can use the keyboard shortcut. The default leader key in OpenCode is Ctrl+X, so to start a new session with the keyboard, press Ctrl+X first, then N. Both methods work the same way: they instantly wipe the slate clean, closing the old context window and opening a fresh, highly responsive session. Making a habit of starting a new session between unrelated tasks is the easiest way to ensure OpenCode always runs at peak speed.

Verifying Batched Changes Safely

Because you are now moving faster and asking the AI to change multiple files at once, checking its work is crucial. Verification is the process of reviewing the code modifications before finalizing them. This matters because when you batch operations, a single misunderstanding by the AI can insert errors across dozens of files simultaneously. Fast tools require strong safety nets, and we can use shell commands directly in OpenCode to provide that safety.

As a quick reminder from our previous lesson, you can verify your changes using standard version control commands.
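Inside the OpenCode chat, prefix the shell command with an exclamation mark to run it directly:

```text
!git diff
```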

When you run this command inside the OpenCode chat, it prints out a line-by-line comparison of every uncommitted change in your project.
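For instance, if the AI added a __repr__ method to a hypothetical User model, the output might look like this:

```text
diff --git a/models/user.py b/models/user.py
--- a/models/user.py
+++ b/models/user.py
@@ -8,3 +8,6 @@ class User(Base):
     id = Column(Integer, primary_key=True)
     name = Column(String)
     email = Column(String)
+
+    def __repr__(self):
+        return f"<User {self.name}>"
```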

This output clearly shows the __repr__ method that the AI added in green (indicated by the + symbols). Reviewing this diff ensures that your single-command prompt did exactly what you wanted without accidentally deleting anything important. Always verify your batched changes before moving on to the next task.

Summary and Practice Preview

Excellent work! In this lesson, you learned how to optimize OpenCode to work much faster. We explored how the context window works and why keeping it clean improves performance. You learned how to batch operations together to modify multiple files at once and how to write efficient single-command prompts that avoid unnecessary conversational steps.

You also discovered the importance of session management and how to use keyboard shortcuts like Ctrl+X, N to start fresh when switching tasks. Finally, we reviewed the importance of using !git diff to safely verify batched updates.

Coming up next, you will jump into the CodeSignal IDE to practice these optimization strategies firsthand. You will batch import updates, apply error handling across multiple files, and manage your session context. Let's head over to the exercises and speed up your workflow!
