Welcome back! In our previous lesson, we focused on making your manual interactions with OpenCode lightning-fast. You learned how to batch commands, write efficient single-prompt instructions, and safely manage the AI's memory window.
Now, we are taking the next step: moving from manual efficiency to automated workflows. In this lesson, we will extend OpenCode's capabilities by standardizing how the AI behaves across your entire project. We will explore the critical difference between deterministic automation — things that should happen automatically every time — and AI-assisted tasks that should only happen when you ask for them.
By the end of this lesson, you will know exactly how to encode complex workflows into your project instructions, build custom slash commands, and set up rock-solid development pipelines. Let's dive in!
As a reminder from an earlier course, Project Instructions are rules defined in an AGENTS.md file that tell the AI how to format and write code for your specific project. Previously, we used this file for simple coding standards like language versions and naming conventions.
Now, we are going to expand it to include Multi-Step Workflows. These are step-by-step processes the AI must follow when performing complex tasks like testing or committing code. This matters because giving the AI a repeatable checklist ensures it behaves consistently every time you start a new session, eliminating the need to type out the same long prompt repeatedly.
Let's look at how we can add a testing workflow to our AGENTS.md file.
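A workflow section might look like the following sketch. The exact commands are assumptions based on a Python project using `pytest`; adapt them to your own test runner:

```markdown
## Testing Workflow

When asked to run or fix tests, follow these steps in order:

1. Run the full test suite with `pytest`.
2. If a test fails, read the complete traceback before changing any code.
3. Re-run only the failing test with `pytest path/to/test_file.py::test_name -v` to confirm the failure.
4. Fix the underlying code, not the test, unless the test itself is clearly wrong.
5. Re-run the full suite with `pytest` to confirm nothing else broke.
```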
In this snippet, we give the AI a clear, five-step process for handling tests. Instead of blindly guessing what to do when a test fails, the AI now knows exactly which terminal commands to run and in what order. We can apply this same logic to our commit process.
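A commit workflow could be sketched like this, again assuming `black`, `flake8`, and `pytest` as the project's tooling:

```markdown
## Commit Workflow

When asked to commit changes, follow these steps in order:

1. Review the staged diff with `git diff --staged`.
2. Run the formatter check: `black --check .`
3. Run the linter: `flake8 .`
4. Run the full test suite: `pytest`
5. If any check fails, fix the issues and start again from step 1. Never commit failing code.
6. Write a short, imperative commit message and run `git commit`.
```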
By encoding this commit workflow, the AI acts as an automated quality gate. It will refuse to commit broken code because step five explicitly tells it to fix issues first. This keeps your project clean and prevents easily avoidable mistakes from reaching your commit history.
As we build these workflows, we must clearly understand the difference between two types of tasks. Deterministic Automation refers to processes that produce the same predictable result every time they run. This includes code formatting, style linting, type checking, and running unit tests. This matters because these predictable, strict checks belong in automated gates, like Git hooks and continuous integration pipelines, where they act as rigid pass-or-fail rules.
On the other hand, AI-Assisted Automation refers to tasks where the AI uses deep reasoning to generate unique, human-like output. This includes tasks like analyzing bugs, suggesting code refactors, or writing code reviews. This matters because AI outputs are inherently non-deterministic; the AI might hallucinate or suggest a structural change that requires careful human judgment.
Because AI outputs require a human to review them, you should never put AI-assisted tasks into automated gates. If you force an AI code review script to run automatically every time you save a file, it will slow down your computer and potentially block your work with incorrect suggestions. Instead, AI tasks should always remain on-demand, meaning they run only when you explicitly type a command to trigger them.
A Pre-Commit Hook is a small script that runs automatically every time you try to create a commit in version control. It checks your code for errors before allowing the commit to complete. This matters because it acts as a final line of defense, preventing broken or messy code from entering your project history. Furthermore, these hooks serve as an excellent feedback loop for OpenCode. If the AI tries to commit bad code, the hook will fail, and the AI will read the error message and immediately attempt to self-correct.
Let's set up a deterministic hook in the .git/hooks/pre-commit file. (Note: In the CodeSignal environment, tools like black and pytest are pre-installed for you, but in the real world, you would install them via your terminal first). We begin by defining the formatting and linting steps.
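The first half of the hook might look like this sketch (assuming a Python project checked with black and flake8):

```shell
#!/bin/sh
set -e  # Stop the script immediately if any command below fails

echo "Running formatter check..."
black --check .   # Fails if any file is not formatted

echo "Running linter..."
flake8 .          # Fails on style violations
```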
In this first chunk, we use set -e to ensure the script stops immediately if any command fails. We then run black in check mode and flake8 to verify styling. Next, we add our type checking and tests.
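The second half of the same hook could continue like this:

```shell
echo "Running type checker..."
mypy .            # Fails on type errors

echo "Running tests..."
pytest -x         # -x stops at the very first failing test

echo "All pre-commit checks passed!"
```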
Here, we run mypy to catch type errors and pytest to run our unit tests. We use the -x flag for pytest so it stops on the very first failure, saving us time. Remember to make this file executable by running chmod +x .git/hooks/pre-commit in your terminal. If all checks pass, you will see an output similar to this:
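Assuming the hook echoes a progress line before each step, a successful run might print something like this (interleaved with each tool's own output, trimmed here for brevity):

```text
Running formatter check...
Running linter...
Running type checker...
Running tests...
All pre-commit checks passed!
```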
A Wrapper Script is a custom terminal command that groups several other commands or prompts together into one easy-to-use shortcut. This matters because it allows you to trigger complex AI tasks with a single word in your terminal, rather than typing out a long prompt every time. Because these rely on the AI, they are non-deterministic and should only be run manually when you are ready to review the results.
Let's create an on-demand script called opencode-format.sh that checks for formatting issues and asks the AI to fix them interactively if found.
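The first half of the script might look like this sketch:

```shell
#!/bin/sh
# opencode-format.sh -- check formatting, and ask the AI to fix any issues.

# Capture black's check output; the if-statement tests its exit status.
if FORMAT_OUTPUT=$(black --check . 2>&1); then
    echo "Formatting is clean. Nothing to do."
    exit 0
fi
```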
In this first half, the script runs our deterministic black tool and captures the output. If the code is already perfect, it exits successfully. If it finds errors, we want to hand that error output over to OpenCode.
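The second half hands the captured errors to OpenCode. The exact prompt wording here is illustrative:

```shell
# Send the captured errors to the AI as an interactive fix request.
opencode run "black reported the following formatting problems. \
Fix each affected file, then re-run 'black --check .' to confirm: $FORMAT_OUTPUT"
```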
Here, we use the opencode run command-line tool to send a prompt directly to the AI, injecting the error message we captured earlier. This script beautifully bridges the gap between our strict automated tools and our flexible AI assistant.
We can create a similar script called opencode-review.sh to ask the AI for advice right before we open a Pull Request.
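A minimal sketch of this review script, assuming your default branch is named `main`:

```shell
#!/bin/sh
# opencode-review.sh -- ask OpenCode to review changed Python files before a PR.

# List Python files that differ from the main branch.
CHANGED_FILES=$(git diff --name-only main -- '*.py')

if [ -z "$CHANGED_FILES" ]; then
    echo "No Python files changed relative to main."
    exit 0
fi

# Ask for advice only; the AI should not modify anything automatically.
opencode run "Review the following changed files for hidden bugs and \
edge cases. Do not modify anything; just report your findings: $CHANGED_FILES"
```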
This script finds all the Python files you changed compared to the main branch and asks the AI to review them for hidden bugs. Because the AI might suggest changes you do not actually want, this script must remain completely manual and should never be placed in your automated hooks.
Custom Slash Commands are special markdown files that define a specific word, like /commit, which you can type directly into the OpenCode chat window to trigger a predefined set of instructions. This matters because it brings the power of our workflows directly into the chat interface, reducing friction and standardizing how the AI behaves. By creating these commands, you can execute complex routines with just a few keystrokes.
To create one, we make a new file inside the .opencode/commands/ directory in our project. Let's look at .opencode/commands/commit.md.
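The file might look like this sketch (the description text is an example):

```markdown
---
description: Commit staged changes using the project workflow
---

Follow the Commit Workflow defined in AGENTS.md.
Run every check in order, fix any failures, and only create
the commit once all checks pass.
```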
In this file, we provide a short description inside the YAML frontmatter at the very top, which is what you will see in the chat's autocomplete menu. Below that, we give the AI its instructions.
Notice how we explicitly tell the AI to "Follow the Commit Workflow defined in AGENTS.md". This is a crucial practice for keeping your rules organized. By referencing the existing AGENTS.md file, we ensure that if we ever update our testing or commit rules, we only have to change them in one central place.
A CI/CD Pipeline (Continuous Integration/Continuous Deployment) is an automated series of checks, usually running on a remote server, that executes your deterministic automation every time you push code to a platform like GitHub. This matters because it ensures that no code merges into your main project without passing all required tests, regardless of whether a human or an AI wrote it. A well-designed pipeline separates different types of work into distinct jobs to make failures easier to read and diagnose.
Let's look at a GitHub Actions workflow file located at .github/workflows/ci.yml. We will skip the basic setup steps and focus on the two main jobs. (Again, these tools are pre-installed in your CodeSignal environment, but the pipeline installs them on the remote server).
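The quality job might look like this sketch, with the checkout and Python setup steps omitted as noted:

```yaml
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      # ...checkout, Python setup, and tool installation steps omitted...
      - name: Check formatting
        run: black --check .
      - name: Lint
        run: flake8 .
      - name: Type check
        run: mypy .
      - name: Run tests
        run: pytest -x
```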
In this first job, called quality, we run the exact same deterministic tools we used in our pre-commit hook. This ensures our local environment matches our remote server. Next, we add a completely separate job specifically for security.
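The security job continues the same `jobs:` section of the workflow file. This is a sketch; check each tool's documentation for its current command-line interface:

```yaml
  security:
    runs-on: ubuntu-latest
    steps:
      # ...checkout, Python setup, and tool installation steps omitted...
      - name: Scan code for vulnerabilities
        run: bandit -r .
      - name: Check dependencies for known flaws
        run: safety check
```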
In the security job, we use bandit to scan our Python code for common vulnerabilities (like hardcoded passwords) and safety to check whether any of our installed dependencies have known security flaws. By separating the quality job from the security job, if the pipeline fails, you will immediately know whether the AI wrote poorly formatted code or accidentally introduced a dangerous security risk.
Great job! In this lesson, you learned how to upgrade OpenCode from a simple chat assistant into a deeply integrated and highly automated development tool. We explored how to encode complex, multi-step routines into AGENTS.md so the AI follows your project rules every time.
You learned the critical difference between deterministic checks — like your pre-commit hooks and CI/CD pipelines — and AI-assisted wrapper scripts that require manual human judgment. You also built custom slash commands to make launching these workflows as simple as typing a single word into the chat interface.
Now, you are ready to put these concepts into practice. In the upcoming exercises within the CodeSignal IDE, you will build your own pre-commit hooks, create an AI-powered code review script, and test out custom slash commands firsthand. Let's jump into the practice environment and start automating!
