Introduction: Connecting OpenCode to the Outside World

Welcome back! In our last lesson, we focused on securing your OpenCode environment using advanced permission patterns. Now that your workspace is safe, it is time to connect OpenCode to the outside world. In this lesson, we will explore how to interact with external data sources like APIs, local databases, and web content.

Connecting to external data matters because real-world applications rarely exist in isolation; they need live data, user records, and web resources to function properly. However, bringing in outside data requires careful handling to ensure security, maintain reliability, and catch errors before they crash your program. By the end of this lesson, you will know exactly how to guide OpenCode to fetch, parse, and safely manage external data to make your projects more dynamic and powerful. Knowing these patterns also allows you to write specific, effective prompts for OpenCode and to critically evaluate whether its output is architecturally sound and secure. Let's get started!

Fetching Data from Public APIs

A Public API (Application Programming Interface) is an open web endpoint that allows your code to request and receive data from another service. Knowing how to use OpenCode to fetch data from APIs matters because it allows you to easily integrate live, constantly updating information — like weather forecasts or financial data — directly into your applications.

Let's look at how we can ask OpenCode to write a Python script that fetches the latest currency exchange rates and saves them to a file. We will use the requests library, which comes pre-installed in your CodeSignal environment.

In this first chunk, we define our target url and use a try block to make our web request. We include a timeout=10 parameter, which tells the script to stop waiting if the server takes longer than 10 seconds to respond. This is crucial for keeping your program from freezing forever if the API goes down. The raise_for_status() method automatically checks for common HTTP errors (like a 404 Not Found) and raises an exception if something goes wrong.
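A minimal sketch of that chunk might look like the following. The URL here is a placeholder, not a real exchange-rate service; substitute the API you actually use.

```python
import json

import requests

# Placeholder endpoint for illustration; swap in your real exchange-rate API.
url = "https://api.example.com/v1/rates?base=USD"


def fetch_rates(url):
    try:
        # timeout=10 stops the script from waiting forever if the server stalls.
        response = requests.get(url, timeout=10)
        # Raises requests.HTTPError for 4xx/5xx responses (e.g. 404 Not Found).
        response.raise_for_status()
        return response.json()
    except requests.RequestException as exc:
        print(f"Request failed: {exc}")
        return None


if __name__ == "__main__":
    rates = fetch_rates(url)
    if rates is not None:
        # Save the parsed JSON to a local file for later use.
        with open("rates.json", "w") as f:
            json.dump(rates, f, indent=2)
```

Because every network problem is funneled into `requests.RequestException`, the caller only has to check for `None` instead of handling each failure mode separately.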

Working with Authenticated APIs

An Authenticated API is a web service that requires a secret key or token to prove who you are before it gives you data. Handling authenticated APIs securely matters because if you accidentally paste your secret keys directly into your code, anyone who sees your code can steal those keys and impersonate you, potentially running up massive usage bills.

To solve this, we use Environment Variables. These are hidden values stored securely in your environment rather than in your actual code files. Let's see how OpenCode can write a script that safely uses an API key to fetch user data.

In this setup, we use Python's os.getenv() function to look for an environment variable named EXAMPLE_API_KEY. If the key is missing, we immediately stop the program and raise a clear ValueError. This prevents the script from making a web request that is guaranteed to fail.
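A short sketch of that setup, assuming the variable name EXAMPLE_API_KEY from the lesson:

```python
import os


def get_api_key():
    # EXAMPLE_API_KEY is the variable name used in this lesson;
    # real services will document their own expected name.
    api_key = os.getenv("EXAMPLE_API_KEY")
    if not api_key:
        # Fail fast with a clear message instead of sending a doomed request.
        raise ValueError(
            "EXAMPLE_API_KEY is not set. Export it before running this script."
        )
    return api_key
```

Failing fast here gives you one obvious error at startup rather than a confusing authentication failure buried in a later web request.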

Now, let's attach that key to our web request.

To prove who we are to the API, we create a dictionary called headers and place our secret key inside it using the standard Authorization header format. We then pass this dictionary into our request call. This secure method ensures your keys never show up in your source code, keeping your accounts safe from accidental exposure while still allowing you to interact with protected resources.
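One common shape for this step is sketched below. The Bearer scheme and the endpoint URL are illustrative assumptions; check your API's documentation for the exact header format it expects.

```python
import requests


def build_auth_headers(api_key):
    # "Bearer" is the most common scheme, but some APIs use a custom header
    # such as "X-API-Key"; always confirm with the provider's docs.
    return {"Authorization": f"Bearer {api_key}"}


def fetch_user_data(api_key):
    # Hypothetical endpoint for illustration only.
    response = requests.get(
        "https://api.example.com/v1/users/me",
        headers=build_auth_headers(api_key),
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```

Notice that the key only ever arrives as a function argument; nothing secret is written into the source file itself.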

Handling Rate Limits

Rate Limiting is when an API restricts how many requests you can send in a given time window. When you exceed that limit, the server responds with an HTTP 429 Too Many Requests status code instead of your data. This matters because a script that ignores 429 errors will either crash silently or get permanently blocked by the service. Handling rate limits gracefully keeps your integrations reliable.

The correct response to a 429 is to pause and retry rather than crashing immediately. Many APIs include a Retry-After header telling you exactly how many seconds to wait before your next attempt.

In this function, we loop up to max_retries times. On each attempt, we check whether the status code is 429. If it is, we read the Retry-After header value. If that header is missing, we fall back to exponential backoff using 2 ** attempt, which waits 1 second on the first retry, 2 seconds on the second, and 4 seconds on the third. The time.sleep() call pauses the script before trying again.
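That retry pattern can be sketched as follows; the function name and default of three retries are illustrative choices.

```python
import time

import requests


def fetch_with_retry(url, max_retries=3):
    for attempt in range(max_retries):
        response = requests.get(url, timeout=10)
        if response.status_code != 429:
            # Not rate limited: surface any other HTTP error, or return data.
            response.raise_for_status()
            return response.json()
        # Prefer the server's hint; otherwise back off exponentially (1s, 2s, 4s).
        wait = int(response.headers.get("Retry-After", 2 ** attempt))
        print(f"Rate limited. Waiting {wait}s before retry {attempt + 1}...")
        time.sleep(wait)
    raise RuntimeError(f"Still rate limited after {max_retries} retries.")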

Querying Local Databases Safely

A Local Database, such as SQLite, is a file-based system used to store and organize data efficiently on your own machine. Querying local databases safely matters because allowing an AI tool like OpenCode to run unrestricted SQL commands could easily result in accidentally deleting or corrupting your important project data.

The most reliable protection is to open the database connection itself in read-only mode. When using Python's sqlite3 module, you can do this by passing a URI string with a mode=ro parameter and setting uri=True. SQLite enforces this restriction at the driver level: any write operation raises a sqlite3.OperationalError before it touches your data, regardless of how the query string was constructed.

In this first section, we check whether the provided SQL string starts with SELECT or WITH. This gives users a clear error message when they accidentally pass the wrong kind of query. It is not the security control: a WITH clause can contain data-modifying statements on some database engines, and a string check cannot catch every harmful pattern. We include it because it is a useful early warning, but we never rely on it alone.
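Putting the early-warning check and the read-only connection together, a sketch of the whole helper might look like this (the function name is our own):

```python
import sqlite3


def run_read_only_query(db_path, query):
    # Early warning only; the real protection is the read-only connection below.
    normalized = query.strip().upper()
    if not normalized.startswith(("SELECT", "WITH")):
        raise ValueError("Only SELECT (or WITH ... SELECT) queries are allowed.")

    # mode=ro makes SQLite reject any write at the driver level.
    uri = f"file:{db_path}?mode=ro"
    conn = sqlite3.connect(uri, uri=True)
    try:
        return conn.execute(query).fetchall()
    finally:
        conn.close()
```

Even if a cleverly disguised write slipped past the string check, the read-only connection would raise sqlite3.OperationalError before any data changed.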

Writing to Local Databases Safely

So far we have only read from databases. Sometimes, however, you need to persist fetched data — for example, saving the exchange rates we retrieved earlier into a local SQLite table so you can query them later. Writing to a database safely requires two habits: using parameterized queries to prevent SQL injection, and calling commit() to make the changes permanent.

This function opens a normal (read-write) connection and creates the table if it does not already exist. With Python's sqlite3 defaults, DDL statements like CREATE TABLE are committed automatically, but we still call conn.commit() as a habit: the data inserts that follow do run inside an implicit transaction, and without a commit they would be lost as soon as the connection closes.
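A minimal sketch of that setup step; the table and column names (exchange_rates, currency, rate) are our own illustrative choices:

```python
import sqlite3


def create_rates_table(db_path):
    # Normal read-write connection, since we intend to persist data.
    conn = sqlite3.connect(db_path)
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS exchange_rates (
            currency TEXT NOT NULL,
            rate REAL NOT NULL
        )
        """
    )
    # Persist any pending changes before handing the connection back.
    conn.commit()
    return conn
```

Using CREATE TABLE IF NOT EXISTS makes the function safe to call on every run, whether or not the table already exists.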

Now let's insert the rates we fetched earlier.

The ? placeholders in the INSERT statement are the critical safety feature. Instead of building the query string by pasting values directly into the SQL text, we pass the actual values as a separate tuple. The database driver escapes them automatically, making it impossible for malicious data to alter the query structure. executemany() inserts all rows in a single call, and the subsequent commit() writes every row to disk atomically: either all rows are saved or none are.
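A sketch of that insert step, assuming the exchange_rates table from the setup step and rates supplied as a currency-to-value dictionary (the sample values are made up):

```python
def save_rates(conn, rates):
    # rates is a dict like {"EUR": 0.92, "GBP": 0.79} (illustrative values).
    rows = list(rates.items())
    # The ? placeholders let the driver escape values safely (no SQL injection).
    conn.executemany(
        "INSERT INTO exchange_rates (currency, rate) VALUES (?, ?)",
        rows,
    )
    # One commit persists the whole batch atomically.
    conn.commit()
```

Contrast this with string formatting such as f"... VALUES ('{currency}', {rate})", which would let a hostile currency string rewrite the query itself.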

Fetching and Parsing Web Content

Web Scraping is the process of fetching raw HTML code from a website and extracting the readable text. Fetching web content matters because OpenCode sometimes needs external context — like the latest technical documentation — to write accurate code for new libraries or updated tools.

To accomplish this, we combine the requests library to fetch the page and BeautifulSoup to parse the messy HTML into clean text. Let's look at how we can write a script to fetch a documentation page.

First, we define a custom User-Agent inside our headers. Many websites will block automated scripts if they do not identify themselves, so providing a clear User-Agent helps ensure our request succeeds. We then fetch the url with a 15-second timeout and pass the raw HTML response into BeautifulSoup, which builds a structured tree of the webpage that we can easily search.
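A sketch of that fetch-and-parse step; the User-Agent string is our own placeholder, and many sites have their own policies on what identification they expect.

```python
import requests
from bs4 import BeautifulSoup


def fetch_page(url):
    # Identify the script clearly so sites are less likely to block it.
    headers = {"User-Agent": "OpenCodeLessonBot/1.0 (learning exercise)"}
    response = requests.get(url, headers=headers, timeout=15)
    response.raise_for_status()
    # Build a searchable tree from the raw HTML.
    return BeautifulSoup(response.text, "html.parser")
```

The returned soup object supports queries like soup.find("h1") or soup.select("pre code"), which is what makes the next cleanup step easy.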

Now we need to isolate the actual text and remove the junk.
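One common way to do that cleanup is sketched below; the exact list of tags worth stripping (scripts, styles, navigation, footers) is a judgment call that varies by site.

```python
from bs4 import BeautifulSoup


def extract_clean_text(soup):
    # Remove non-content elements before extracting text.
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()
    # separator adds line breaks between blocks; strip trims stray whitespace.
    return soup.get_text(separator="\n", strip=True)
```

The decompose() call deletes each unwanted tag and its contents from the tree, so get_text() afterwards returns only the readable documentation text.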

Built-in Tools vs Custom Tools vs MCP Servers

When connecting OpenCode to external data, you have three primary ways to do it: Built-in Tools, Custom Python Scripts, and MCP Servers. Understanding the difference between these options matters because choosing the right approach will save you time, reduce errors, and keep your workspace much cleaner.

Built-in Tools are the default commands OpenCode already knows how to use out of the box, like reading a file or running a simple curl command in the bash terminal. You should use built-in tools for quick, one-off tasks. For example, if you just need OpenCode to briefly glance at a public API response, asking it to run a quick terminal command is much faster than writing an entire Python program.

Custom Python Scripts are the scripts we have been writing throughout this lesson, using libraries like requests and sqlite3. Use custom scripts when you need to transform data, handle errors explicitly, add retry logic for 429 responses, or manage authentication credentials.

MCP Servers (Model Context Protocol) are advanced, standardized plugins that attach external systems directly to the AI. Use them when you need deep, continuous integration with complex platforms like a cloud database or a CRM service.

To decide which approach to use for any given task, apply this three-question rule:

1. Is this a quick, one-off look at some data? Use a built-in tool.
2. Do you need to transform data, handle errors explicitly, retry failed requests, or manage credentials? Write a custom Python script.
3. Do you need deep, continuous integration with a complex external platform? Set up an MCP server.

Summary and Practice Preview

Great work! In this lesson, you learned how to safely connect OpenCode to the outside world. We covered how to fetch live data from public APIs while handling timeouts, and how to securely pass API keys using environment variables for authenticated services.

You also discovered how to handle 429 Too Many Requests errors using retry logic and exponential backoff, how to protect your local SQLite databases by restricting queries to read-only commands, and how to scrape and clean web documentation using BeautifulSoup. Finally, we discussed a concrete three-question rule for deciding when to rely on built-in tools, custom scripts, or MCP servers.

In the upcoming hands-on exercises, you will use the CodeSignal IDE to practice these exact skills. You will guide OpenCode to fetch live exchange rates, query a local sales database securely, and clean up web documentation for AI usage. Get ready to put your new external data skills to the test!
