Welcome back 👋
Up to now, your /api/products endpoint has been about reliable API behavior: validating query parameters and returning consistent success(...) / error(...) envelopes. In Unit 2, you also introduced domain models and DTOs, so your backend has a clear “source of truth” for product data and a clear boundary around what clients are allowed to send.
In this lesson, we finally connect products to PostgreSQL. You’ll walk through the entire pipeline—route → service → repository → database—and, just as importantly, you’ll learn the core database concepts behind the code: what a pool is, why we use a lazy singleton, how queries run without leaking connections, and how raw Postgres errors become stable API failures.
When the browser calls GET /api/products, your backend doesn’t jump straight into SQL. It moves through layers that keep responsibilities clear:
- Route (HTTP layer): validates query params and turns results into `success(...)` or `error(...)`.
- Service (workflow layer): applies defaults and converts exceptions into structured results (`ok(...)` / `fail(...)`).
- Repository (data access layer): runs SQL and maps database rows into domain `Product` objects.
- DB client (infrastructure layer): manages connection pooling and provides query/transaction helpers.
This structure pays off immediately: your route stays readable, and the database code stays centralized and reusable.
Before you can query Postgres, your app needs to know where the database is and how to authenticate.
That information comes from environment variables (values provided to the running process, not hardcoded in your code). They’re accessed in Node.js through process.env.
The logic for building the connection string is in src/lib/db/client.ts.
- `process.env.DATABASE_URL` is the “all-in-one” connection string. Many platforms (and container setups) provide this single variable because it’s easy to configure and copy around.
- If `DATABASE_URL` is not set, the code falls back to individual variables:
  - `PGHOST`: the hostname where Postgres runs (e.g. `localhost`, `db`, `127.0.0.1`, or a remote host).
  - `PGPORT`: the port Postgres listens on (the default Postgres port is `5432`).
  - `PGUSER` / `PGPASSWORD`: the credentials used to log in.
  - `PGDATABASE`: which database to connect to.
- Each fallback has a default (like `"localhost"` or `"postgres"`) so the project works in common development setups without extra configuration.
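As a sketch, the fallback logic can look like this. The exact defaults and the URL format below are assumptions for illustration, not necessarily the project's literal code in src/lib/db/client.ts:

```typescript
// Hypothetical sketch: derive a connection string from environment variables.
// Defaults shown here are assumed dev-friendly values, not the project's exact ones.
type Env = Record<string, string | undefined>;

function buildConnectionString(env: Env): string {
  // Prefer the all-in-one DATABASE_URL when a platform provides it.
  if (env.DATABASE_URL) return env.DATABASE_URL;

  // Otherwise, fall back to the individual PG* variables with defaults.
  const host = env.PGHOST ?? "localhost";
  const port = env.PGPORT ?? "5432";
  const user = env.PGUSER ?? "postgres";
  const password = env.PGPASSWORD ?? "postgres";
  const database = env.PGDATABASE ?? "postgres";

  return `postgres://${user}:${password}@${host}:${port}/${database}`;
}
```

Passing `process.env` into a function like this (instead of reading it inline) also makes the logic easy to test with fake environments.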
Once we have a connection string, we need a reliable way to execute queries without constantly opening and closing connections.
That’s where a connection pool comes in—and why the code uses a lazy singleton.
What is a connection pool?
A connection pool is a managed set of reusable database connections.
- Opening a new database connection is expensive: it involves network handshakes, authentication, and setup.
- If you created a fresh connection for every request, your app would get slower and your database could get overwhelmed.
- A pool keeps a small set of ready-to-use connections. When your app needs one, it borrows it briefly and returns it when done.
What does “singleton” mean here?
A singleton is a single shared instance of something—in this case, one Pool instance for the whole app process.
- You do not want a new pool per request.
- You want one pool that all requests share.
What does “lazy” mean here?
Lazy initialization means “don’t create it until we actually need it.”
- The pool is not created when the file loads.
- It’s created the first time code calls `getPool()`.
This matters in real apps because you sometimes want the server to boot even if the database isn’t used immediately.
The lazy singleton pool
- `pool` starts as `null`, meaning “not created yet.”
- The first call to `getPool()` creates the pool and caches it in the variable.
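The pattern can be sketched like this. A stub class stands in for `pg`'s `Pool` so the lazy-singleton shape is visible without a live database; the real code would call `new Pool({ connectionString })`:

```typescript
// Stub standing in for pg's Pool, so the pattern runs on its own.
class FakePool {
  constructor(public connectionString: string) {}
}

let pool: FakePool | null = null; // "not created yet"
let createdCount = 0;             // instrumentation for this sketch only

function getPool(): FakePool {
  if (pool === null) {
    // First caller pays the cost of creating the pool...
    createdCount += 1;
    pool = new FakePool("postgres://postgres:postgres@localhost:5432/postgres");
  }
  // ...every caller after that gets the same cached instance.
  return pool;
}
```

The key property: no matter how many requests call `getPool()`, only one pool ever exists in the process, and it is not created until someone actually needs the database.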
Most of the time, repositories don’t want to think about connections at all. They just want to run SQL and get rows back.
That’s what `query<T>()` provides.
Key concepts behind this function
- `text` is your SQL string, often containing `$1`, `$2`, etc. as placeholders.
- `params` is the list of values that fill those placeholders safely.
- `getPool().query(...)` tells the pool:
  - “Find an available connection”
  - “Run this SQL”
  - “Return the connection to the pool”
- Because the pool manages that borrow/return lifecycle, you don’t leak connections.
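A sketch of what such a helper can look like. The fake pool below stands in for `pg`, whose `pool.query(text, params)` resolves to an object with a `rows` array:

```typescript
// Row shape returned by the (stubbed) driver.
type Row = Record<string, unknown>;

// Stub pool: pretends to run SQL by echoing back what it was asked to do.
const fakePool = {
  async query(text: string, params: unknown[]): Promise<{ rows: Row[] }> {
    return { rows: [{ text, params }] };
  },
};

// The helper repositories actually call: run SQL, get typed rows back.
async function query<T extends Row>(text: string, params: unknown[] = []): Promise<T[]> {
  // The pool borrows a connection, runs the SQL, and returns the
  // connection automatically -- no manual acquire/release needed here.
  const result = await fakePool.query(text, params);
  return result.rows as T[];
}
```

With the real driver, the body would be `const result = await getPool().query(text, params); return result.rows;` and the borrow/return lifecycle stays entirely inside the pool.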
Why parameterized queries matter
When your SQL contains placeholders like `$1` and the actual values are passed separately as parameters:

- The database treats the parameters as values, not executable SQL.
- This is a primary defense against SQL injection.
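To make the contrast concrete (the table name and input below are made up for illustration):

```typescript
// A hostile input that would be dangerous if spliced into SQL text.
const userInput = "'; DROP TABLE products; --";

// BAD: string interpolation makes the input part of the SQL itself.
const unsafe = `SELECT * FROM products WHERE name = '${userInput}'`;

// GOOD: placeholders keep SQL and values separate. The driver sends
// them to Postgres independently, so the input is only ever a value
// being compared -- it can never change the query's structure.
const safeText = "SELECT * FROM products WHERE name = $1";
const safeParams = [userInput];
```

In the unsafe version the `DROP TABLE` fragment becomes executable SQL; in the parameterized version it is just an odd-looking product name.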
Some operations require multiple SQL statements that must succeed or fail together. That’s what transactions are for.
This project includes a transaction helper for exactly this situation.
What is getPool().connect()?
- `pool.query(...)` is the “simple” mode where the pool borrows a connection automatically.
- `pool.connect()` is the “manual” mode: you explicitly ask the pool for a specific connection (a “client”) that you control.
You need manual mode for transactions because all statements in a transaction must run on the same connection.
Why release() is critical
- If you forget to release the client, it never returns to the pool.
- Enough leaked clients = the pool runs out of connections, and the app starts hanging or failing.
- The `finally` block guarantees the release happens even if an error occurs.
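A sketch of the `BEGIN` / `COMMIT` / `ROLLBACK` shape such a helper typically has. A stub client stands in for `pg`'s `PoolClient`, and for testability the client is passed in here rather than acquired via `getPool().connect()` as the real helper would:

```typescript
// Minimal client interface: the two members the helper relies on.
type TxClient = {
  query(text: string): Promise<void>;
  release(): void;
};

// Stub client that records every call so we can observe the lifecycle.
function makeFakeClient(log: string[]): TxClient {
  return {
    async query(text: string) { log.push(text); },
    release() { log.push("RELEASE"); },
  };
}

async function withTransaction<T>(
  client: TxClient,
  fn: (client: TxClient) => Promise<T>,
): Promise<T> {
  await client.query("BEGIN");
  try {
    const result = await fn(client); // all statements share this one client
    await client.query("COMMIT");
    return result;
  } catch (err) {
    await client.query("ROLLBACK");  // undo everything on failure
    throw err;
  } finally {
    client.release();                // ALWAYS hand the client back
  }
}
```

Note how `release()` sits in `finally`: whether the callback succeeds, throws, or even fails during `COMMIT`, the client goes back to the pool.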
Postgres uses numeric/string error codes (like "23505") that aren’t meaningful to most API clients.
This project maps those raw errors into stable API error shapes in src/lib/db/errors.ts.
Postgres errors are “typed” by code:
- `code` is the main identifier (like “unique constraint violation”).
- `detail` and `constraint` often include extra debugging info.
Detecting Postgres errors:
- Not every thrown value is a Postgres error.
- This guard prevents unsafe assumptions before mapping.
- `"23505"` → unique violation → `CONFLICT` + HTTP `409`. This is the classic “you tried to insert a duplicate SKU” type of problem.
- `"42P01"` → missing table/schema → `DB_ERROR` + HTTP `500`. This is usually a setup issue (“schema not applied”).
- Unknown codes still become `DB_ERROR`, but we keep `code` in `details` to help debugging without relying on it for API behavior.
This mapping is crucial because it makes your API stable even if the database changes its internal error messages.
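The guard plus mapping can be sketched like this; the `ApiError` shape is an assumption based on the codes described above, not the project's exact types:

```typescript
// Assumed shapes for this sketch.
type PgError = { code: string; detail?: string; constraint?: string };
type ApiError = { code: string; status: number; details?: Record<string, unknown> };

// Not every thrown value is a Postgres error; guard before mapping.
function isPgError(err: unknown): err is PgError {
  return (
    typeof err === "object" &&
    err !== null &&
    typeof (err as { code?: unknown }).code === "string"
  );
}

function mapPgError(err: PgError): ApiError {
  switch (err.code) {
    case "23505": // unique_violation, e.g. a duplicate SKU
      return { code: "CONFLICT", status: 409 };
    case "42P01": // undefined_table: schema probably not applied
      return { code: "DB_ERROR", status: 500 };
    default:      // unknown code: still DB_ERROR, keep the code for debugging
      return { code: "DB_ERROR", status: 500, details: { code: err.code } };
  }
}
```

Clients only ever see `CONFLICT` or `DB_ERROR`; the raw Postgres code survives only as debugging detail.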
Now that the database layer is ready, the repository src/lib/repositories/productsRepo.ts is where SQL queries live.
- The `ProductRow` type reflects raw SQL results.
- `currency` and `status` are plain strings here because that’s what SQL returns.
Normalizing values for safety:
- This ensures the domain model never receives unexpected values.
- Even if the DB is misconfigured, the backend’s domain stays predictable.
- This is the conversion point from “database world” to “backend domain world.”
- Everything above this layer works with `Product`, not `ProductRow`.
Search + pagination query pattern:
- `LIMIT` controls how many rows you get; `OFFSET` controls where you start.
- `ILIKE` gives case-insensitive search.
- Parameterization (`$1`, `$2`, `$3`) keeps it safe and predictable.
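The pattern can be sketched as a small query builder; the table and column names are illustrative, and the `{ text, params }` pair is what the repository would hand to the `query<T>()` helper:

```typescript
// Hypothetical sketch of the search + pagination query.
function buildListQuery(search: string, limit: number, offset: number) {
  const text = `
    SELECT id, name, price_cents, currency, status
    FROM products
    WHERE name ILIKE $1
    ORDER BY name
    LIMIT $2 OFFSET $3
  `;
  // ILIKE needs wildcards around the term to match substrings;
  // the wildcards belong in the PARAMETER, not in the SQL text.
  const params = [`%${search}%`, limit, offset];
  return { text, params };
}
```

Page 3 with a page size of 20 is simply `buildListQuery(term, 20, 40)`: `LIMIT 20 OFFSET 40`.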
The service layer src/lib/services/productsService.ts turns database exceptions into structured service results.
Handling exceptions centrally:
- Postgres failures are mapped into stable API errors.
- Anything else becomes `INTERNAL_ERROR`, protecting the API from leaking implementation details.
The listProducts flow:
- Defaults are applied here so the repository always receives concrete values.
- The service returns `ok(rows)` instead of “just rows,” making success/failure explicit.
Finally, app/routes/api.products.ts ties everything together and produces the actual HTTP response.
- Validation happens first, before touching the database.
- Service results are converted into HTTP success/error envelopes cleanly.
- Errors preserve meaningful HTTP status codes (like 409 for conflicts, 500 for DB errors).
In this lesson, you learned not just what files exist, but why they’re designed this way:
- Environment variables let the same code run across local/dev/hosted setups.
- A lazy singleton pool avoids expensive reconnecting and prevents resource exhaustion.
- `query()` runs SQL through the pool safely without leaking connections.
- `connect()` + `release()` are critical for transactions (and connection safety).
- Postgres error codes are mapped into stable API errors with useful HTTP statuses.
- Repositories own SQL and mapping; services own orchestration and error conversion; routes own validation and HTTP envelopes.
