Welcome back to "Getting Started with NGINX Web Server"! You're now at lesson three and making excellent progress. In the first lesson, we learned how to serve static files. In the second, we configured NGINX as a reverse proxy to forward API requests to a backend. Now we're ready to tackle a common real-world scenario: hosting multiple applications through a single NGINX instance.
Many organizations run several web applications on the same server, each with its own domain or subdomain. Rather than launching separate NGINX instances for each application, we can configure one NGINX server to route requests intelligently based on the requested hostname. This approach is called name-based virtual hosting, and it's one of NGINX's most practical features.
In this lesson, we'll configure two distinct applications served from the same NGINX instance, each with its own static files and backend API. We'll see how NGINX can differentiate between requests using the server_name directive, allowing us to manage multiple services efficiently from a unified configuration.
Consider a scenario where we're running two separate projects: perhaps a customer-facing portal and an internal admin dashboard. Each application has its own static files, its own backend API, and ideally its own domain name, like portal.company.com and admin.company.com.
Without virtual hosting, we'd face some difficult choices. We could run each application on a different port (one on 3000, another on 3001), but that creates awkward URLs and complicates firewall rules. We could run multiple NGINX instances, but that wastes resources and becomes harder to manage. Or we could merge both applications into one codebase, sacrificing clean separation.
Virtual hosting solves this elegantly: one NGINX instance listens on a single port but routes requests to different applications based on the requested hostname. This keeps applications isolated while presenting them through standard HTTP/HTTPS ports.
A virtual host is a server configuration that responds to requests for a specific domain name. When a client makes an HTTP request, it includes a Host header indicating which domain it's trying to reach. NGINX reads this header and matches it against the server_name directives in its configuration.
For example, when a browser requests http://app1.local/, it sends:
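A minimal version of that request looks like this (only the request line and the Host header are shown):

```http
GET / HTTP/1.1
Host: app1.local
```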
NGINX examines this Host header and selects the server block with server_name app1.local. This mechanism allows NGINX to host dozens or even hundreds of applications on a single instance, each responding to different domain names.
For our lesson, we'll use two local domain names: app1.local and app2.local. These represent two independent applications that we want to serve from the same NGINX server.
Let's begin building our multi-application configuration with the familiar foundation:
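A sketch of that foundation is below. The specific values (one worker process, 1024 connections) are illustrative defaults, not requirements:

```nginx
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    # Enables correct Content-Type headers for static files
    include mime.types;

    # Both server blocks will be defined here
}
```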
This structure remains consistent with our previous lessons. We're defining worker processes, connection limits, and MIME type support. Everything that follows will live inside the http block.
Now we'll create our first virtual host for app1.local:
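The opening of the first server block looks like this (the location blocks that complete it come in the following steps):

```nginx
    server {
        listen 3000;
        server_name app1.local;
        root app1;
        index index.html;
```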
Let's examine each directive:
- listen 3000 tells NGINX to accept connections on port 3000
- server_name app1.local specifies which hostname this server block handles
- root app1 sets the directory where static files are stored
- index index.html defines the default file to serve
The server_name directive is the key to virtual hosting. When NGINX receives a request with Host: app1.local, it will use this server block.
Our first application needs to proxy API requests to a Flask backend on port 5000:
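Here is a sketch of that proxy location. The exact proxy_set_header lines mirror a common pattern for preserving client information; your previous lesson's configuration may use a slightly different set of headers:

```nginx
        location /api/ {
            proxy_pass http://localhost:5000;
            # Preserve the original hostname and client address for the backend
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }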
As you may recall from the previous lesson, proxy_pass forwards requests to the backend, while the proxy_set_header directives preserve client information. Here, we're routing /api/ requests from app1.local to a Flask server on port 5000.
We'll finish the first server block with our static file location:
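The static file location, followed by the closing brace of the first server block:

```nginx
        location / {
            # Serve the file, then the directory, then fall back to index.html
            try_files $uri $uri/ /index.html;
        }
    }
```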
This handles all non-API requests for app1.local. The try_files directive attempts to serve the requested file directly, then tries it as a directory, and finally falls back to index.html. This completes our first application's configuration.
Now we add a completely separate server block for our second application:
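The second block opens the same way, differing only in server_name and root:

```nginx
    server {
        listen 3000;
        server_name app2.local;
        root app2;
        index index.html;
```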
Notice the similarities to the first block: both listen on port 3000, but with different server_name and root values. NGINX will use server_name to determine which block handles each request, allowing both applications to share the same port.
The root app2 directive means this application's static files live in a completely separate directory, keeping the two applications isolated.
The second application also needs API proxying, but to a different backend:
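A sketch of the second proxy location, identical in structure but pointing at port 5001 (again assuming the same set of proxy headers as before):

```nginx
        location /api/ {
            proxy_pass http://localhost:5001;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }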
Notice the crucial difference: proxy_pass now points to port 5001 instead of 5000. This means we'll run two separate Flask backends, one for each application. The configuration structure is identical, but the destination differs.
We complete the second server block with its static file handling:
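The final piece: the static file location, the closing brace of the second server block, and the closing brace of the http block:

```nginx
        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
```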
This closes both the second server block and the http block. We now have a complete configuration with two virtual hosts, each serving its own static files and proxying to its own backend.
When a request arrives at port 3000, NGINX follows this decision process:
- Extract the Host header: NGINX reads which domain the client requested.
- Match server_name: It searches for a server block with a matching server_name.
- Select locations: Within the chosen server block, it evaluates location directives.
- Execute actions: Depending on the location, it either serves files or proxies the request.
If a request comes in for app1.local, NGINX uses the first server block, serving from the app1 directory and proxying to port 5000. If the request is for app2.local, it uses the second server block, serving from app2 and proxying to port 5001.
To test virtual hosts locally, we need to map our custom domain names to localhost. On Linux and macOS, edit /etc/hosts:
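Add these two entries to the file (editing it typically requires sudo):

```
127.0.0.1    app1.local
127.0.0.1    app2.local
```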
On Windows, edit C:\Windows\System32\drivers\etc\hosts with the same entries. This tells your operating system that app1.local and app2.local resolve to 127.0.0.1. Now, when you browse to http://app1.local:3000, your system knows to connect to localhost.
Let's test our virtual host configuration using curl. First, request the API from app1.local:
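With the hosts entries in place, the request looks like this:

```bash
curl http://app1.local:3000/api/
```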
This request includes Host: app1.local, so NGINX routes it to the first server block and proxies to port 5000. With the appropriate backend servers running, you'd see responses from the first application.
Now test the second application:
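The same request shape, but against the second hostname:

```bash
curl http://app2.local:3000/api/
```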
Even though we're still connecting to port 3000, NGINX reads the Host: app2.local header and routes this to the second server block, proxying to port 5001. The backend sees a completely different request flow.
If you don't have access to modify /etc/hosts (perhaps you're in a containerized environment or working with restricted permissions), you can still test virtual hosts by manually setting the Host header in your requests.
With curl, use the -H flag to specify the hostname:
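For example:

```bash
curl -H "Host: app1.local" http://localhost:3000/api/
```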
This sends a request to localhost:3000 but includes Host: app1.local in the HTTP headers. NGINX reads this header and routes the request to the first server block, exactly as it would if you had configured /etc/hosts.
To test the second application:
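```bash
curl -H "Host: app2.local" http://localhost:3000/api/
```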
Same local address, different Host header, different routing outcome.
For browser testing without /etc/hosts, you can use browser extensions that modify request headers:
- ModHeader (Chrome, Firefox)
- Simple Modify Headers (Firefox)
These tools let you add a custom Host header to all requests, allowing you to browse http://localhost:3000 while NGINX treats it as if you requested app1.local or app2.local.
The beauty of virtual hosting is complete application isolation. Each application has:
- Its own static file directory (app1 vs app2)
- Its own backend service (port 5000 vs 5001)
- Its own domain name (app1.local vs app2.local)
- Its own URL space (the same paths can exist in both)
If app1 has a file at /images/logo.png, it's completely separate from app2's /images/logo.png. If both applications define an /api/users endpoint, they're served by different backends. The applications can even use different frameworks or programming languages; NGINX doesn't care what's behind the proxy.
Congratulations! You've successfully configured name-based virtual hosts in NGINX, learning how to serve multiple independent applications from a single server instance. You've seen how the server_name directive enables intelligent routing based on domain names, how multiple server blocks can share the same port, and how to keep applications completely isolated while managing them through unified configuration.
This skill is essential for production environments where efficiency and organization matter. Whether you're managing microservices, supporting multiple tenants, or simply organizing different aspects of a project, virtual hosting provides a clean and scalable solution.
Now it's time to roll up your sleeves and practice! In the upcoming exercises, you'll configure your own multi-application setups, experiment with different routing patterns, and solidify your understanding through hands-on implementation.
