Welcome to the fourth lesson of Securing Your NGINX Server! So far, we've covered authentication, rate limiting, and blocking malicious traffic. These techniques protect against various threats, but they all share one vulnerability: the data traveling between client and server remains visible to anyone monitoring the network.
Today, we're addressing this critical security gap by implementing HTTPS with SSL certificates. You'll learn how to encrypt traffic between clients and your server, automatically redirect insecure HTTP requests to secure HTTPS connections, and configure modern security standards that protect sensitive data from interception.
By the end of this lesson, you'll understand how to enable SSL encryption, enforce secure protocol versions, and implement HTTP Strict Transport Security (HSTS) for enhanced protection.
Every request sent over plain HTTP travels across the internet as readable text. This means passwords, session tokens, personal information, and any other data your application handles can be intercepted and read by anyone positioned along the network path. This vulnerability affects not just sensitive applications, but any service where user privacy matters.
HTTPS solves this problem through encryption: it uses SSL/TLS protocols to create a secure channel between client and server. When properly configured, HTTPS ensures three critical properties:
- Confidentiality: Data cannot be read by third parties during transmission.
- Integrity: Messages cannot be altered without detection.
- Authentication: Clients can verify they're communicating with the legitimate server.
Beyond security, HTTPS has become a standard expectation. Modern browsers mark HTTP sites as "Not Secure," search engines favor HTTPS sites in rankings, and many web APIs refuse to work over insecure connections. Implementing HTTPS is no longer optional; it's essential for any serious web service.
Once we enable HTTPS, we need to ensure all clients use it. The standard approach is to configure NGINX to permanently redirect HTTP requests to their HTTPS equivalents:
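A sketch of such a block is shown below; the example.com server name is illustrative, while the port and directives match the explanation that follows.

```nginx
server {
    # Accept unencrypted HTTP connections on port 3000
    listen 3000;
    server_name example.com;

    # Permanently redirect to the HTTPS equivalent of the same request,
    # preserving the original hostname and path
    return 301 https://$host$request_uri;
}
```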
This server block listens on port 3000 for unencrypted HTTP connections. When a request arrives, the return directive immediately sends a 301 Moved Permanently response, instructing the client to retry the same request using HTTPS instead. The special variables $host and $request_uri preserve the original hostname and path, so a request to http://example.com/api/data becomes https://example.com/api/data.
The 301 status code tells browsers and search engines that this redirect is permanent, allowing them to remember and automatically use HTTPS for future visits.
After redirecting HTTP traffic, we need a separate server block that actually handles HTTPS connections. Here's the basic structure:
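A sketch of that structure, again using an illustrative server name:

```nginx
server {
    # The ssl parameter marks port 3001 as accepting TLS-encrypted connections
    listen 3001 ssl;
    server_name example.com;
}
```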
Notice the ssl parameter added to the listen directive. This tells NGINX that port 3001 should accept SSL/TLS encrypted connections rather than plain HTTP. The server_name identifies which domain this server block handles, which becomes important when you have multiple sites on the same server.
Without additional configuration, this server cannot function yet because SSL requires cryptographic certificates. Let's add those next.
SSL encryption requires two files: a certificate that proves your server's identity and a private key used to decrypt incoming data. We specify their locations using dedicated directives:
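A sketch of these directives inside the HTTPS server block; the self.key filename is an assumption, paired here with the self.crt certificate described below.

```nginx
server {
    listen 3001 ssl;
    server_name example.com;

    # Certificate proving the server's identity, and its matching private key
    # (self.key is assumed to accompany self.crt)
    ssl_certificate     self.crt;
    ssl_certificate_key self.key;
}
```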
The ssl_certificate directive points to the certificate file, which contains the public key and identifying information about your server. The ssl_certificate_key points to the corresponding private key, which must be kept secure and never shared.
In this example, we're using a self-signed certificate located at self.crt. Self-signed certificates work for development and testing, though production environments should use certificates from trusted Certificate Authorities. These files come pre-configured in the CodeSignal environment, but in real-world scenarios, you would generate or obtain them separately.
Not all versions of SSL/TLS are equally secure. Older versions like SSLv3 and TLS 1.0 contain known vulnerabilities and should never be enabled. We explicitly specify which protocol versions NGINX should accept:
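Inside the HTTPS server block, this looks like:

```nginx
# Accept only TLS 1.2 and TLS 1.3; older SSL/TLS versions are rejected
ssl_protocols TLSv1.2 TLSv1.3;
```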
This configuration restricts connections to TLS 1.2 and TLS 1.3, the only versions currently considered secure. TLS 1.3 offers better performance and stronger security than 1.2, but supporting both ensures compatibility with slightly older clients while maintaining robust protection.
By limiting protocols this way, we prevent attackers from forcing connections to downgrade to vulnerable versions, a technique known as a protocol downgrade attack.
Even with secure protocols, the specific encryption algorithms (called ciphers) used during communication matter significantly. We configure which cipher suites NGINX will negotiate:
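Inside the same server block, a cipher policy along these lines does the job:

```nginx
# Strong ciphers only: exclude unauthenticated (aNULL) and MD5-based suites
ssl_ciphers HIGH:!aNULL:!MD5;
```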
This directive uses OpenSSL notation to specify cipher preferences. Let's break down what each component means:
- HIGH: Enables ciphers with key lengths of 128 bits or greater.
- !aNULL: Explicitly excludes ciphers that provide no authentication.
- !MD5: Excludes ciphers using the MD5 hash algorithm, which has known weaknesses.
The exclamation mark indicates exclusion, ensuring NGINX never negotiates connections using these weak algorithms. This configuration strikes a balance between security and compatibility, providing strong encryption while still supporting most modern clients.
Even with HTTPS configured and HTTP redirects in place, sophisticated attackers can attempt to intercept the initial HTTP request before the redirect occurs. HTTP Strict Transport Security (HSTS) prevents this attack:
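Added to the HTTPS server block, the header looks like this:

```nginx
# Tell browsers to use HTTPS exclusively for the next year
add_header Strict-Transport-Security "max-age=31536000" always;
```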
The Strict-Transport-Security header instructs browsers to remember that this site requires HTTPS. The max-age=31536000 parameter sets this policy for one year (31,536,000 seconds). After receiving this header once, the browser will automatically convert all HTTP requests to HTTPS without making any insecure connections, even if the user types http:// in the address bar.
The always parameter ensures NGINX includes this header in all responses, not just successful ones. This guarantees that browsers receive and enforce the HSTS policy consistently.
With all security measures in place, we can now define how the server responds to requests. Here's the complete secure server configuration with a simple location block:
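A sketch of the full block follows; the confirmation message in the location block is a placeholder assumption, and self.key pairs with the self.crt certificate as before.

```nginx
server {
    listen 3001 ssl;
    server_name example.com;

    # Certificate and private key (self.key is assumed alongside self.crt)
    ssl_certificate     self.crt;
    ssl_certificate_key self.key;

    # Only modern protocol versions and strong cipher suites
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    # Enforce HTTPS in browsers for one year
    add_header Strict-Transport-Security "max-age=31536000" always;

    location / {
        # Placeholder confirmation; in production this would serve real content
        return 200 "Secure connection established!";
    }
}
```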
When a client successfully connects via HTTPS and requests the root path, they receive a confirmation response:
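Assuming the placeholder message above, a request such as curl -k https://localhost:3001/ (the -k flag skips verification of the self-signed certificate) returns something along these lines:

```
Secure connection established!
```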
This simple response indicates that the connection is encrypted and all security measures are active. In production, this location would typically serve your actual application content, with all data traveling through an encrypted channel protected by modern security standards.
You've now learned how to implement comprehensive HTTPS security in NGINX. We covered redirecting HTTP traffic to HTTPS using permanent redirects, configuring SSL certificates and private keys, restricting connections to secure TLS protocol versions, selecting strong cipher suites, and enforcing HSTS to prevent downgrade attacks.
These techniques transform your server from transmitting data in plain text to providing encrypted, authenticated connections that protect user privacy and data integrity. Combined with the authentication, rate limiting, and IP blocking methods from previous lessons, you now have a complete toolkit for securing web applications.
The practice exercises coming up will give you hands-on experience configuring HTTPS from scratch. You'll apply these concepts to real configurations, building the confidence to secure production servers with industry-standard encryption!
