Announcing NGINX Plus R18
We are pleased to announce that NGINX Plus Release 18 (R18) is now available. NGINX Plus is the only all-in-one load balancer, content cache, web server, and API gateway. Based on NGINX Open Source, NGINX Plus includes exclusive enhanced features and award‑winning support. R18 simplifies configuration workflows for DevOps and enhances the security and reliability of your applications at scale.
More than 87% of websites now use SSL/TLS to encrypt communications over the Internet, up from 66% just three years ago. End-to-end encryption is now the default deployment pattern for websites and applications, and the explosion in SSL/TLS certificates means some companies are managing many thousands of certificates in production environments. This calls for a more flexible approach to deploying and configuring certificates.
New in this release is support for dynamic certificate loading. With thousands of certificates, it’s not scalable to define each one manually in the configuration for loading from disk – not only is that process tedious, but the configuration becomes unmanageably large and NGINX Plus startup unacceptably slow. With NGINX Plus R18, SSL/TLS certificates can now be loaded on demand without being listed individually in the configuration. To simplify automated deployments even further, certificates can be provisioned with the NGINX Plus API and they don’t even have to sit on disk.
Additional new features in NGINX Plus R18 include:
- OpenID Connect enhancements – We continue to improve our supported OpenID Connect reference implementation, originally released in NGINX Plus R15. In this release we have added support for opaque session tokens, refresh tokens, and a logout URL.
- Port ranges for virtual servers – NGINX Plus virtual servers can now be configured to listen on a range of ports, for example 80-90. This enables NGINX Plus to support a broader range of applications, such as passive FTP, that require port ranges to be reserved.
- Key‑value definition in the configuration – The NGINX Plus key‑value store enables solutions for a wide range of use cases, including dynamic blacklisting of IP addresses and dynamic DDoS mitigation. You can now create key‑value pairs directly with variables in the NGINX Plus configuration, opening up even more use cases.
- Greater flexibility for active health checks – NGINX Plus’ active health checks are a powerful tool for monitoring the health of backend systems. With NGINX Plus R18, you can now test the value of any NGINX variable, and automatically shut down existing TCP connections to failed servers.
Rounding out this release are simplified configuration for clustered environments, modular code organization with the NGINX JavaScript module, new and updated dynamic modules (including Brotli), and direct Helm installation of the official NGINX Ingress Controller for Kubernetes.
Important Changes in Behavior
- Obsolete APIs – NGINX Plus R13 (August 2017) introduced the all‑new NGINX Plus API for metrics collection and dynamic reconfiguration of upstream groups, replacing the Status and Upstream Conf APIs that previously implemented those functions. As announced at the time, the deprecated APIs continued to be available and supported for a significant period of time, which ended with NGINX Plus R16. If your configuration includes the status and/or upstream_conf directives, you must replace them with the api directive as part of the upgrade to R18. For advice and assistance in migrating to the new NGINX Plus API, please see the transition guide on our blog, or contact our support team.
- Updated listen directive – Previously, when the listen directive specified a hostname that resolved to multiple IP addresses, only the first IP address was used. Now a listen socket is created for every IP address returned.
- NGINX JavaScript Module (njs) changes – The deprecated req.response object has been removed from the NGINX JavaScript module. Functions declared using the function(req, res) syntax that also reference properties of the res object generate runtime errors, returning HTTP status code 500 and a corresponding entry in the error log:
YYYY/MM/DD hh:mm:ss [error] 34#34: js exception: TypeError: cannot get property "return" of undefined
Because JavaScript code is interpreted at runtime, the nginx -t syntactic validation command does not detect the presence of invalid objects and properties. You must carefully check your JavaScript code and remove such objects before upgrading to NGINX Plus R18. Further, JavaScript objects that represent NGINX state (for example, r.headersIn) now return undefined instead of the empty string when there is no value for a given property. This change means that NGINX‑specific JavaScript objects now behave the same as built‑in JavaScript objects.
- Older operating systems removed or to be removed:
- Amazon Linux 2017.09 is no longer supported; oldest supported version is now 2018.03
- CentOS/Oracle Linux/Red Hat Enterprise Linux 7.3 is no longer supported; oldest supported version is now 7.4
- Debian 8.0 will be removed in NGINX Plus R19
- Ubuntu 14.04 will be removed in NGINX Plus R19
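If your configuration still contains the removed status or upstream_conf directives, the migration is a small configuration change. The following sketch shows the single api endpoint that replaces both; the port and access rules are illustrative, not prescriptive:

```nginx
server {
    listen 8080;
    allow 127.0.0.1;      # example: restrict API access to localhost
    deny all;

    location /api/ {
        api write=on;     # one endpoint replacing both "status" and "upstream_conf"
    }
}
```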
New Features in Detail
Dynamic SSL/TLS Certificate Loading
With previous releases of NGINX Plus, the typical approach to managing SSL/TLS certificates for secure sites and applications was to create a separate server block for each hostname, statically specifying the certificate and associated private key as files on disk. (For ease of reading, we’ll use certificate to refer to the paired certificate and key from now on.) The certificates were then loaded as NGINX Plus started up. With NGINX Plus R18, certificates can be dynamically loaded, and optionally stored in the in‑memory NGINX Plus key‑value store rather than on disk.
There are two primary use cases for dynamic certificate loading:
- Lazy loading of SSL/TLS certificates from disk
- Storing SSL/TLS certificate data in memory, in the NGINX Plus key‑value store
In both cases, NGINX Plus can perform dynamic certificate loading based on the hostname provided by Server Name Indication (SNI) as part of the TLS handshake. This enables NGINX Plus to host multiple secure websites under a single server configuration and select the appropriate certificate on demand for each incoming request.
Lazy Loading of SSL/TLS Certificates From Disk
With “lazy loading”, SSL/TLS certificates are loaded into memory only as requests arrive and specify the corresponding hostname. This both simplifies configuration (by eliminating the list of per‑hostname certificates) and reduces resource utilization on the host. With a large number (many thousands) of certificates, it can take several seconds to read all of them from disk and load them into memory. Furthermore, a large amount of memory is used when the NGINX configuration is reloaded, because the new set of worker processes loads a new copy of the certificates into memory, alongside the certificates loaded by the previous set of workers. The previous certificates remain in memory until the final connection established under the old configuration is complete and the previous workers are terminated. If configuration is updated frequently and client connections are long lived, there can be multiple copies of the certificates in memory, potentially leading to memory exhaustion.
Lazy loading of certificates from disk is ideal for deployments with large numbers of certificates and/or when configuration reloads are frequent. For example, SaaS companies commonly assign a separate subdomain to each customer. Onboarding new customers is difficult because you have to create a new virtual server for each one, and then copy the new configuration and the customer’s certificate to each NGINX Plus instance. Lazy loading removes the need for the configuration changes – just deploy the certificates on each instance and you’re done.
To support lazy loading, the ssl_certificate and ssl_certificate_key directives now accept variable parameters. The variable must be available during SNI processing, which happens before the request line and headers are read. The most commonly used variable is $ssl_server_name, which holds the hostname extracted by NGINX Plus during SNI processing. The certificate and key are read from disk during the TLS handshake at the beginning of each client session and cached in memory in the filesystem cache, further reducing memory utilization.
A secure site configuration becomes as simple as this:
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/$ssl_server_name.crt; # Lazy load from SNI
    ssl_certificate_key /etc/ssl/$ssl_server_name.key; # ditto
    ssl_protocols       TLSv1.3 TLSv1.2;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://my_backend;
    }
}
This same server configuration can be used for an unlimited number of secure sites. This has two benefits:
- It eliminates the separate server block for each hostname, making the configuration much smaller and thus easier to read and manage.
- It eliminates the need to reload the configuration every time you add a new hostname. When you do have to reload the configuration, it’s much quicker because NGINX Plus doesn’t load all the certificates.
Note that lazy loading makes the TLS handshake take 20–30% longer, depending on the environment, because of the filesystem calls required to retrieve the certificate from disk. However, the additional latency affects only the handshake – once the TLS session is established, request processing takes the usual amount of time.
In‑Memory SSL/TLS Certificate Storage
You can now store SSL/TLS certificate data in memory, in the NGINX Plus key‑value store, as well as in files on disk. When the parameter to the ssl_certificate or ssl_certificate_key directive begins with the data: prefix, NGINX Plus interprets the parameter as raw PEM data (provided in the form of a variable that identifies the entry in the key‑value store where the data actually resides).
An additional benefit of storage in the key‑value store rather than on disk is that deployment images and backups no longer include copies of the private key, which an attacker can use to decrypt all of the traffic sent to and from the server. Companies with highly automated deployment pipelines benefit from the flexibility of being able to use the NGINX Plus API to programmatically insert certificates into the key‑value store. Additionally, companies migrating applications to a public cloud environment where there is no real hardware security module (HSM) for private key protection benefit from the added security of not storing private keys on disk.
Here’s a sample configuration for loading certificates from the key‑value store:
keyval_zone zone=ssl_crt:10m;                  # Key-value store for certificate data
keyval_zone zone=ssl_key:10m;                  # Key-value store for private key data
keyval $ssl_server_name $crt_pem zone=ssl_crt; # Use SNI as key to obtain cert
keyval $ssl_server_name $key_pem zone=ssl_key;
server {
    listen 443 ssl;
    ssl_certificate     data:$crt_pem; # Certificate from key-value store
    ssl_certificate_key data:$key_pem; # Private key from key-value store
    ssl_protocols       TLSv1.3 TLSv1.2 TLSv1.1;
    ssl_prefer_server_ciphers on;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://my_backend;
    }
}
To upload certificates and private keys to the key‑value store with the NGINX Plus API, you can run the following curl command (only the very beginning of the key data is shown). Before running the command, remember to make a copy of the PEM data and replace every line break with \n; otherwise the line breaks are stripped out of the JSON payload.
$ curl -d '{"www.example.com":"-----BEGIN RSA PRIVATE KEY-----\n..."}' http://localhost:8080/api/4/http/keyvals/ssl_key
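Replacing the line breaks by hand is error prone, so you may want to script it. The following sketch uses standard awk to rewrite each line break in a PEM file as a literal \n sequence; the helper name and file path are examples, not part of NGINX Plus:

```shell
# Hypothetical helper: flatten a PEM file into a single JSON-safe string
# by replacing each line break with a literal "\n" two-character sequence.
pem_to_json_value() {
    awk 'NF { printf "%s\\n", $0 }' "$1"
}
```

You could then build the payload inline, for example: curl -d "{\"www.example.com\":\"$(pem_to_json_value www.example.com.key)\"}" http://localhost:8080/api/4/http/keyvals/ssl_key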
Using the key‑value store for certificates is ideal for clustered deployments of NGINX Plus, because you upload the certificate only once for automatic propagation across the cluster. To protect the certificate data itself, use the zone_sync_ssl directive to TLS‑encrypt the connections between cluster members. Using the key‑value store is also ideal for short‑lived certificates and for automating integrations with certificate issuers such as Let’s Encrypt and HashiCorp Vault.
As with the lazy loading method described in the previous section, certificates are loaded on demand during the TLS handshake, which takes longer as a result. The performance penalty is smaller than for lazy loading, however, because it’s faster to fetch a certificate from memory than from the filesystem. For the fastest TLS handshakes, use ECC certificates and load all certificates as NGINX Plus starts up (in other words, keep your pre‑R18 configuration where the parameter to the ssl_certificate and ssl_certificate_key directives is the hardcoded path to each certificate or key on disk).
Note that while the key‑value store makes it more difficult for an attacker to obtain private key files than disk storage does, an attacker with shell access to the NGINX Plus host might still be able to access keys loaded in memory. The key‑value store does not protect private keys to the same extent as a hardware security module (HSM); to have NGINX Plus fetch keys from an HSM, use the engine:engine-name:key-id parameter to the ssl_certificate_key directive.
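As a sketch, an HSM‑backed key reference looks like the following; the engine name (pkcs11) and key ID (www_example_com) are placeholders whose actual form depends on your HSM vendor's OpenSSL engine:

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/www.example.com.crt;

    # Placeholder engine-name and key-id values; the private key itself
    # stays inside the HSM and never touches the NGINX Plus filesystem.
    ssl_certificate_key engine:pkcs11:www_example_com;
}
```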
OpenID Connect Enhancements
NGINX Plus supports OpenID Connect authentication and single sign‑on for backend applications through our reference implementation. This has been both simplified and enhanced now that the key‑value store can be modified directly from the JavaScript module using variables (see below).
Our OpenID Connect implementation now issues opaque session tokens to clients in the form of a browser cookie. Opaque tokens contain no personally identifiable information about the user, so no sensitive information is stored on the client. NGINX Plus stores the actual ID token in the key‑value store, and substitutes it for the opaque token that the client presents. JWT validation is performed for every request so that expired or invalid tokens are rejected.
The OpenID Connect reference implementation now also supports refresh tokens so that expired ID tokens are seamlessly refreshed without requiring user interaction. NGINX Plus stores the refresh token sent by an authorization server in the key‑value store and associates it with the opaque session token. When the ID token expires, NGINX Plus sends the refresh token back to the authorization server. If the session is still valid, the authorization server issues a new ID token, which is seamlessly updated in the key‑value store. Refresh tokens make it possible to use short‑lived ID tokens, which provides better security without inconveniencing users.
The OpenID Connect reference implementation now provides a logout URL. When logged‑in users visit the /logout URI, their ID and refresh tokens are deleted from the key‑value store, and they must reauthenticate when making a future request.
Port Ranges for Virtual Servers
A server block typically has one listen directive specifying the single port on which NGINX Plus listens; if multiple ports need to be configured, there’s an additional listen directive for each of them. With NGINX Plus R18, you can now also specify port ranges, for example 80-90, when it is inconvenient to specify a large number of individual listen directives.
Port ranges can be specified for both the HTTP listen directive and the TCP/UDP (Stream) listen directive. The following configuration enables NGINX Plus to act as a proxy for an FTP server in passive mode, where the data port is chosen from a large range of TCP ports.
stream {
    server {
        listen 21;          # FTP control port
        listen 40000-45000; # Data port range
        proxy_pass :$server_port;
    }
}
This configuration sets up a virtual server to proxy connections to the FTP server on the same port the connection came in on.
Updating the Key‑Value Store Through Variables
When the key‑value store is enabled, NGINX Plus provides a variable for the values stored there based on an input key (typically part of the request metadata). Previously, the only way to create, modify, or delete values in the key‑value store was with the NGINX Plus API. With NGINX Plus R18, you can change the value for a key directly in the configuration, by setting the variable that holds the value.
The following example uses the key‑value store to maintain a list of client IP addresses that recently accessed the site, along with the last URI they requested.
keyval_zone zone=recents:10m timeout=2m;    # Maintain recent client info for 2m
keyval $remote_addr $last_uri zone=recents; # Key=client IP address, Value=URI
server {
    listen 80;
    location / {
        set $last_uri $uri;
        proxy_pass http://my_backend;
    }
}
server {
    listen 8080;
    allow 127.0.0.1;
    deny all;
    location /api/ {
        api;
    }
}
The set directive assigns a value ($last_uri) for each client IP address ($remote_addr), creating a new entry if one is absent, or modifying the value to reflect the $uri of the current request. Thus the current list of recent clients and their requested URIs is available with a call to the NGINX Plus API:
$ curl http://localhost:8080/api/4/http/keyvals/recents
{
    "10.19.245.68": "/blog/nginx-plus-r18-released/",
    "172.16.80.227": "/products/nginx/",
    "10.219.110.168": "/blog/nginx-unit-1-8-0-now-available"
}
More powerful use cases can be achieved with scripting extensions such as the NGINX JavaScript module (njs) and the Lua module. Any configuration that utilizes njs has access to all variables, including those backed by the key‑value store, for instance r.variables.last_uri.
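For instance, a small njs function can read the key‑value‑backed variable and fall back to a default when no entry exists yet. The function name below is illustrative; only r.variables and the $last_uri variable come from the configuration above:

```javascript
// Illustrative njs function: read the key-value-backed $last_uri variable
// for the current client, falling back to "none" when there is no entry.
function lastUriOrDefault(r) {
    return r.variables.last_uri || "none";
}
```

It could then be exposed to the configuration with js_set, for example: js_set $client_last_uri lastUriOrDefault;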
Greater Flexibility for Active Health Checks
NGINX Plus’ active health checks routinely test backend systems, so that traffic is not directed to systems that are known to be unhealthy. NGINX Plus R18 extends this important feature with two additional capabilities.
- Support for testing arbitrary variables in health checks
- Ability to terminate TCP sessions upon failed health check
Testing Arbitrary Variables in Health Checks
When defining a health check for a backend application, you can use a match block to specify the expected values for multiple aspects of the response, including the HTTP status code and character strings in the response headers and/or body. When the response includes all expected values, the backend is considered healthy.
For even more complex checks, NGINX Plus R18 now provides the require directive for testing the value of any variable – both standard NGINX variables and variables you declare. This gives you more flexibility when defining health checks, because variables can be evaluated with map blocks, regular expressions, and even scripting extensions.
The require directive inside a match block specifies one or more variables, all of which must have a non‑zero value for the test to pass. The following sample configuration defines a healthy upstream server as one that returns headers indicating the response is cacheable – either an Expires header with a non‑zero value, or a Cache-Control header.
map $upstream_http_cache_control $has_cache_control {
    ""      0;
    default 1;
}
map $upstream_http_expires $is_cacheable {
    ""      $has_cache_control;     # When absent, determine cacheable from Cache-Control
    default $upstream_http_expires; # Use Expires value to determine cacheable
}
match cacheable {
    require $is_cacheable;          # Has Cache-Control header OR non-zero Expires header
    status 200;
}
server {
    listen 80;
    location / {
        health_check uri=/ match=cacheable;
        proxy_pass http://my_backend;
    }
}
Using map blocks in this way is a common technique for incorporating OR logic into NGINX Plus configuration. The require directive enables you to take advantage of this technique in health checks, as well as to perform advanced health checks. Advanced health checks can also be defined by using the JavaScript module (njs) to analyze additional attributes of the responses from each upstream server, such as response time.
Terminating Layer 4 Connections when Health Checks Fail
When NGINX Plus acts as a Layer 4 (L4) load balancer for TCP/UDP applications, it proxies data in both directions on the connection established between the client and the backend server. Active health checks are an important part of such a configuration, but by default a backend server’s health status is considered only when a new client tries to establish a connection. If a backend server goes offline, established clients might experience a timeout when they send data to the server.
With the proxy_session_drop directive, new in NGINX Plus R18, you can immediately close the connection when the next packet is received from, or sent to, the offline server. The client is forced to reconnect, at which point NGINX Plus proxies its requests to a healthy backend server.
When this directive is enabled, two other conditions also trigger termination of existing connections: failure of an active health check, and removal of the server from an upstream group for any reason. This includes removal through DNS lookup, where a backend server is defined by a hostname with multiple IP addresses, such as those provided by a service registry.
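A minimal Stream configuration along these lines enables the behavior; the upstream name, hostnames, port, and health-check interval are illustrative:

```nginx
stream {
    upstream mysql_backends {
        zone mysql_backends 64k;   # shared memory zone for health state
        server db1.example.com:3306;
        server db2.example.com:3306;
    }

    server {
        listen 3306;
        proxy_session_drop on;     # new in R18: close sessions to failed servers
        proxy_pass mysql_backends;
        health_check interval=5s;  # active TCP health check
    }
}
```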
Other Enhancements in NGINX Plus R18
Simplified Clustering Configuration
NGINX Plus has supported cluster‑wide synchronization of runtime state since NGINX Plus R15. The Zone Synchronization module currently supports the sharing of state data about sticky sessions, rate limiting, and the key-value store across a clustered deployment of NGINX Plus instances.
A single zone_sync configuration can now be used for all instances in a cluster. Previously, you had to configure the IP address or hostname of each member explicitly, meaning that each instance had a slightly different configuration. The listen directive now accepts a wildcard value so that the zone_sync server listens on all local interfaces. This is particularly valuable when deploying NGINX Plus into a dynamic cluster where the instance’s IP address is not known until time of deployment.
Using the same configuration on every instance greatly simplifies deployment in dynamic environments (for example, with auto-scaling groups or containerized clusters).
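A cluster‑wide configuration might look like the following sketch, where a single hostname resolves to every member's address; the resolver address, hostname, and port are examples:

```nginx
stream {
    resolver 10.0.0.53 valid=20s;   # example internal DNS resolver

    server {
        listen 9000;                # wildcard: listens on all local interfaces
        zone_sync;
        # One hostname for every member; each instance discovers its peers
        # via DNS, so the same file can be deployed cluster-wide.
        zone_sync_server nginx-cluster.example.com:9000 resolve;
    }
}
```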
NGINX JavaScript Module Enhancements
The NGINX JavaScript module (njs) has been updated to version 0.3.0. The most notable enhancement is support for the JavaScript import and export statements, which enable you to organize your JavaScript code into multiple function‑specific module files. Previously, all JavaScript code had to reside in a single file.
The following example shows how JavaScript modules can be used to organize and simplify the code required for a relatively simple use case. Here we employ JavaScript to perform data masking for user privacy so that a hashed (masked) version of the client IP address is logged instead of the real address. A given masked IP address in the log always represents the same client, but cannot be converted back to the real IP address.
We put the functions required for IP address masking into a JavaScript module that exports a single function, maskIp()
. The exported function depends on private functions that are only available within the module, and cannot be called by other JavaScript code.
export default {maskIp};  // This module only exposes the maskIp() function

function maskIp(addr) {   // Public (exported) function
    return i2ipv4(fnv32a(addr));
}

// Private functions below //

function fnv32a(str) {    // Creates hash as 32-bit integer
    var hval = 2166136261;
    for (var i = 0; i < str.length; ++i) {
        hval ^= str.charCodeAt(i);
        hval += (hval << 1) + (hval << 4) + (hval << 7) + (hval << 8) + (hval << 24);
    }
    return hval >>> 0;
}

function i2ipv4(i) {      // Converts 32-bit integer to IPv4 "dotted-quad" format
    var ipv4 = [];
    for (var o = 24; o >= 0; o -= 8) {
        ipv4.push((i >> o) & 255);
    }
    return ipv4.join('.');
}
This module can now be imported into the main JavaScript file (main.js), and the exported functions referenced.
import masker from 'mask_ip_module.js';
function maskRemoteAddress(r) {
return(masker.maskIp(r.remoteAddress));
}
As a result, main.js is very simple, containing only the functions that are referenced by the NGINX configuration. The import statement specifies either a relative or an absolute path to the module file. When a relative path is provided, you can use the new js_path directive to specify additional paths to be searched.
js_include main.js;
js_path    /etc/nginx/njs_modules;
js_set     $remote_addr_masked maskRemoteAddress;
log_format masked '$remote_addr_masked - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" "$http_user_agent"';
server {
    listen 80;
    location / {
        proxy_pass http://my_backend;
        access_log /var/log/nginx/access_masked.log masked;
    }
}
These new features greatly improve readability and maintainability, especially when a large number of njs directives or a large amount of JavaScript code is in use. Separate teams can now maintain their own JavaScript code without needing to perform a complex merge into the main JavaScript file.
Direct Helm Installation of the NGINX Ingress Controller for Kubernetes
You can now install the NGINX Ingress Controller for Kubernetes directly from our new Helm repository, without having to download Helm chart source files (though that is also still supported). For more information, see the GitHub repo.
New and Updated Dynamic Modules
The following dynamic modules are added or updated in this release:
- New Brotli compression module – Brotli is a general‑purpose, lossless data compression algorithm that uses a variant of the LZ77 algorithm, Huffman coding, and second‑order context modeling.
- New OpenTracing module – You can now instrument NGINX Plus request processing for a range of OpenTracing‑compliant distributed tracing services, such as Datadog, Jaeger, and Zipkin.
- Updated Lua module – Lua is a scripting language for NGINX Plus. The module now uses LuaJIT 2.1.
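As a sketch, once the Brotli module package is installed, enabling it could look like the following; the directive names follow the open source ngx_brotli module, and the compression level and MIME types are example choices:

```nginx
load_module modules/ngx_http_brotli_filter_module.so; # on-the-fly compression
load_module modules/ngx_http_brotli_static_module.so; # serve pre-compressed .br files

http {
    brotli on;
    brotli_comp_level 6;          # example level: balance of speed and ratio
    brotli_types text/plain text/css application/json application/javascript;
}
```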
Upgrade or Try NGINX Plus
If you’re running NGINX Plus, we strongly encourage you to upgrade to NGINX Plus R18 as soon as possible. You’ll also pick up a number of additional fixes and improvements, and it will help NGINX, Inc. to help you when you need to raise a support ticket.
Please carefully review the new features and changes in behavior described in this blog post before proceeding with the upgrade.
If you haven’t tried NGINX Plus or NGINX WAF, we encourage you to try them out – for security, load balancing, and API gateway, or as a fully supported web server with enhanced monitoring and management APIs. You can get started today with a free 30‑day evaluation. See for yourself how NGINX Plus can help you deliver and scale your applications.