Announcing NGINX Plus R15
We are pleased to announce that the fifteenth version of NGINX Plus, our flagship product, is now available. Since our initial release in 2013, NGINX Plus has grown tremendously, in both its feature set and its commercial appeal. There are now more than 1,500 NGINX Plus customers and 409 million users of NGINX Open Source.
Based on NGINX Open Source, NGINX Plus is the only all-in-one load balancer, content cache, web server, and API gateway. NGINX Plus includes exclusive enhanced features designed to reduce complexity in architecting modern applications, along with award-winning support.
NGINX Plus R15 introduces new gRPC support, HTTP/2 server push, improved clustering support, enhanced API gateway functionality, and more:
- Native gRPC support — gRPC is the new remote procedure call (RPC) standard developed by Google. It’s a lightweight and efficient way for clients and servers to communicate. With this new functionality you can SSL-terminate, route, and load balance gRPC traffic to your backend servers.
- HTTP/2 server push — With HTTP/2 server push, NGINX Plus can send resources before clients actually request them, improving performance and reducing round trips.
- State sharing in a cluster – With this release, the shared-memory data used for Sticky Learn session persistence can be shared across all NGINX Plus instances in a cluster. The next few releases of NGINX Plus will build on our clustering capabilities and introduce new cluster-aware features.
- OpenID Connect integration — You can now provide SSO (Single Sign-On) to any web application with NGINX Plus, using the login flow of OpenID Connect and issuing JSON Web Tokens (JWTs) to clients. NGINX Plus integrates with CA Single Sign-On (formerly SiteMinder), ForgeRock OpenAM, Keycloak, Okta, OneLogin, Ping Identity, and other popular identity providers.
- NGINX JavaScript (njs) module enhancements — The njs modules (formerly nginScript) enable you to run JavaScript code during NGINX Plus request processing. The njs module for HTTP now provides support for issuing HTTP subrequests that are independent from, and asynchronous to, the client request. This is helpful in API gateway use cases, giving you the flexibility to modify and consolidate API calls using JavaScript. The njs modules for both HTTP and TCP/UDP also now include crypto libraries enabling implementation of common hash functions.
- Additional features — A new ALPN variable, updates to multiple dynamic modules, and more great features are also included in this release.
Changes in Behavior
- NGINX Plus R13 saw the introduction of the all-new NGINX Plus API, which enables functions that were previously implemented in separate APIs, including on-the-fly reconfiguration and extended metrics. The previous APIs, configured with the upstream_conf and status directives, are deprecated. You are encouraged to check your configuration for these directives and migrate to the api directive as soon as is practical. Starting with the next release, NGINX Plus R16, the deprecated APIs will no longer be shipped.
- The NGINX Plus API introduced in NGINX Plus R13 is updated to version 3 in this release. Previous versions are still supported. If you are considering updating your API clients, please check the API compatibility documentation.
- NGINX Plus packages available at the official repository now have a new numbering scheme. The NGINX Plus package and all dynamic modules now indicate the NGINX Plus release number. Each package version corresponds to the NGINX Plus version, making it clearer which version is installed and simplifying module dependencies. This change is transparent to customers unless you use automated systems that reference packages by version number; if you do, please first test your upgrade process in a non-production environment.
NGINX Plus R15 Features in Detail
gRPC Support
With this release, NGINX Plus can proxy and load balance gRPC traffic, which many organizations are already using for communication with microservices. gRPC is an open source, high-performance RPC framework designed by Google for efficient, low-latency service-to-service communication. gRPC mandates HTTP/2, rather than HTTP/1.1, as its transport mechanism because the features of HTTP/2 – flow control, multiplexing, and bidirectional traffic streaming with low latency – are ideally suited to connecting microservices at scale.
Support for gRPC was introduced in NGINX Open Source 1.13.10 and is now included in NGINX Plus R15. You can now inspect and route gRPC method calls, enabling you to:
- Apply HTTP/2 TLS encryption, rate limiting, IP-based access control lists, and logging to published gRPC services.
- Publish multiple gRPC services through a single endpoint by inspecting and proxying gRPC connections to internal services.
- Scale your gRPC services when you need additional capacity by load balancing gRPC connections to upstream backend pools.
- Use NGINX Plus as an API gateway for both gRPC and RESTful endpoints.
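As a minimal sketch of what this looks like in configuration (the upstream name, backend addresses, and certificate paths here are illustrative, not from this post), the new grpc_pass directive proxies gRPC traffic to an upstream group much as proxy_pass does for HTTP:

```nginx
server {
    listen 443 ssl http2;   # gRPC requires HTTP/2

    ssl_certificate     ssl/certificate.pem;
    ssl_certificate_key ssl/key.pem;

    location / {
        # Terminate TLS here and forward plaintext gRPC to the backends;
        # use grpcs:// instead to re-encrypt traffic to the upstream servers
        grpc_pass grpc://grpc_servers;
    }
}

upstream grpc_servers {
    server 10.0.0.1:50051;
    server 10.0.0.2:50051;
}
```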
To learn more about gRPC, read this blog post: Introducing gRPC Support with NGINX 1.13.10
HTTP/2 Server Push
First impressions are important, and page load time is a critical factor in determining whether users will revisit your website. One way to provide faster responses is to reduce the number of round trips (RTTs) – each the time needed for a request and its response – that the client must wait through, and HTTP/2 server push does exactly that.
HTTP/2 server push was a highly requested and anticipated feature from the NGINX open source community, introduced in NGINX Open Source 1.13.9. Now included in NGINX Plus R15, it allows a server to preemptively send data that is likely to be required to complete a previous client-initiated request. For example, a browser needs a whole range of resources – style sheets, images, and so on – to render a website page, so it may make sense to send those resources immediately when a client first accesses the page, rather than waiting for the browser to request them explicitly.
In the configuration example below, we use the http2_push directive to prime a client with the style sheets and images it will need to render a demo web page:
server {
    # Ensure that HTTP/2 is enabled for the server
    listen 443 ssl http2;

    ssl_certificate ssl/certificate.pem;
    ssl_certificate_key ssl/key.pem;

    root /var/www/html;

    # When a client requests demo.html, also push
    # /style.css, /image1.jpg, and /image2.jpg
    location = /demo.html {
        http2_push /style.css;
        http2_push /image1.jpg;
        http2_push /image2.jpg;
    }
}
It isn’t always possible to determine the exact set of resources needed by clients, however, making it impractical to list specific files in the NGINX configuration file. In this case, NGINX can intercept HTTP Link headers and push the resources that are marked preload in them. To enable interception of Link headers, include the http2_push_preload on; directive.
server {
    listen 443 ssl http2;

    ssl_certificate ssl/certificate.pem;
    ssl_certificate_key ssl/key.pem;

    root /var/www/html;
    http2_push_preload on;
}
To learn more about HTTP/2 server push, read this blog post: Introducing HTTP/2 Server Push with NGINX 1.13.9.
State Sharing Across a Cluster
Configuring multiple NGINX Plus servers into a high-availability cluster gives further resilience to your applications and eliminates single points of failure in your application stack. Clustering with NGINX Plus is designed for mission-critical, production deployments where resilience and high availability are paramount. There are many solutions for deploying high availability clustering with NGINX Plus.
Clustering support was introduced in previous releases of NGINX Plus, providing two tiers of clustering:
- Network resiliency using the keepalived package — Handles failover in case an NGINX Plus server goes down.
- Configuration synchronization using the nginx-sync package — Ensures configuration is in sync across all NGINX Plus servers.
NGINX Plus R15 introduces a third tier of clustering — state sharing during runtime, allowing you to synchronize data in shared memory zones among cluster nodes. More specifically, the data stored in shared memory zones for Sticky Learn session persistence can now be synchronized across all nodes in the cluster using the new zone_sync module.
New zone_sync Module
With state sharing, there is no primary or master node – all nodes are peers and exchange data in a full mesh topology. Additionally, the state-sharing clustering solution is independent from the high availability solution for network resiliency. Therefore, the state-sharing cluster can span physical locations.
An NGINX Plus state-sharing cluster has three requirements:
- Network connectivity between all cluster nodes
- Synchronized clocks
- Configuration such as the following:
stream {
    resolver 10.0.0.53 valid=20s;

    server {
        listen 9000;
        zone_sync;
        zone_sync_server nginx-cluster.example.com:9000 resolve;
    }
}
The zone_sync directive enables synchronization of shared memory zones in a cluster. The zone_sync_server directive identifies the other NGINX Plus instances in the cluster. NGINX Plus supports DNS service discovery, so cluster members can be identified by hostname and the configuration is identical for each cluster member.
The minimal configuration above lacks the security controls necessary to protect synchronization data in a production deployment. The configuration that follows employs several such safeguards:
- SSL/TLS encryption of synchronization data
- Client certificate authentication, so each cluster node identifies itself to the others (mutual TLS)
- IP address access control lists (ACLs) so that only NGINX nodes on the same physical network may connect for synchronization
stream {
    resolver 10.0.0.53 valid=20s;

    server {
        zone_sync;
        zone_sync_server nginx-cluster.example.com:9000 resolve;

        listen 10.0.0.1:9000 ssl; # Listen on internal IP, require TLS
        ssl_certificate_key /etc/ssl/nginx-1.example.com.key.pem;
        ssl_certificate /etc/ssl/nginx-1.example.com.server_cert.pem;

        allow 10.0.0.0/24; # Only accept connections from internal network
        deny all;

        zone_sync_ssl_verify on; # Peers must connect with client cert
        zone_sync_ssl_trusted_certificate /etc/ssl/ca_chain.crt.pem;
        zone_sync_ssl_verify_depth 2;

        zone_sync_ssl on; # Connect to peers with TLS, offer client cert
        zone_sync_ssl_certificate /etc/ssl/nginx-1.example.com.client_cert.pem;
        zone_sync_ssl_certificate_key /etc/ssl/nginx-1.example.com.key.pem;
    }
}
Sticky Learn Feature
The first supported NGINX Plus feature that uses state data shared across a cluster is Sticky Learn session persistence. Session persistence means that requests from a client are always forwarded to the server that fulfilled the client’s first request, which is useful when session state is stored at the backend.
In the following configuration, the sticky learn directive defines a shared memory zone called sessions. The sync parameter enables cluster-wide state sharing by instructing NGINX Plus to publish messages about the contents of its shared memory zone to the other nodes in the cluster. Note that the shared memory zone name must be the same in the configuration of every NGINX Plus node in the cluster for successful synchronization of state data, and zone_sync must also be configured, as above.
upstream my_backend {
    zone my_backend 64k;
    server backends.example.com resolve;
    sticky learn zone=sessions:1m
                 create=$upstream_cookie_session
                 lookup=$cookie_session
                 sync;
}

server {
    listen 80;
    location / {
        proxy_pass http://my_backend;
    }
}
Note: Clustering support and Sticky Learn session persistence are exclusive to NGINX Plus.
OpenID Connect Integration
Many enterprises use Identity and Access Management (IAM) solutions to manage user accounts and provide a Single Sign-On (SSO) environment for multiple applications. They often look to extend SSO across new and existing applications to minimize complexity and cost.
“Using OpenID Connect with NGINX Plus enabled us to quickly and easily integrate with our identity provider and, at the same time, simplify our application architecture.”
— Scott Macleod, Software Engineer, NHS Digital
NGINX Plus R10 introduced support for validating OpenID Connect tokens. In this release we extend that capability so that NGINX Plus can also control the login flow for OpenID Connect 1.0, communicating with the identity provider and issuing the access token to the client. This enables integration with most major identity providers, including CA Single Sign-On (formerly SiteMinder), ForgeRock OpenAM, Keycloak, Okta, OneLogin, and Ping Identity. The newly extended capability also lets you:
- Extend SSO to legacy applications without modifying or modernizing those applications
- Integrate SSO into new applications without implementing SSO or authentication in the application code
- Eliminate vendor lock-in; you get standards-based SSO without having to deploy proprietary IAM vendor agent software with the application
OpenID Connect integration with NGINX Plus is available as a reference implementation on GitHub. The GitHub repo includes sample configuration with instructions on installation, configuration, and fine-tuning for specific use cases.
NGINX JavaScript (njs) Enhancements
With njs, you can include JavaScript code within your NGINX configuration so it is evaluated at runtime, as HTTP or TCP/UDP requests are processed. This enables a wide range of potential use cases such as gaining finer control over traffic, consolidating JavaScript functions across applications, and defending against security threats.
NGINX Plus R15 includes two significant updates to njs: subrequests and hash functions.
Subrequests
You can now issue simultaneous HTTP requests that are independent of, and asynchronous to, the client request. This enables a multitude of advanced use cases.
The following sample JavaScript code (fastest_wins.js) issues HTTP subrequests to two different backends simultaneously. The first response is forwarded to the client and the second is ignored.
function sendFastest(req, res) {
    var n = 0;

    function done(reply) { // Callback for completed subrequests
        if (n++ == 0) {    // Only the first subrequest to complete responds
            req.log("WINNER is " + reply.uri);
            res.status = reply.status;
            res.contentLength = reply.body.length;
            res.sendHeader();
            res.send(reply.body);
            res.finish();
        }
    }

    req.subrequest("/server_one", req.variables.args, done);
    req.subrequest("/server_two", req.variables.args, done);
}
The corresponding NGINX Plus configuration reads in the JavaScript code with the js_include directive. All requests matching the root location (/) are passed to the sendFastest() function, which generates subrequests to the /server_one and /server_two locations. The original URI, including any query parameters, is passed to the corresponding backend servers. Both subrequests execute the done callback function, but because that function calls res.finish(), only the first subrequest to complete sends its response to the client.
js_include fastest_wins.js;

server {
    listen 80;

    location / {
        js_content sendFastest;
    }

    location /server_one {
        proxy_pass http://10.0.0.1$request_uri; # Pass the original URI
    }

    location /server_two {
        proxy_pass http://10.0.0.2$request_uri;
    }
}
Hash Functions
The njs module now includes a crypto library with implementations of:
- Hash functions: MD5, SHA-1, and SHA-256
- HMAC using MD5, SHA-1, or SHA-256
- Digest output in Base64, Base64url, and hex formats
An example use case for hash functions is adding data integrity to application cookies. The following njs code sample includes a signCookie() function to add a digital signature to a cookie and a validateCookieSignature() function to validate signed cookies.
function signCookie(req, res) {
    if (res.headers["set-cookie"].length) {
        // Response includes a new cookie; sign it
        var cookie_data = res.headers["set-cookie"].split(";");
        var c = require('crypto');
        // createHmac() requires a secret key as its second argument;
        // 'my_secret_key' is a placeholder - load the real key from secure storage
        var h = c.createHmac('sha256', 'my_secret_key').update(cookie_data[0] + req.remoteAddress);
        return "signature=" + h.digest('hex');
    }
    return "";
}

function validateCookieSignature(req) {
    var raw_cookie = req.variables["cookie_" + req.variables.cookie_name];
    if (raw_cookie.length) {
        // Cookie is present; ensure a signed version accompanies it
        var sig_cookie = req.variables["cookie_signature"];
        if (sig_cookie.length) {
            // Signature presented; check for a match
            var c = require('crypto');
            var h = c.createHmac('sha256', 'my_secret_key').update(req.variables.cookie_name + "=" + raw_cookie + req.remoteAddress);
            if (h.digest('hex') == sig_cookie) {
                return ""; // Success
            }
        }
    } else {
        return ""; // No cookie presented, therefore nothing to validate
    }
    return "1"; // Failure
}
The following NGINX Plus configuration utilizes the njs code to validate cookie signatures in incoming HTTP requests and returns an error to the client if validation fails. NGINX Plus proxies the request if validation succeeds or no cookie is present.
js_include cookie_signing.js;

js_set $signature_error validateCookieSignature;
js_set $signed_cookie signCookie;

server {
    listen 80;
    set $cookie_name "session"; # The cookie name to be signature-checked

    location / {
        if ($signature_error) {
            return 403; # Forbidden
        }
        proxy_pass http://my_backend;
        add_header Set-Cookie $signed_cookie;
    }
}
Additional New Features
ALPN Variable for the Stream Modules
Application Layer Protocol Negotiation (ALPN) is an extension to TLS that enables a client and server to negotiate what protocol will be used during the TLS handshake, avoiding additional round trips that might incur latency and degrade the user experience. The most common use case for ALPN is automatically upgrading connections from HTTP to HTTP/2 when both client and server support HTTP/2.
The new NGINX variable $ssl_preread_alpn_protocols, first introduced in NGINX Open Source 1.13.10, captures the protocols advertised by the client in the TLS ClientHello. The configuration below shows how an XMPP client can introduce itself through ALPN so that NGINX Plus routes XMPP traffic to xmpp_backend, gRPC traffic to grpc_backend, and all other traffic to http_backend, all through a single endpoint.
stream {
    map $ssl_preread_alpn_protocols $upstream {
        "xmpp-client" xmpp_backend;
        "~\bh2\b"     grpc_backend; # 'h2' appears within word boundaries (\b)
        default       http_backend; # Treat all other clients as HTTP
    }

    upstream xmpp_backend {
        #...
    }

    upstream grpc_backend {
        #...
    }

    upstream http_backend {
        #...
    }

    server {
        listen 443 ssl;
        #ssl_certificate, ciphers, ...
        ssl_preread on;
        proxy_pass $upstream;
    }
}
Including the ssl_preread on directive enables NGINX Plus to extract information from the ClientHello message at the preread phase. This is required for obtaining the value assigned to $ssl_preread_alpn_protocols.
To learn more, read our documentation on the ngx_stream_ssl_preread module.
Queue Time Variable
NGINX Plus supports upstream queueing, so that client requests are not rejected immediately when no server in the upstream group is available to accept new requests.
A typical use case for upstream queueing is to protect backend servers from overload without immediately rejecting requests. You can define the maximum number of simultaneous connections for each upstream server with the max_conns directive. The queue directive then holds requests in a queue when there are no backends available, either because they have reached their connection limit or because they are unhealthy.
In this release, the new NGINX variable $upstream_queue_time, first introduced in NGINX 1.13.9, captures the amount of time a request spends in the queue. The configuration below includes a custom log format that captures various timing metrics for each request; the metrics can then be analyzed offline as part of performance tuning. We limit the queue for the my_backend upstream group to 20 requests. The timeout parameter sets how long requests are held in the queue before an error message (503 by default) is returned to the client. Here we set it to 5 seconds (the default is 60 seconds).
log_format timing '$remote_addr - $remote_user [$time_local] "$request" $status '
                  '$body_bytes_sent "$http_referer" "$http_user_agent" '
                  '$request_time $upstream_queue_time $upstream_response_time';

upstream my_backend {
    zone my_backend 64k;
    server backends.example.com resolve max_conns=250;
    queue 20 timeout=5s; # Queue up to 20 requests when no backends are available
}

server {
    listen 80;
    location / {
        proxy_pass http://my_backend;
        access_log /var/log/nginx/access.log timing;
    }
}
To learn more about upstream queueing, read the documentation on the queue directive.
Access Logs Without Escaping
You can now disable escaping in the NGINX access log. The new escape=none parameter to the log_format directive, first introduced in NGINX 1.13.10, specifies that no escaping should be applied to special characters in variables.
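A minimal sketch (the log format name and fields here are illustrative):

```nginx
# Log the User-Agent header exactly as received, with no escaping applied
log_format unescaped escape=none '$remote_addr [$time_local] "$http_user_agent"';

access_log /var/log/nginx/access.log unescaped;
```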
Update to the LDAP Auth Reference Implementation
Our reference implementation for authenticating users using an LDAP authentication system has been updated to address issues and fix bugs. Check it out on GitHub.
Transparent Proxying without root Privilege
You can use NGINX Plus as a transparent proxy by including the transparent parameter to the proxy_bind directive. Worker processes can now inherit the CAP_NET_RAW Linux capability from the master process, so that NGINX Plus no longer requires special privileges for transparent proxying.
Note: This applies only to Linux platforms.
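A minimal sketch of transparent proxying in the stream module (addresses are illustrative; routing on the backend hosts must also be configured so that return traffic flows back through NGINX Plus):

```nginx
stream {
    server {
        listen 53 udp;
        proxy_bind $remote_addr transparent; # Present the client's address upstream
        proxy_pass dns_servers;
    }

    upstream dns_servers {
        server 10.0.0.53:53;
    }
}
```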
JWT Grace Period
When you use JWT authentication and a time-based claim — either nbf (not-before date) or exp (expiry date) — is present in the token, NGINX Plus verifies that the current time falls within the allowed interval while validating the token. If the identity provider's clock is not synchronized with the clock of the NGINX Plus instance, tokens may expire unexpectedly or appear to start in the future. You can use the auth_jwt_leeway directive to account for those time differences.
Cookie Flag Dynamic Module
The third-party module for setting cookie flags can now be installed as a dynamic module with one of the following commands.
$ apt-get install nginx-plus-module-cookie-flag   # Ubuntu/Debian
$ yum install nginx-plus-module-cookie-flag       # Red Hat/CentOS
Note: The Cookie-Flag module is covered by your NGINX Plus support agreement.
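Once installed, the module is loaded with a load_module directive and configured with its set_cookie_flag directive. The module filename and cookie name below are assumptions based on the module's conventions, so check the module documentation for your package:

```nginx
load_module modules/ngx_http_cookie_flag_filter_module.so;

http {
    server {
        # Append the HttpOnly and secure flags to the 'session' cookie
        # set by the backend application
        set_cookie_flag session HttpOnly secure;
    }
}
```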
NGINX WAF Module Update
The NGINX WAF module, based on ModSecurity 3.0, has been updated with the following enhancements:
- Performance improvements in libmodsecurity
- Memory leak fixes in ModSecurity-nginx
To learn more about the NGINX WAF, read the NGINX WAF product page.
Upgrade or Try NGINX Plus
NGINX Plus R15 includes improved authentication capabilities for your client applications, additional clustering capabilities, NGINX JavaScript (njs) enhancements, and notable bug fixes.
If you’re running NGINX Plus, we strongly encourage you to upgrade to Release 15 as soon as possible. You’ll pick up a number of fixes and improvements, and upgrading will help NGINX to help you when you need to raise a support ticket. Installation and upgrade instructions for NGINX Plus R15 are available at the customer portal.
Please carefully review the new features and changes in behavior described in this blog post before proceeding with the upgrade.
If you haven’t tried NGINX Plus, we encourage you to try it out – for web acceleration, load balancing, and application delivery, or as a fully supported web server with enhanced monitoring and management APIs. You can get started for free today with a 30‑day evaluation. See for yourself how NGINX Plus can help you deliver and scale your applications.