Announcing NGINX Ingress Controller for Kubernetes Release 1.7.0
We are happy to announce release 1.7.0 of the NGINX Ingress Controller for Kubernetes. This release builds upon the development of our supported solution for Ingress load balancing on Kubernetes platforms, including Amazon Elastic Container Service for Kubernetes (EKS), the Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), Red Hat OpenShift, IBM Cloud Private, Diamanti, and others.
With release 1.7.0, we continue our commitment to providing a flexible, powerful and easy-to-use Ingress Controller, which you can configure with both Kubernetes Ingress resources and NGINX Ingress resources:
- Kubernetes Ingress resources provide maximum compatibility across Ingress controller implementations, and can be extended using annotations and custom templates to generate sophisticated configuration.
- NGINX Ingress resources provide an NGINX‑specific configuration schema, which is richer and safer than customizing the generic Kubernetes Ingress resources.
Release 1.7.0 introduces the following major improvements:
- Red Hat OpenShift Operator – The Operator manages the lifecycle of the NGINX Ingress Controller on OpenShift in a simple and familiar manner. It has been certified by Red Hat, meaning that the NGINX Ingress Controller is a fully supported solution for OpenShift.
- NGINX Ingress resources support additional protocols (TCP, UDP, and TLS Passthrough) – You can now deliver complex, non-HTTP-based services from Kubernetes using custom resources, in a simple and intuitive manner. This approach is a preview of our future direction.
- NGINX Ingress resources support the circuit breaker pattern for microservices – You can now specify a default error page that the NGINX Ingress Controller returns when a named service is not functioning correctly.
- NGINX Ingress resources provide improved validation and reporting – Stricter validation, using OpenAPI validation, reduces the chance of errors in NGINX Ingress resources and provides much improved error messages.
What Is the NGINX Ingress Controller for Kubernetes?
The NGINX Ingress Controller for Kubernetes is a daemon that runs alongside NGINX Open Source or NGINX Plus instances in a Kubernetes environment. The daemon monitors Kubernetes Ingress resources and NGINX Ingress resources to discover requests for services that require ingress load balancing. The daemon then automatically configures NGINX or NGINX Plus to route and load balance traffic to these services.
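For context, here is a minimal Kubernetes Ingress resource of the kind the controller watches (a sketch; the hostname and Service name are placeholders, using the networking.k8s.io/v1beta1 API current at the time of this release):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc   # placeholder Service
          servicePort: 80
```

When this resource is created, the Ingress Controller generates the corresponding NGINX configuration and begins routing traffic for cafe.example.com to the tea-svc endpoints.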
Multiple NGINX Ingress controller implementations are available. The official NGINX implementation is high‑performance, production‑ready, and suitable for long‑term deployment. We focus on providing stability across releases, with features that can be deployed at enterprise scale. We provide full technical support to NGINX Plus subscribers at no additional cost, and NGINX Open Source users benefit from our focus on stability and supportability.
What’s New in NGINX Ingress Controller 1.7.0?
Support for Red Hat OpenShift
We are pleased to announce the NGINX Ingress Operator for Red Hat OpenShift, a fully supported lifecycle management solution for the NGINX Ingress Controller (for both NGINX Open Source and NGINX Plus) on the OpenShift platform. This enables point-and-click installation of the NGINX Ingress Controller – you need to specify just a few parameters for a basic deployment.
The Operator abstracts away the complexity around Kubernetes’ native objects (such as deployments, replicas, and Pods), which frees application owners and other teams from having to understand the Kubernetes container infrastructure in detail. Additionally, the Operator provides parameters for customizing the Ingress Controller deployment.
The goal of the Operator is to provide a one‑step deployment process for bringing the advanced capabilities of the NGINX Ingress controller to OpenShift. For example, you can harness the NGINX Ingress resources (VirtualServer, VirtualServerRoute, and TransportServer) to support many use cases such as blue‑green deployment, traffic splitting, and A/B testing.
You can also leverage the role‑based access control (RBAC) and cross‑namespace capabilities of NGINX Ingress resources to support self‑service and multi‑tenancy for your users, with clear demarcation and delegation of application delivery component management across different teams.
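As an illustration of cross‑namespace delegation, a VirtualServer owned by a platform team can hand off a path prefix to a VirtualServerRoute owned by an application team (a sketch; the names, namespaces, and Service below are hypothetical):

```yaml
# Owned by the platform team
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: shop
  namespace: platform
spec:
  host: shop.example.com
  routes:
  - path: /checkout
    route: checkout-team/checkout-routes  # delegate to another namespace
---
# Owned by the checkout team, in its own namespace
apiVersion: k8s.nginx.org/v1
kind: VirtualServerRoute
metadata:
  name: checkout-routes
  namespace: checkout-team
spec:
  host: shop.example.com
  upstreams:
  - name: checkout
    service: checkout-svc  # hypothetical Service
    port: 80
  subroutes:
  - path: /checkout
    action:
      pass: checkout
```

The checkout team can change its own routing and upstreams without touching the platform team's VirtualServer.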
Once the OpenShift cluster administrator installs the Operator, end users can deploy the NGINX Ingress Controller with a simple manifest like the following:
apiVersion: k8s.nginx.org/v1alpha1
kind: NginxIngressController
metadata:
  name: nginx-ingress-controller-foo
  namespace: nginx-ingress-controller-foo-ns
spec:
  type: deployment
  image:
    repository: registry.hub.docker.com/nginx/nginx-ingress
    tag: edge
    pullPolicy: Always
  serviceType: NodePort
For more advanced users, the Operator exposes a wide range of options for customizing the deployment. For a sample manifest with all possible parameters, see our GitHub repo.
See also Getting Started with NGINX Ingress Operator on Red Hat OpenShift.
Support for TCP, UDP, and TLS Passthrough Services in NGINX Ingress Resources
New in the 1.7.0 release, we’ve extended NGINX Ingress resources to support TCP, UDP, and TLS Passthrough load balancing:
- TCP and UDP support means that the Ingress Controller can manage a much wider range of protocols, from DNS and Syslog (UDP) to database and other TCP‑based applications
- TLS Passthrough means the NGINX Ingress Controller can route TLS‑encrypted connections based on the Server Name Indication (SNI) extension, without decrypting them or requiring access to the TLS certificates or keys
In release 1.7.0, we are previewing two new NGINX Ingress resources for configuring TCP, UDP, and TLS Passthrough: GlobalConfiguration and TransportServer. As we develop the next release (1.8.0), we welcome feedback, criticism, and suggestions for improvement to this approach. Once we’re satisfied we have a solid configuration architecture, we’ll lock it down and regard it as stable and fully production‑ready.
This direct support for non‑HTTP protocols provides a simpler and more reliable option than our previous recommendation, which was to embed the NGINX stream configuration directly in a Kubernetes Ingress resource.
Let’s step through the new, simpler, and more reliable way to support TCP and UDP services with the NGINX Ingress Controller.
As the cluster administrator, you use the GlobalConfiguration resource to enable users to configure TCP and UDP load balancing of their applications through certain ports that you allocate. To define a resource for global configuration of the Ingress Controller, add the -global-configuration=namespace/name command‑line argument during installation.
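In practice, this argument appears in the container spec of the Ingress Controller's Deployment (or DaemonSet); the fragment below is a sketch, with the namespace and resource name matching the GlobalConfiguration example that follows:

```yaml
# Fragment of the Ingress Controller Deployment spec (illustrative)
containers:
- name: nginx-ingress
  image: nginx/nginx-ingress:1.7.0
  args:
  - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
  - -global-configuration=nginx-ingress/nginx-configuration
```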
Consider this sample GlobalConfiguration definition in the nginx-ingress namespace with two listeners:
- TCP protocol listening on port 5353
- UDP protocol listening on port 5353
apiVersion: k8s.nginx.org/v1alpha1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration
  namespace: nginx-ingress
spec:
  listeners:
  - name: dns-udp
    port: 5353
    protocol: UDP
  - name: dns-tcp
    port: 5353
    protocol: TCP
Users must then reference the listeners by name (dns-udp or dns-tcp) in the TransportServer resource when configuring the NGINX Ingress Controller for TCP and UDP load balancing. Here is a sample implementation of the TransportServer resource for provisioning TCP load balancing.
apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: dns-tcp
spec:
  listener:
    name: dns-tcp
    protocol: TCP
  upstreams:
  - name: dns-app
    service: coredns
    port: 5353
  action:
    pass: dns-app
In the spec section, users reference the dns-tcp listener defined in the GlobalConfiguration resource definition, and pass TCP connections to the dns-app upstream. Note that only port 5353 is allocated to the dns-tcp listener. A TransportServer for the UDP protocol can be configured similarly. See the complete TCP/UDP example on GitHub.
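For completeness, the UDP counterpart would reference the dns-udp listener instead (a sketch mirroring the TCP example above, with the same assumed coredns backend):

```yaml
apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: dns-udp
spec:
  listener:
    name: dns-udp      # listener defined in the GlobalConfiguration
    protocol: UDP
  upstreams:
  - name: dns-app
    service: coredns
    port: 5353
  action:
    pass: dns-app
```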
Users can additionally use the TransportServer resource for TLS Passthrough, routing TLS‑encrypted connections to upstream services. To enable TLS Passthrough, add the -enable-tls-passthrough command‑line argument during installation. Here is a sample implementation of the TransportServer resource with TLS Passthrough.
apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: secure-app
spec:
  listener:
    name: tls-passthrough
    protocol: TLS_PASSTHROUGH
  host: app.example.com
  upstreams:
  - name: secure-app
    service: secure-app
    port: 8443
  action:
    pass: secure-app
Note that when enabling TLS Passthrough, we reference a built‑in listener with the name tls-passthrough and the protocol TLS_PASSTHROUGH. See the complete TLS Passthrough example on GitHub.
Support for the Circuit Breaker Pattern
The NGINX Ingress Controller implementation of the circuit breaker pattern does two things:
- Detects and isolates a failed service, removing it from the application (supported in release 1.6.0 and later)
- Delivers a canned response to the client instead of routing requests to the failed service (new in release 1.7.0)
The use of a circuit breaker can improve the performance of an application by eliminating calls to a failed component that would otherwise time out or cause delays, and it can often mitigate the impact of a failed non‑essential component.
For example, an application might present a web or mobile interface that includes a list of ancillary items – comments on an article, recommendations, advertisements, and so on. If the Kubernetes service that generates this list fails, the circuit breaker can replace the internally generated error message (502 Bad Gateway) with a more appropriate response, such as an empty JSON list.
In this way, even if the service provided by the application degrades (ancillary items are not available), internal errors are gracefully handled in a way that conceals them from the calling client.
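In VirtualServer terms, such a canned response can be expressed with the errorPages return action on a route (a sketch; the empty JSON list stands in for the failed ancillary service's output):

```yaml
# Route fragment: replace 502 errors with an empty JSON list
errorPages:
- codes: [502]
  return:
    code: 200
    type: application/json
    body: "[]"
```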
In the following example, the NGINX Ingress resource manages traffic to the app-svc service. It uses an active health check (exclusive to NGINX Plus) to verify that the endpoints respond with an appropriate status code (2xx or 3xx by default) to requests for /status, and redirects requests that result in 502 error responses.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: app
spec:
  host: app.example.com
  tls:
    secret: app-secret
  upstreams:
  - name: app
    service: app-svc
    port: 80
    healthCheck:
      enable: true
      path: /status
  routes:
  - path: /
    errorPages:
    - codes: [502]
      redirect:
        code: 301
        url: https://nginx.org
    action:
      pass: app
You can apply a VirtualServer definition to actively check on a periodic basis whether the app upstream is healthy. If the periodic health checks fail for all endpoints of the app-svc service, the app upstream becomes unavailable (the circuit breaker opens), and clients making requests to the app upstream are redirected to a backup website (in this example, https://nginx.org).
When testing this circuit breaker use case, consider the following scenarios and how NGINX responds:
- A request to an endpoint results in a transient error. Depending on how you configure request‑retry logic, NGINX either retries the request or gives up and redirects it.
- The health check fails against all endpoints in the service. NGINX identifies that all endpoints have failed and redirects the request to a specified URL.
- The service has no endpoints (perhaps because they are not in a ready state). NGINX redirects the request promptly.
Improved Validation and Reporting
Release 1.7.0 of the NGINX Ingress Controller improves the mechanism for validating NGINX Ingress resources. Based on the OpenAPI spec for the NGINX Ingress resource objects (VirtualServer, VirtualServerRoute, and TransportServer), kubectl and the Kubernetes API server can detect violations of the structure of a resource – for example, when an integer value is assigned to a string field. This shortens the feedback loop for users configuring load balancing for their applications.
Compared to previous releases, the validator catches errors earlier in the NGINX Ingress Controller configuration process, and provides more detailed error messages.
Following is the YAML file for a simple VirtualServer implementation of Layer 7 path‑based routing on the NGINX Ingress Controller, connecting upstream Pods to clients. Three of the fields contain configuration errors.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  tls:
    secret: cafe-secret
  upstreams:
  - name: tea
    service: tea-svc
    port: DD
    healthCheck:
      enable: 3654
  - name: coffee
    service: coffee-svc
    port: 80
  routes:
  - path: /tea
    action:
      pass: tea
  - path: /coffee
    actionn:
      pass: coffee
We run the following command to apply the YAML file:
# kubectl apply -f cafe-virtual-server.yaml
The validation mechanism catches all three errors and generates the following messages. Note that the messages are generated in reverse of the order in which the errors occur in the YAML file (the first message corresponds to the last error, the misspelled actionn field).
error: error validating "cafe-virtual-server.yaml": error validating data: [ValidationError(VirtualServer.spec.routes[1]): unknown field "actionn" in org.nginx.k8s.v1.VirtualServer.spec.routes,
ValidationError(VirtualServer.spec.upstreams[0].healthCheck.enable): invalid type for org.nginx.k8s.v1.VirtualServer.spec.upstreams.healthCheck.enable: got "number", expected "boolean",
ValidationError(VirtualServer.spec.upstreams[0].port): invalid type for org.nginx.k8s.v1.VirtualServer.spec.upstreams.port: got "string", expected "integer"]; if you choose to ignore these errors, turn validation off with --validate=false
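For reference, the corrected versions of the three failing fields would look like this (assuming tea-svc listens on port 80, which is not stated in the original manifest):

```yaml
upstreams:
- name: tea
  service: tea-svc
  port: 80            # was DD; must be an integer
  healthCheck:
    enable: true      # was 3654; must be a boolean
routes:
- path: /coffee
  action:             # was misspelled as actionn
    pass: coffee
```

With these fixes applied, kubectl apply accepts the resource and the Ingress Controller configures the routes.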
Resources
For the complete changelog for the 1.7.0 release, see the Release Notes.
To try out the NGINX Ingress Controller for Kubernetes with NGINX Plus, start your free 30-day trial today or contact us to discuss your use cases.
To try the NGINX Ingress Controller with NGINX Open Source, you can obtain the release source code, or download a prebuilt container from DockerHub.