Migrating from the Community Ingress Controller to F5 NGINX Ingress Controller
Editor – This post is an extract from our comprehensive eBook, Managing Kubernetes Traffic with F5 NGINX: A Practical Guide. Download it for free today.
Many organizations setting up Kubernetes for the first time start with the NGINX Ingress controller developed and maintained by the Kubernetes community (kubernetes/ingress-nginx). As Kubernetes deployments mature, however, some organizations find they need advanced features or want commercial support while keeping NGINX as the data plane.
One option is to migrate to the NGINX Ingress Controller developed and maintained by F5 NGINX (nginxinc/kubernetes-ingress), and here we provide complete instructions so you can avoid some complications that result from differences between the two projects.
Not sure how these options differ? Read A Guide to Choosing an Ingress Controller, Part 4: NGINX Ingress Controller Options on our blog.
To distinguish between the two projects in the remainder of this post, we refer to the NGINX Ingress Controller maintained by the Kubernetes community (kubernetes/ingress-nginx) as the “community Ingress controller” and the one maintained by F5 NGINX (nginxinc/kubernetes-ingress) as “NGINX Ingress Controller”.
There are two ways to migrate from the community Ingress controller to NGINX Ingress Controller:
- Option 1: Migrate Using NGINX Ingress Resources
  This is the optimal solution, because NGINX Ingress resources support the broader set of Ingress networking capabilities required in production‑grade Kubernetes environments. For more information on NGINX Ingress resources, watch our webinar, Advanced Kubernetes Deployments with NGINX Ingress Controller.
- Option 2: Migrate Using the Kubernetes Ingress Resource
  This option is recommended if you are committed to using the standard Kubernetes Ingress resource to define Ingress load‑balancing rules.
Option 1: Migrate Using NGINX Ingress Resources
With this migration option, you use the standard Kubernetes Ingress resource to set root capabilities and NGINX Ingress resources to enhance your configuration with increased capabilities and ease of use.
The custom resource definitions (CRDs) for NGINX Ingress resources – VirtualServer, VirtualServerRoute, TransportServer, GlobalConfiguration, and Policy – enable you to easily delegate control over various parts of the configuration to different teams (such as AppDev and security teams) as well as provide greater configuration safety and validation.
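For illustration, here is a minimal sketch of that delegation model, assuming hypothetical names (cafe.example.com, cafe-ns, coffee-ns, coffee-svc): a platform or NetOps team owns the VirtualServer and delegates a path to a VirtualServerRoute owned by an application team in another namespace.

```yaml
# VirtualServer owned by the platform/NetOps team (hypothetical names)
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
  namespace: cafe-ns
spec:
  host: cafe.example.com
  routes:
  - path: /coffee
    # Delegate everything under /coffee to a VirtualServerRoute in another namespace
    route: coffee-ns/coffee
---
# VirtualServerRoute owned by the AppDev team for the coffee service
apiVersion: k8s.nginx.org/v1
kind: VirtualServerRoute
metadata:
  name: coffee
  namespace: coffee-ns
spec:
  host: cafe.example.com
  upstreams:
  - name: coffee
    service: coffee-svc
    port: 80
  subroutes:
  - path: /coffee
    action:
      pass: coffee
```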
Configuring SSL Termination and HTTP Path-Based Routing
The table maps the configuration of SSL termination and Layer 7 path‑based routing in the spec field of the standard Kubernetes Ingress resource to the spec field of the NGINX VirtualServer resource. The syntax and indentation differ slightly in the two resources, but they accomplish the same basic Ingress functions.
| Kubernetes Ingress Resource | NGINX VirtualServer Resource |
| --- | --- |
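As a concrete illustration of the mapping (not a reproduction of the table's examples), here is a minimal sketch of the same SSL termination and path‑based routing rules expressed both ways; the host, Secret, and Service names (cafe.example.com, cafe-secret, tea-svc) are placeholders.

```yaml
# Standard Kubernetes Ingress resource: TLS termination plus path-based routing
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc
            port:
              number: 80
---
# Equivalent NGINX VirtualServer resource
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  tls:
    secret: cafe-secret
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  routes:
  - path: /tea
    action:
      pass: tea
```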
Configuring TCP/UDP Load Balancing and TLS Passthrough
With the community Ingress controller, a Kubernetes ConfigMap API object is the only way to expose TCP and UDP services.
With NGINX Ingress Controller, TransportServer resources define a broad range of options for TLS Passthrough and TCP and UDP load balancing. TransportServer resources are used in conjunction with GlobalConfiguration resources to control inbound and outbound connections.
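As a rough sketch of how the two resources work together (the service name and port are hypothetical, and the apiVersion may differ between controller releases), a GlobalConfiguration declares a TCP listener and a TransportServer attaches a load‑balancing rule to it:

```yaml
# GlobalConfiguration: declares a TCP listener that the Ingress Controller opens
apiVersion: k8s.nginx.org/v1alpha1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration
  namespace: nginx-ingress
spec:
  listeners:
  - name: dns-tcp
    port: 5353
    protocol: TCP
---
# TransportServer: load balances TCP traffic arriving on that listener
apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: dns-tcp
spec:
  listener:
    name: dns-tcp
    protocol: TCP
  upstreams:
  - name: dns-app
    service: coredns
    port: 5353
  action:
    pass: dns-app
```

For TLS Passthrough, a TransportServer instead references the built‑in tls-passthrough listener and sets a host, so encrypted traffic is routed by SNI without being decrypted at the Ingress layer.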
For more information, see Support for TCP, UDP, and TLS Passthrough Services in NGINX Ingress Resources on our blog.
Convert Community Ingress Controller Annotations to NGINX Ingress Resources
Production‑grade Kubernetes deployments often need to extend basic Ingress rules to implement advanced use cases, including canary and blue‑green deployments, traffic throttling, ingress‑egress traffic manipulation, and more.
The community Ingress controller implements many of these use cases with Kubernetes annotations. However, many of these annotations are built with custom Lua extensions that pertain to very specific NGINX Ingress resource definitions and as a result are not suitable for implementing advanced functionality in a stable and supported production environment.
In the following sections we show how to convert community Ingress controller annotations into NGINX Ingress Controller resources.
- Canary Deployments
- Traffic Control
- Header Manipulation
- Proxying and Load Balancing
- mTLS Authentication
- Session Persistence (Exclusive to NGINX Plus)
Canary Deployments
Even as you push frequent code changes to your production container workloads, you must continue to serve your existing users. Canary and blue‑green deployments enable you to do this, and you can perform them on the NGINX Ingress Controller data plane to achieve stable and predictable updates in production‑grade Kubernetes environments.
The table shows the fields in NGINX VirtualServer and VirtualServerRoute resources that correspond to community Ingress controller annotations for canary deployments.
The community Ingress controller evaluates canary annotations in this order of precedence:

1. nginx.ingress.kubernetes.io/canary-by-header
2. nginx.ingress.kubernetes.io/canary-by-cookie
3. nginx.ingress.kubernetes.io/canary-by-weight

For NGINX Ingress Controller to evaluate them the same way, they must appear in that order in the NGINX VirtualServer or VirtualServerRoute manifest; a sketch of the VirtualServer equivalent appears after the table.
| Community Ingress Controller | NGINX Ingress Controller |
| --- | --- |
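To make the mapping concrete, here is a minimal sketch of a VirtualServer route that approximates header‑based and weight‑based canary routing; the upstream and header names (app-stable-svc, app-canary-svc, x-canary) are placeholders rather than values from the table.

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  upstreams:
  - name: stable
    service: app-stable-svc
    port: 80
  - name: canary
    service: app-canary-svc
    port: 80
  routes:
  - path: /
    # Roughly equivalent to canary-by-header: requests carrying this header go to the canary
    matches:
    - conditions:
      - header: x-canary
        value: "always"
      action:
        pass: canary
    # Roughly equivalent to canary-by-weight: remaining traffic is split 90/10
    splits:
    - weight: 90
      action:
        pass: stable
    - weight: 10
      action:
        pass: canary
```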
Traffic Control
In microservices environments, where applications are ephemeral by nature and so more likely to return error responses, DevOps teams make extensive use of traffic‑control policies – such as circuit breaking and rate and connection limiting – to prevent error conditions when applications are unhealthy or not functioning as expected.
The table shows the fields in NGINX VirtualServer and VirtualServerRoute resources that correspond to community Ingress controller annotations for rate limiting, custom HTTP errors, a custom default backend, and URI rewriting.
| Community Ingress Controller | NGINX Ingress Controller |
| --- | --- |
As indicated in the table, as of this writing NGINX Ingress resources do not include fields that directly translate the following four community Ingress controller annotations, so you must use snippets instead (as sketched after the list). Direct support for these annotations, using Policy resources, is planned for future releases of NGINX Ingress Controller.

- nginx.ingress.kubernetes.io/limit-connections
- nginx.ingress.kubernetes.io/limit-rate
- nginx.ingress.kubernetes.io/limit-rate-after
- nginx.ingress.kubernetes.io/limit-whitelist
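For example, rate limiting has a first‑class equivalent in a Policy resource, while annotations such as limit-rate and limit-rate-after can be approximated with snippets; the names and values below (webapp-svc, 10r/s, the limit_rate directives) are illustrative only, and snippets require the controller's -enable-snippets flag.

```yaml
# Rate-limiting Policy (first-class NGINX Ingress Controller resource)
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit-policy
spec:
  rateLimit:
    rate: 10r/s
    key: ${binary_remote_addr}
    zoneSize: 10M
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  policies:
  - name: rate-limit-policy
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
  routes:
  - path: /downloads
    # Snippet-based stand-in for limit-rate / limit-rate-after until Policy support lands
    location-snippets: |
      limit_rate_after 1m;
      limit_rate 100k;
    action:
      pass: webapp
```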
Header Manipulation
Manipulating HTTP headers is useful in many use cases, as they contain additional information that is important and relevant for systems involved in an HTTP transaction. For example, the community Ingress controller supports enabling and setting cross‑origin resource sharing (CORS) headers, which are used with AJAX applications, where front‑end JavaScript code from a browser is connecting to a backend app or web server.
The table shows the fields in NGINX VirtualServer and VirtualServerRoute resources that correspond to community Ingress controller annotations for header manipulation.
| Community Ingress Controller | NGINX Ingress Controller |
| --- | --- |
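As an illustration of the VirtualServer approach (the header names and origin below are hypothetical), request and response headers can be manipulated in the proxy action, which covers CORS‑style use cases:

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
  routes:
  - path: /api
    action:
      proxy:
        upstream: webapp
        requestHeaders:
          set:
          - name: X-Request-Tier
            value: edge
        responseHeaders:
          add:
          - name: Access-Control-Allow-Origin
            value: "https://frontend.example.com"
          - name: Access-Control-Allow-Methods
            value: "GET, POST, OPTIONS"
```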
Proxying and Load Balancing
There are other proxying and load‑balancing functionalities you might want to configure in NGINX Ingress Controller depending on the specific use case. These functionalities include setting load‑balancing algorithms and timeouts and buffering settings for proxied connections.
The table shows the statements in the upstream field of NGINX VirtualServer and VirtualServerRoute resources that correspond to community Ingress controller annotations for custom NGINX load balancing, proxy timeouts, proxy buffering, and routing connections to a service’s Cluster IP address and port.
| Community Ingress Controller | NGINX Ingress Controller |
| --- | --- |
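As a sketch of what those upstream fields look like in practice (all values here are placeholders, not the table's examples):

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
    lb-method: least_conn      # load-balancing algorithm
    connect-timeout: 30s       # proxy connect timeout
    read-timeout: 30s          # proxy read timeout
    send-timeout: 30s          # proxy send timeout
    buffering: true            # enable proxy buffering
    buffers:
      number: 4
      size: 8k
    buffer-size: 4k
    use-cluster-ip: true       # route to the Service's ClusterIP instead of pod endpoints
  routes:
  - path: /
    action:
      pass: webapp
```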
mTLS Authentication
A service mesh is particularly useful in a strict zero‑trust environment, where distributed applications inside a cluster communicate securely by mutually authenticating. What if we need to impose that same level of security on traffic entering and exiting the cluster (north‑south traffic)?
We can configure mTLS authentication at the Ingress Controller layer so that the end systems of external connections authenticate each other by presenting a valid certificate.
The table shows the fields in NGINX Policy resources that correspond to community Ingress controller annotations for client certificate authentication and backend certificate authentication.
| Community Ingress Controller | NGINX Ingress Controller |
| --- | --- |
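Here is a minimal sketch of both directions, assuming hypothetical Secret names (ingress-mtls-secret, egress-mtls-secret, egress-trusted-ca-secret): an ingressMTLS Policy validates client certificates on inbound connections, and an egressMTLS Policy presents a client certificate to the backend.

```yaml
# ingressMTLS: require clients to present a certificate signed by a trusted CA
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: ingress-mtls-policy
spec:
  ingressMTLS:
    clientCertSecret: ingress-mtls-secret   # Secret of type nginx.org/ca holding the CA certificate
    verifyClient: "on"
    verifyDepth: 1
---
# egressMTLS: present a client certificate when proxying to the backend
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: egress-mtls-policy
spec:
  egressMTLS:
    tlsSecret: egress-mtls-secret            # client certificate and key the controller presents
    trustedCertSecret: egress-trusted-ca-secret
    verifyServer: true
    verifyDepth: 2
    sslName: secure-app.example.com
```

The ingressMTLS Policy is referenced in the policies field at the top level of a VirtualServer, while the egressMTLS Policy can be attached to an individual route.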
Session Persistence (Exclusive to NGINX Plus)
The table shows the fields in NGINX VirtualServer and VirtualServerRoute resources that are exclusive to the NGINX Ingress Controller based on NGINX Plus and correspond to community Ingress controller annotations for session persistence (affinity).
| Community Ingress Controller | NGINX Ingress Controller |
| --- | --- |
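For example, sticky‑cookie session persistence is expressed with the sessionCookie field of an upstream (the cookie name and timings below are placeholders); this requires the NGINX Plus edition of NGINX Ingress Controller.

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
    sessionCookie:        # NGINX Plus only: sticky-cookie session persistence
      enable: true
      name: srv_id
      path: /
      expires: 1h
  routes:
  - path: /
    action:
      pass: webapp
```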
Option 2: Migrate Using the Kubernetes Ingress Resource
The second option for migrating from the community Ingress controller to NGINX Ingress Controller is to use only annotations and ConfigMaps in the standard Kubernetes Ingress resource, potentially relying on "master/minion"-style processing. This keeps all the configuration in the Ingress object.
Note: With this method, do not alter the spec field of the Ingress resource.
Advanced Configuration with Annotations
The following table outlines the community Ingress controller annotations that correspond directly to annotations supported by NGINX Ingress Controller.
1. The community Ingress controller uses Lua to implement some of its load‑balancing algorithms. NGINX Ingress Controller doesn’t have an equivalent for all of them.
2. Redirects HTTP traffic to HTTPS. The community Ingress controller implements this with Lua code, while NGINX Ingress Controller uses native NGINX if conditions.
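To give a flavor of this style (the annotation values are illustrative, not entries copied from the table), a standard Ingress resource simply carries NGINX Ingress Controller annotations in its metadata:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    nginx.org/lb-method: "least_conn"        # load-balancing method
    nginx.org/proxy-connect-timeout: "30s"   # proxy connect timeout
    nginx.org/proxy-read-timeout: "30s"      # proxy read timeout
    nginx.org/client-max-body-size: "4m"     # maximum request body size
spec:
  ingressClassName: nginx
  rules:
  - host: webapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-svc
            port:
              number: 80
```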
The following table outlines the community Ingress controller annotations that correspond directly to annotations supported by the NGINX Ingress Controller based on NGINX Plus.
| Community Ingress Controller | NGINX Ingress Controller Based on NGINX Plus |
| --- | --- |
Note: The NGINX Ingress Controller based on NGINX Plus has additional annotations for features that the community Ingress controller doesn’t support at all, including active health checks and authentication using JSON Web Tokens (JWTs).
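For instance, active health checks and JWT validation can be enabled on a standard Ingress resource with NGINX Plus annotations; the Secret name (webapp-jwk-secret) and realm below are hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    nginx.com/health-checks: "true"        # NGINX Plus: active health checks of upstream pods
    nginx.com/jwt-key: "webapp-jwk-secret" # NGINX Plus: Secret containing the JSON Web Key
    nginx.com/jwt-realm: "Webapp API"      # realm reported on failed authentication
spec:
  ingressClassName: nginx
  rules:
  - host: webapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-svc
            port:
              number: 80
```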
Global Configuration with ConfigMaps
The following table maps community Ingress controller ConfigMap keys to their directly corresponding NGINX Ingress Controller ConfigMap keys. Note that a handful of ConfigMap key names are identical. Also, both the community Ingress controller and NGINX Ingress Controller have ConfigMap keys that the other does not (not shown in the table).
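As a sketch of the NGINX Ingress Controller side of that mapping (the key values shown are illustrative), global settings live in a single ConfigMap referenced by the controller:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-connect-timeout: "30s"    # default proxy connect timeout
  proxy-read-timeout: "30s"       # default proxy read timeout
  client-max-body-size: "4m"      # default maximum request body size
  worker-processes: "auto"        # NGINX worker process count
```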
Summary
You can migrate from the community Ingress controller to NGINX Ingress Controller using either custom NGINX Ingress resources or the standard Kubernetes Ingress resource with annotations and ConfigMaps. The former option supports a broader set of networking capabilities and so is more suitable for production‑grade Kubernetes environments.
This post is an extract from our comprehensive eBook, Managing Kubernetes Traffic with F5 NGINX: A Practical Guide. Download it for free today.
Try the NGINX Ingress Controller based on NGINX Plus for yourself in a free 30-day trial today or contact us to discuss your use cases.