Migrating from the Community Ingress Controller to F5 NGINX Ingress Controller

Editor – This post is an extract from our comprehensive eBook, Managing Kubernetes Traffic with F5 NGINX: A Practical Guide. Download it for free today.

Many organizations setting up Kubernetes for the first time start with the NGINX Ingress controller developed and maintained by the Kubernetes community (kubernetes/ingress-nginx). As Kubernetes deployments mature, however, some organizations find they need advanced features or want commercial support while keeping NGINX as the data plane.

One option is to migrate to the NGINX Ingress Controller developed and maintained by F5 NGINX (nginxinc/kubernetes-ingress), and here we provide complete instructions so you can avoid some complications that result from differences between the two projects.

Not sure how these options differ? Read A Guide to Choosing an Ingress Controller, Part 4: NGINX Ingress Controller Options on our blog.

To distinguish between the two projects in the remainder of this post, we refer to the NGINX Ingress Controller maintained by the Kubernetes community (kubernetes/ingress-nginx) as the “community Ingress controller” and the one maintained by F5 NGINX (nginxinc/kubernetes-ingress) as “NGINX Ingress Controller”.

There are two ways to migrate from the community Ingress controller to NGINX Ingress Controller:

Option 1: Migrate Using NGINX Ingress Resources

With this migration option, you use the standard Kubernetes Ingress resource to set root capabilities and NGINX Ingress resources to enhance your configuration with increased capabilities and ease of use.

The custom resource definitions (CRDs) for NGINX Ingress resources – VirtualServer, VirtualServerRoute, TransportServer, GlobalConfiguration, and Policy – enable you to easily delegate control over various parts of the configuration to different teams (such as AppDev and security teams) as well as provide greater configuration safety and validation.
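
For example, a NetOps team can own the VirtualServer for a host and delegate the /billing path to an AppDev team's VirtualServerRoute in its own namespace. The following is a minimal sketch of that pattern (the namespace, hostname, and service names are illustrative):

# VirtualServer owned by the NetOps/platform team
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: nginx-test
spec:
  host: foo.bar.com
  routes:
  - path: /billing
    route: billing-ns/billing-route   # delegate this path to another namespace
---
# VirtualServerRoute owned by the AppDev team
apiVersion: k8s.nginx.org/v1
kind: VirtualServerRoute
metadata:
  name: billing-route
  namespace: billing-ns
spec:
  host: foo.bar.com
  upstreams:
  - name: billing
    service: billing-svc
    port: 80
  subroutes:
  - path: /billing
    action:
      pass: billing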

Configuring SSL Termination and HTTP Path-Based Routing

The examples below map the configuration of SSL termination and Layer 7 path‑based routing in the spec field of the standard Kubernetes Ingress resource to the equivalent configuration in the spec field of the NGINX VirtualServer resource. The syntax and indentation differ slightly in the two resources, but they accomplish the same basic Ingress functions.

Kubernetes Ingress Resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: tls-secret
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /login
        pathType: Prefix
        backend:
          service:
            name: login-svc
            port:
              number: 80
      - path: /billing
        pathType: Prefix
        backend:
          service:
            name: billing-svc
            port:
              number: 80

NGINX VirtualServer Resource:

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: nginx-test
spec:
  host: foo.bar.com
  tls:
    secret: tls-secret
  upstreams:
  - name: login
    service: login-svc
    port: 80
  - name: billing
    service: billing-svc
    port: 80
  routes:
  - path: /login
    action:
      pass: login
  - path: /billing
    action:
      pass: billing

Configuring TCP/UDP Load Balancing and TLS Passthrough

With the community Ingress controller, a Kubernetes ConfigMap API object is the only way to expose TCP and UDP services.

With NGINX Ingress Controller, TransportServer resources define a broad range of options for TLS Passthrough and TCP and UDP load balancing. TransportServer resources are used in conjunction with GlobalConfiguration resources to control inbound and outbound connections.

For more information, see Support for TCP, UDP, and TLS Passthrough Services in NGINX Ingress Resources on our blog.
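
For illustration, here is a minimal sketch of a TCP service exposed with these resources: a GlobalConfiguration that defines a listener and a TransportServer that passes connections on that listener to a Kubernetes service (the listener name, service, and port are assumptions for this example):

apiVersion: k8s.nginx.org/v1alpha1
kind: GlobalConfiguration
metadata:
  name: nginx-configuration
  namespace: nginx-ingress
spec:
  listeners:
  # Open TCP port 5353 on the Ingress Controller
  - name: dns-tcp
    port: 5353
    protocol: TCP
---
apiVersion: k8s.nginx.org/v1alpha1
kind: TransportServer
metadata:
  name: dns-tcp
spec:
  listener:
    name: dns-tcp
    protocol: TCP
  upstreams:
  - name: dns-app
    service: coredns
    port: 5353
  action:
    pass: dns-app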

Convert Community Ingress Controller Annotations to NGINX Ingress Resources

Production‑grade Kubernetes deployments often need to extend basic Ingress rules to implement advanced use cases, including canary and blue‑green deployments, traffic throttling, ingress‑egress traffic manipulation, and more.

The community Ingress controller implements many of these use cases with Kubernetes annotations. However, many of these annotations are built with custom Lua extensions that pertain to very specific NGINX Ingress resource definitions and as a result are not suitable for implementing advanced functionality in a stable and supported production environment.

In the following sections we show how to convert community Ingress controller annotations into NGINX Ingress Controller resources.

Canary Deployments

Even as you push frequent code changes to your production container workloads, you must continue to serve your existing users. Canary and blue‑green deployments enable you to do this, and you can perform them on the NGINX Ingress Controller data plane to achieve stable and predictable updates in production‑grade Kubernetes environments.

The table shows the fields in NGINX VirtualServer and VirtualServerRoute resources that correspond to community Ingress controller annotations for canary deployments.

The community Ingress controller evaluates canary annotations in this order of precedence:

  1. nginx.ingress.kubernetes.io/canary-by-header
  2. nginx.ingress.kubernetes.io/canary-by-cookie
  3. nginx.ingress.kubernetes.io/canary-by-weight

For NGINX Ingress Controller to evaluate them the same way, they must appear in that order in the NGINX VirtualServer or VirtualServerRoute manifest.

Community Ingress Controller NGINX Ingress Controller
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-header: "httpHeader"
matches:
- conditions:
  - header: httpHeader
    value: never
  action:
    pass: echo
- conditions:
  - header: httpHeader
    value: always
  action:
    pass: echo-canary
action:
  pass: echo
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-header: "httpHeader"
nginx.ingress.kubernetes.io/canary-by-header-value: "my-value"
matches:
- conditions:
  - header: httpHeader
    value: my-value
  action:
    pass: echo-canary
action:
  pass: echo
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-cookie: "cookieName"
matches:
- conditions:
  - cookie: cookieName
    value: never
  action:
    pass: echo
- conditions:
  - cookie: cookieName
    value: always
  action:
    pass: echo-canary
action:
  pass: echo
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-weight: "10"
splits:
- weight: 90
  action:
    pass: echo
- weight: 10
  action:
    pass: echo-canary
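
To put the weight-based example above in context, here is a minimal sketch of a complete VirtualServer that splits traffic 90/10 between a stable and a canary version (the host and service names are hypothetical):

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: echo
spec:
  host: echo.example.com
  upstreams:
  - name: echo
    service: echo-svc
    port: 80
  - name: echo-canary
    service: echo-canary-svc
    port: 80
  routes:
  - path: /
    splits:
    - weight: 90   # stable version receives 90% of requests
      action:
        pass: echo
    - weight: 10   # canary version receives 10% of requests
      action:
        pass: echo-canary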

Traffic Control

In microservices environments, where applications are ephemeral by nature and so more likely to return error responses, DevOps teams make extensive use of traffic‑control policies – such as circuit breaking and rate and connection limiting – to prevent error conditions when applications are unhealthy or not functioning as expected.

The table shows the fields in NGINX VirtualServer and VirtualServerRoute resources that correspond to community Ingress controller annotations for rate limiting, custom HTTP errors, a custom default backend, and URI rewriting.

Community Ingress Controller NGINX Ingress Controller

nginx.ingress.kubernetes.io/custom-http-errors: "code"

nginx.ingress.kubernetes.io/default-backend: "default-svc"
errorPages:
- codes: [code]
  redirect:
    code: 301
    url: default-svc

nginx.ingress.kubernetes.io/limit-connections: "number"
http-snippets: |
  limit_conn_zone $binary_remote_addr zone=zone_name:size;
routes:
- path: /path
  location-snippets: |
    limit_conn zone_name number;

nginx.ingress.kubernetes.io/limit-rate: "number"
nginx.ingress.kubernetes.io/limit-rate-after: "number"
location-snippets: |
  limit_rate number;
  limit_rate_after number;

nginx.ingress.kubernetes.io/limit-rpm: "number"
nginx.ingress.kubernetes.io/limit-burst-multiplier: "multiplier"
rateLimit:
  rate: numberr/m
  burst: number * multiplier
  key: ${binary_remote_addr}
  zoneSize: size

nginx.ingress.kubernetes.io/limit-rps: "number"
nginx.ingress.kubernetes.io/limit-burst-multiplier: "multiplier"
rateLimit:
  rate: numberr/s
  burst: number * multiplier
  key: ${binary_remote_addr}
  zoneSize: size
nginx.ingress.kubernetes.io/limit-whitelist: "CIDR"
http-snippets: |
server-snippets: |
nginx.ingress.kubernetes.io/rewrite-target: "URI"
rewritePath: "URI"

As indicated in the table, as of this writing NGINX Ingress resources do not include fields that directly translate four of the community Ingress controller annotations (limit-connections, limit-rate, limit-rate-after, and limit-whitelist), so you must use snippets for them. Direct support for the four annotations, using Policy resources, is planned for future releases of NGINX Ingress Controller.
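
Separately, note that the rateLimit fields shown in the table belong to a Policy resource, which you attach to a VirtualServer through its policies field. A minimal sketch with assumed values (10 requests per second per client IP address, with a burst of 20):

apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit-policy
spec:
  rateLimit:
    rate: 10r/s
    burst: 20
    key: ${binary_remote_addr}   # limit per client IP address
    zoneSize: 10M
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: webapp
spec:
  host: webapp.example.com
  policies:
  - name: rate-limit-policy
  upstreams:
  - name: webapp
    service: webapp-svc
    port: 80
  routes:
  - path: /
    action:
      pass: webapp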

Header Manipulation

Manipulating HTTP headers is useful in many use cases, as they contain additional information that is important and relevant for systems involved in an HTTP transaction. For example, the community Ingress controller supports enabling and setting cross‑origin resource sharing (CORS) headers, which are used with AJAX applications, where front‑end JavaScript code from a browser is connecting to a backend app or web server.

The table shows the fields in NGINX VirtualServer and VirtualServerRoute resources that correspond to community Ingress controller annotations for header manipulation.

Community Ingress Controller NGINX Ingress Controller
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For"
nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-origin: "*"
nginx.ingress.kubernetes.io/cors-max-age: "seconds"
responseHeaders:
  add: 
    - name: Access-Control-Allow-Credentials
      value: "true" 
    - name: Access-Control-Allow-Headers
      value: "X-Forwarded-For"
    - name: Access-Control-Allow-Methods
      value: "PUT, GET, POST, OPTIONS"
    - name: Access-Control-Allow-Origin
      value: "*"
    - name: Access-Control-Max-Age
      value: "seconds"

Proxying and Load Balancing

There are other proxying and load‑balancing functionalities you might want to configure in NGINX Ingress Controller depending on the specific use case. These functionalities include setting load‑balancing algorithms and timeouts and buffering settings for proxied connections.

The table shows the fields in the upstreams object of NGINX VirtualServer and VirtualServerRoute resources that correspond to community Ingress controller annotations for custom NGINX load balancing, proxy timeouts, proxy buffering, and routing connections to a service’s Cluster IP address and port.

Community Ingress Controller → NGINX Ingress Controller
nginx.ingress.kubernetes.io/load-balance → lb-method
nginx.ingress.kubernetes.io/proxy-buffering → buffering
nginx.ingress.kubernetes.io/proxy-buffers-number, nginx.ingress.kubernetes.io/proxy-buffer-size → buffers
nginx.ingress.kubernetes.io/proxy-connect-timeout → connect-timeout
nginx.ingress.kubernetes.io/proxy-next-upstream → next-upstream
nginx.ingress.kubernetes.io/proxy-next-upstream-timeout → next-upstream-timeout
nginx.ingress.kubernetes.io/proxy-read-timeout → read-timeout
nginx.ingress.kubernetes.io/proxy-send-timeout → send-timeout
nginx.ingress.kubernetes.io/service-upstream → use-cluster-ip
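
For example, several of these upstream fields combined in the upstreams object of a VirtualServer (the service name and values are illustrative):

upstreams:
- name: billing
  service: billing-svc
  port: 80
  lb-method: least_conn        # load-balancing algorithm
  connect-timeout: 30s
  read-timeout: 60s
  send-timeout: 60s
  next-upstream: "error timeout"
  buffering: true
  buffers:
    number: 4
    size: 8k
  use-cluster-ip: true         # route to the service's Cluster IP instead of pod endpoints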

mTLS Authentication

A service mesh is particularly useful in a strict zero‑trust environment, where distributed applications inside a cluster communicate securely by mutually authenticating. What if we need to impose that same level of security on traffic entering and exiting the cluster (north‑south traffic)?

We can configure mTLS authentication at the Ingress Controller layer so that the end systems of external connections authenticate each other by presenting a valid certificate.

The table shows the fields in NGINX Policy resources that correspond to community Ingress controller annotations for client certificate authentication and backend certificate authentication.

Community Ingress Controller NGINX Ingress Controller

nginx.ingress.kubernetes.io/auth-tls-secret: secretName
nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
ingressMTLS:
  clientCertSecret: secretName
  verifyClient: "on"
  verifyDepth: 1

nginx.ingress.kubernetes.io/proxy-ssl-secret: "secretName"
nginx.ingress.kubernetes.io/proxy-ssl-verify: "on|off"
nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "1"
nginx.ingress.kubernetes.io/proxy-ssl-protocols: "TLSv1.2"
nginx.ingress.kubernetes.io/proxy-ssl-ciphers: "DEFAULT"
nginx.ingress.kubernetes.io/proxy-ssl-name: "server-name"
nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on|off"
egressMTLS:
  tlsSecret: secretName
  verifyServer: true|false
  verifyDepth: 1
  protocols: TLSv1.2
  ciphers: DEFAULT
  sslName: server-name
  serverName: true|false
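
For example, the ingressMTLS fields above belong to a Policy resource, which you then attach to a VirtualServer through its policies field, just as in the rate-limiting sketch earlier (the secret name here is an assumption):

apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: ingress-mtls-policy
spec:
  ingressMTLS:
    clientCertSecret: ingress-mtls-secret   # Secret containing the CA certificate used to verify client certificates
    verifyClient: "on"
    verifyDepth: 1

Note that ingressMTLS applies only to VirtualServers that terminate TLS.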

Session Persistence (Exclusive to NGINX Plus)

The table shows the fields in NGINX Policy resources that are exclusive to the NGINX Ingress Controller based on NGINX Plus and correspond to community Ingress controller annotations for session persistence (affinity).

Community Ingress Controller NGINX Ingress Controller

nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "cookieName"
nginx.ingress.kubernetes.io/session-cookie-expires: "x"
nginx.ingress.kubernetes.io/session-cookie-path: "/route"
nginx.ingress.kubernetes.io/session-cookie-secure: "true"
sessionCookie:
  enable: true
  name: cookieName
  expires: xh
  path: /route
  secure: true
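
In a VirtualServer, sessionCookie is configured per upstream. A minimal sketch with the NGINX Plus-based NGINX Ingress Controller (the cookie name and expiry are assumptions):

upstreams:
- name: webapp
  service: webapp-svc
  port: 80
  sessionCookie:
    enable: true
    name: srv_id
    path: /
    expires: 1h
    secure: true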

Option 2: Migrate Using the Kubernetes Ingress Resource

The second option for migrating from the community Ingress controller to NGINX Ingress Controller is to use only annotations and ConfigMaps with the standard Kubernetes Ingress resource, potentially relying on master/minion-style processing. This keeps all the configuration in the Ingress object.

Note: With this method, do not alter the spec field of the Ingress resource.

Advanced Configuration with Annotations

The following table outlines the community Ingress controller annotations that correspond directly to annotations supported by NGINX Ingress Controller.

Community Ingress Controller NGINX Ingress Controller NGINX Directive
nginx.ingress.kubernetes.io/configuration-snippet: |
nginx.org/location-snippets: |
N/A
nginx.ingress.kubernetes.io/load-balance1
nginx.org/lb-method
Default:
random two least_conn
nginx.ingress.kubernetes.io/proxy-buffering: "on|off"
nginx.org/proxy-buffering: "True|False"
proxy_buffering
nginx.ingress.kubernetes.io/proxy-buffers-number: "number"
nginx.ingress.kubernetes.io/proxy-buffer-size: "xk"
nginx.org/proxy-buffers: "number 4k|8k"
nginx.org/proxy-buffer-size: "4k|8k"
proxy_buffers

proxy_buffer_size
nginx.ingress.kubernetes.io/proxy-connect-timeout: "seconds"
nginx.org/proxy-connect-timeout: "seconds"
proxy_connect_timeout
nginx.ingress.kubernetes.io/proxy-read-timeout: "seconds"
nginx.org/proxy-read-timeout: "seconds"
proxy_read_timeout
nginx.ingress.kubernetes.io/proxy-send-timeout: "seconds"
nginx.org/proxy-send-timeout: "seconds"
proxy_send_timeout
nginx.ingress.kubernetes.io/rewrite-target: "URI"
nginx.org/rewrites: "serviceName=svc rewrite=URI"
rewrite
nginx.ingress.kubernetes.io/server-snippet: |
nginx.org/server-snippets: |
N/A
nginx.ingress.kubernetes.io/ssl-redirect: "true|false"
ingress.kubernetes.io/ssl-redirect: "True|False"
N/A2

1The community Ingress controller uses Lua to implement some of its load‑balancing algorithms. NGINX Ingress Controller doesn’t have an equivalent for all of them.

2Redirects HTTP traffic to HTTPS. The community Ingress controller implements this with Lua code, while NGINX Ingress Controller uses native NGINX if conditions.
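
As an illustration, here is a standard Ingress resource carrying a few of the nginx.org annotations from the table (the host, service, and values are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    nginx.org/lb-method: "least_conn"
    nginx.org/proxy-connect-timeout: "30s"
    nginx.org/proxy-read-timeout: "60s"
    nginx.org/proxy-buffering: "True"
spec:
  ingressClassName: nginx
  rules:
  - host: webapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-svc
            port:
              number: 80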

The following table outlines the community Ingress controller annotations that correspond directly to annotations supported by the NGINX Ingress Controller based on NGINX Plus.

Community Ingress Controller NGINX Ingress Controller Based on NGINX Plus
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "cookie_name"
nginx.ingress.kubernetes.io/session-cookie-expires: "seconds"
nginx.ingress.kubernetes.io/session-cookie-path: "/route"
nginx.com/sticky-cookie-services: "serviceName=example-svc cookie_name expires=time path=/route"

Note: The NGINX Ingress Controller based on NGINX Plus has additional annotations for features that the community Ingress controller doesn’t support at all, including active health checks and authentication using JSON Web Tokens (JWTs).

Global Configuration with ConfigMaps

The following table maps community Ingress controller ConfigMap keys to their directly corresponding NGINX Ingress Controller ConfigMap keys. Note that a handful of key names are identical. Also, both the community Ingress controller and NGINX Ingress Controller have ConfigMap keys that the other does not (those are not shown in the table).

Community Ingress Controller → NGINX Ingress Controller
disable-access-log → access-log-off
error-log-level → error-log-level
hsts → hsts
hsts-include-subdomains → hsts-include-subdomains
hsts-max-age → hsts-max-age
http-snippet → http-snippets
keep-alive → keepalive-timeout
keep-alive-requests → keepalive-requests
load-balance → lb-method
location-snippet → location-snippets
log-format-escape-json: "true" → log-format-escaping: "json"
log-format-stream → stream-log-format
log-format-upstream → log-format
main-snippet → main-snippets
max-worker-connections → worker-connections
max-worker-open-files → worker-rlimit-nofile
proxy-body-size → client-max-body-size
proxy-buffering → proxy-buffering
proxy-buffers-number: "number", proxy-buffer-size: "size" → proxy-buffers: number size
proxy-connect-timeout → proxy-connect-timeout
proxy-read-timeout → proxy-read-timeout
proxy-send-timeout → proxy-send-timeout
server-name-hash-bucket-size → server-names-hash-bucket-size
server-name-hash-max-size → server-names-hash-max-size
server-snippet → server-snippets
server-tokens → server-tokens
ssl-ciphers → ssl-ciphers
ssl-dh-param → ssl-dhparam-file
ssl-protocols → ssl-protocols
ssl-redirect → ssl-redirect
upstream-keepalive-connections → keepalive
use-http2 → http2
use-proxy-protocol → proxy-protocol
variables-hash-bucket-size → variables-hash-bucket-size
worker-cpu-affinity → worker-cpu-affinity
worker-processes → worker-processes
worker-shutdown-timeout → worker-shutdown-timeout
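
For example, a few of the NGINX Ingress Controller ConfigMap keys from the table applied globally; the ConfigMap name and namespace below match the standard NGINX Ingress Controller manifests, and the values are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  proxy-connect-timeout: "30s"
  proxy-read-timeout: "60s"
  client-max-body-size: "8m"
  server-tokens: "false"
  ssl-protocols: "TLSv1.2 TLSv1.3"
  worker-processes: "auto"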

Summary

You can migrate from the community Ingress controller to NGINX Ingress Controller using either custom NGINX Ingress resources or the standard Kubernetes Ingress resource with annotations and ConfigMaps. The former option supports a broader set of networking capabilities and so is more suitable for production‑grade Kubernetes environments.

This post is an extract from our comprehensive eBook, Managing Kubernetes Traffic with F5 NGINX: A Practical Guide. Download it for free today.

Try the NGINX Ingress Controller based on NGINX Plus for yourself in a free 30-day trial today or contact us to discuss your use cases.
