How to Simplify Kubernetes Ingress and Egress Traffic Management
One of the ways a service mesh can actually make it more complicated to manage a Kubernetes environment is when it must be configured separately from the Ingress controller. Separate configurations aren’t just time‑consuming, either. They increase the probability of configuration errors that can prevent proper traffic routing and even lead to security vulnerabilities (like bad actors gaining access to restricted apps) and poor experiences (like customers not being able to access apps they’re authorized for). Beyond the time it takes to perform separate configurations, you end up spending more time troubleshooting errors.
You can avoid these problems – and save time – by integrating NGINX Plus Ingress Controller with NGINX Service Mesh to control both ingress and egress mTLS traffic. In this video demo, we cover the complete steps.
Supporting documentation is referenced in the following sections:
- Prerequisites
- Deploying NGINX Plus Ingress Controller with NGINX Service Mesh
- Using a Standard Kubernetes Ingress Resource to Expose the App
- Using an NGINX VirtualServer Resource to Expose the App
- Configuring a Secure Egress Route with NGINX Ingress Controller
Prerequisites (0:18)
Before starting the actual demo, we performed these prerequisites:
- Installed the NGINX Service Mesh control plane in the Kubernetes cluster and set up mTLS and the strict policy for the service mesh.
- Installed NGINX Plus Ingress Controller as a Deployment (rather than a DaemonSet) in the Kubernetes cluster, enabled egress, and exposed it as a service of type LoadBalancer.
- Followed our instructions to download the sample bookinfo app, inject the NGINX Service Mesh sidecar, and deploy the app.
Note that as a result of the strict policy created in Step 1, requests to the bookinfo app from clients outside the mesh are denied at the sidecar. We illustrate this in the demo by first running the following command to set up port forwarding:
> kubectl port-forward svc/product-page 9080
Forwarding from 127.0.0.1:9080 -> 9080
Forwarding from [::1]:9080 -> 9080
Handling connection for 9080
When we try to access the app, we get status code 503 because our local machine is not part of the service mesh:
> curl localhost:9080
503
Deploying NGINX Plus Ingress Controller with NGINX Service Mesh (1:50)
The first stage in the process of exposing an app is to deploy NGINX Plus Ingress Controller. Corresponding instructions are provided in our tutorial, Deploy with NGINX Plus Ingress Controller for Kubernetes.
Note: The demo does not work with the NGINX Open Source version of NGINX Ingress Controller.
NGINX provides both Deployment and DaemonSet manifests for this purpose. In the demo, we use the Deployment manifest, nginx-plus-ingress.yaml. It includes annotations to route both ingress and egress traffic through the same NGINX Plus Ingress Controller instance:
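Roughly, those annotations appear on the pod template as shown below (the annotation names here are illustrative assumptions; use the values from the nginx-plus-ingress.yaml manifest in the tutorial):

spec:
  template:
    metadata:
      annotations:
        # assumed annotation names -- check the tutorial manifest for the authoritative values
        nsm.nginx.com/enable-ingress: "true"   # route ingress traffic for the mesh through this instance
        nsm.nginx.com/enable-egress: "true"    # route egress traffic from the mesh through this instance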
The manifest enables direct integration of NGINX Plus Ingress Controller with Spire, the certificate authority (CA) for NGINX Service Mesh, eliminating the need to inject the NGINX Service Mesh sidecar into NGINX Plus Ingress Controller. Instead, NGINX Plus Ingress Controller fetches certificates and keys directly from the Spire CA to use for mTLS with the pods in the mesh. The manifest specifies the Spire agent address:
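(A sketch of that argument as passed in the container's args; the socket path is the default used in the NGINX Service Mesh docs and should be treated as an assumption:)

args:
  - -spire-agent-address=/run/spire/sockets/agent.sock   # UNIX socket of the Spire agent, the mesh's CA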
and mounts the Spire agent UNIX socket to the NGINX Plus Ingress Controller pod:
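(Again a sketch; the volume name and hostPath shown are assumptions based on the usual Spire agent setup:)

volumeMounts:
  - name: spire-agent-socket
    mountPath: /run/spire/sockets        # expose the Spire agent socket inside the Ingress Controller container
volumes:
  - name: spire-agent-socket
    hostPath:
      path: /run/spire/sockets           # socket directory created by the Spire agent on each node
      type: Directory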
The final thing to note about the manifest is the -enable-internal-routes CLI argument, which enables us to route to egress services:
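(A sketch of the container's args with that flag included; the other flags shown are typical NGINX Ingress Controller arguments and may differ from the manifest you download:)

args:
  - -nginx-plus
  - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
  - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
  - -spire-agent-address=/run/spire/sockets/agent.sock
  - -enable-internal-routes                                # needed to route to egress (internal-route) services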
Before beginning the demo, we ran the kubectl apply -f nginx-plus-ingress.yaml command to install NGINX Plus Ingress Controller, and at this point we inspect the deployment in the nginx-ingress namespace. As shown in the READY column of the following output, there is only one container for the NGINX Plus Ingress Controller pod, because we haven’t injected it with an NGINX Service Mesh sidecar.
We’ve also deployed a service of type LoadBalancer to expose the external IP address of the NGINX Plus Ingress Controller (here, 35.233.133.188) outside of the cluster. We’ll access the sample bookinfo application at that IP address.
> kubectl get pods --namespace=nginx-ingress
NAME                                 READY   STATUS    RESTARTS   AGE
pod/nginx-ingress-867f954b8f0fzdrm   1/1     Running   0          3d3h

NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
service/nginx-ingress   LoadBalancer   10.31.245.207   35.233.133.188   80:31469/TCP,443:32481/TCP   4d2h
Using a Standard Kubernetes Ingress Resource to Expose the App (3:55)
Now we expose the bookinfo app in the mesh, using a standard Kubernetes Ingress resource as defined in bookinfo-ingress.yaml. Corresponding instructions are provided in our tutorial, Expose an Application with NGINX Plus Ingress Controller.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: bookinfo-ingress
spec:
  ingressClassName: nginx # use only with Kubernetes version >= 1.18.0
  tls:
  - hosts:
    - bookinfo.example.com
    secretName: bookinfo-secret
  rules:
  - host: bookinfo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: productpage
          servicePort: 9080
The resource references a Kubernetes Secret for the bookinfo app on line 10 and includes a routing rule which specifies that requests for bookinfo.example.com are sent to the productpage service (lines 11–18). The Secret is defined in bookinfo-secret.yaml:
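(A sketch of the Secret's structure; the actual base64-encoded certificate and key data are omitted here:)

apiVersion: v1
kind: Secret
metadata:
  name: bookinfo-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # self-signed certificate used in the demo
  tls.key: <base64-encoded private key>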
We run this command to load the key and certificate, which in the demo are self-signed:
> kubectl apply -f bookinfo-secret.yaml
secret/bookinfo-secret unchanged
We activate the Ingress resource:
> kubectl apply -f bookinfo-ingress.yaml
ingress.networking.k8s.io/bookinfo-ingress created
and verify that NGINX Plus Ingress Controller added the route defined in the resource, as confirmed by the event at the end of the output:
> kubectl describe ingress bookinfo-ingress
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 5s nginx-ingress-controller Configuration for ...
...default/bookinfo-ingress was added or updated
In the demo we now use a browser to access the bookinfo app at https://bookinfo.example.com/. (We have previously added a mapping in the local /etc/hosts file between the IP address of the Ingress Controller service – 35.233.133.188 in the demo, as noted above – and bookinfo.example.com. For instructions, see the documentation.) The info in the Book Reviews section of the page changes periodically as requests rotate through the three versions of the reviews service defined in bookinfo.yaml.
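(For reference, the /etc/hosts mapping mentioned above is a single line pairing the service's external IP with the hostname:)

35.233.133.188 bookinfo.example.com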
We next inspect the ingress traffic into the cluster. We run the generate-traffic.sh script to make requests to the productpage service via the NGINX Plus Ingress Controller’s public IP address, and then run the nginx-meshctl top command to monitor the traffic:
> nginx-meshctl top deploy/productpage-v1
Deployment       Direction  Resource       Success Rate  P99     P90     P50    NumRequests
productpage-v1
                 To         details-v1     100.00%       3ms     3ms     2ms    14
                 To         reviews-v2     100.00%       99ms    90ms    20ms   5
                 To         reviews-v3     100.00%       99ms    85ms    18ms   5
                 To         reviews-v1     100.00%       20ms    17ms    9ms    12
                 From       nginx-ingress  100.00%       192ms   120ms   38ms   ...
Using an NGINX VirtualServer Resource to Expose the App (6:45)
We next show an alternative way to expose an app, using an NGINX VirtualServer resource. It’s a custom NGINX Ingress Controller resource that supports more complex traffic handling, such as traffic splitting and content‑based routing.
First we delete the standard Ingress resource:
> kubectl delete -f bookinfo-ingress.yaml
ingress.networking.k8s.io "bookinfo-ingress" deleted
Our bookinfo-vs.yaml file configures mTLS with the same Secret as in bookinfo-ingress.yaml (lines 7–8). Lines 9–12 define the productpage service as the upstream, and lines 13–24 define a route that sends all GET requests made at bookinfo.example.com to that upstream. For HTTP methods other than GET, it returns status code 405.
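As a rough sketch, such a VirtualServer resource can look like the following; the matches/return structure shown is illustrative, and the line numbers of the actual bookinfo-vs.yaml may differ slightly:

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: bookinfo-vs
spec:
  host: bookinfo.example.com
  tls:
    secret: bookinfo-secret          # same Secret as in bookinfo-ingress.yaml
  upstreams:
  - name: backend
    service: productpage             # the productpage service is the upstream
    port: 9080
  routes:
  - path: /
    matches:
    - conditions:
      - variable: $request_method
        value: GET                   # only GET requests are passed to the upstream
      action:
        pass: backend
    action:
      return:                        # any other method gets status code 405
        code: 405
        type: text/plain
        body: "Method not allowed\n"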
We apply the resource:
> kubectl apply -f bookinfo-vs.yaml
virtualserver.kubernetes.nginx.org/bookinfo-vs created
We then perform the same steps as with the Ingress resource – running the kubectl describe command to confirm correct deployment and accessing the app in a browser. Another confirmation that the app is working correctly is that it rejects the POST method:
> curl -k -X POST https://bookinfo.example.com/
Method not allowed
Configuring a Secure Egress Route with NGINX Ingress Controller (8:44)
Now we show how to route egress traffic through NGINX Plus Ingress Controller. Our tutorial Configure a Secure Egress Route with NGINX Plus Ingress Controller covers the process, using different sample apps.
We’ve already defined a simple bash pod in bash.yaml and deployed it in the default namespace; this is the pod from which we’re sending requests. As shown in the READY column of this output, it has been injected with the NGINX Service Mesh sidecar.
> kubectl get all
NAME READY STATUS RESTARTS AGE
pod/bash-6ccb678958-zsgm7 2/2 Running 0 77s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.31.240.1 <none> 443/TCP 4d2h
...
There are several use cases where you might want to enable requests from within the pod to an egress service, which is any entity that’s not part of NGINX Service Mesh. Examples are services deployed:
- Outside the cluster
- On another cluster
- On the same cluster, but not injected with the NGINX Service Mesh sidecar
In the demo, we’re considering the final use case. We have an application deployed in the legacy namespace, which isn’t controlled by NGINX Service Mesh and where automatic injection of the NGINX Service Mesh sidecar is disabled. There’s only one pod running for the app.
> kubectl get all --namespace=legacy
NAME READY STATUS RESTARTS AGE
pod/target-5f7bcb96c6-km9lz 1/1 Running 0 27m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/target-svc ClusterIP 10.31.245.213 <none> 80/TCP,443/TCP 27m
...
Remember that we’ve configured a strict mTLS policy for NGINX Service Mesh; as a result we can’t send requests directly from the bash pod to the target service, because the two cannot authenticate with each other. When we try, the connection is reset, as illustrated here:
> kubectl exec -it bash-6ccb678958-zsgm7 -c bash -- curl target-svc.legacy
curl: (56) Recv failure: connection reset by peer
command terminated with exit code 56
The solution is to enable the bash pod to send egress traffic through NGINX Plus Ingress Controller. We uncomment the annotation on lines 14–15 of bash.yaml:
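(Roughly, the annotation on the pod template looks like this; the annotation name follows the NGINX Service Mesh egress documentation and should be treated as illustrative:)

    metadata:
      annotations:
        # assumed annotation name -- uncommenting it routes the pod's egress traffic via NGINX Plus Ingress Controller
        config.nsm.nginx.com/default-egress-allowed: "true"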
Then we apply the new configuration:
> kubectl apply -f bash.yaml
deployment.apps/bash configured
and verify that a new bash pod has spun up:
> kubectl get pods
NAME READY STATUS RESTARTS AGE
bash-678c8b4579-7sfml 2/2 Running 0 6s
bash-6ccb678958-zsgm7 2/2 Terminating 0 3m28s
Now when we run the same kubectl exec command as before, to send a request from the bash pod to the target service, we get status code 404 instead of the connection error. This indicates that the bash pod has successfully sent the request to NGINX Plus Ingress Controller, but the latter doesn’t know where to forward it because no route is defined.
We create the required route with the following Ingress resource definition in legacy-route.yaml. The internal-route annotation on line 7 means that the target service is not exposed to the Internet, but only to workloads within NGINX Service Mesh.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: target-internal-route
  namespace: legacy
  annotations:
    nsm.nginx.com/internal-route: "true"
spec:
  ingressClassName: nginx # use only with Kubernetes version >= 1.18.0
  tls:
  rules:
  - host: target-svc.legacy
    http:
      paths:
      - path: /
        backend:
          serviceName: target-svc
          servicePort: 80
We activate the new resource and confirm that NGINX Plus Ingress Controller added the route defined in the resource:
> kubectl apply -f legacy-route.yaml
ingress.networking.k8s.io/target-internal-route created
> kubectl describe ingress target-internal-route -n legacy
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal AddedOrUpdated 6s nginx-ingress-controller Configuration for ...
...legacy/target-internal-route was added or updated
Now when we run the kubectl exec command, we reach the target service:
{"req": {"method": "GET"
"url": "/",
"host": "target-svc.legacy",
"remoteAddr": "10.28.2.76:56086"}}
An advantage of routing egress traffic through NGINX Plus Ingress Controller is that you can control exactly which external services can be reached from inside the cluster – it’s only the ones for which you define a route.
One final thing we show in the demo is how to monitor egress traffic. We run the kubectl exec command to send several requests, and then run this command:
> nginx-meshctl top deploy/nginx-ingress -n nginx-ingress
Deployment Direction Resource Success Rate P99 P90 P50 NumRequests
nginx-ingress
To target 100.00% 1ms 1ms 1ms 9
From bash 100.00% 0ms 0ms 0ms 9
Say “No” to Latency – Try NGINX Service Mesh with NGINX Ingress Controller
Many service meshes offer ingress and egress gateway options, but we think you’ll appreciate an added benefit of the NGINX integration: lower latency. Most meshes require a sidecar to be injected into the Ingress controller, which requires traffic to make an extra hop on its way to your apps. Seconds matter, and that extra hop slowing down your digital experiences might cause customers to turn elsewhere. NGINX Service Mesh doesn’t add unnecessary latency because it doesn’t inject a sidecar into NGINX Ingress Controller. Instead, by integrating directly with Spire, the CA of the mesh, NGINX Ingress Controller becomes part of NGINX Service Mesh. NGINX Ingress Controller simply fetches certificates and keys from the Spire agent and uses them to participate in the mTLS cert exchange with meshed pods.
There are two versions of NGINX Ingress Controller for Kubernetes: NGINX Open Source and NGINX Plus. To deploy NGINX Ingress Controller with NGINX Service Mesh as described in this blog, you must use the NGINX Plus version, which is available for a free 30-day trial.
NGINX Service Mesh is completely free and available for immediate download and can be deployed in less than 10 minutes! To get started, check out the docs and let us know how it goes via GitHub.