Shaping the Future of Kubernetes Application Connectivity with F5 NGINX
Application connectivity in Kubernetes can be extremely complex, especially when you deploy hundreds – or even thousands – of containers across various environments: on-premises data centers and public, private, hybrid, or multi-cloud deployments. At NGINX, we firmly believe that a unified approach to managing connectivity to, from, and within a Kubernetes cluster can dramatically simplify and streamline operations for development, infrastructure, platform engineering, and security teams.
In this blog, we share some reflections on how NGINX created one of the most popular Ingress controllers available today, and how we plan to continue delivering best-in-class capabilities for managing Kubernetes app connectivity in the future.
Before anything else, we want to note the importance of putting the customer first. NGINX does so by looking at each customer’s specific scenarios and use cases, the goals they aim to achieve, and the challenges they might encounter along the way. We then develop a solution, leveraging our technology innovations, that helps the customer achieve those goals and address those challenges in the most efficient way.
Ingress Controller
In 2017, we released the first version of NGINX Ingress Controller to answer the demand for enterprise-class, Kubernetes-native app delivery. NGINX Ingress Controller helps improve user experience with load balancing, SSL termination, URI rewrites, session persistence, JWT authentication, and other key application delivery features. It is built on the most popular data plane in the world – NGINX – and leverages the Kubernetes Ingress API.
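To make this concrete, here is a minimal sketch of a standard Ingress resource that NGINX Ingress Controller can process, with TLS termination and path-based routing (the hostname, Secret, and Service names are hypothetical):

```yaml
# A minimal Ingress handled by NGINX Ingress Controller
# (hostname, Secret, and Service names are hypothetical).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  ingressClassName: nginx        # select NGINX Ingress Controller
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-tls         # Secret holding the TLS certificate and key
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc        # backend Service receiving proxied traffic
            port:
              number: 80
```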
After its release, NGINX Ingress Controller gained immediate traction due to its ease of deployment and configuration, low resource utilization (even under heavy loads), and fast and reliable operations.
As our journey advanced, we ran into limitations of the Ingress object in the Kubernetes API, such as the lack of support for protocols other than HTTP and the inability to attach customized request-handling policies, like security policies. Due to these limitations, we introduced Custom Resource Definitions (CRDs) to enhance NGINX Ingress Controller’s capabilities and enable advanced use cases for our customers.
NGINX Ingress Controller provides the CRDs VirtualServer, VirtualServerRoute, TransportServer, and Policy to enhance performance, resilience, uptime, and security, along with observability for the API gateway, load balancer, and Ingress functionality at the edge of a Kubernetes cluster. In support of frequent app releases, these NGINX CRDs also enable role-oriented self-service governance across multi-tenant development and operations teams.
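As an illustration (the host, Secret, Service, and Policy names below are hypothetical), a VirtualServer resource combines routing, TLS, and attached policies in one place:

```yaml
# A VirtualServer sketch using the NGINX Ingress Controller CRDs
# (host, Secret, Service, and Policy names are hypothetical).
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  tls:
    secret: cafe-tls             # TLS termination at the edge
  policies:
  - name: rate-limit-policy      # a Policy object defined separately
  upstreams:
  - name: tea
    service: tea-svc
    port: 80
  routes:
  - path: /tea
    action:
      pass: tea                  # proxy matching requests to the tea upstream
```

Subroutes can also be delegated to VirtualServerRoute resources owned by other teams, which is what enables the role-oriented, self-service model mentioned above.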
With our most recent release at the time of writing (version 3.1), we added JWT authorization and introduced Deep Service Insight to help customers monitor the status of their apps behind NGINX Ingress Controller. This helps implement advanced failover scenarios (e.g., from on-premises to cloud). Many other features are on the roadmap, so stay tuned for new releases.
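As one hedged example of the JWT capabilities (the realm, Secret, and token variable below are hypothetical, and JWT validation requires the NGINX Plus edition of NGINX Ingress Controller), a Policy resource defines the validation rules and is then referenced from a VirtualServer:

```yaml
# A JWT validation Policy sketch (realm, Secret, and token variable
# are hypothetical; requires the NGINX Plus edition).
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: jwt-policy
spec:
  jwt:
    realm: MyProductAPI
    secret: jwk-secret           # Secret containing the JSON Web Key (JWK)
    token: $http_token           # NGINX variable holding the JWT to validate
```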
Learn more about how you can reduce complexity, increase uptime, and provide better insights into app health and performance at scale on the NGINX Ingress Controller web page.
Service Mesh
In 2020, we continued our Kubernetes app connectivity journey by introducing NGINX Service Mesh, a purpose-built, developer-friendly, lightweight yet comprehensive solution to power a variety of service-to-service connectivity use cases, including security and visibility, within the Kubernetes cluster.
NGINX Service Mesh and NGINX Ingress Controller leverage the same data plane technology and can be tightly and seamlessly integrated for unified connectivity to, from, and within a cluster.
Prior to the latest release (version 2.0), NGINX Service Mesh used SMI specifications and a bespoke API server to deliver service-to-service connectivity within a Kubernetes cluster. With version 2.0, we decided to deprecate the SMI resources and replace them with resources that mimic those of the Gateway API for Mesh Management and Administration (GAMMA) initiative. With this approach, we ensure unified north-south and east-west connectivity that leverages the same CRD types, simplifying and streamlining configuration and operations.
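To illustrate the GAMMA approach (this sketch shows the upstream pattern, not necessarily NGINX Service Mesh’s exact resource definitions; all names are hypothetical), an east-west route attaches to a Service instead of a Gateway – here, a canary traffic split between two workload versions:

```yaml
# GAMMA-style east-west routing: the HTTPRoute's parentRef is a Service,
# not a Gateway (all names are hypothetical).
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: tea-canary
spec:
  parentRefs:
  - group: ""                    # core API group, since the parent is a Service
    kind: Service
    name: tea-svc
  rules:
  - backendRefs:
    - name: tea-svc-v1
      port: 80
      weight: 90                 # 90% of in-mesh traffic to the stable version
    - name: tea-svc-v2
      port: 80
      weight: 10                 # 10% to the canary
```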
NGINX Service Mesh is available as a free download from GitHub.
Gateway API
The Gateway API is an open source project intended to improve and standardize app and service networking in Kubernetes. Managed by the Kubernetes community, the Gateway API specification evolved from the Kubernetes Ingress API to solve the Ingress resource’s limitations in production environments, including the difficulty of defining fine-grained policies for request processing and of delegating control over configuration across multiple teams and roles. It’s an exciting project – and since the Gateway API’s introduction, NGINX has been an active participant.
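For example, the Gateway API separates configuration along role boundaries: a cluster operator owns the Gateway, while application developers own the HTTPRoutes that attach to it. A hedged sketch with hypothetical names:

```yaml
# Role separation in the Gateway API (all names are hypothetical):
# a cluster operator defines the shared Gateway...
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All                # let app teams attach routes from any namespace
---
# ...and an application developer attaches an HTTPRoute to it.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: coffee-route
  namespace: apps
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra
  hostnames:
  - coffee.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /coffee
    backendRefs:
    - name: coffee-svc
      port: 80
```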
That said, we deliberately chose not to include the Gateway API specifications in NGINX Ingress Controller, because it already has a robust set of CRDs that cover a wide variety of use cases – some of them the very ones the Gateway API is intended to address.
In 2021, we decided to spin off a separate new project that covers all aspects of Kubernetes connectivity with the Gateway API: NGINX Kubernetes Gateway.
We decided to start our NGINX Kubernetes Gateway project, rather than just using NGINX Ingress Controller, for these reasons:
- To ensure product stability, reliability, and production readiness (we didn’t want to include beta-level specs in a mature, enterprise-class Ingress controller).
- To deliver comprehensive, vendor-agnostic configuration interoperability for Gateway API resources without mixing them with vendor-specific CRDs.
- To experiment with data plane and control plane architectural choices, with the goal of providing easy-to-use, fast, reliable, and secure Kubernetes connectivity that is future-proof.
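Like any Gateway API implementation, NGINX Kubernetes Gateway is selected through a GatewayClass resource. A minimal sketch (the controllerName value below is our assumption and may differ by release):

```yaml
# GatewayClass registering NGINX Kubernetes Gateway as the implementation
# (the controllerName value is an assumption and may differ by release).
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: nginx
spec:
  controllerName: gateway.nginx.org/nginx-gateway-controller
```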
In addition, the Gateway API community formed the GAMMA subgroup to research and define how the Gateway API’s capabilities and resources can serve service mesh use cases. Here at NGINX, we see the Gateway API as the long-term future of unified north-south and east-west Kubernetes connectivity, and we are heading in that direction.
The Gateway API is truly a collaborative effort across vendors and projects – all working together to build something better for Kubernetes users, based on experience and expertise, common touchpoints, and joint decisions. There will always be room for individual implementations to innovate and for data planes to shine. With NGINX Kubernetes Gateway, we continue working on a native NGINX implementation of the Gateway API, and we encourage you to join us in shaping the future of Kubernetes app connectivity.
Ways you can get involved in NGINX Kubernetes Gateway include:
- Join the project as a contributor
- Try the implementation in your lab
- Test and provide feedback
To join the project, visit NGINX Kubernetes Gateway on GitHub.
Even with this evolution of the Kubernetes Ingress API, NGINX Ingress Controller is not going anywhere and will stay here for the foreseeable future. We’ll continue to invest in and develop our proven and mature technology to satisfy both current and future customer needs and to help users who need to manage app connectivity at the edge of a Kubernetes cluster.
Get Started Today
To learn more about how you can simplify application delivery with NGINX Kubernetes solutions, visit the Connectivity Stack for Kubernetes web page.