The Greatest Hits of 2021 on the NGINX Blog
As the turmoil caused by the COVID‑19 pandemic continued throughout 2021, we at NGINX tried to rise to the challenge and keep driving forward to make positive changes for our community, partners, and customers.
As the year draws to a close, we take this opportunity to look back at a selection of the biggest and most popular articles we published, as voted by you, our community. Read on to see which major events we covered, and to catch up on news and topics you may have missed!
How to Choose a Service Mesh
As your Kubernetes deployment matures, it can be hard to know when a service mesh will add value rather than just complexity. And once you know you need one, choosing the right service mesh isn't always straightforward either. In this post, Jenn Gile provides a six‑point checklist for determining whether you need a service mesh, plus a conversation guide for the strategic decision‑making session we recommend you hold with your team and stakeholders to choose the one that's right for you.
NGINX and HAProxy: Testing User Experience in the Cloud
Many performance benchmarks measure peak throughput or requests per second (RPS), but those metrics might not tell the whole performance story at real‑world sites. What matters most is delivering consistent, low‑latency performance to all of your users, even under high load. In comparing NGINX and HAProxy running as reverse proxies on Amazon Elastic Compute Cloud (EC2), Amir Rawdat set out to do two things:
- Determine what level of load each proxy comfortably handles
- Collect the latency percentile distribution, which we find is the metric most directly correlated with user experience
Get the results and all the testing details.
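As an aside on method: a latency percentile distribution is typically captured with a constant‑throughput load generator. The sketch below uses wrk2 purely as an illustration – the target URL, request rate, connection count, and duration are placeholders, not the parameters or tooling details from the post itself.

```shell
# Drive a fixed request rate at the proxy and record a latency histogram.
# All values here are placeholders, not the figures used in the actual tests.
wrk -t4 -c100 -d60s -R10000 --latency http://proxy.example.com/

# The --latency output reports p50/p90/p99/p99.9 and beyond; comparing those
# percentiles, rather than average RPS, is what exposes tail-latency differences.
```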
Introducing NGINX Instance Manager
NGINX can rightly be considered a Swiss Army Knife™ that accelerates your IT infrastructure and application modernization efforts. That wide‑ranging versatility can, however, lead to many NGINX instances spread across an organization, sometimes with NGINX Open Source and NGINX Plus managed by different groups. How do you track all of those instances? How do you ensure their configuration and security settings stay up to date? That's where F5 NGINX Instance Manager comes in.
Ideal for DevOps users who are already deeply experienced with NGINX configuration, NGINX Instance Manager simplifies NGINX management, configuration, and visibility. In this post, Karthik Krishnaswamy explains how it can benefit you.
What Are Namespaces and cgroups, and How Do They Work?
NGINX Unit supports both namespaces and cgroups, which enable process isolation. In this post, Scott van Kalken looks at these two major Linux technologies, which also underlie containers, explaining how they work and how to create them.
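For a hands‑on feel of the concepts the post covers, here's a minimal sketch, assuming a Linux host with the util‑linux `unshare` tool and cgroup v2 mounted at /sys/fs/cgroup; the cgroup name and memory limit below are arbitrary examples, not values from the post:

```shell
# Namespaces: start a shell in a new PID namespace. Inside it, `ps` sees only
# this shell and its children, and the shell itself runs as PID 1.
sudo unshare --fork --pid --mount-proc bash

# cgroups (v2): create a group, cap its memory at ~100 MiB, then move the
# current shell into it so the limit applies to it and its children.
sudo mkdir /sys/fs/cgroup/demo
echo 100M | sudo tee /sys/fs/cgroup/demo/memory.max
echo $$   | sudo tee /sys/fs/cgroup/demo/cgroup.procs
```

Container runtimes combine exactly these two building blocks: namespaces limit what a process can see, while cgroups limit what it can use.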
Comparing NGINX Performance in Bare Metal and Virtual Environments
While the COVID‑19 pandemic drove explosive growth in public cloud adoption, enterprises are also embracing hybrid cloud, running workloads both in public clouds and on premises. To help you determine the optimal and most affordable solution for your performance and scaling needs, we provide a sizing guide that compares NGINX performance in the two environments.
In this post Amir Rawdat describes how we tested NGINX to arrive at the values published in the sizing guide. Because many of our customers also deploy apps in Kubernetes, we also step through our testing of NGINX Ingress Controller on the Rancher Kubernetes Engine (RKE) platform, and discuss how the results compare to NGINX running in traditional on‑premises architectures.
How to Simplify Kubernetes Ingress and Egress Traffic Management
A service mesh can actually make a Kubernetes environment more complicated to manage when it has to be configured separately from the Ingress controller. You can avoid that problem – and save time – by integrating the NGINX Plus‑based F5 NGINX Ingress Controller with F5 NGINX Service Mesh to control both ingress and egress mTLS traffic. In this post, Kate Osborn covers the complete steps from the companion video demo.
Easy and Robust Single Sign-On with OpenID Connect and NGINX Ingress Controller
With the release of NGINX Ingress Controller 1.10.0, we were happy to announce a major enhancement: a technology preview of OpenID Connect (OIDC) authentication. OIDC is the identity layer built on top of the OAuth 2.0 framework, providing an authentication and single sign‑on (SSO) solution for modern apps. Our OIDC policy is a full‑fledged SSO solution that enables users to securely authenticate with multiple applications and Kubernetes services. Significantly, it lets apps use an external identity provider (IdP) to authenticate users, freeing them from having to handle usernames or passwords. Amir Rawdat explains it all for you in this popular post.
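To give a sense of what the configuration looks like, here's a minimal, hypothetical sketch of an OIDC Policy for NGINX Ingress Controller. The resource kind and field names follow the Policy API as documented around the technology preview, but the API version and fields may differ by release, and the client ID, Secret name, and IdP endpoints are placeholders – see the post and the official docs for the exact syntax for your version:

```shell
# Hypothetical example – client ID, Secret name, and endpoints are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: oidc-policy
spec:
  oidc:
    clientID: my-client                    # client ID registered with your IdP
    clientSecret: oidc-secret              # Kubernetes Secret holding the client secret
    authEndpoint: https://idp.example.com/authorize
    tokenEndpoint: https://idp.example.com/token
    jwksURI: https://idp.example.com/keys
EOF
```

Roughly speaking, the policy is then referenced from a VirtualServer resource, after which the Ingress Controller redirects unauthenticated requests to the IdP and validates the returned ID token before proxying traffic to the app.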
Deploying NGINX Ingress Controller on Amazon EKS: How We Tested
Last but by no means least in our 2021 blog round‑up: earlier this year we updated our NGINX Ingress Controller solution brief with sizing guidelines for Amazon Elastic Kubernetes Service (EKS). The brief outlines the performance you can expect from NGINX Ingress Controller running on various instance types in Amazon EKS, along with the estimated monthly total cost of ownership (TCO). In this post, Amir Rawdat returns to explain how we came up with those numbers, including all the information you need to run similar tests of your own.
Give NGINX a Try
Free 30-day trials are available for all of the commercial solutions mentioned in this post (and a couple more!):
- NGINX Plus and NGINX App Protect
- NGINX Ingress Controller and NGINX App Protect
- NGINX Controller
- NGINX Instance Manager
- F5 DNS Load Balancer Cloud Service and F5 Secondary DNS Cloud Service
Or get started with our free and open source offerings, such as NGINX Open Source and NGINX Unit.