How to Augment or Replace Your F5 Hardware Load Balancer with NGINX

“No one owns a fax machine anymore.”

That’s how NGINX CEO Gus Robertson recently concluded a story about a frustrating experience with his insurance company. Gus needed to update his policy, and the company required that he fax a form to execute the change.

Times have changed and Gus’ insurance company is not keeping up. Consumers interact with their preferred brands via email, social channels, and websites, all accessed from their mobile phones. Case in point – to update his policy, Gus downloaded a “send a fax” app, took a photo of the form with his mobile phone, and sent the photo as a fax from the phone. Complicated as that was, it was easier than finding a fax machine. Put simply: Fax machines are analog devices in a digital world.

So it raises the question: Why are you still using your F5 hardware load balancers?

Don’t get me wrong. F5 makes great technology, and hardware load balancers have been an integral part of data-center architecture over the last 20 years. But times have changed and F5 appliances are now the fax machines of the modern application world.

To explain why, we have to look at how applications are changing, the challenges those changes create, and how you can solve the problem by replacing your aging F5 hardware with a modern software load balancer, NGINX.

Changing Applications Require Software Load Balancers

The way enterprises architect applications has changed. According to our recent user survey, 65% of applications in an enterprise portfolio are monoliths, where all of the application logic is packaged and deployed as a single unit. However, we see that the majority of new app development uses microservices architectures instead. Nearly 10% of apps are built net-new as microservices (where the application is broken up into discrete, independently packaged services), while the remaining 25% are hybrid applications (a combination of a monolith with attached microservices, sometimes referred to as “miniservices”).

You can read more about this in our seminal blog series on microservices. It details the journey from monoliths to microservices, which has a profound impact on all aspects of application infrastructure:

  • People: Control shifts from infrastructure teams to application teams. AWS showed the industry that if you make infrastructure easy to manage, developers will provision it themselves. Responsibility for infrastructure then shifts away from dedicated infrastructure and network roles.
  • Process: DevOps speeds provisioning time. DevOps applies agile methodologies to app deployment and maintenance. Modern app infrastructure must be automated and provisioned orders of magnitude faster, or you risk delaying the deployment of crucial fixes and enhancements.
  • Technology: Infrastructure decouples software from hardware. Software‑defined infrastructure, Infrastructure as Code, and composable infrastructure all describe the trend of value shifting from proprietary hardware appliances to programmable software on commodity hardware or public cloud computing resources.

These trends impact all aspects of application infrastructure, but in particular they change the way load balancers – sometimes referred to as application delivery controllers, or ADCs – are deployed. Load balancers are the intelligent control point that sits in front of all apps.

Historically, a load balancer was deployed as hardware at the edge of the data center. The appliance improved the security, performance, and resilience of hundreds or even thousands of apps that sat behind it. However, the shift to microservices and the resulting changes to the people, process, and technology of application infrastructure require frequent changes to apps. These app changes then require corresponding changes to load balancer policies.

If your policies are implemented on an F5 hardware load balancer, then you’ll spend weeks or even months testing and implementing iRule changes. Why? F5 appliances are so expensive that you generally have them load balance traffic for dozens, hundreds, or even thousands of apps. On an appliance that frontends 1,000 apps, changing a policy for one of them means you have to test the impact on the other 999. One large bank told us that they maintain an entirely parallel F5 infrastructure for mandatory testing of changes, a process that takes six weeks overall.

Frequent iRule changes are painful enough in the monolithic world; in the microservices world they become untenable. Maintaining F5 appliances becomes expensive, time‑consuming, and error‑prone.

NGINX software load balancers change that.
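
To make that concrete, below is a minimal sketch of a per-application NGINX configuration; every filename, hostname, and address in it is hypothetical. Because the policy lives in its own small file, a change such as retiring an old API version (the kind of tweak that would otherwise mean an iRule edit and a shared-appliance test cycle) touches only this one app.

    # /etc/nginx/conf.d/checkout-app.conf -- hypothetical per-app config file.
    # Editing this file affects only the checkout app; every other app keeps
    # its own file and is untouched by the change.

    upstream checkout_backend {
        least_conn;                  # load-balancing method for this app only
        server 10.0.3.21:8080;
        server 10.0.3.22:8080;
    }

    server {
        listen 80;
        server_name checkout.example.com;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://checkout_backend;
        }

        # The kind of change that would otherwise be an iRule edit on a
        # shared appliance: retire an old API version with a 410 response.
        location /v1/ {
            return 410;
        }
    }

Rolling out a change like this means reloading one NGINX instance, not regression-testing hundreds of unrelated virtual servers.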

There are four common deployment models:

  • Replace F5 appliances deployed at the edge with NGINX software.
  • Deploy NGINX behind the F5 appliance to act as a DevOps‑friendly abstraction layer.
  • Frontend an F5 appliance with NGINX to offload capabilities like SSL termination (see the sketch after this list).
  • Provision an NGINX instance for each of your apps, or even for each of your customers.
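
As one illustration of the third model, here is a minimal sketch of NGINX terminating TLS at the edge and passing decrypted traffic to an existing BIG-IP virtual server; the certificate paths and the VIP address are assumptions made up for the example.

    # NGINX in front of an existing F5 BIG-IP: terminate TLS here and
    # forward plain HTTP to the appliance's virtual server (VIP).
    upstream f5_vip {
        server 192.0.2.10:80;        # hypothetical BIG-IP virtual server
    }

    server {
        listen 443 ssl;
        server_name www.example.com;

        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        location / {
            proxy_set_header Host              $host;
            proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_pass http://f5_vip;
        }
    }

The same pattern runs in reverse for the second model, with the F5 appliance at the edge proxying to NGINX instances that sit closer to the apps.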

Because NGINX is lightweight and programmable, it consumes fewer compute resources than an equivalent F5 appliance while providing at least 2x better performance and 80% cost savings.

But replacing your F5 appliance with NGINX software isn’t an overnight process. To help, we’ve curated a list of resources you can use to research, evaluate, and implement NGINX.

Resources to Help You Augment or Replace Your F5 with NGINX

Stage 1: Researching NGINX as an F5 Alternative

The first stage in the process is to understand why it makes sense to deploy NGINX as your software load balancer. If you’re just getting started, we recommend you check out our:

  • Ebook: 5 Reasons to Switch from Hardware to Software Load Balancing – Learn how to build and deliver new apps with software‑based application delivery platforms.
  • Blog: How Hardware Load Balancers Are Killing Agile Development (and Competitive Advantage)! – This short read elaborates on the concepts discussed in this blog, demonstrating why F5 and other hardware load balancers are not DevOps‑friendly.
  • Blog: Not All Software Load Balancers Are Created Equal – F5 offers a software load balancer, but it doesn’t fit in a modern application architecture the same way NGINX does. This blog explains the significant differences.
  • Infographic: Why Replace F5 BIG-IP with NGINX Plus? – This provides concrete data points to help you make the case for migrating from F5 to NGINX.
  • Case study: IgnitionOne Manages Massive Traffic With Minimal Latency – Read about IgnitionOne’s experience replacing F5 and achieving 5x capacity at 99% lower cost.

Stage 2: Evaluating NGINX as an F5 Alternative

Now that you’ve built the business case for NGINX, it’s time to understand the various ways you can deploy NGINX to augment or replace your F5 hardware. Learn from customers and NGINX experts with our:

  • Case study: Migrating Load‑Balanced Services from F5 to NGINX Plus at AppNexus – This blog and video show how AppNexus replaced its F5 BIG-IP hardware, cutting costs by 95%.
  • Blog: NGINX Plus vs. F5 BIG-IP: 2018 Price‑Performance Comparison – Read these test results to size your NGINX environment and get better performance at less cost.
  • Whitepaper: Replacing and Augmenting F5 BIG-IP with NGINX Plus – Learn the benefits of deploying NGINX Plus behind, beside, or in place of your F5 hardware.
  • Video: The TCO of the NGINX Application Platform – Learn about quantifying the ROI of NGINX and total cost of ownership versus F5. Contact us for a custom ROI calculation.

Stage 3: Implementing NGINX as an F5 Alternative

After making the investment decision, it’s time to roll up your sleeves and migrate your F5 environment over to NGINX.

  • Ebook: F5 BIG-IP to NGINX Plus: Migration Guide – Read this detailed guide to learn how F5 iRules and other configuration can be migrated easily to NGINX Plus.
  • Webinar: Replacing and Augmenting F5 BIG-IP with NGINX Plus – Watch this companion piece to the ebook for tips on migrating from F5 to NGINX.
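
To give a flavor of what such a migration looks like, here is a minimal sketch of one common iRule pattern, choosing a backend pool based on the Host header, expressed as NGINX configuration; the hostnames, addresses, and upstream names are purely hypothetical.

    # On BIG-IP this logic typically lives in an iRule that inspects the
    # Host header during HTTP_REQUEST and picks a pool. In NGINX the same
    # decision becomes a declarative map from Host header to upstream group.

    upstream api_pool {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    upstream web_pool {
        server 10.0.1.11:8080;
        server 10.0.1.12:8080;
    }

    map $http_host $target_pool {
        default          web_pool;
        api.example.com  api_pool;
    }

    server {
        listen 80;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://$target_pool;   # resolves to the matching upstream group
        }
    }

More involved iRules generally map to combinations of NGINX directives in much the same way; the ebook above covers the broader set of cases.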

Two Ways to Get Started with a Free Trial of NGINX Software

Getting started with NGINX Plus is easy. We offer two automated trial experiences, based on your needs.

If you’re thinking of replacing your F5 load balancer with NGINX Plus, your first option is to get a free 30‑day NGINX Plus trial. This is the best option for DevOps and infrastructure teams that plan to use automation and orchestration tools to manage NGINX, or already manage F5 via an API.

As a second option, you can get a free NGINX Controller trial (which includes NGINX Plus) if you want a solution that includes additional monitoring, management, and analytics capabilities for NGINX Plus load balancers. This is the best option for infrastructure and network teams that do not manage F5 via an API or want to evaluate the NGINX Application Platform as a fully integrated, standalone ADC.
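
For teams taking the first option and managing NGINX Plus through automation, the NGINX Plus REST API is enabled with a short block of configuration. The sketch below assumes a dedicated management port and network; both are made up for the example.

    # Expose the NGINX Plus API on a dedicated management port.
    server {
        listen 8080;

        location /api {
            api write=on;          # read/write REST API (NGINX Plus only)
            allow 10.0.0.0/8;      # restrict access to the management network
            deny  all;
        }
    }

Orchestration tooling can then query live metrics or adjust upstream membership over HTTP, much as teams that already manage F5 via an API would expect to work.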

We’d love to talk with you about how NGINX Plus can help with your use case. But, no, you can’t fax us. I think we can agree that, like F5 hardware, fax machines have no place in the modern enterprise.
