Building a Secure, Fast Network Fabric for Microservices Applications


Title slide from presentation by NGINX Microservices Practice Lead Chris Stetson at nginx.conf 2016: 'Building a Secure, Fast Network Fabric for Microservices Applications'

This post is adapted from a presentation delivered at nginx.conf 2016 by Chris Stetson. You can view a recording of the presentation on YouTube.

Table of Contents

0:00 Introduction
0:56 The Big Shift
2:31 An Anecdote
3:53 The Tight Loop Problem
5:10 Mitigation
6:00 NGINX Works Well with Microservices
6:42 The NGINX Microservices Reference Architecture
7:36 The Value of the MRA
8:15 The Networking Problem
9:03 Service Discovery
9:52 Load Balancing
10:50 Secure and Fast Communication
12:13 A Solution
12:29 Network Architectures
13:02 The Proxy Model
13:58 The Router Mesh Model
15:03 The Fabric Model
16:05 The Normal Process
17:08 The Fabric Model in Detail
18:19 Persistent SSL/TLS Connections
19:14 Circuit Breaker Plus
19:50 Zokets Demo
23:30 Conclusion

0:00 Introduction

Chris Stetson: Hi, my name is Chris Stetson and I’m the Head of Professional Services and also the Microservices Practice Lead at NGINX.

We’re going to be talking about microservices today and how to build a fast, secure network system using NGINX. At the end of our talk, we’ll have a demo with our partners at Zokets to show you how to build a microservices application very quickly and easily using the Fabric Model.

Before we go into the Fabric Model, I want to talk about microservices and what it means from an NGINX perspective.

0:56 The Big Shift

Section title slide reading 'The Big Shift', referring to the transition to microservices architecture currently underway [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

Microservices has caused a big shift in the way applications are architected.

The move from monolithic to microservices architecture for web application delivery is an emerging trend [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

When I first started building applications, they were all pretty much the same. The monolithic architecture shown in the slide is emblematic of the way that applications were constructed.

There was a virtual machine [VM] of some sort. For me, that was usually Java. The functional components of the application would exist in the VM as objects. Those objects would talk to each other in memory. They would have handles back and forth, making method calls. Occasionally, you would reach out to other systems to get data or pass out information, such as notifications.

In a microservices architecture, the components of a web application are hosted in containers and communicate across the network using RESTful API calls [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

With microservices, the paradigm for how applications are constructed is completely different. Your functional components have shifted from being on the same host in memory, talking through a VM, to living in containers and connecting to each other across the network through HTTP using RESTful API calls.

This is very powerful because it gives you functional isolation. It gives you much more granular scalability, and you get resiliency to better deal with failure. A lot of this is simply enabled by the fact that you’re making calls across the network using HTTP.

Now, there are some downsides to this approach.

2:31 An Anecdote

Section title slide reading 'An Anecdote' [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

I have a deep, dark secret. I was a Microsoft employee and I did .NET development for many years. While I was there, I built their video publishing platform called Showcase.

Showcase was a system that took all the videos that Microsoft published internally and put them out on the Web. People could view them and learn, for example, tips and tricks for Microsoft Word. It was a very popular platform. We had a lot of people who used it, and many of them would comment on the videos that we published.

Showcase started off as a .NET monolith. As its popularity grew, we decided that we should change it to an SOA architecture. The conversion was relatively easy. Visual Studio gives you the capability to essentially flip a switch so that your DLL calls shift to RESTful API calls. With some minor refactoring, we were able to get our code to work reasonably well. We were also using the Telligent Community Server for our comments and community features within the application.

3:53 The Tight Loop Problem

Stetson's experience with converting a monolith to an SOA architecture alerted him to the performance problem when there are thousands of requests flying around [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

It seemed we were SOA-capable, and in our initial tests everything worked fine. It wasn’t until we actually moved the system to our staging environment and started using production data that we saw some serious problems. The problems were around pages with a lot of comments.

This was a very popular platform and some of our pages had as many as 2,000 comments on them. As we dug into the problem, we realized that the reason these pages were taking over a minute to render was that the Telligent Community Server was populating the user names first, then making a network call to the user database for every user name to get its details and populate them on the rendered page. This was incredibly inefficient, taking a minute or two to render pages that normally took 5 to 6 seconds when everything was in memory.

5:10 Mitigation

To solve the problems he experienced with excessive traffic in a network architecture, Stetson grouped requests and cached data [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

As we went through the process of finding and fixing the problem, we eventually tuned the system by doing things like grouping all the requests. We cached some of the data, and ultimately we optimized the network to really improve performance.

So, what does this have to do with microservices? Well, with microservices, you’re essentially taking an SOA architecture and putting it into hyperdrive. All the objects that were contained on a single VM and managed internally, talking to each other in memory, are now using HTTP to exchange data.

When this is done right, you get very good performance and linear scalability.

6:00 NGINX Works Well with Microservices

NGINX works well with microservices, being the number one downloaded application on Docker Hub [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

NGINX is one of the best tools you can use to transition to microservices.

A little history about NGINX and microservices. We’ve been involved in the microservices movement from the very beginning. We are the number one downloaded application from Docker Hub. Our customers and end users who have some of the largest microservices installations in the world use us extensively within their infrastructure.

The reason is because we are small, fast, and reliable.

6:42 The NGINX Microservices Reference Architecture

The NGINX Microservices Reference Architecture follows the principles of the 12-Factor App, adapted for a microservices architecture [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

We’ve also been working on microservices internally at NGINX for a while now. Here is a stylized context diagram of the NGINX Microservices Reference Architecture that we’ve been building, which currently runs on AWS.

We have six core microservices. They all run in Docker containers. We decided to build it as a polyglot application, with each container running a different language: we’re using Ruby, Python, PHP, Java, and Node.js.

We built this using the 12-Factor App approach, modified slightly to work better for microservices than for the Heroku platform it was originally designed around. Later on, we’ll show you the application actually running in a demo.

7:36 The Value of the MRA

The NGINX Microservices Reference Architecture provides customers with a blueprint for microservices architecture [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

Why did we build this reference microservices architecture?

We built it because we wanted to provide our customers with a blueprint for building microservices. We also wanted to test NGINX and NGINX Plus features within the context of microservices and figure out how to use them to better advantage. Finally, we wanted to make sure we had a deep understanding of the microservices ecosystem and what it can provide you.

8:15 The Networking Problem

Section title slide "The Networking Problem" underlines a challenge to a microservices architecture: the need for microservices to communicate over a network [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

Let’s go back to our discussion of the big shift.

With the transition from having all of the functional components of your application running in memory and being managed by the VM, to working over a network and talking to each other, you’ve essentially introduced a series of problems that you need to address in order for the application to work efficiently.

One, you need to do service discovery. Two, you need to do load balancing between all the different instances in your architecture. And three, you need to worry about performance and security.

For better or worse, these issues go hand in hand and you have to balance them together. Hopefully, we’ll have a solution that addresses all of them.

Let’s look at each issue in more depth.

9:03 Service Discovery

Service discovery is a challenge in a microservices architecture that does not apply in a monolithic design and is made more difficult by lack of a standard method [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

Let’s talk about service discovery. In a monolith, the app engine would manage all of the object relations. You never had to worry about where one object was versus another; you simply made a method call, the VM would connect you to the object instance, and away it would go.

With microservices, you need to think about where those services are. Unfortunately, this is not a universally standard process. The various service registries you might use, whether ZooKeeper, Consul, etcd, or something else, all work in different ways. In this process, you need to register your services, read back where they are located, and then connect to them.
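
To make this concrete, here’s a minimal sketch of DNS-based service discovery with NGINX Plus, assuming the registry (Consul in this example) exposes a DNS interface; the registry address and service name are hypothetical:

    # Re-query the registry's DNS interface every 10 seconds (address is hypothetical)
    resolver consul.internal:53 valid=10s;

    upstream user-manager {
        # Shared memory zone, required for dynamic DNS re-resolution
        zone user_manager 64k;

        # 'resolve' tracks DNS changes as instances come and go;
        # 'service=http' reads host and port from DNS SRV records
        server user-manager.service.consul service=http resolve;
    }

With a configuration like this, NGINX Plus picks up new service instances from the registry without a reload.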

9:52 Load Balancing

Efficient and sophisticated load balancing like that with NGINX Plus is a requirement for a microservices architecture [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

The second problem is load balancing. When you have multiple instances of a service, you want to be able to connect to them easily, distribute your requests across them efficiently, and do it in the quickest possible manner. So load balancing between the different instances is a very important problem.

Unfortunately, load balancing in its simplest form is pretty dumb. As you start using more advanced load-balancing schemes, they also become more complicated and sophisticated to manage. Ideally, you want your developers to be able to choose the load-balancing scheme based on the needs of their application. For example, if you’re connecting back to a stateful application, you want session persistence so that your session information is retained.
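
As a sketch of what developer-selectable load balancing can look like in NGINX Plus, the upstream below uses the least_time method and cookie-based session persistence for a stateful application; the instance addresses and cookie name are hypothetical:

    upstream stateful-app {
        zone stateful_app 64k;

        # Send each request to the instance responding fastest
        least_time header;

        server 10.0.1.11:8080;
        server 10.0.1.12:8080;

        # Pin each client to one instance so session state is retained
        sticky cookie srv_id expires=1h path=/;
    }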

10:50 Secure and Fast Communication

Secure and fast communication is a big challenge in a microservices architecture because SSL/TLS processing is CPU intensive and slows down message exchange [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

Perhaps the most daunting aspect of microservices design is performance and security.

When everything was running in memory, it was all very fast. Now it’s going over the network, which is an order of magnitude slower.

The information that was securely contained in a single system, typically in a binary format, is now being flung across the network in text format. It’s now relatively easy to put a sniffer on the network and listen to all of your application’s data being moved around.

If you want to encrypt the data at the transport layer, you introduce significant overhead in terms of connection rates and CPU usage. SSL/TLS, in its full implementation, requires nine steps just to initiate a single request. When your system is doing thousands, tens of thousands, hundreds of thousands, or millions of requests a day, this becomes a significant impediment to performance.

12:13 A Solution

The Fabric Model of the NGINX Microservices Reference Architecture addresses the challenges of a microservices architecture by providing service discovery, robust load balancing, and fast encryption [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

We’ve been developing solutions at NGINX that we think address all of these issues, giving you robust service discovery, really good user-configurable load balancing, and secure, fast encryption.

12:29 Network Architectures

Section title slide reading 'Network Architectures', introducing the three models in the NGINX Microservices Reference Architecture [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

Let’s talk about the various ways you can set up and configure your network architecture.

We’ve come up with three network models. They’re not mutually exclusive per se, but we think of them as a progression suited to different needs. The three models are the Proxy Model, the Router Mesh Model, and the Fabric Model, which is the most complex and in many ways turns load balancing on its head.

13:02 The Proxy Model

In the Proxy Model of the NGINX Microservices Reference Architecture, NGINX manages inbound traffic as a reverse proxy and load balancer [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

The Proxy Model focuses entirely on inbound traffic to your microservices application and really ignores internal communication.

You get all of the goodness of HTTP traffic management that NGINX provides. You can have SSL/TLS termination, you can have traffic shaping and security, and with the latest version of NGINX Plus, you get WAF capability with ModSecurity.

You can have caching. You can add all the things that NGINX provides for your monolithic application to your microservices system, and with NGINX Plus, you can do service discovery. As instances of your APIs come up and down, NGINX Plus can dynamically add and remove them from the load-balancing pool.
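
A minimal sketch of a Proxy Model frontend might look like the following, with SSL/TLS termination and caching at the edge and dynamically discovered services behind it; all names and paths are hypothetical:

    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

    resolver consul.internal:53 valid=10s;

    upstream app-backend {
        zone app_backend 64k;
        # Instances are added to and removed from the pool as DNS changes
        server app.service.consul service=http resolve;
    }

    server {
        listen 443 ssl;                      # terminate SSL/TLS for inbound traffic
        ssl_certificate     /etc/nginx/ssl/app.crt;
        ssl_certificate_key /etc/nginx/ssl/app.key;

        location / {
            proxy_cache app_cache;           # cache responses at the edge
            proxy_pass http://app-backend;
        }
    }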

13:58 The Router Mesh Model

In the Router Mesh Model of the NGINX Microservices Reference Architecture, NGINX Plus handles incoming traffic as a reverse proxy and also load balances among the microservices, implementing the circuit breaker pattern [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

The Router Mesh Model is like the Proxy Model in that we have a frontend proxy server to manage that incoming traffic, but it also adds centralized load balancing between the services.

Each of the services connects to that centralized router mesh, which manages the distribution of connections between the different services. The Router Mesh Model also allows you to build in the circuit breaker pattern, adding resiliency to your application and letting you monitor and back off from instances of your services that are failing.
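
A sketch of the centralized router mesh might define one upstream per service and use active health checks as the building block for the circuit breaker; the service names, ports, and /health endpoint are hypothetical:

    upstream user-manager {
        zone user_manager 64k;
        server 10.0.2.21:8080;
        server 10.0.2.22:8080;
    }

    server {
        listen 80;

        location /user-manager/ {
            proxy_pass http://user-manager/;

            # Take an instance out of rotation after 3 failed checks;
            # restore it after 2 consecutive passes
            health_check uri=/health interval=5s fails=3 passes=2;
        }
    }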

Unfortunately, because it adds an extra network hop, the Router Mesh Model actually exacerbates the performance problem if you have to do SSL/TLS encryption. This is where the Fabric Model comes into play.

15:03 The Fabric Model

The Fabric Model of the NGINX Microservices Reference Architecture is a microservices architecture that provides routing, forward proxy, and reverse proxy at the container level and establishes persistent connections between services [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

The Fabric Model is the model that flips everything on its head.

Like the two models before it, the Fabric Model has a proxy server in front to manage incoming traffic. But where it differs from the Router Mesh Model is that, instead of a centralized router, you have NGINX Plus running in every container.

This NGINX Plus instance acts as a reverse and forward proxy for all of the HTTP traffic. Using this system, you get service discovery, robust load balancing, and, most importantly, high-performance encrypted networking.

We’ll go into how that happens and how we make it work. Let’s start by looking at the normal process by which services connect and distribute their requests.

16:05 The Normal Process

In a standard microservices architecture, a client first makes a DNS request to the service registry, then uses the addresses obtained to establish an SSL/TLS connection to the service [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

In this diagram, you see that the Investment Manager needs to talk to the User Manager to get information. The Investment Manager creates an HTTP client. That client makes a DNS request against the service registry and gets back an IP address. It then initiates an SSL/TLS connection to the User Manager, which goes through the nine-step [negotiation or “handshake”] process. Once the data is transferred, the VM closes down the connection and garbage-collects the HTTP client.

That’s the process. It’s fairly straightforward and understandable. When you break it down into these steps, you can see what it takes to actually complete the request and response process.

In the Fabric Model, we’ve changed that around a bit.

17:08 The Fabric Model in Detail

In the Fabric Model of the NGINX Microservices Reference Architecture, NGINX Plus runs in every service to establish local persistent connections and maintain service discovery information [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

The first thing you’ll notice is that NGINX Plus is running in each of the services, and the application code talks to it locally. Because these are localhost connections, you don’t have to worry about encrypting them. They can be plain HTTP requests from the Java or PHP code to the NGINX Plus instance; it’s all HTTP locally within the container.

You’ll also notice that NGINX Plus is managing the connection to the service registry. We have a resolver that asynchronously queries the registry’s DNS instance to get all of the instances of the User Manager and pre-establish connections, so that when the Java service needs to request some data from the User Manager, it can use a pre-existing connection.
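
Here’s a minimal sketch of what that per-container NGINX Plus instance might look like: the application code talks plain HTTP to localhost, while NGINX Plus resolves the registry and proxies across the network over HTTPS; the registry and service names are hypothetical:

    # Asynchronously re-resolve the registry's DNS records (address is hypothetical)
    resolver registry.internal:53 valid=10s;

    upstream user-manager {
        zone user_manager 64k;
        # All current instances of the User Manager, from DNS SRV records
        server user-manager.service.registry.internal service=http resolve;
    }

    server {
        # The app code in this container connects here, unencrypted
        listen 127.0.0.1:80;

        location / {
            # The hop across the network is encrypted
            proxy_pass https://user-manager;
        }
    }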

18:19 Persistent SSL/TLS Connections

Persistent connections between microservices are one of the main advantages of the Fabric Model of the NGINX Microservices Reference Architecture [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

The real benefit comes in the stateful, persistent, encrypted connections between the microservices.

Remember in that first diagram, how the service instance goes through the process of creating the HTTP client, negotiating the SSL/TLS connection, making the request and closing it down? Here, NGINX pre-establishes the connection between the microservices, and using keepalive functionality, keeps that connection persistent between calls so that you don’t have to do that SSL/TLS negotiation process for each request.

Essentially, we’re creating mini-VPN connections from service to service. In our initial testing, we found that there’s a 77% increase in connection speed.
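
In NGINX configuration terms, that persistence comes from the keepalive directive in the upstream block combined with SSL/TLS session reuse. Here’s a sketch building on the hypothetical configuration above:

    resolver registry.internal:53 valid=10s;    # hypothetical registry address

    upstream user-manager {
        zone user_manager 64k;
        server user-manager.internal:443 resolve;

        # Keep up to 32 idle connections per worker open between calls,
        # avoiding a fresh SSL/TLS negotiation for every request
        keepalive 32;
    }

    server {
        listen 127.0.0.1:80;

        location / {
            proxy_pass https://user-manager;
            proxy_http_version 1.1;            # required for upstream keepalives
            proxy_set_header Connection "";    # don't forward 'Connection: close'
            proxy_ssl_session_reuse on;        # reuse SSL/TLS session parameters
        }
    }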

19:14 Circuit Breaker Plus

The Fabric Model of the NGINX Microservices Reference Architecture implements the circuit breaker pattern [presentation by Chris Stetson, NGINX Microservices Practice Lead, at nginx.conf 2016]

You also get the benefit of being able to create and use the circuit breaker pattern within the Fabric Model, as in the Router Mesh Model.

Essentially, you define an active health check within your service and set up caching so that you can retain data in case the service becomes unavailable, giving you full circuit breaker functionality.
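
A sketch of those two building blocks together, an active health check plus a cache that can serve stale data while the service is down, might look like this; the names and the /health endpoint are hypothetical:

    proxy_cache_path /var/cache/nginx keys_zone=cb_cache:10m;

    upstream user-manager {
        zone user_manager 64k;
        server user-manager.internal:8080;
    }

    server {
        listen 127.0.0.1:80;

        location / {
            proxy_cache cb_cache;

            # If the service errors out or times out, serve cached data
            proxy_cache_use_stale error timeout http_500 http_502 http_503;

            proxy_pass http://user-manager;

            # The active health check trips the breaker on repeated failures
            health_check uri=/health interval=5s fails=2 passes=2;
        }
    }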

So, by now I’m sure you think this Fabric Model thing sounds pretty cool and you’d like to see it in action.

19:50 Zokets Demo

We’ve been working with our partners at Zokets who have helped us build a system to easily visualize, control, and automate the process of building microservices-based Fabric Model applications.

I’d like to introduce Sehyo Chang, CTO of Zokets, who will help us show off the Fabric Model using their platform.

Editor – To view the demo, access timepoint 20:25 in the video recording.

23:30 Conclusion

For anybody interested in learning more about how to build these types of network architectures, I highly recommend our series of blog posts about the NGINX Microservices Reference Architecture. It walks through each of the models: the Proxy Model, the Router Mesh Model, and the Fabric Model.

To try NGINX Plus in your own environment, start your free 30‑day trial today or contact us for a live demo.
