nginMesh: NGINX as a Proxy in an Istio Service Mesh


This post is adapted from a presentation at nginx.conf 2017 by A.J. Hunyady, Senior Director of Product Management at NGINX, Inc. You can view the complete presentation, Deploying NGINX Proxy in an Istio Service Mesh, on YouTube. Also see the press release announcing nginMesh and Ed Robinson’s conference talk on NGINX open source projects.


Table of Contents

0:54 Agenda
1:32 Modern Apps
4:29 Cloud-Native Apps
7:52 The Service Mesh
9:37 What is Istio?
12:20 Istio with NGINX
14:25 Demo of Istio and NGINX in action
31:48 nginMesh Roadmap
34:00 Q&A

Good afternoon, everyone. My name is A.J. Hunyady. I’m going to give a talk on the NGINX proxy used within an Istio service mesh. I’m the product owner, and I’ll be joined on stage by Sehyo Chang, who’s the chief architect for this product. He’ll be doing a demo for us.

Before I get started, I’d like to ask a couple of questions. How many of you are familiar with the service mesh? Show of hands. Well, that’s OK. How about Istio – have any of you heard of Istio? Oh, that’s a fair number. All right, great. Well, thank you. That helps.

0:54 Agenda

This is the agenda. I’ll speak about the evolution of microservices and what I perceive as the new stack. Then, I’ll briefly talk about the role of the service mesh, for those of you who haven’t run across it yet.

After that, I’ll do a brief intro on Istio and talk about how NGINX and Istio will work together to give you a service mesh for the enterprise – maybe I should call it an enterprise-grade service mesh. Then Sehyo will be doing a demo for us. And finally, I’ll talk about the roadmap, since we’re going to deliver this in several phases.

1:32 Modern Apps

Let’s get started. You might have seen this earlier this morning. I’m trying to set some context regarding the evolution of applications and the transitions they’ve gone through: from client-server apps in the 1990s to three-tier apps, and then in the 2000s to Web 2.0 applications.

With the evolution of container technology, what we’re seeing now is the next transition toward cloud-native apps. If you look at cloud-native apps, it’s all about microservices, and they’re defined by having small, loosely-coupled workloads that have a well-defined API.

They’re portable, and they communicate with each other through a networking layer. This is all well and good for developers: they have the ability to use any infrastructure they like, and since the workloads are portable, they can easily move from the laptop into the cloud.

They can use their favorite programming language – which might be C, Go, Java, and so forth – and they have the ability to work on smaller workloads, so you don’t really have to deal with big, monolithic applications.

But there’s a downside, and the downside is that we’re shifting some of that complexity to IT operations, which has to deal with deploying all these workloads across the ecosystem, across the data center.

You may deploy a thousand of them, or maybe a million if you’re Netflix. (They announced in April of this year that they had hit the one-million mark for services deployed.) It’s one thing to do it in one data center, but it’s quite another to do it in multiple data centers and across geographical locations.

And you want the same reliability that you’ve seen before, the same troubleshooting capabilities that you’ve had for your apps. As a matter of fact, Gartner predicts that by 2021, about half of enterprises worldwide will have moved from “cloud-first” to “all in the cloud”, which gives us about four years. Not a big deal, right? When you think of other transitions – the one to virtualization, in particular, took about 10 to 15 years.

But there’s some good news. If you look at where this area’s heading, a lot of companies and a lot of vendors in this area are building innovative solutions in the space.

4:29 Cloud-Native Apps

Let me talk a little bit about the new stack. If you look at cloud-native apps and microservices, you may think, “Oh my gosh, service-oriented architecture all over again.” It’s becoming cool again, right? I would argue that may be the case, but it’s quite a different type of environment: it’s no longer built around a service bus; it’s built on orchestration layers.

If you look at this new stack, it has several components. The first one is packaging – which is, I would argue, a pretty well-solved problem by now. Docker has done a pretty good job of providing the portability you need and the ability to run polyglot workloads within your system.

They’ve done something very interesting. Around the end of 2014 and early 2015, CoreOS announced their own container technology, rkt (Rocket), and then Red Hat announced Project Atomic. So Docker decided to create the OCI (Open Container Initiative).

What they’ve done is bring all these companies together and provide a Version 1 specification. They’re trying to make sure that packaging stays uniform across all enterprises, rather than being Docker-specific.

The next layer on the stack is orchestration. Once you figure out how to package, once you figure out how to get your workloads into testing, you have to deal with orchestration-type challenges: how do you take containers and assign them to compute?

There are three major vendors there, maybe four if you count HashiCorp: you have Kubernetes, Mesos, Docker Swarm, and Nomad. I would argue that, with about 40% of the market, Kubernetes is doing a good job of standardizing that orchestration function.

The next layer, as Gus mentioned this morning, is interconnectivity between services. As you might have noted, when you’re deploying containers, networking containers together is not a trivial task. It’s even harder to secure them. This is where the service mesh connectivity comes in. I’ll talk a little bit more about this because it has a lot to do with Istio.

The last part of the platform is the application platform. That’s where we bring in the policy layer, the workflows, where you want to deploy applications across multiple environments. That’s where NGINX Controller is strong.

Those are the types of problems that Controller wants to solve by giving you workflows, giving you policy, giving you role-based access. OpenShift is also solving some issues by giving you the ability to set up access control across your environments.

7:52 The Service Mesh

What is the service mesh? A service mesh is a uniform infrastructure layer for service-to-service communication. It uses lightweight proxies and is typically deployed in one of two flavors.

It may be deployed side-by-side with the application. If you look at this diagram, it may look familiar to you, as it’s our Microservices Reference Architecture – Chris Stetson, I see him here in the first row. This was built by us about a year ago, and some of our customers have tried it. It gives you the ability to run the proxy functionality as a process, side-by-side with the application.

The other implementation is done through a sidecar proxy. That’s the approach Istio has taken, which I’m going to describe in more detail in the next set of slides.

Why is a service mesh required? It gives you a consistent way to deal with routing rules across the ecosystem, across various applications. It addresses security concerns, making sure that only the services that are supposed to communicate with each other can do so. It gives you resiliency: load-balancing functionality that works together with a service discovery protocol, so services can come up, and when services disappear, you’re able to age them out gracefully. And it gives you monitoring: you have the ability to do end-to-end monitoring by tracking a packet as it traverses the network, so if you have multiple services chained together, you can quickly identify where a service fails.
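To make the resiliency point concrete – this is my own minimal sketch, not a slide from the talk, written against the RouteRule resource from the Istio 0.2-era v1alpha2 API; the service name ratings is hypothetical – a mesh lets you declare timeouts and retries once, outside the application:

```yaml
# Hypothetical example: a per-service timeout and retry policy declared
# in the mesh (Istio 0.2-era v1alpha2 RouteRule), not in application code.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: ratings-resiliency
spec:
  destination:
    name: ratings            # hypothetical service name
  httpReqTimeout:
    simpleTimeout:
      timeout: 10s           # fail calls that take longer than 10 seconds
  httpReqRetries:
    simpleRetry:
      attempts: 3            # retry failed calls up to three times
```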

9:37 What is Istio?

What is Istio? Istio is an open platform that provides a uniform way to connect, manage, and secure microservices. It was introduced by Google in collaboration with IBM and a series of other vendors on May 23, 2017.

It’s currently in alpha, at version 0.1.6. Google anticipates that they’ll come out with 0.2 sometime this month – toward the end of the month. They’ve put up a website where there’s a lot more information. Istio has multiple layers that I’m going to talk to you about.

What is Istio? It offers you a control plane within Istio itself. As I mentioned on the previous slides, there are two approaches you can deploy: one is through a sidecar proxy, and the other is integrated.

Istio has chosen to give you a sidecar proxy that is transparent to the application. It’s deployed on top of a Kubernetes environment, so each service that’s deployed by Kubernetes will have a proxy running side-by-side with it.
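To make the deployment model concrete (a minimal sketch of mine, not taken from the talk), a pod pairs the application container with an NGINX sidecar; the name my-service, the image tags, and the port numbers below are all hypothetical:

```yaml
# Hypothetical pod spec: the application and an NGINX sidecar proxy
# deployed side-by-side in the same Kubernetes pod.
apiVersion: v1
kind: Pod
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  containers:
  - name: app
    image: example/my-service:1.0   # hypothetical application image
    ports:
    - containerPort: 8080           # the application listens here
  - name: nginx-sidecar
    image: nginx:1.13               # handles all traffic in and out of the pod
    ports:
    - containerPort: 80
```

In practice the sidecar is injected into the pod spec automatically rather than written by hand, which is part of what keeps it transparent to the application.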

It then enables the application services to communicate with each other transparently. Traffic that moves from one service to another passes through the sidecar proxies, so the services are able to communicate with each other without the application being involved.

It also takes on the identity of the application. Why is that important? You can do some really interesting things with regard to security by taking the identity of the application. You can set up things such as mTLS – mutual TLS – where you authenticate both the client and the server, both sides of the connection, so you can ensure that only the services that should communicate with each other, will.
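At the proxy level, mutual TLS looks roughly like the following NGINX configuration fragment – a hedged sketch of mine, not the actual nginMesh configuration; the certificate paths and the application port are assumptions:

```nginx
# Hypothetical sidecar server block enforcing mutual TLS: the proxy
# presents this service's certificate AND requires one from the caller.
server {
    listen 443 ssl;

    ssl_certificate         /etc/certs/cert-chain.pem;  # this service's identity
    ssl_certificate_key     /etc/certs/key.pem;
    ssl_client_certificate  /etc/certs/root-cert.pem;   # CA that signs peer certificates
    ssl_verify_client       on;   # reject callers without a valid certificate

    location / {
        proxy_pass http://127.0.0.1:8080;   # forward to the local application
    }
}
```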

It also does things such as certificate authority automation: it enables you to rotate certificates, so that’s no longer a manual operation. If you look at Istio, there are really three main components. There’s Pilot, where you have the configuration for the routing rules and the plugin into service discovery.

Then you have Mixer, which does three things, in a sense: it makes sure that services that should communicate, will communicate; it does monitoring; and it also does quota management. And there’s the authentication component, which I’ve already mentioned.

12:20 Istio with NGINX

Where does NGINX come into play? I think that’s kind of a giveaway. NGINX is represented in this diagram as the sidecar proxy in the Istio environment, which gives you the ability to get the best-in-class features that you know, from routing to load balancing to circuit-breaker capabilities, caching, and encryption.

More importantly, you have the ability to bring in your own custom modules and third-party modules. You can bring in authentication mechanisms of your choice. You can even bring in dynamic language support: for example, Lua scripting or nginScript can now, all of a sudden, be integrated into an Istio-like environment.
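For instance – a minimal sketch of mine, assuming the third-party lua-nginx-module is compiled into the sidecar; the upstream names and the 10% split are hypothetical – a few lines of Lua can make a per-request routing decision:

```nginx
# Hypothetical: choose an upstream group per request with embedded Lua
# (requires the third-party lua-nginx-module).
http {
    upstream stable { server 127.0.0.1:8080; }   # hypothetical upstreams
    upstream canary { server 127.0.0.1:8081; }

    server {
        listen 80;
        location / {
            set_by_lua_block $target {
                -- send roughly 10% of requests to the canary
                if math.random() < 0.1 then return "canary" end
                return "stable"
            }
            proxy_pass http://$target;
        }
    }
}
```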

And we have a roadmap that we’re going to go through as we iterate on the Istio deployment. On one side, we have Unit – which we announced this morning – and it gives you the ability to take the sidecar proxy function and integrate it side-by-side with the services themselves.

There are some performance improvements with that: it gives you the ability to deploy one component instead of two, and it gives you better compatibility. As you’re probably aware, pods exist within the Kubernetes environment, but they don’t really exist within a Docker Swarm environment or within a Mesos environment. With a Unit component that performs the service mesh function as well as the application server function, you can easily port from one environment to another.

On top of that, you’re going to have Controller, which gives you the ability to control your workflows. While Pilot enables you to set various routes for your application – for blue-green, A/B, and canary types of workloads – you need a higher-level abstraction that gives you the ability to map applications together, write metadata around them, and also move those applications from one cloud to another, or from one environment to another.
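For a sense of what those route settings look like on the Pilot side – again my own sketch against the Istio 0.2-era v1alpha2 API; the reviews service and its version labels are hypothetical – a weighted canary rule might read:

```yaml
# Hypothetical canary rule: send 10% of traffic to version v2.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-canary
spec:
  destination:
    name: reviews            # hypothetical service name
  precedence: 1
  route:
  - labels:
      version: v1
    weight: 90               # 90% of requests stay on v1
  - labels:
      version: v2
    weight: 10               # 10% go to the canary
```

Controller’s job, in this picture, is to sit above rules like these and manage them as application-level workflows rather than one resource at a time.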

That’s all I have to say about this.

14:25 Demo of Istio and NGINX in action

I’m going to ask Sehyo Chang, NGINX’s Director of Engineering, to come up and show us a demo of Istio and NGINX in action. We’ll be running Kubernetes, Istio, and NGINX together. Sehyo?

Click here for demo by Sehyo Chang

We have a few more slides that we’re going to go over. I think what Sehyo has demonstrated is that NGINX makes quite a capable proxy within an Istio environment. We actually have several components that we plugged in. As part of the Istio environment, we have a plugin that we bring into NGINX to communicate with Mixer over gRPC.

We also have an agent that enables us to redirect all the traffic that comes into the service to the proxy. The traffic goes in, gets funneled through the proxy engine that we’ve provided, and then goes up to the service.

It’s pretty much transparent to the application. You don’t have to make any changes to the application. You deploy the application as you would in a Kubernetes environment, and then you’re making changes to traffic routing without really touching the application itself.
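Under the hood, this kind of transparency is typically achieved with an iptables redirect in the pod’s network namespace – a simplified sketch, not nginMesh’s exact rules; port 15001 is the convention Istio’s sidecars use and is assumed here:

```sh
# Hypothetical: redirect the pod's TCP traffic to the sidecar proxy
# listening on port 15001, so the application needs no changes at all.
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 15001   # inbound
iptables -t nat -A OUTPUT     -p tcp -j REDIRECT --to-port 15001   # outbound
```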

It’s been a really interesting setup. We weren’t able to get everything to mirror.

Anyway, what we’re trying to say is that this project is available on GitHub today. We just open-sourced it, so you can play around with it yourself. It comes with instructions.
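To give a flavor of what playing with it involves – a hedged sketch based on the general Istio 0.1/0.2 workflow rather than the repo’s exact README; the manifest file names are assumptions:

```sh
# Hypothetical quickstart: install the Istio control plane, then inject
# the sidecar proxy into an application manifest before deploying it.
kubectl apply -f istio.yaml                            # control plane (assumed file name)
kubectl apply -f <(istioctl kube-inject -f app.yaml)   # add the sidecar to each pod spec
kubectl get pods   # each application pod now shows an extra proxy container
```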

31:48 nginMesh Roadmap


In terms of the nginMesh roadmap, as we said, it’s in Tech Preview today. We have it available on GitHub at github.com/nginmesh – that’s our new product name. We’re going to have it available in beta, and at that time, we’ll publish more than just the container.

We’re going to do that with Istio 0.2, because there are several components that will be changing within the environment. We’ll also add OAuth. Then, toward the end of October, we’ll add the Ingress Controller part of this, so you’ll be able to have a full chain of information, and you’ll have full visibility across the ecosystem.

Last but not least, we have a bunch of future enhancements that we’d like to make. We’ll bring in gRPC on the upstreams through NGINX, and then we’ll integrate NGINX Plus support, as well as Unit and Controller.

In conclusion, we’d like to leave you with the statement that NGINX has joined the CNCF, and we’re partnering with Istio to build innovative solutions that help enterprises transition to modern microservices architectures. The project, as I said, is available on GitHub. Thank you very much.

34:00 Q&A

Q: What’s the difference between an API gateway and a service mesh?

When you think of an API gateway, you’re dealing with high-level abstractions: you’re assigning different types of authentication layers, and typically it has to do with the traffic coming into your network. A service mesh deals with service-to-service communication. It provides you with security.

It provides you with mutual authentication. With an API gateway, it’s typically one-way authentication: you have an SSL certificate and you’re authenticating the server to the client. Typically, an API gateway also comes with a control plane that enables you to provide APIs for multiple types of applications.

A service mesh, and Istio itself, is more about inter-service communication and abstracting applications from each other. They serve different functions.
