Microservices & API Gateways, Part 2: How Kong Can Help

Microservices & API Gateways with Kong [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]
This post is adapted from a presentation by Marco Palladino at nginx.conf in September 2016.
This blog post is the second of two parts, and is focused on how Marco Palladino’s API gateway, Kong, can fit your microservices architecture. Click here for part one, which highlights the core differences between monolithic and microservices architectures, as well as how to set up an API gateway for additional functionality.

Table of Contents

23:52 API Gateways and Kong Can Help
25:49 What is Kong?
26:09 What Does Kong Do?
26:35 Kong Plugins
27:22 Kong: OpenResty + NGINX
30:26 NGINX Configuration
32:53 Kong Entry-Points
33:30 Core Entities
34:34 Plugins Configuration Matrix
35:18 Multi-DC Deployment
37:27 Demo Time
41:14 Questions

23:52 API Gateways and Kong Can Help

API gateways and Kong can help organize, maintain and deploy your microservices application [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]

Palladino: API gateways can fix some of these problems by implementing those middleware functions in one place, so you don’t have to re-implement them in every service. Sometimes different teams implement different microservices in different ways.

If you don’t have a centralized way of doing things, you will end up with one team doing authentication and rate-limiting in a different way than another team. You want to avoid that fragmentation.

An API gateway can help fix not only the management part of an API, but also the other two missing pieces: analytics and automation.

Analytics – a gateway can communicate transparently with your analytics infrastructure because the gateway is the entry point for every request. The gateway knows all of the data and the traffic that’s going to your services, so you have one central place where you can push all of this information to your monitoring or analytics solution like Kibana or Splunk.

Automation – the gateway can help automate not only deployments, but also documentation and onboarding. What is onboarding? If you have an API with authentication, you need something that allows users to create credentials for that API and start consuming it. The developer portal or your documentation can integrate with the gateway to provision those credentials, so you don’t have to build something from scratch.

25:49 What is Kong?

Kong is an open source management layer for APIs to secure, manage, and extend APIs and microservices

So what is Kong? Kong is an open source API gateway, or management layer for APIs, that you can use to implement extra features on top of your upstream services. Kong is open source, so it’s available on GitHub; you can download it and use it today.

26:09 What Does Kong Do?

Kong centralizes common middleware functionality for your microservices application [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]

Kong centralizes all that fragmentation into one place. This shows exactly what I was talking about before – the fragmentation of having multiple services, each one with different implementations for common features.

An API gateway like Kong can centralize all of that in one place, which in turn makes development of those services even easier because you have less code to maintain.

26:35 Kong Plugins

Kong plugins can be built from scratch and extend the functionality of Kong and your microservices application [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]

So what are plugins? Plugins are the Kong middleware that you can add on top of those upstream services. Middleware plugins can be anything from authentication and security to traffic control, logging, or transformations.

You may have a SOAP service that you need to make available through a RESTful interface. An API gateway like Kong can implement that transformation layer for you, so you don’t have to ask your team to change the implementation of the service.
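Kong ships with transformation plugins such as request-transformer and response-transformer; a full SOAP-to-REST bridge would typically be a custom plugin, but as a rough sketch of how a transformation plugin gets enabled (the API name test and the header value are hypothetical, and configuration field names vary across Kong versions):

http 127.0.0.1:8001/apis/test/plugins name=request-transformer config.add.headers=x-converted-by:gateway

This uses the same HTTPie-style Admin API calls that appear in the demo later in this post.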

27:22 Kong: OpenResty + NGINX

Kong is an OpenResty application that runs on top of NGINX [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]

Technically speaking, Kong is an OpenResty application. OpenResty runs on top of NGINX and extends NGINX with Lua. Lua is a very easy-to-use scripting language that lets you script the things you can do in NGINX.

OpenResty provides hooks for the different events in the API request and response lifecycle, so you can write Lua code that hooks into those events. For example, when a new request comes in, when it is about to be proxied, and when a response comes back – you can write custom code for each of these events and change the request and the response on the fly.

We have NGINX at the bottom, which handles the low-level features of Kong. All of the proxying is done by NGINX – it’s a very solid technology. OpenResty is the underlying core of Kong; it extends NGINX with these new capabilities. Kong, on top of those two technologies, implements clustering, plugins, and a RESTful API that you can use for managing the API gateway.

Just like with Elasticsearch, where you have an API for doing pretty much whatever you need to do with your data store, Kong exposes an API that allows you to completely operate the system just by making HTTP calls and parsing JSON responses.
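For example, as a hedged illustration using the same HTTPie CLI that appears in the demo below, against two standard Admin API endpoints:

http 127.0.0.1:8001/apis
http 127.0.0.1:8001/status

The first call returns the configured APIs as JSON; the second reports the status of the node and its data store.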

This means you can integrate Kong with your DevOps and automation tools. You can also integrate Kong with third-party services, developer portals, and onboarding tools. Kong stores all of its information in either PostgreSQL or Cassandra, and depending on the use case you may want to use one or the other.

Cassandra is an eventually consistent data store. This means that if you have a cluster of Cassandra nodes and you store data on one of them, that data will eventually be propagated to all of the other nodes, but not at the same time.

So you can have, let’s say, a Cassandra cluster of three nodes in DC1 and a Cassandra cluster of another three nodes in DC2, and then you link them together. Whatever operation you perform on one data center will eventually be replicated to the other data center.

PostgreSQL is easier to use. It supports master-slave replication, but it doesn’t support masterless multi-DC replication; there are, however, some tools that allow you to do that.
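Which data store Kong uses is a configuration choice. As a minimal sketch, assuming the 0.9-era kong.conf property names (check the docs for your version):

database = cassandra
cassandra_contact_points = dc1-node1, dc1-node2, dc2-node1

or, for PostgreSQL:

database = postgres
pg_host = 127.0.0.1
pg_port = 5432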

30:26 NGINX Configuration

NGINX configuration of Kong for your microservices application [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]

Since Kong is built on top of NGINX, you can access the NGINX configuration, pretty much replace your existing NGINX setup with Kong, and still run the old functionality on top of Kong.

This is exactly how it works – there is an NGINX configuration that you can change, update, and modify, and then Kong comes with its own configuration that you include inside the NGINX configuration.

The Kong configuration leverages the OpenResty directives – init_by_lua, access_by_lua, header_filter_by_lua, and so on. Those are all of the events in the lifecycle of a request or response that you can hook into when you are creating a plugin in Kong.
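As a rough illustration of what those hooks look like in an NGINX configuration – a simplified sketch, not Kong’s actual generated configuration; the directives are standard OpenResty, while the kong.* calls stand in for the Lua code Kong injects:

server {
    listen 8000;

    location / {
        # runs before the request is proxied (authentication, rate limiting, ...)
        access_by_lua_block        { kong.access() }
        # runs on the response headers coming back from the upstream
        header_filter_by_lua_block { kong.header_filter() }
        # runs on the response body
        body_filter_by_lua_block   { kong.body_filter() }
        # runs after the response has been returned to the client
        log_by_lua_block           { kong.log() }

        proxy_pass http://upstream_service;
    }
}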

Kong already comes with some plugins, built either by the community or by Mashape; some of them are listed on the getkong.org website and on the GitHub repo. The community has also extended the system with its own plugins. If you search GitHub for Kong plugins, you can find plugins for things that aren’t featured on the website.

On the website, we generally list just the plugins we feel are stable enough to be used in production, but there are other ready-to-use plugins on GitHub. If you are planning to use Kong, I encourage you to check out what the community has built so that you don’t have to reinvent the wheel.

If you need to do something very custom and specific to your use case, you can of course extend Kong by creating your own plugin, which you can ship privately inside your organization. The plugin will have access to all of these OpenResty events, so you can change the request and the response on the fly.

You can also make requests to third-party services. Let’s say you have a legacy authentication system that you want to integrate into your API gateway. You can create a plugin that calls that authentication system on every incoming request, handles its response inside the plugin, and then returns a response, so that the client only has to talk to the API interface.

32:53 Kong Entry-Points

Kong entry-points into your microservices application [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]

There are two main entry-points for Kong. The first is the proxy entry-point, which means that consumers and clients that want to consume the upstream services can do so through the default port :8000 or the SSL port :8443.

Then you have the Admin API, which is available on a different port, :8001. That is the API you can use for doing pretty much everything you have to do on the system.
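As a quick illustration (using /test, the example API created in the demo below):

http 127.0.0.1:8000/test/get     # proxy entry-point: consume an upstream service
http 127.0.0.1:8001/apis         # Admin API entry-point: operate the gateway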

33:30 Core Entities

Core entities for Kong and accessing your microservices application [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]

After this presentation, I will quickly show you how to use Kong from the terminal. Before doing that, it’s important to know that Kong has three main core entities that you will always work with in the API gateway. The first is APIs, which represent the upstream services that you’re trying to put behind Kong.

So we can have a thousand different services behind Kong, and we call them APIs. Then you have consumers: clients or individual developers, depending on your use case, that are going to consume those APIs.

A consumer can be a client app that is internal to your organization, public, or a partner. Finally, we have plugins, which you can apply on top of APIs and consumers to change how the middleware functionality works.

34:34 Plugins Configuration Matrix

There are multiple ways to configure plugins - for every API and every consumer, for every API and a specific consumer, for a specific API and every consumer, or for a specific API and a specific consumer [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]

For example, I can have plugins that are applied to every API and every consumer. Let’s say I want rate limiting for every service, with requests limited to 200 per second across the board.

Then you can have plugins that apply to every API, but only for a specific consumer. Say I want everybody to make 200 requests per second, but an internal app should not have any limit; you can configure that for one consumer. Or you can go per API and every consumer. You can play with it in a very flexible way, as shown in the sketch below.
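As a hedged sketch of what two of those combinations look like against the Admin API (the test API comes from the demo below, the consumer UUID is a placeholder, and field names follow the 0.9-era Admin API):

http 127.0.0.1:8001/apis/test/plugins name=rate-limiting config.second=200
http 127.0.0.1:8001/apis/test/plugins name=rate-limiting consumer_id=<consumer-uuid> config.second=1000

The first call rate-limits every consumer of the test API; the second overrides the configuration for one specific consumer.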

35:18 Multi-DC Deployment

Multi-dc deployments allow Kong and your microservices application to be scaled horizontally [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]

This is an example use case for a multi-DC deployment. Kong is the entry-point for all of the requests your clients make. Kong accepts a request, figures out which upstream service you’re trying to reach, and runs the middleware associated with that API.

Some of that middleware can be authentication plugins; there is an order of execution, and those will be executed first. Then, once Kong knows which consumer is trying to consume the API, it can dynamically load all of this information.

Kong relies on the data store only for the first request. On the first request, Kong loads all of the information from the data store and then caches it in memory. All of the requests after the first one are handled in memory, which means Kong can be very fast without adding much latency to the transaction.

If you have two different clusters, you need to somehow connect those data centers together. The information being shared is either data replicated between the Cassandra nodes, or invalidation events.

Because Kong caches everything on the first request, what happens if you make changes on another node? How does the first Kong node know that its cached data isn’t valid anymore?

Invalidation events are sent across the Kong nodes: every time you perform an operation on one Kong node, for example changing the address of an upstream service, that node invalidates the affected entity by sending an invalidation event for that one specific entity to all the other Kong nodes.

The other Kong nodes then receive the invalidation event and delete the entity, and when a new request comes in, Kong is forced to fetch the data again from the data store.

37:27 Demo Time

Demo of Kong [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]
Kong running with Docker can be invoked on the admin port :8001 [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]

So here we go. I have Kong running with Docker, and I can invoke it on my admin port, :8001. Now, for example, I can tell Kong to create a new API called test. Here we’re creating a mapping – we’re telling Kong how to process requests, so every incoming request to /test will be proxied to httpbin. I’m also telling Kong to strip /test before proxying the request, because /test doesn’t exist on the upstream service.
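The command looks roughly like this (field names follow the 0.9-era Admin API; newer Kong versions renamed request_path and strip_request_path to uris and strip_uri):

http 127.0.0.1:8001/apis name=test upstream_url=http://httpbin.org request_path=/test strip_request_path=true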

In Kong, you can consume the service by executing /test and /get, which is handled by httpbin [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]

So then I can consume the service by executing /test and, for example, /get – an endpoint handled by httpbin. If I run this, this is httpbin answering my request, and these are the proxy headers returned by Kong.

Now let’s say I want to add an authentication plugin to protect this upstream service. I can use the Admin API again with http 127.0.0.1:8001/apis/test/plugins name=key-auth and tell the system I want key-auth on top of it.

If I do this, the system adds the plugin on top of that one API. This means that if I try to consume the API again, it will now complain that I cannot consume the API without credentials.

I can have one node or a hundred nodes and this information will be automatically replicated across every node. I don’t have to worry about reloading or restarting the nodes.

So I can make another POST request – I’m using a CLI tool called HTTPie, which automatically makes a POST request when I send parameters in a specific format. I can create a new consumer with http 127.0.0.1:8001/consumers username=demo, and then I can associate a credential with this consumer using http 127.0.0.1:8001/consumers/demo/key-auth key=secret123, with “secret123” as the key.

You can replicate the request after associating a credential with a consumer to have Kong validate the request and proxy it to the upstream service [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]

If we now replicate the same request with http 127.0.0.1:8000/test/get?apikey=secret123, the system validates the request and proxies it to the upstream service. Now I can integrate Kong’s API with existing tools or applications, and Kong will automate all of this for you – including rate limiting.

Let’s say I need to create a multi-DC rate-limiting solution on top of this upstream service. Each plugin can have configuration properties; in this case, the property I set is how many requests I want to allow. I can tell the system that I want to allow no more than ten requests per minute.
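With the rate-limiting plugin, that looks something like the following (same hedged Admin API style as above):

http 127.0.0.1:8001/apis/test/plugins name=rate-limiting config.minute=10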

Kong rate limiting functionality that can be implemented in your microservice application [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]

If I do this and I consume the API again, Kong will now implement that rate limiting feature. The real functionality here is being delivered by plugins and not by Kong per se. Kong is just a middleware manager.

41:14 Questions

Thank you [presentation by Marco Palladino, CTO at Mashape.com at the nginx 2016 conference]

I think we have two minutes extra for questions.

Q: Is it possible for Kong to rate limit upstream requests? For example, it might have a thousand requests coming from downstream, but Kong limits the number of upstream requests to perhaps thirty, somehow queuing the downstream requests and only allowing a small number of upstream requests.

That’s definitely possible. The rate-limiting plugin I used today does not implement that feature, but you can fork that plugin, for example – it’s open source – and implement that logic in Lua to do exactly that.

The community has built another rate-limiting plugin called the Quota rate-limiting plugin. It doesn’t do exactly what you’re describing, but it allows you to configure from the API how Kong should rate limit the requests. In the API response you can set a custom header that tells Kong the quota you want to allow for that one consumer. If you set that to zero, Kong will block that consumer from making additional requests.

Q: Let’s say I have a multi-tenant system and lots of customers sending requests to Kong, but one of them decides to go wild and sends 10,000 times as much traffic. Is it possible to somehow queue up that customer’s traffic so that it doesn’t interrupt the traffic of the other customers, without necessarily rate limiting it?

I believe there is a throttling plugin available, so you can check that out. Plugins can be applied per API and per consumer, so you can create a plugin configuration for that one specific consumer and give that consumer special treatment.

Thank you.
