Deploying NGINX Plus as an API Gateway, Part 2: Protecting Backend Services

This is the second blog post in our series on deploying NGINX Plus as an API gateway. Part 1 provides detailed configuration instructions for several use cases. This blog post extends those use cases and looks at a range of safeguards that can be applied to protect and secure backend API services in production.

Rate Limiting

Unlike browser-based clients, individual API clients are able to place huge loads on your APIs, even to the extent of consuming so many system resources that other API clients are effectively locked out. Malicious clients are not the only source of this threat: a misbehaving or buggy API client might enter a loop that overwhelms the backend. To protect against this, we apply a rate limit to ensure fair use by each client and to protect the resources of the backend services.

NGINX Plus can apply rate limits based on any attribute of the request. The client IP address is typically used, but when authentication is enabled for the API, the authenticated client ID is a more reliable and accurate attribute.

Rate limits themselves are defined in the top‑level API gateway configuration file and can then be applied globally, on a per‑API basis, or even per URI.

In this example, one limit_req_zone directive defines a rate limit of 10 requests per second for each client IP address ($binary_remote_addr), and a second defines a limit of 200 requests per second for each authenticated client ID ($http_apikey). This illustrates how we can define multiple rate limits independently of where they are applied. An API may apply multiple rate limits at the same time, or apply different rate limits to different resources.
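A minimal sketch of those two definitions (the zone names and shared-memory sizes are our own illustrative choices):

limit_req_zone $binary_remote_addr zone=client_ip_10rs:1m rate=10r/s;
limit_req_zone $http_apikey        zone=apikey_200rs:1m   rate=200r/s;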

Then we use the limit_req directive to apply the first of these rate limits in the policy section of the “Warehouse API” described in Part 1. By default, NGINX Plus sends the 503 (Service Unavailable) response when the rate limit is exceeded. However, it is helpful for API clients to know explicitly that they have exceeded their rate limit, so that they can modify their behavior. To this end we use the limit_req_status directive to send the 429 (Too Many Requests) response instead.
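A minimal sketch of that policy section, reusing the internal /_warehouse location and $upstream variable conventions from Part 1 (the details are illustrative, not the verbatim configuration):

location = /_warehouse {
    internal;

    limit_req zone=client_ip_10rs;   # Apply the per-client-IP rate limit
    limit_req_status 429;            # Report 429 (Too Many Requests), not 503

    proxy_pass http://$upstream$request_uri;
}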

You can use additional parameters to the limit_req directive to fine‑tune how NGINX Plus enforces rate limits. For example, it is possible to queue requests instead of rejecting them outright when the limit is exceeded, allowing time for the rate of requests to fall under the defined limit. For more information about fine‑tuning rate limits, see Rate Limiting with NGINX and NGINX Plus on our blog.
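For example, adding the standard burst parameter queues excess requests instead of rejecting them immediately (the queue depth of 10 here is an arbitrary illustration):

limit_req zone=client_ip_10rs burst=10;   # Queue up to 10 excess requests before rejecting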

Enforcing Specific Request Methods

With RESTful APIs, the HTTP method (or verb) is an important part of each API call and very significant to the API definition. Take the pricing service of our Warehouse API as an example:

  • GET /api/warehouse/pricing/item001     returns the price of item001
  • PATCH /api/warehouse/pricing/item001   changes the price of item001

We can update the definition of the Warehouse API to accept only these two HTTP methods.
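One way to express that restriction, sketched here with the $upstream routing convention from Part 1 (the original configuration may use a different mechanism, such as limit_except):

location /api/warehouse/pricing {
    if ($request_method !~ ^(GET|PATCH)$) {
        return 405;                 # Method Not Allowed
    }
    set $upstream pricing_service;
    rewrite ^ /_warehouse last;
}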

With this configuration in place, requests to the pricing service that use a method other than GET or PATCH are rejected and are not passed to the backend services. NGINX Plus sends the 405 (Method Not Allowed) response to inform the API client of the precise nature of the error, as shown in the following console trace. Where a minimum‑disclosure security policy is required, the error_page directive can be used to convert this response into a less informative error, for example 400 (Bad Request).

$ curl https://api.example.com/api/warehouse/pricing/item001
{"sku":"item001","price":179.99}
$ curl -X DELETE https://api.example.com/api/warehouse/pricing/item001
{"status":405,"message":"Method not allowed"}

Applying Fine-Grained Access Control

Part 1 in this series described how to protect APIs from unauthorized access by enabling authentication options such as API keys and JSON Web Tokens (JWTs). We can use the authenticated ID, or attributes of the authenticated ID, to perform fine‑grained access control.

Here we show two such examples. The first extends a configuration presented in Part 1 and uses a whitelist of API clients to control access to a specific API resource, based on API key authentication. The second example implements the JWT authentication method mentioned in Part 1, using a custom claim to control which HTTP methods NGINX Plus accepts from the client. Of course, all of the NGINX Plus authentication methods are applicable to these examples.

Controlling Access to Specific Resources

Let’s say we want to allow only “infrastructure clients” to access the audit resource of the Warehouse API inventory service. With API key authentication enabled, we use a map block to create a whitelist of infrastructure client names so that the variable $is_infrastructure evaluates to 1 when a corresponding API key is used.
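A sketch of such a map block, keyed on the $api_client_name variable that the Part 1 API key configuration sets (the whitelisted client names are illustrative):

map $api_client_name $is_infrastructure {
    default      0;
    "client_one" 1;
    "client_six" 1;
}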

In the definition of the Warehouse API, we add a location block for the inventory audit resource, sketched below. The if block ensures that only infrastructure clients can access the resource.
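A minimal sketch of that location block (the upstream name and routing follow Part 1's conventions and are illustrative):

location = /api/warehouse/inventory/audit {
    if ($is_infrastructure = 0) {
        return 403;                 # Forbidden (not an infrastructure client)
    }
    set $upstream inventory_service;
    rewrite ^ /_warehouse last;
}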

Note that the location directive uses the = modifier to make an exact match on the audit resource. Exact matches take precedence over the default path‑prefix definitions used for the other resources. The following trace shows how, with this configuration in place, a client that isn’t on the whitelist is unable to access the inventory audit resource. The API key shown belongs to client_two (as defined in Part 1).

$ curl -H "apikey: QzVV6y1EmQFbbxOfRCwyJs35" https://api.example.com/api/warehouse/inventory/audit
{"status":403,"message":"Forbidden"}

Controlling Access to Specific Methods

As defined above, the pricing service accepts the GET and PATCH methods, which respectively enable clients to obtain and modify the price of a specific item. (We could also choose to allow the POST and DELETE methods, to provide full lifecycle management of pricing data.) In this section, we expand that use case to control which methods specific users can issue. With JWT authentication enabled for the Warehouse API, the permissions for each client are encoded as custom claims. The JWTs issued to administrators who are authorized to make changes to pricing data include the claim "admin":true.

A map block, added to the bottom of api_gateway.conf and sketched below, coalesces all of the possible HTTP methods into a new variable, $request_type, which evaluates to either READ or WRITE.
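Treating GET, HEAD, and OPTIONS as read operations is our assumption; the exact method list may differ in the original configuration:

map $request_method $request_type {
    "GET"     "READ";
    "HEAD"    "READ";
    "OPTIONS" "READ";
    default   "WRITE";
}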

We then use the $request_type variable to direct requests to the appropriate Warehouse API policy, /_warehouse_READ or /_warehouse_WRITE. The rewrite directives in the URI-routing section append the $request_type variable to the name of the Warehouse API policy, thereby splitting the policy section into two. Now different policies apply to read and write operations.
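A sketch of the updated routing (upstream names as in Part 1; illustrative):

location /api/warehouse/pricing {
    set $upstream pricing_service;
    rewrite ^ /_warehouse_$request_type last;   # /_warehouse_READ or /_warehouse_WRITE
}

location /api/warehouse/inventory {
    set $upstream inventory_service;
    rewrite ^ /_warehouse_$request_type last;
}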

Both the /_warehouse_READ and /_warehouse_WRITE policies require the client to present a valid JWT. However, in the case of a request using a WRITE method (POST, PATCH, or DELETE), we also require that the JWT includes the claim "admin":true. This approach of having separate policies for different request methods is not limited to authentication. Other controls can also be applied on a per‑method basis, such as rate limiting, logging, and routing to different backends.
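A sketch of the two policies, using the NGINX Plus auth_jwt module and the $jwt_claim_admin variable it derives from the admin claim (the realm name and key file path are illustrative):

location = /_warehouse_READ {
    internal;

    auth_jwt "Warehouse API";
    auth_jwt_key_file /etc/nginx/idp_jwk.json;

    proxy_pass http://$upstream$request_uri;
}

location = /_warehouse_WRITE {
    internal;

    auth_jwt "Warehouse API";
    auth_jwt_key_file /etc/nginx/idp_jwk.json;

    if ($jwt_claim_admin != "true") {
        return 403;                 # Valid JWT, but not an administrator
    }

    proxy_pass http://$upstream$request_uri;
}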

JWT authentication is exclusive to NGINX Plus.

Controlling Request Sizes

HTTP APIs commonly use the request body to carry instructions and data for the backend API service to process. This is true of XML/SOAP APIs as well as JSON/REST APIs. Consequently, the request body can be an attack vector against the backend API services, which may be vulnerable to buffer overflow attacks when processing very large request bodies.

By default, NGINX Plus rejects requests with bodies larger than 1 MB. This can be increased for APIs that specifically deal with large payloads such as image processing, but for most APIs we set a lower value.
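For example, the policy section might cap request bodies at 16 KB (the value is our illustrative choice):

client_max_body_size 16k;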

The client_max_body_size directive limits the size of the request body. With this configuration in place, we can compare the behavior of the API gateway upon receiving two different PATCH requests to the pricing service. The first curl command sends a small piece of JSON data, whereas the second attempts to send the contents of a large file (/etc/services).

$ curl -iX PATCH -d '{"price":199.99}' https://api.example.com/api/warehouse/pricing/item001
HTTP/1.1 204 No Content
Server: nginx/1.13.10
Connection: keep-alive

$ curl -iX PATCH -d@/etc/services https://api.example.com/api/warehouse/pricing/item001
HTTP/1.1 413 Request Entity Too Large
Server: nginx/1.13.10
Content-Type: application/json
Content-Length: 45
Connection: close

{"status":413,"message":"Payload too large"}

Validating Request Bodies

In addition to being vulnerable to buffer overflow attacks with large request bodies, backend API services can be susceptible to bodies that contain invalid or unexpected data. For applications that require correctly formatted JSON in the request body, we can use the NGINX JavaScript module to verify that JSON data is parsed without error before proxying it to the backend API service.

With the JavaScript module installed, we use the js_include directive to reference the file containing the JavaScript code for the function that validates JSON data.

The js_set directive defines a new variable, $validated, which is evaluated by calling the json_validator function.
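A sketch of those two directives in the top‑level configuration (the filename is our illustrative choice):

js_include json_validator.js;
js_set $validated json_validator;

The json_validator function itself looks like this: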

function json_validator(req) {
    try {
        if (req.variables.request_body.length > 0) {
            JSON.parse(req.variables.request_body);
        }
        return req.variables.upstream;
    } catch (e) {
        req.log('JSON.parse exception');
        return '127.0.0.1:10415'; // Address for error response
    }
}

The json_validator function attempts to parse the request body using the JSON.parse method. If parsing succeeds, the name of the intended upstream group for this request is returned. If the request body cannot be parsed (causing an exception), a local server address is returned. The return value populates the $validated variable so that we can use it to determine where to send the request.

In the policy section for the Warehouse API, we modify the proxy_pass directive. It passes the request to the backend API service as before, but now uses the $validated variable as the destination address. If the client body was successfully parsed as JSON, then we proxy to the upstream group as normal. If, however, there was an exception, we use the returned value of 127.0.0.1:10415 to send an error response to the client.
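A sketch of the modified directive, following the proxy_pass pattern from Part 1:

proxy_pass http://$validated$request_uri;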

Requests proxied to 127.0.0.1:10415 arrive at a special‑purpose virtual server, and NGINX Plus sends the 415 (Unsupported Media Type) response to the client.
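A sketch of that virtual server (the essentials are the listen address and the return status):

server {
    listen 127.0.0.1:10415;   # Matches the address returned by json_validator()
    return 415;               # Unsupported Media Type
}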

With this complete configuration in place, NGINX Plus proxies requests to the backend API service only if they have correctly formatted JSON bodies.

$ curl -iX POST -d '{"sku":"item002","price":85.00}' https://api.example.com/api/warehouse/pricing
HTTP/1.1 201 Created
Server: nginx/1.13.10
Location: /api/warehouse/pricing/item002

$ curl -X POST -d 'item002=85.00' https://api.example.com/api/warehouse/pricing
{"status":415,"message":"Unsupported media type"}

A Note about the $request_body Variable

The JavaScript function json_validator uses the $request_body variable to perform JSON parsing. However, NGINX Plus does not populate this variable by default; instead it streams the request body to the backend without making intermediate copies. By using the mirror directive inside the Warehouse API policy section, we create a copy of the client request and consequently populate the $request_body variable.

Two further directives in the policy section control how NGINX Plus handles the request body internally. We set client_body_buffer_size to the same size as client_max_body_size so that the request body is not written to disk. This improves overall performance by minimizing disk I/O operations, at the expense of additional memory utilization. For most API gateway use cases with small request bodies, this is a good compromise.
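A sketch of the three related directives together (16 KB matches the illustrative request‑size limit above):

mirror /_NULL;                   # Copy the request so that $request_body is populated
client_body_buffer_size 16k;     # Buffer the whole body in memory ...
client_max_body_size    16k;     # ... by matching the maximum permitted body size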

As mentioned above, the mirror directive creates a copy of the client request. Other than populating $request_body, we have no need for this copy so we send it to a “dead end” location (/_NULL) that we define in the top‑level API gateway entry point.
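A sketch of that dead‑end location:

location = /_NULL {
    internal;
    return 204;                  # No Content
}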

This location does nothing more than send the 204 (No Content) response. Because this response is sent in reply to a mirrored request, it is ignored, and so adds negligible overhead to the processing of the original client request.

Summary

The first blog post in this series looks at the essential use cases for an API gateway and describes how to deploy NGINX Plus in that environment. In this article we examine some of the challenges of running an API gateway in production, focusing on the security issues and safeguards required to protect backend API services. NGINX Plus uses the same technology for managing API traffic that powers and protects the busiest sites on the Internet today.

To try NGINX Plus as an API gateway, start your free 30-day trial today. Use the complete set of configuration files from our GitHub Gist repo.
