Maximizing Python Performance with NGINX, Part II: Load Balancing and Monitoring

2016-04-21 KENNETH

Introduction: Using Multiple Servers. Part I of this two-post blog series tells you how to maximize Python application server performance with a single-server implementation and how to implement static file caching and microcaching using NGINX. Both kinds of caching can be implemented either in a single-server or – for better performance – a multiserver environment. Python is known for being a high-performance scripting language; NGINX can help in ways that are complementary to the actual execution speed of your code. For a single-server implementation, moving to NGINX as the web server for your application server can open the door to big increases in performance. In theory, static file caching can roughly double performance for web pages that are half made up of static files, as many are. Caching dynamic application [ more… ]
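
The excerpt stops before the caching details. As a minimal sketch of what the two techniques can look like together, the NGINX configuration below serves static files directly and microcaches dynamic responses; the app server address 127.0.0.1:8000, the path /var/www/app, and the cache sizes are illustrative assumptions, not values from the post.

    # Illustrative only: static file serving plus a 1-second microcache
    # in front of a Python application server.
    proxy_cache_path /tmp/nginx_micro keys_zone=micro:10m levels=1:2 max_size=100m;

    server {
        listen 80;

        # Static file caching: NGINX serves these without touching Python.
        location /static/ {
            root /var/www/app;               # assumed document root
            expires 1h;                      # let browsers/CDNs cache assets
        }

        # Microcaching: cache dynamic pages very briefly.
        location / {
            proxy_cache micro;
            proxy_cache_valid 200 1s;        # 1-second TTL = "microcache"
            proxy_cache_use_stale updating;  # serve stale while refreshing
            proxy_pass http://127.0.0.1:8000;  # assumed Python app server
        }
    }

The very short TTL is the point of microcaching: even one second of caching collapses a burst of identical requests into a single hit on the Python backend.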

Overcoming Ephemeral Port Exhaustion in NGINX and NGINX Plus

2016-04-20 KENNETH

NGINX and NGINX Plus are extremely powerful HTTP, TCP, and UDP load balancers. They are very efficient at proxying large bursts of requests and maintaining a large number of concurrent connections. However, these same characteristics also make NGINX and NGINX Plus particularly vulnerable to ephemeral port exhaustion. (Both products have this issue, but for the sake of brevity we’ll refer just to NGINX Plus for the remainder of this blog.) In this blog, we discuss the components of a TCP connection and how each is determined before a connection is established. We then show how to determine when NGINX Plus is being affected by ephemeral port exhaustion. Lastly, we discuss strategies for combating those limitations using both Linux kernel tweaks and NGINX Plus directives. A Brief Overview of Network Sockets: When a connection is established over TCP, [ more… ]
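
The mitigation strategies are cut off above. A common NGINX-side mitigation (not necessarily the exact directives the full post covers) is reusing upstream connections so that fewer ephemeral ports are opened in the first place; the usual Linux-side companion is widening the kernel's ephemeral port range (the net.ipv4.ip_local_port_range sysctl). A minimal sketch with an assumed upstream group named backend and illustrative backend addresses:

    # Illustrative only: keep idle connections to the upstream open and
    # reuse them instead of opening a new ephemeral port per request.
    upstream backend {
        server 10.0.0.10:8080;   # assumed backend addresses
        server 10.0.0.11:8080;
        keepalive 64;            # idle keepalive connections kept per worker
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
            proxy_set_header Connection "";  # clear "Connection: close" from the client
        }
    }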

Introducing the NGINX Microservices Reference Architecture

2016-04-19 KENNETH

The Microservices Reference Architecture will be made available later this year, and will be discussed in detail at nginx.conf 2016 in Austin, TX, September 15-17. Early bird discounts are available now. Author’s note: this blog post is the first in a series, and we will extend the list as new posts appear: Introducing the NGINX Microservices Reference Architecture (this post). Upcoming posts will cover the use of Mesosphere DCOS services in the Microservices Reference Architecture, each of the three models included in the Microservices Reference Architecture, and the Ingenious photo-sharing sample app. I’ve written a separate article about web frontends for microservices applications. We also have a very useful and popular series by Chris Richardson about microservices application design, plus many other microservices blog posts and webinars. Introduction: NGINX has been involved [ more… ]

Monitoring Microservices in Docker Containers with NGINX Amplify

2016-04-16 KENNETH

Most microservices deployments use container technologies in one way or another. When deploying NGINX in containers, thousands of users choose Docker for its ease of use and rapidly growing community. The NGINX image is one of the most frequently downloaded in the Docker Registry. Recently we announced the beta version of NGINX Amplify, a new monitoring and configuration recommendation service, and we have received a lot of questions regarding the use of NGINX Amplify and Docker in the same deployment. In this post we discuss how these new free technologies can work together to improve the performance and manageability of your microservices deployment. NGINX Amplify Components and Workflow: NGINX Amplify consists of multiple components. The ones that are visible to the user are: the NGINX Amplify Agent, which runs on the same system as NGINX; the NGINX Amplify cloud [ more… ]
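
The component list is truncated above. One concrete piece of the workflow worth illustrating is that the NGINX Amplify Agent gathers basic NGINX metrics from a stub_status endpoint on the same host; a minimal sketch of exposing one follows, with the local port 8080 and the location name /nginx_status chosen for illustration.

    # Illustrative only: a local-only stub_status endpoint for a
    # monitoring agent running on the same system as NGINX.
    server {
        listen 127.0.0.1:8080;      # not reachable from outside the host

        location /nginx_status {
            stub_status;            # connection and request counters
            allow 127.0.0.1;        # only local readers (e.g. the agent)
            deny all;
        }
    }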

Load Balancing DNS Traffic with NGINX and NGINX Plus

2016-04-15 KENNETH

Layer 4 Load Balancing with UDP and TCP. NGINX Plus R9 introduces the ability to reverse proxy and load balance UDP traffic, a significant enhancement to NGINX Plus’ Layer 4 load-balancing capabilities. This blog post looks at the challenges of running a DNS server in a modern application infrastructure to illustrate how the open source NGINX software and NGINX Plus can effectively and efficiently load balance both UDP and TCP traffic (for brevity, we’ll refer to NGINX Plus for the rest of the post). Why Load Balance UDP Traffic? Unlike TCP, UDP by design does not guarantee the end-to-end delivery of data. It is akin to sending a message by carrier pigeon – you definitely know the message was sent, but cannot be sure it arrived. There are several benefits to this “connectionless” approach – most notably, lower latency than [ more… ]
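
The excerpt ends before the configuration itself, but the feature it introduces lives in the stream module. A minimal sketch of load balancing DNS over both UDP and TCP, with illustrative backend addresses (not taken from the post):

    # Illustrative only: Layer 4 load balancing of DNS queries
    # over both UDP and TCP.
    stream {
        upstream dns_servers {
            server 192.0.2.10:53;    # assumed DNS backends
            server 192.0.2.11:53;
        }

        server {
            listen 53 udp;           # ordinary DNS queries
            listen 53;               # TCP, e.g. for large responses
            proxy_pass dns_servers;
            proxy_responses 1;       # expect one UDP response per query
            proxy_timeout 1s;
        }
    }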