3 Ways to Automate with NGINX and NGINX Plus
In many organizations today, manual processes slow down the deployment and management of applications. They create extra work for developers and operations teams, cause unnecessary delays, and increase the time it takes to get new features and critical bug and security fixes into the hands of customers. Automating common tasks – using tools, scripts, and other techniques – is a great way to improve operational efficiency and accelerate the rollout of new features and apps.
The potential improvements to productivity with automation are impressive. With the proper components in place, some companies have been able to deploy new code to production more than 50 times per day, creating a more stable application and increasing customer satisfaction.
High‑performing DevOps teams turn to the open source NGINX software and to NGINX Plus to build fully automated, self‑service pipelines that developers use to effortlessly push out new features, security patches, bug fixes, and whole new applications to production with almost no manual intervention.
This blog covers three methods for automating common workflows using NGINX and NGINX Plus.
For more, register for our live webinar, 3 Ways to Automate App Deployments with NGINX, to be held on Wednesday, July 27, 2016 at 10:00 AM PDT.
Method 1 – Pushing New App Versions to Production
Releasing a new version is one of the most common occurrences in the lifecycle of any software application. Common reasons for an update are introducing a new feature or fixing bugs in existing functionality. Updating becomes much simpler and less time‑consuming when it’s automated, as you can ensure that the same process is happening on all your servers simultaneously, and you can define rollback procedures or fallback code.
To facilitate automation, NGINX Plus has an easy‑to‑use HTTP‑based API that allows on‑the‑fly reconfiguration of upstream server groups. With the API, you can modify an upstream group of servers by adding or removing instances as part of your deployment script. The changes are reflected in NGINX Plus immediately, which means you can simply add a line that makes a curl request to your NGINX load balancer as a final step in your deployment script, and the servers are updated automatically.
To use the HTTP interface, create a separate location block in the configuration block for a virtual server and include the upstream_conf directive. We’re using the conventional name for this location, /upstream_conf. We want to restrict use of the interface to administrators on the local LAN, so we also include allow and deny directives:
server {
    listen 8080;              # Listen on a local port

    location /upstream_conf {
        allow 10.0.0.0/8;     # Allow access only from LAN
        deny all;             # Deny everyone else
        upstream_conf;
    }
}
The API requires a shared memory zone to store information about an upstream group of servers, so we include the zone directive in the configuration, as in this example for an upstream group called backend:
upstream backend {
    zone backend 64k;
    server 10.2.2.90:8000;
    server 10.2.2.91:8000;
    server 10.2.2.92:8000;
}
Using the HTTP‑Based API
Let’s say we’ve created a new server with IP address 10.2.2.93 to host the updated version of our application and want to add it to the backend upstream group. We can do this by running curl with the add, upstream, and server arguments in the URI:
# curl 'http://localhost:8080/upstream_conf?add=&upstream=backend&server=10.2.2.93:8000'
server 10.2.2.93:8000; # id=3
Then we want to remove from service the server that’s running the previous application version. Abruptly ending all the connections to that server would result in a bad user experience, so instead we first drain sessions on the server. To identify which server to drain, we need to display the IDs assigned to the servers in the group:
# curl 'http://localhost:8080/upstream_conf?upstream=backend'
server 10.2.2.90:8000; # id=0
server 10.2.2.91:8000; # id=1
server 10.2.2.92:8000; # id=2
server 10.2.2.93:8000; # id=3
We know that the server with the old application version has IP address 10.2.2.92, and the output shows us its ID is 2. We identify the server by that ID in the command that puts it in draining state:
# curl 'http://localhost:8080/upstream_conf?upstream=backend&id=2&drain=1'
server 10.2.2.92:8000; # id=2 draining
Active connections to a server can be tracked using the upstreams.peers.id.active JSON object from the NGINX Plus extended Status module, where id is the same ID number for the server that we retrieved in the previous step. When there are no more connections to the server, we can remove it completely from the upstream group:
# curl 'http://localhost:8080/upstream_conf?upstream=backend&id=2&remove=1'
server 10.2.2.90:8000; # id=0
server 10.2.2.91:8000; # id=1
server 10.2.2.93:8000; # id=3
This is just a sample of what you can do with the API, and you can script workflows that use the API to fully automate the release process. For more details, see our three‑part blog series, Using NGINX Plus for Backend Upgrades with Zero Downtime.
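As a sketch of what such a script might look like, the following Python helpers build the upstream_conf request URLs for the add‑drain‑remove workflow shown above and read a server’s active connection count from the extended status JSON. The endpoint URL, port, and exact JSON shape are assumptions for illustration, based on the examples in this post:

```python
from urllib.parse import urlencode

# Base URL for the upstream_conf location defined earlier in this post.
API = "http://localhost:8080/upstream_conf"

def add_server_url(upstream, server):
    # URL that adds a new server to the upstream group.
    return f"{API}?add=&{urlencode({'upstream': upstream, 'server': server})}"

def drain_url(upstream, server_id):
    # URL that puts the server with the given ID into draining state.
    return f"{API}?{urlencode({'upstream': upstream, 'id': server_id, 'drain': 1})}"

def remove_url(upstream, server_id):
    # URL that removes the (fully drained) server from the group.
    return f"{API}?{urlencode({'upstream': upstream, 'id': server_id, 'remove': 1})}"

def active_connections(status, upstream, server_id):
    # Extract upstreams.<name>.peers[].active for one server from the
    # extended status JSON (shape assumed here for illustration).
    peers = status["upstreams"][upstream]["peers"]
    return next(p["active"] for p in peers if p["id"] == server_id)
```

A release script could then fetch these URLs in order with curl or any HTTP client: add the new server, drain the old one, poll active_connections until it returns 0, and finally issue the remove request.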
Method 2 – Automated Service Discovery
A modern microservices‑based application can have tens or even hundreds of services, each with multiple instances that are updated and deployed multiple times per day. With large numbers of services in deployment, it quickly becomes impossible to manually configure and reconfigure each service every time you deploy a new version or scale up and down to handle fluctuating traffic load.
With service discovery, you shift the burden of configuration to your application infrastructure, making the entire process a lot easier. NGINX and NGINX Plus support several service discovery methods for automatically updating a set of service instances.
NGINX Plus can automatically discover new service instances and help cycle out old ones using a familiar DNS interface. In this scenario, NGINX Plus pulls the definitive list of available services from your service registry by requesting DNS SRV records. SRV records contain the port number of the service, which is typically dynamically assigned in microservice architectures. The service registry can be one of many service registration tools such as Consul, SkyDNS/etcd, or ZooKeeper.
In the following example, the service=http parameter to the server directive configures NGINX Plus to request DNS SRV records for the servers in the my_service upstream group. As a result, the application instances backing my_service are discovered automatically.
http {
    resolver dns-server-ip;

    upstream my_service {
        zone backend 64k;
        server hostname-for-my_service service=http resolve;
    }

    server {
        ...
        location /my_service {
            ...
            proxy_pass http://my_service;
        }
    }
}
With open source NGINX, a generic configuration template tool like consul‑template can be used. When new service instances are detected, a new NGINX configuration is generated with the new service instances and NGINX is gracefully reloaded. For a detailed example of this solution, see this blog by Shane Sveller at Belly.
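To illustrate the consul‑template approach, a minimal template for an upstream group might look like the following sketch; the service name, file paths, and reload command here are assumptions for illustration:

```
# upstream.ctmpl – rendered whenever the set of instances changes
upstream my_service {
{{- range service "my_service" }}
    server {{ .Address }}:{{ .Port }};
{{- end }}
}
```

consul‑template then renders the file and gracefully reloads NGINX with a command such as:

```
consul-template -template "upstream.ctmpl:/etc/nginx/conf.d/my_service.conf:nginx -s reload"
```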
For more details on DNS‑based service discovery with NGINX Plus, see Using DNS for Service Discovery with NGINX and NGINX Plus on our blog.
Method 3 – Orchestration and Management
Large enterprises might deploy many instances of NGINX or NGINX Plus in production to load balance application traffic. It quickly becomes difficult to manage the configuration of each one individually, and this is where DevOps tools for configuration and management – such as Ansible, Chef, and Puppet – come into play. These tools allow you to manage and update configuration at a central location, from which changes are pushed out automatically to all managed nodes.
NGINX Plus has several points of integration to help automate your processes and facilitate infrastructure and application management:
- Chef – If you’re a Chef user, we have tutorials for installing NGINX and setting up high‑availability (HA) NGINX Plus clusters. There are quite a few recipes for managing NGINX in the Chef cookbook for NGINX.
- Puppet – Puppet has a well‑maintained GitHub repository with a multipurpose NGINX module.
- Ansible – Ansible, recently acquired by Red Hat, is becoming increasingly popular because it is “agentless”, as opposed to Chef and Puppet which require you to install a software agent on every server you want to manage. Ansible instead connects directly to managed servers using standard SSH. For step‑by‑step instructions, see Installing NGINX and NGINX Plus with Ansible on our blog.
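As a minimal, hypothetical sketch of the Ansible approach (the host group, template path, and Debian‑style package installation are assumptions for illustration), a playbook that installs NGINX and pushes a centrally managed configuration might look like:

```yaml
- hosts: load_balancers
  become: true
  tasks:
    - name: Install NGINX
      apt:
        name: nginx
        state: present

    - name: Push the centrally managed configuration
      template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: Reload NGINX

  handlers:
    - name: Reload NGINX
      service:
        name: nginx
        state: reloaded
```

The handler ensures NGINX is reloaded gracefully, and only when the configuration actually changes.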
Bonus Method – Push‑Button Deployments with Jenkins
Jenkins is a popular open source CI/CD tool that becomes even more powerful when combined with NGINX and NGINX Plus. DevOps teams can simply check the desired NGINX configuration changes into GitHub, and Jenkins pushes them out to production servers.
Our recent Bluestem Brands case study includes a great example of the combination in action. Using Jenkins, Bluestem Brands automates every aspect of their deployment and keeps things in sync: when developers update code, GitHub launches a Jenkins build. After the code updates are deployed to the upstream application instances, it’s just a simple API call to make sure that new instances with a clean cache and the latest code are handling requests.
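A push‑button configuration deployment of this kind might be sketched as the following minimal declarative Jenkinsfile; the repository layout, target path, and validation step are assumptions for illustration:

```groovy
pipeline {
    agent any
    stages {
        stage('Validate') {
            steps {
                // Test the checked-in configuration before touching production
                sh 'nginx -t -c $WORKSPACE/nginx.conf'
            }
        }
        stage('Deploy') {
            steps {
                // Copy the validated configuration into place and reload gracefully
                sh 'cp $WORKSPACE/nginx.conf /etc/nginx/nginx.conf && nginx -s reload'
            }
        }
    }
}
```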
Conclusion
Maintaining large‑scale deployments is always challenging, but automating the process can alleviate some of the difficulty. Automation allows you to create testable procedures and codify important processes, ensuring that everyone on the DevOps team is on the same page. Numerous NGINX users have told us how automation has reduced complexity, eliminated bugs caused by manual processes, and made it easier to roll out new functionality into production.
Both NGINX Plus and the open source NGINX software provide multiple integrations for updating your DevOps workflow and automating your architecture, whether you’re launching your first instance or managing hundreds of them.
To learn more, register for our live webinar, 3 Ways to Automate App Deployments with NGINX, to be held on Wednesday, July 27, 2016 at 10:00 AM PDT.