{"id":8773,"date":"2016-07-23T08:08:36","date_gmt":"2016-07-22T23:08:36","guid":{"rendered":"https:\/\/jirak.net\/wp\/3-ways-to-automate-with-nginx-and-nginx-plus\/"},"modified":"2016-07-23T08:34:41","modified_gmt":"2016-07-22T23:34:41","slug":"3-ways-to-automate-with-nginx-and-nginx-plus","status":"publish","type":"post","link":"https:\/\/jirak.net\/wp\/3-ways-to-automate-with-nginx-and-nginx-plus\/","title":{"rendered":"3 Ways to Automate with NGINX and NGINX\u00a0Plus"},"content":{"rendered":"<p>3 Ways to Automate with NGINX and NGINX\u00a0Plus<\/p>\n<div class=\"ngx_blockquote_wrap\">\n<div class=\"ngx_blockquote\"><span class=\"left-quote\">&#8220;<\/span>How long would it take your organization to deploy a change that involves just one single line of code?<span class=\"right-quote\">&#8221;<\/span><\/div>\n<div class=\"ngx_blockquote_author\">&ndash; Lean software development guru <a target='_blank' href='http:\/\/www.poppendieck.com\/people.htm'>Mary Poppendieck<\/a><\/div>\n<\/div>\n<p>In many organizations today, manual processes are slowing down the process of deploying and managing applications. Manual processes create extra work for developers and operations teams, cause unnecessary delays, and increase the time it takes to get new features and critical bug and security fixes into the hands of customers. Automating common tasks&nbsp;&ndash; using tools, scripts, and other techniques&nbsp;&ndash; is a great way to improve operational efficiency and accelerate the rollout of new features and apps.<\/p>\n<p>The potential improvements to productivity with automation are impressive. 
With the proper components in place, some companies have been able to deploy new code to production more than 50&nbsp;times per day, creating a more stable application and increasing customer satisfaction.<\/p>\n<p>High&#8209;performing DevOps teams turn to the open source <a target=\"_blank\" href=\"http:\/\/nginx.org\/en\">NGINX<\/a> software and to <a href=\"https:\/\/www.nginx.com\/products\">NGINX&nbsp;Plus<\/a> to build fully automated, self&#8209;service pipelines that developers use to effortlessly push out new features, security patches, bug fixes, and whole new applications to production with almost no manual intervention. <\/p>\n<p>This blog covers three methods for automating common workflows using NGINX and NGINX&nbsp;Plus.<\/p>\n<p>For more, register for our live webinar, <a href=\"https:\/\/www.nginx.com\/resources\/webinars\/three-ways-to-automate-with-nginx-and-nginx-plus\/\">3 Ways to Automate App Deployments with NGINX<\/a>, to be held on Wednesday, July&nbsp;27,&nbsp;2016 at 10:00&nbsp;AM&nbsp;PDT.<\/p>\n<h2>Method 1&nbsp;&ndash; Pushing New App Versions to Production<\/h2>\n<p>Releasing a new version is one of the most common occurrences in the lifecycle of any software application. Common reasons for an update are introducing a new feature or fixing bugs in existing functionality. Updating becomes much simpler and less time&#8209;consuming when it\u2019s automated, as you can ensure that the same process is happening on all your servers simultaneously, and you can define rollback procedures or fallback code.<\/p>\n<p>To facilitate automation, NGINX&nbsp;Plus has an easy&#8209;to&#8209;use <a href=\"https:\/\/www.nginx.com\/blog\/dynamic-reconfiguration-with-nginx-plus\/#upstream_conf\">HTTP&#8209;based API<\/a> that allows on&#8209;the&#8209;fly reconfiguration of upstream server groups. With the API, you can modify an upstream group of servers by adding or removing instances as part of your deployment script. 
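<\/p>\n<p>For example, the final steps of a deployment script might look like this (a minimal sketch, assuming the API is exposed on port&nbsp;8080 of the load balancer, as in the configuration below; the hostname, addresses, and ID are illustrative):<\/p>\n<pre><code class=\"terminal\"># Add the freshly deployed instance to the upstream group<br \/>\ncurl 'http:\/\/lb.example.com:8080\/upstream_conf?add=&amp;upstream=backend&amp;server=10.2.2.93:8000'<br \/>\n# Start draining the instance running the old version<br \/>\ncurl 'http:\/\/lb.example.com:8080\/upstream_conf?upstream=backend&amp;id=2&amp;drain=1'<\/code><\/pre>\n<p>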
The changes are reflected in NGINX&nbsp;Plus immediately, which means you can simply add a line that makes a <code>curl<\/code> request to your NGINX load balancer as a final step in your deployment script, and the servers are updated automatically. <\/p>\n<p>To use the HTTP interface, create a separate <a target=\"_blank\" href=\"http:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#location\"><code>location<\/code><\/a> block in the configuration block for a virtual server and include the <a target=\"_blank\" href=\"http:\/\/nginx.org\/r\/upstream_conf\"><code>upstream_conf<\/code><\/a> directive. We\u2019re using the conventional name for this location, <strong>\/upstream_conf<\/strong>. We want to restrict use of the interface to administrators on the local LAN, so we also include <a target=\"_blank\" href=\"http:\/\/nginx.org\/en\/docs\/http\/ngx_http_access_module.html\"><code>allow<\/code> and <code>deny<\/code><\/a> directives:<\/p>\n<pre><code class=\"config\">server {<br \/>\n    listen 8080;          # Listen on a local port<br \/>\n<br \/>\n    location \/upstream_conf {<br \/>\n        allow 10.0.0.0\/8; # Allow access only from LAN<br \/>\n        deny all;         # Deny everyone else<br \/>\n<br \/>\n        <strong>upstream_conf<\/strong>;<br \/>\n    }<br \/>\n}<\/code><\/pre>\n<p>The API requires a shared memory zone to store information about an upstream group of servers, so we include the <a target=\"_blank\" href=\"http:\/\/nginx.org\/en\/docs\/http\/ngx_http_upstream_module.html#zone\"><code>zone<\/code><\/a> directive in the configuration, as in this example for an upstream group called <strong>backend<\/strong>:<\/p>\n<pre><code class=\"config\">upstream backend {<br \/>\n    zone backend 64k;<br \/>\n    server 10.2.2.90:8000;<br \/>\n    server 10.2.2.91:8000;<br \/>\n    server 10.2.2.92:8000;<br \/>\n}<\/code><\/pre>\n<h3>Using the HTTP&#8209;Based API<\/h3>\n<p>Let\u2019s say we&#8217;ve created a new server with 
IP&nbsp;address&nbsp;10.2.2.93 to host the updated version of our application and want to add it to the <strong>backend<\/strong> upstream group. We can do this by running <code>curl<\/code> with the <code>upstream<\/code> and <code>server<\/code> arguments in the URI.<\/p>\n<pre><code class=\"terminal\"># <strong>curl 'http:\/\/localhost:8080\/upstream_conf?add=&amp;upstream=backend&amp;server=10.2.2.93:8000'<\/strong><br \/>\nserver 10.2.2.93:8000; # id=3<\/code><\/pre>\n<p>Then we want to remove from service the server that&#8217;s running the previous application version. Abruptly ending all the connections to that server would result in a bad user experience, so instead we first <a href=\"https:\/\/www.nginx.com\/products\/session-persistence\/#session-draining\">drain&nbsp;sessions<\/a> on the server. To identify which server to drain, we need to display the IDs assigned to the servers in the group:<\/p>\n<pre><code class=\"terminal\"># <strong>curl 'http:\/\/localhost:8080\/upstream_conf?upstream=backend'<\/strong><br \/>\nserver 10.2.2.90:8000; # id=0<br \/>\nserver 10.2.2.91:8000; # id=1<br \/>\nserver 10.2.2.92:8000; # id=2<br \/>\nserver 10.2.2.93:8000; # id=3<\/code><\/pre>\n<p>We know that the server with the old application version has IP address 10.2.2.92, and the output shows us its ID is 2. 
We identify the server by that ID in the command that puts it in draining state:<\/p>\n<pre><code class=\"terminal\"># <strong>curl 'http:\/\/localhost:8080\/upstream_conf?upstream=backend&amp;id=2&amp;drain=1'<\/strong><br \/>\nserver 10.2.2.92:8000; # id=2 draining<\/code><\/pre>\n<p>Active connections to a server can be tracked using the <code>upstreams.peers.<em>id<\/em>.active<\/code> JSON object from the NGINX&nbsp;Plus extended <a target=\"_blank\" href=\"http:\/\/nginx.org\/en\/docs\/http\/ngx_http_status_module.html\">Status module<\/a>, where <code><em>id<\/em><\/code> is the same ID number for the server that we retrieved in the previous step. When there are no more connections to the server, we can remove it completely from the upstream group:<\/p>\n<pre><code class=\"terminal\"># <strong>curl 'http:\/\/localhost:8080\/upstream_conf?upstream=backend&amp;id=2&amp;remove=1'<\/strong><br \/>\nserver 10.2.2.90:8000; # id=0<br \/>\nserver 10.2.2.91:8000; # id=1<br \/>\nserver 10.2.2.93:8000; # id=3<\/code><\/pre>\n<p>This is just a sample of what you can do with the API, and you can script workflows that use the API to fully automate the release process. For more details, see our three&#8209;part blog series, <a href=\"https:\/\/www.nginx.com\/blog\/nginx-plus-backend-upgrades-overview\">Using NGINX&nbsp;Plus for Backend Upgrades with Zero Downtime<\/a>.<\/p>\n<h2>Method 2&nbsp;&ndash; Automated Service Discovery<\/h2>\n<p>A modern microservices&#8209;based application can have tens or even hundreds of services, each with multiple instances that are updated and deployed multiple times per day. With large numbers of services in deployment, it quickly becomes impossible to manually configure and reconfigure each service every time you deploy a new version or scale up and down to handle fluctuating traffic load. 
<\/p>\n<p>With <a href=\"https:\/\/www.nginx.com\/blog\/service-discovery-in-a-microservices-architecture\/\">service discovery<\/a>, you shift the burden of configuration to your application infrastructure, making the entire process a lot easier. NGINX and NGINX&nbsp;Plus support several service discovery methods for automatically updating a set of service instances.<\/p>\n<p>NGINX&nbsp;Plus can automatically discover new service instances and help cycle out old ones using a familiar DNS interface. In this scenario, NGINX&nbsp;Plus pulls the definitive list of available services from your service registry by requesting DNS <code>SRV<\/code> records. DNS <code>SRV<\/code> records contain the port number of the service, which is typically dynamically assigned in microservice architectures. The service registry can be one of many service registration tools such as <a href=\"https:\/\/www.nginx.com\/blog\/service-discovery-nginx-plus-srv-records-consul-dns\/\">Consul<\/a>, <a href=\"https:\/\/www.nginx.com\/blog\/service-discovery-nginx-plus-etcd\/\">SkyDNS\/etcd<\/a>, or <a href=\"https:\/\/www.nginx.com\/blog\/service-discovery-nginx-plus-zookeeper\/\">ZooKeeper<\/a>.<\/p>\n<p>In the following example, the <code>service=http<\/code> parameter to the <a target=\"_blank\" href=\"http:\/\/nginx.org\/en\/docs\/http\/ngx_http_upstream_module.html#server\"><code>server<\/code><\/a> directive configures NGINX&nbsp;Plus to request DNS <code>SRV<\/code> records for the servers in the <strong>my_service<\/strong> upstream group. 
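<\/p>\n<p>For illustration, the corresponding records in the registry\u2019s DNS zone might look like this (the hostnames, TTL, and ports are hypothetical; with <code>service=http<\/code>, NGINX&nbsp;Plus queries the <code>_http._tcp<\/code> name):<\/p>\n<pre><code class=\"config\">;                                      TTL      SRV prio weight port target<br \/>\n_http._tcp.my-service.example.com.     60 IN SRV    0    1    8000 app1.example.com.<br \/>\n_http._tcp.my-service.example.com.     60 IN SRV    0    1    8001 app2.example.com.<\/code><\/pre>\n<p>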
As a result, the application instances backing <strong>my_service<\/strong> are discovered automatically.<\/p>\n<pre><code class=\"config\">http {<br \/>\n    resolver dns-server-ip;<br \/>\n<br \/>\n    upstream my_service {<br \/>\n        zone backend 64k;<br \/>\n        server hostname-for-my_service <strong>service=http<\/strong> resolve;<br \/>\n    }<br \/>\n<br \/>\n    server {<br \/>\n        ...<br \/>\n        location \/my_service {<br \/>\n            ...<br \/>\n            proxy_pass http:\/\/my_service;<br \/>\n        }<br \/>\n    }<br \/>\n}<\/code><\/pre>\n<p>With open source NGINX, you can use a generic configuration template tool such as <a target=\"_blank\" href=\"https:\/\/github.com\/hashicorp\/consul-template\">consul&#8209;template<\/a>. When new service instances are detected, a new NGINX configuration is generated with the new service instances and NGINX is gracefully reloaded. For a detailed example of this solution, see <a target=\"_blank\" href=\"https:\/\/tech.bellycard.com\/blog\/load-balancing-docker-containers-with-nginx-and-consul-template\/\">this blog post<\/a> by Shane&nbsp;Sveller at Belly.<\/p>\n<p>For more details on DNS&#8209;based service discovery with NGINX&nbsp;Plus, see <a href=\"https:\/\/www.nginx.com\/blog\/dns-service-discovery-nginx-plus\/\">Using DNS for Service Discovery with NGINX and NGINX&nbsp;Plus<\/a> on our blog.<\/p>\n<h2>Method 3&nbsp;&ndash; Orchestration and Management<\/h2>\n<p>Large enterprises might deploy many instances of NGINX or NGINX&nbsp;Plus in production to load balance application traffic. It quickly becomes difficult to manage the configuration of each one individually, and this is where DevOps tools for configuration and management&nbsp;&ndash; such as Ansible, Chef, and Puppet&nbsp;&ndash; come into play. 
These tools allow you to manage and update configuration at a central location, from which changes are pushed out automatically to all managed nodes.<\/p>\n<p>NGINX&nbsp;Plus has several points of integration to help automate your processes and facilitate infrastructure and application management:<\/p>\n<ul>\n<li><strong>Chef<\/strong>&nbsp;&ndash; If you\u2019re a Chef user, we have tutorials for <a href=\"https:\/\/www.nginx.com\/blog\/installing-nginx-nginx-plus-chef\/\">installing NGINX<\/a> and <a href=\"https:\/\/www.nginx.com\/blog\/nginx-plus-high-availability-chef\">setting up high&#8209;availability (HA) NGINX&nbsp;Plus clusters<\/a>. There are quite a few recipes for managing NGINX in the <a target=\"_blank\" href=\"https:\/\/supermarket.chef.io\/cookbooks\/nginx\">Chef cookbook for NGINX<\/a>.<\/li>\n<li><strong>Puppet<\/strong>&nbsp;&ndash; Puppet has a well&#8209;maintained <a target=\"_blank\" href=\"https:\/\/github.com\/jfryman\/puppet-nginx\">GitHub repository<\/a> with a multipurpose NGINX module.<\/li>\n<li><strong>Ansible<\/strong>&nbsp;&ndash; Ansible, recently acquired by Red Hat, is becoming increasingly popular because it is \u201cagentless\u201d, unlike Chef and Puppet, which require you to install a software agent on every server you want to manage. Ansible instead connects directly to managed servers using standard SSH. For step&#8209;by&#8209;step instructions, see <a href=\"https:\/\/www.nginx.com\/blog\/installing-nginx-nginx-plus-ansible\/\">Installing NGINX and NGINX&nbsp;Plus with Ansible<\/a> on our blog.<\/li>\n<\/ul>\n<h2>Bonus Method&nbsp;&ndash; Push&#8209;Button Deployments with Jenkins<\/h2>\n<p><a target=\"_blank\" href=\"https:\/\/jenkins.io\/\">Jenkins<\/a> is a popular open source CI\/CD tool, and becomes even more powerful when combined with NGINX and NGINX&nbsp;Plus. 
DevOps teams can simply check the desired NGINX configuration changes into GitHub, and Jenkins pushes them out to production servers.<\/p>\n<p>Our recent <a href=\"https:\/\/www.nginx.com\/blog\/bluestem-brands-migrates-from-monolith-to-microservices-efficiently-with-nginx-plus\/\">Bluestem&nbsp;Brands case study<\/a> includes a great example of the combination in action. Using Jenkins, Bluestem&nbsp;Brands automates every aspect of its deployment and keeps things in sync: when developers update code, GitHub launches a Jenkins build. After the code updates are deployed to the upstream application instances, it\u2019s just a simple API call to make sure that new instances with a clean cache and the latest code are handling requests.<\/p>\n<h2>Conclusion<\/h2>\n<p>Maintaining large&#8209;scale deployments is always challenging, but automating the process can alleviate some of the difficulty. Automation allows you to create testable procedures and codify important processes, ensuring that everyone on the DevOps team is on the same page. 
Numerous NGINX users have told us how automation has reduced complexity, eliminated bugs caused by manual processes, and made it easier to roll out new functionality into production.<\/p>\n<p>Both NGINX&nbsp;Plus and the open source NGINX software provide multiple integrations for updating your DevOps workflow and automating your architecture, whether you\u2019re launching your first instance or managing hundreds of them.<\/p>\n<p>To learn more, register for our live webinar, <a href=\"https:\/\/www.nginx.com\/resources\/webinars\/three-ways-to-automate-with-nginx-and-nginx-plus\/\">3 Ways to Automate App Deployments with NGINX<\/a>, to be held on Wednesday, July&nbsp;27,&nbsp;2016 at 10:00&nbsp;AM&nbsp;PDT.<\/p>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/www.nginx.com\/blog\/3-ways-to-automate-nginx-nginx-plus\/\">3 Ways to Automate with NGINX and NGINX&nbsp;Plus<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/www.nginx.com\">NGINX<\/a>.<\/p>\n<p>Source: <a href=\"https:\/\/www.nginx.com\/blog\/3-ways-to-automate-nginx-nginx-plus\/\" target=\"_blank\">3 Ways to Automate with NGINX and NGINX\u00a0Plus<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<div class=\"mh-excerpt\"><p>3 Ways to Automate with NGINX and NGINX\u00a0Plus &#8220;How long would it take your organization to deploy a change that involves just one single line of code?&#8221; &ndash; Lean software development guru Mary Poppendieck In many organizations today, manual processes are slowing down the process of deploying and managing applications. Manual processes create extra work for developers and operations teams, cause unnecessary delays, and increase the time it takes to get new features and critical bug and security fixes into the hands of customers. Automating common tasks&nbsp;&ndash; using tools, scripts, and other techniques&nbsp;&ndash; is a great way to improve operational efficiency and accelerate the rollout of new features and apps. 
The potential improvements to productivity with automation are impressive. With the proper components in place, some companies have been able to deploy new code to production more than 50&nbsp;times per <a class=\"mh-excerpt-more\" href=\"https:\/\/jirak.net\/wp\/3-ways-to-automate-with-nginx-and-nginx-plus\/\" title=\"3 Ways to Automate with NGINX and NGINX\u00a0Plus\">[ more&#8230; ]<\/a><\/p>\n<\/div>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[169],"tags":[652],"class_list":["post-8773","post","type-post","status-publish","format-standard","hentry","category-news","tag-nginx"],"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts\/8773","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/comments?post=8773"}],"version-history":[{"count":1,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts\/8773\/revisions"}],"predecessor-version":[{"id":8774,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts\/8773\/revisions\/8774"}],"wp:attachment":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/media?parent=8773"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/categories?post=8773"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/tags?post=8773"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}