{"id":8192,"date":"2016-06-22T06:58:15","date_gmt":"2016-06-21T21:58:15","guid":{"rendered":"https:\/\/jirak.net\/wp\/using-nginx-plus-for-backend-upgrades-with-zero-downtime-part-2-individual-servers\/"},"modified":"2016-06-22T07:34:22","modified_gmt":"2016-06-21T22:34:22","slug":"using-nginx-plus-for-backend-upgrades-with-zero-downtime-part-2-individual-servers","status":"publish","type":"post","link":"https:\/\/jirak.net\/wp\/using-nginx-plus-for-backend-upgrades-with-zero-downtime-part-2-individual-servers\/","title":{"rendered":"Using NGINX\u00a0Plus for Backend Upgrades with Zero Downtime, Part\u00a02\u00a0\u2013\u00a0Individual Servers"},"content":{"rendered":"<p>Using NGINX\u00a0Plus for Backend Upgrades with Zero Downtime, Part\u00a02\u00a0\u2013\u00a0Individual Servers<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/jirak.net\/wp\/wp-content\/uploads\/2016\/06\/dashboard-demoapp-upstream-2-servers-up.png\" width=\"975\" height=\"223\"><\/p>\n<p>This is the second of three articles in our series about using NGINX&nbsp;Plus to upgrade backend servers with zero downtime. In the <a href=\"https:\/\/www.nginx.com\/blog\/nginx-plus-backend-upgrades-overview\">first article<\/a>, we describe the two <a href=\"https:\/\/www.nginx.com\/products\">NGINX&nbsp;Plus<\/a> features you can use for backend upgrades with zero downtime&nbsp;&ndash;&nbsp;the <a href=\"https:\/\/www.nginx.com\/products\/on-the-fly-reconfiguration\/\">on-the-fly reconfiguration API<\/a> and <a href=\"https:\/\/www.nginx.com\/products\/application-health-checks\/\">application health checks<\/a>&nbsp;&ndash;&nbsp;and discuss the advantages of each method.<\/p>\n<p>In this second article, we explore use cases around upgrading the software or hardware on an individual server, which is one of the most common reasons to take servers offline. 
We could just take the server offline with no preparation, but that kills all the current client connections, making for a bad user experience. What we want is to stop sending any new requests or connections to the server, while letting it finish off any outstanding work. Then we can safely take it offline without impacting clients. NGINX&nbsp;Plus provides a few methods for achieving this outcome.<\/p>\n<ul>\n<li><a href=\"#server-api\">Using the API<\/a><\/li>\n<li><a href=\"#server-api-persistence\">Using the API with session persistence<\/a><\/li>\n<li><a href=\"#server-health-checks\">Using health checks<\/a><\/li>\n<\/ul>\n<p>For use cases around upgrading the version of an application on a group of upstream servers, see the third article, <a href=\"https:\/\/www.nginx.com\/blog\/nginx-plus-backend-upgrades-application-version\">Using NGINX&nbsp;Plus for Backend Upgrades with Zero Downtime, Part&nbsp;3&nbsp;&ndash;&nbsp;Application Version<\/a>.<\/p>\n<h2 id=\"base-configuration\">Base Configuration for the Use Cases<\/h2>\n<p>For the API examples we will be making the API calls from the NGINX&nbsp;Plus instance, so they will be sent to <code>localhost<\/code>. <\/p>\n<p>The base configuration for the use cases starts with two servers in a single <a target=\"_blank\" href=\"http:\/\/nginx.org\/en\/docs\/http\/ngx_http_upstream_module.html#upstream\"><code>upstream<\/code><\/a> configuration block called <strong>demoapp<\/strong>. In the first <a target=\"_blank\" href=\"http:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#server\"><code>server<\/code><\/a> configuration block, we configure a virtual server listening on port 80 that load balances all requests to the <strong>demoapp<\/strong> upstream group. <\/p>\n<p>We\u2019re configuring an application health check, which is a best practice for reducing the impact of backend server errors on the user experience and for improving monitoring. 
Here we configure the health check to succeed if the server returns the file <strong>healthcheck.html<\/strong> with an HTTP <code>2xx<\/code> or <code>3xx<\/code> response code (the default success criterion for health checks). <\/p>\n<p>Though it&#8217;s not strictly necessary for basic health checks, we&#8217;re putting the <a target=\"_blank\" href=\"http:\/\/nginx.org\/en\/docs\/http\/ngx_http_upstream_module.html#health_check\"><code>health_check<\/code><\/a> directive in its own <a target=\"_blank\" href=\"http:\/\/nginx.org\/en\/docs\/http\/ngx_http_core_module.html#location\"><code>location<\/code><\/a> block. This is a good practice as it allows us to configure different settings, such as timeouts and headers, for health checks versus regular traffic. For a use case where a separate location for the <code>health_check<\/code> directive is required, see <a href=\"https:\/\/www.nginx.com\/blog\/nginx-plus-backend-upgrades-application-version\/#application-dark-launch\">Doing a Dark Launch<\/a>.<\/p>\n<pre><code class=\"config\"># In the HTTP context<br \/>\nupstream demoapp {<br \/>\n    zone demoapp 64k;<br \/>\n    server 172.16.210.81:80;<br \/>\n    server 172.16.210.82:80;<br \/>\n}<\/p>\n<p>server {<br \/>\n    listen 80;<br \/>\n    status_zone demoapp;<\/p>\n<p>    location \/ {<br \/>\n        proxy_pass http:\/\/demoapp;<br \/>\n    }<\/p>\n<p>    location @healthcheck {<br \/>\n        internal;<br \/>\n        proxy_pass http:\/\/demoapp;<br \/>\n        health_check uri=\/healthcheck.html;<br \/>\n    }<br \/>\n}<\/pre>\n<p><\/code><\/p>\n<p>We also configure a second virtual server that listens on port 8080 for requests to locations corresponding to the dynamic reconfiguration API (<strong>\/upstream_conf<\/strong>), the NGINX&nbsp;Plus status dashboard (<strong>\/status.html<\/strong>), and the NGINX&nbsp;Plus status API (<strong>\/status<\/strong>). 
Note that these location names are the conventional ones, but you can choose different names if you wish.<\/p>\n<p>It is a best practice to secure all traffic to the reconfiguration and status APIs and the dashboard, which we do here by granting access only to users on internal IP addresses in the range 192.168.100.0 to 192.168.100.255. For stronger security, use client certificates, HTTP Basic authentication, or the <a target=\"_blank\" href=\"http:\/\/nginx.org\/en\/docs\/http\/ngx_http_auth_request_module.html\">Auth&nbsp;Request<\/a> module to integrate with external authorization systems like <a href=\"https:\/\/www.nginx.com\/blog\/nginx-plus-authenticate-users\/\">LDAP<\/a>.<\/p>\n<pre><code class=\"config\"># In the HTTP context<br \/>\nserver {<br \/>\n    listen 8080;<br \/>\n    allow 192.168.100.0\/24;<br \/>\n    deny all;<\/p>\n<p>    location = \/ {<br \/>\n        return 301 \/status.html;<br \/>\n    }<\/p>\n<p>    location \/upstream_conf {<br \/>\n        upstream_conf;<br \/>\n    }<\/p>\n<p>    location = \/status.html {<br \/>\n        root \/usr\/share\/nginx\/html;<br \/>\n    }<\/p>\n<p>    location \/status {<br \/>\n        status;<br \/>\n    }<br \/>\n}<\/pre>\n<p><\/code><\/p>\n<p>With this configuration in place, the base command for the API commands in this article is <\/p>\n<pre><code class=\"config\">http:\/\/localhost:8080\/upstream_conf?upstream=demoapp<\/pre>\n<p><\/code><\/p>\n<h2 id=\"server-api\">Using the API to Upgrade an Individual Server<\/h2>\n<p>To verify that the two servers in the <strong>demoapp<\/strong> upstream group (configured in the previous section) are active, we look at the <strong>Upstreams<\/strong> tab on the NGINX&nbsp;Plus live activity monitoring dashboard. 
The numbers in the <strong>Requests<\/strong> and <strong>Conns<\/strong> columns confirm that the servers are processing traffic:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/assets.wp.nginx.com\/wp-content\/uploads\/2016\/06\/dashboard-demoapp-upstream-2-servers-up.png\" alt=\"Screenshot of the NGINX Plus live activity monitoring dashboard's Upstreams tab, showing that both servers in the 'demoapp' upstream group are up\" width=\"975\" height=\"223\" class=\"aligncenter size-full wp-image-36702\" \/><\/p>\n<p>Now we take server 172.16.210.82 offline for maintenance. To see the ID number assigned to it, we send the base API command. The response tells us the server has <code>id=1<\/code>:<\/p>\n<pre><code class=\"config\">http:\/\/localhost:8080\/upstream_conf?upstream=demoapp<br \/>\nserver 172.16.210.81:80; # id=0<br \/>\nserver 172.16.210.82:80; # id=1<\/pre>\n<p><\/code><\/p>\n<p>To mark the server as <code>down<\/code>, we append this string to the base command:<\/p>\n<pre><code class=\"config\">...&amp;id=1&amp;down=<\/pre>\n<p><\/code><\/p>\n<p>Now the dashboard shows that the active connection count (<strong>Conns &gt; A<\/strong>) for 172.16.210.82 is zero, so it is safe to take it offline for maintenance.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/assets.wp.nginx.com\/wp-content\/uploads\/2016\/06\/dashboard-demoapp-upstream-1-server-down.png\" alt=\"Screenshot of the NGINX Plus live activity monitoring dashboard&#039;s Upstreams tab, showing that one server in the &#039;demoapp&#039; upstream group has been taken down (its Sent\/s and Rvcd\/s counts are zero)\" width=\"975\" height=\"182\" class=\"aligncenter size-full wp-image-36707\" \/><\/p>\n<p>When maintenance is complete, we can bring the server back online by appending this string to the base command:<\/p>\n<pre><code class=\"config\">...&amp;id=1&amp;up=<\/pre>\n<p><\/code><\/p>\n<p>Note that you can also use the editing interface on the 
dashboard&#8217;s <strong>Upstreams<\/strong> tab to change server state (mark a server as <code>up<\/code>, <code>down<\/code>, or <code>drain<\/code>) rather than sending commands to the API. For instructions, see our <a href=\"https:\/\/www.nginx.com\/blog\/dashboard-r7\/#upstreams-tab\">blog<\/a>.<\/p>\n<h2 id=\"server-api-persistence\">Using the API to Upgrade an Individual Server with Session Persistence Configured<\/h2>\n<p>When we enable <a href=\"https:\/\/www.nginx.com\/resources\/glossary\/session-persistence\/\">session persistence<\/a>, clients are directed to the same backend server for all requests during a session. NGINX&nbsp;Plus supports several session persistence methods with the <a target=\"_blank\" href=\"http:\/\/nginx.org\/en\/docs\/http\/ngx_http_upstream_module.html#sticky\"><code>sticky<\/code><\/a> directive in the <code>upstream<\/code> block; here we use the <em>sticky&nbsp;cookie<\/em> method: <\/p>\n<pre><code class=\"config\"># In the HTTP context<br \/>\nupstream demoapp {<br \/>\n    zone demoapp 64k;<br \/>\n    server 172.16.210.81:80;<br \/>\n    server 172.16.210.82:80;<br \/>\n    <strong>sticky cookie srv_id expires=1h domain=.example.com path=\/<\/strong>;<br \/>\n}<\/pre>\n<p><\/code><\/p>\n<p>Session persistence is required for any application that keeps state information for users (such as a shopping cart), but it complicates upgrades because now it is not enough just to wait until there are no active connections to our server before taking it offline. There might be clients that aren&#8217;t sending requests right now but haven&#8217;t ended their session with the server. For the best user experience, we want to keep the active sessions open&nbsp;&ndash;&nbsp;the amount of time depends on the application&nbsp;&ndash;&nbsp;but don\u2019t want any new sessions to start. <\/p>\n<p>Fortunately, the NGINX&nbsp;Plus <code>drain<\/code> state does exactly this. 
Session draining adds one more step to the process outlined in the previous section. Instead of immediately marking the server as <code>down<\/code>, we mark it as <code>drain<\/code> by appending the following string to the base command:<\/p>\n<pre><code class=\"config\">...&amp;id=1&amp;drain=<\/pre>\n<p><\/code><\/p>\n<p>In this case, before taking the server offline we want not only the number of active connections to reach zero, but also all sessions to end. That translates to the server being idle for some amount of time, which depends on the application. We can periodically check the dashboard or use the status API to determine that the server is idle, but we can also automate the process of marking a server <code>drain<\/code> and verifying it is idle before marking it <code>down<\/code>. <\/p>\n<p>I\u2019ve created the following Python program called <span><strong>server-drain-down.py<\/strong><\/span> as an example. It takes the upstream group name and the IP address and port of the server as input, and marks the specified server with <code>drain<\/code>. It then marks the server <code>down<\/code> after either it has been idle for 60&nbsp;seconds, or 5&nbsp;minutes have elapsed since session draining began (even if the server isn\u2019t idle). The program uses the status API to get the timestamp of the last request sent to the server and the number of active connections. 
It uses the configuration API to mark the server with <code>drain<\/code> and then <code>down<\/code>.<\/p>\n<pre class=\"scrollable jq_custom_scroll_dark\"><code class=\"config\">#!\/usr\/bin\/env python<br \/>\n################################################################################<br \/>\n# Copyright (C) 2016 NGINX, Inc.<br \/>\n#<br \/>\n# This program is provided for demonstration purposes only and is not covered<br \/>\n# by your NGINX Plus support agreement.<br \/>\n#<br \/>\n# It is a proof of concept for automating the process of taking a server offline<br \/>\n# when it is configured for session persistence.<br \/>\n#<br \/>\n# This program takes two command-line arguments:<br \/>\n#   - upstream group name<br \/>\n#   - server IP address and port<br \/>\n#<br \/>\n# It uses the NGINX Plus status API to get the server's ID and the<br \/>\n# upstream_conf API to set the state of the server to 'drain'. It then loops,<br \/>\n# waiting to mark the server down until either it has been idle for a<br \/>\n# configured period of time or a configured maximum time has elapsed (even if<br \/>\n# the server is not idle).<br \/>\n################################################################################<\/p>\n<p>import requests<br \/>\nimport json<br \/>\nimport sys<br \/>\nimport time<\/p>\n<p>if len(sys.argv) != 3:<br \/>\n    print \"Error: Wrong number of arguments. 
Usage is:\"<br \/>\n    print \"    server-drain-down.py &lt;upstream-group&gt; &lt;server-address:port&gt;\"<br \/>\n    sys.exit(1)<\/p>\n<p>upstream=sys.argv[1]<br \/>\nserver=sys.argv[2]<\/p>\n<p># The URL for the NGINX Plus status API<br \/>\nstatusURL = 'http:\/\/localhost:8080\/status'<br \/>\n# The URL for the NGINX Plus reconfiguration API<br \/>\nconfURL = 'http:\/\/localhost:8080\/upstream_conf' <\/p>\n<p># The time the server needs to be idle before being marked down, in seconds<br \/>\nmaxIdleTime = 60 <\/p>\n<p># The total elapsed time before marking the server down even if it isn't idle,<br \/>\n# in seconds<br \/>\nmaxTime = 300 <\/p>\n<p>sleepInterval = 1<\/p>\n<p>client = requests.Session() # Create a session for making HTTP requests<\/p>\n<p>################################################################################<br \/>\n# Function sendRequest<br \/>\n#<br \/>\n# Send an HTTP request. Status 200 is expected for all requests.<br \/>\n################################################################################<br \/>\ndef sendRequest(url):<br \/>\n    try:<br \/>\n        response = client.get(url) # Make an NGINX Plus API call<br \/>\n        if response.status_code == 200:<br \/>\n            return response<br \/>\n        else:<br \/>\n            print \"Error: Response code %d\" % response.status_code<br \/>\n            sys.exit(1)<br \/>\n    except requests.exceptions.ConnectionError:<br \/>\n        print \"Error: Unable to connect to \" + url<br \/>\n        sys.exit(1)<\/p>\n<p>################################################################################<br \/>\n# Main<br \/>\n################################################################################<br \/>\nurl = statusURL + '\/upstreams\/' + upstream + '\/peers'<br \/>\nresponse = sendRequest(url)<br \/>\nnginxstats = json.loads(response.content) # Convert JSON to a list of peer stats<br \/>\nid = \"\"<br \/>\nstate = \"\"<br \/>\nserverFound = False<br \/>\nfor stats in nginxstats:<br \/>\n    if stats['server'] == 
server:<br \/>\n        serverFound = True<br \/>\n        id = stats['id']<br \/>\n        state = stats['state']<br \/>\n        # The number of active connections to this server<br \/>\n        activeConns = stats['active']<br \/>\n        # The last time a request was sent to this server, converted to seconds<br \/>\n        lastSelected = stats['selected'] \/ 1000<br \/>\n        break<br \/>\nif not serverFound:<br \/>\n    print \"Server %s not found in upstream group %s\" % (server, upstream)<br \/>\n    sys.exit(1)<br \/>\nif state == 'down':<br \/>\n    print \"The server is already marked as down\"<br \/>\n    sys.exit(0)<br \/>\nelif state == 'unhealthy' or state == 'unavailable':<br \/>\n    # The server is not healthy so it won't be receiving requests and can be<br \/>\n    # marked down<br \/>\n    url = confURL + '?upstream=' + upstream + '&amp;id=' + str(id) + '&amp;down='<br \/>\n    response = sendRequest(url)<br \/>\n    print \"The server was unhealthy or unavailable and has been marked down\"<br \/>\n    sys.exit(0)<br \/>\nif state == 'up':<br \/>\n    print \"Set server to drain\"<br \/>\n    url = confURL + '?upstream=' + upstream + '&amp;id=' + str(id) + '&amp;drain='<br \/>\n    response = sendRequest(url)<\/p>\n<p>startTime = int(time.time())<br \/>\nwhile True: # Loop until the server is marked down<br \/>\n    now = int(time.time())<br \/>\n    totalTime = now - startTime<br \/>\n    if totalTime &gt;= maxTime:<br \/>\n        print \"Max time has expired. Mark server as down\"<br \/>\n        url = confURL + '?upstream=' + upstream + '&amp;id=' + str(id) + '&amp;down='<br \/>\n        response = sendRequest(url)<br \/>\n        break<br \/>\n    idleTime = now - lastSelected<br \/>\n    if idleTime &gt;= maxIdleTime:<br \/>\n        if activeConns == 0:<br \/>\n            print \"Idle time has expired. Mark server as down\"<br \/>\n            url = confURL + '?upstream=' + upstream + '&amp;id=' + str(id) + '&amp;down='<br \/>\n            response = sendRequest(url)<br \/>\n            break<br \/>\n        else:<br \/>\n            print (\"Idle time has expired but there are still active \"<br \/>\n                   \"connections. %d of %d max seconds elapsed\") % (totalTime, maxTime)<br \/>\n    else:<br \/>\n        print \"Server idle for %d seconds. %d of %d max seconds elapsed\" % (idleTime, totalTime, maxTime)<br \/>\n    # Refresh this server's stats from the status API<br \/>\n    url = statusURL + '\/upstreams\/' + upstream + '\/peers\/' + str(id)<br \/>\n    response = sendRequest(url)<br \/>\n    nginxstats = json.loads(response.content) # A single peer's stats<br \/>\n    lastSelected = nginxstats['selected'] \/ 1000<br \/>\n    activeConns = nginxstats['active']<br \/>\n    time.sleep(sleepInterval)<\/pre>\n<p><\/code><\/p>\n<p>Whether we use the program or verify manually that the server is idle, after it is marked <code>down<\/code> we proceed as in the previous section: take the server offline, do the upgrade, and mark it as <code>up<\/code> to return it to service.<\/p>\n<h2 id=\"server-health-checks\">Using Health Checks to Upgrade an Individual Server<\/h2>\n<p>Recall that we set up a health check with the <code>health_check<\/code> directive in the first <code>server<\/code> block we configured in <a href=\"#base-configuration\">Base Configuration for the Use Cases<\/a>. Now we use it to control server state. The health check succeeds if the server returns the file <strong>healthcheck.html<\/strong> with an HTTP <code>2xx<\/code> or <code>3xx<\/code> response code. <\/p>\n<pre><code class=\"config\"># In the first server block<br \/>\nlocation @healthcheck {<br \/>\n    internal;<br \/>\n    proxy_pass http:\/\/demoapp;<br \/>\n    health_check uri=\/healthcheck.html;<br \/>\n}<\/pre>\n<p><\/code><\/p>\n<p> When we want to take a server offline, we simply rename the file to <strong>fail&#8209;healthcheck.html<\/strong> and health checks fail. 
NGINX&nbsp;Plus stops sending any new requests to the server, but allows existing requests to complete (equivalent to the <code>down<\/code> state set with the API). After making the health check fail, we use the dashboard or the status API to monitor the server as we did <a href=\"#server-api\">when using the API<\/a> to mark the server <code>down<\/code>. We wait for connections to go to zero before taking the server offline to do the upgrade. When the server is ready to return to service, we rename the file back to <strong>healthcheck.html<\/strong> and health checks once again succeed. <\/p>\n<p>As previously mentioned, with health checks we can make use of the <a href=\"https:\/\/www.nginx.com\/resources\/admin-guide\/load-balancer\/#slow_start\">slow-start<\/a> feature if the server requires some warm-up time before it is ready to receive its full share of traffic. Here we modify the servers in the upstream group so that NGINX&nbsp;Plus ramps up traffic gradually during the 30&nbsp;seconds after they come up:<\/p>\n<pre><code class=\"config\"># In the HTTP context<br \/>\nupstream demoapp {<br \/>\n    zone demoapp 64k;<br \/>\n    server 172.16.210.81:80 <strong>slow_start=30s<\/strong>;<br \/>\n    server 172.16.210.82:80 <strong>slow_start=30s<\/strong>;<br \/>\n    sticky cookie srv_id expires=1h domain=.example.com path=\/;<br \/>\n}<\/pre>\n<p><\/code><\/p>\n<h2>Conclusion<\/h2>\n<p>NGINX&nbsp;Plus provides operations and DevOps engineers with several options for managing software and hardware upgrades on individual servers while continuing to provide a good customer experience by avoiding downtime. 
<\/p>\n<p>Check out the other two articles in this series:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.nginx.com\/blog\/nginx-plus-backend-upgrades-overview\">An overview of the upgrade methods<\/a><\/li>\n<li><a href=\"https:\/\/www.nginx.com\/blog\/nginx-plus-backend-upgrades-application-version\">Upgrading to a new version of an application<\/a> by switching traffic to completely different servers or upstream groups<\/li>\n<\/ul>\n<p>Try NGINX&nbsp;Plus out for yourself and see how it makes upgrades easier and more efficient&nbsp;&ndash;&nbsp;start a <a href=\"#free-trial\">30&#8209;day&nbsp;free&nbsp;trial<\/a> today or <a href=\"#contact-us\">contact&nbsp;us<\/a> for a live demo.<\/p>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/www.nginx.com\/blog\/nginx-plus-backend-upgrades-individual-servers\/\">Using NGINX&nbsp;Plus for Backend Upgrades with Zero Downtime, Part&nbsp;2&nbsp;&ndash;&nbsp;Individual Servers<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/www.nginx.com\">NGINX<\/a>.<\/p>\n<p>Source: <a href=\"https:\/\/www.nginx.com\/blog\/nginx-plus-backend-upgrades-individual-servers\/\" target=\"_blank\">Using NGINX\u00a0Plus for Backend Upgrades with Zero Downtime, Part\u00a02\u00a0\u2013\u00a0Individual Servers<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<div class=\"mh-excerpt\"><p>Using NGINX\u00a0Plus for Backend Upgrades with Zero Downtime, Part\u00a02\u00a0\u2013\u00a0Individual Servers This is the second of three articles in our series about using NGINX&nbsp;Plus to upgrade backend servers with zero downtime. In the first article, we describe the two NGINX&nbsp;Plus features you can use for backend upgrades with zero downtime&nbsp;&ndash;&nbsp;the on-the-fly reconfiguration API and application health checks&nbsp;&ndash;&nbsp;and discuss the advantages of each method. 
In this second article, we explore use cases around upgrading the software or hardware on an individual server, which is one of the most common reasons to take servers offline. We could just take the server offline with no preparation, but that kills all the current client connections, making for a bad user experience. What we want is to stop sending any new requests or connections to the server, while letting it finish off any outstanding work. Then <a class=\"mh-excerpt-more\" href=\"https:\/\/jirak.net\/wp\/using-nginx-plus-for-backend-upgrades-with-zero-downtime-part-2-individual-servers\/\" title=\"Using NGINX\u00a0Plus for Backend Upgrades with Zero Downtime, Part\u00a02\u00a0\u2013\u00a0Individual Servers\">[ more&#8230; ]<\/a><\/p>\n<\/div>","protected":false},"author":1,"featured_media":8193,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[169],"tags":[652],"class_list":["post-8192","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news","tag-nginx"],"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts\/8192","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/comments?post=8192"}],"version-history":[{"count":1,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts\/8192\/revisions"}],"predecessor-version":[{"id":8194,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts\/8192\/revisions\/8194"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/media\/8193"}],"wp:attachment":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/media?parent=8192"}],"wp:ter
m":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/categories?post=8192"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/tags?post=8192"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}