{"id":21393,"date":"2017-12-22T03:53:02","date_gmt":"2017-12-21T18:53:02","guid":{"rendered":"https:\/\/jirak.net\/wp\/autoscaling-and-orchestration-with-nginx-plus-and-chef\/"},"modified":"2017-12-22T04:34:30","modified_gmt":"2017-12-21T19:34:30","slug":"autoscaling-and-orchestration-with-nginx-plus-and-chef","status":"publish","type":"post","link":"https:\/\/jirak.net\/wp\/autoscaling-and-orchestration-with-nginx-plus-and-chef\/","title":{"rendered":"Autoscaling and Orchestration with NGINX Plus and Chef"},"content":{"rendered":"<p>Autoscaling and Orchestration with NGINX Plus and Chef<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/jirak.net\/wp\/wp-content\/uploads\/2017\/12\/Autoscaling1-1024x512.png\" width=\"1024\" height=\"512\"><\/p>\n<h2>Introduction<\/h2>\n<p>There are many solutions for handling autoscaling in cloud environments, but they\u2019re usually dependent on the specific infrastructure of a given cloud provider. Leveraging the flexibility of NGINX&nbsp;Plus with the functionality of <a href=\"https:\/\/www.chef.io\/\" rel=\"noopener\" target=\"_blank\">Chef<\/a>, we can build an autoscaling system that can be used on most cloud providers.<\/p>\n<p>Chef has a tool, <a href=\"https:\/\/docs.chef.io\/knife.html\" rel=\"noopener\" target=\"_blank\">knife<\/a>, which you can use at the command line to act on objects such as cookbooks, nodes, data bags, and more. Knife plugins help you extend knife. So we use knife plugins to help abstract out functionality specific to one specific cloud, enabling knife commands to work the same way across clouds. <\/p>\n<h2>Requirements<\/h2>\n<p>For this setup, we\u2019ll be leveraging our NGINX Chef cookbook. The installation and a basic overview of this cookbook can be found <a href=\"https:\/\/github.com\/nginxinc\/NGINX-Demos\/tree\/master\/nginx-cookbook\" rel=\"noopener\" target=\"_blank\">here<\/a>. 
Also, we\u2019ll be using Hosted Chef to make switching between clouds more straightforward. This setup is currently configured to work with <a href=\"https:\/\/aws.amazon.com\/\" rel=\"noopener\" target=\"_blank\">AWS<\/a>, <a href=\"https:\/\/azure.microsoft.com\/en-us\/\" rel=\"noopener\" target=\"_blank\">Azure<\/a>, and <a href=\"https:\/\/www.openstack.org\/\" rel=\"noopener\" target=\"_blank\">OpenStack<\/a>. It\u2019s possible to extend it to cover all of the <a href=\"https:\/\/docs.chef.io\/plugin_knife.html\" rel=\"noopener\" target=\"_blank\">Knife cloud plug-ins<\/a>, but the others haven\u2019t been tested.<\/p>\n<h2>Basic Setup<\/h2>\n<p>This configuration heavily relies on role membership to look up information about the different nodes that are part of the cluster. You\u2019ll need three basic roles: one for the NGINX&nbsp;Plus servers, one for the autoscaler server, and one for the upstream application servers. The autoscaler server is a node that monitors the NGINX&nbsp;Plus status page, and makes the API calls to scale servers up or down based on NGINX&nbsp;Plus statistics. 
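<\/p>
<p>The autoscaler\u2019s decision rule can be sketched as a simple threshold check: compare the average number of active connections per upstream server against a floor and a ceiling. Here is a minimal sketch in Ruby; the method name and threshold values are assumptions for this example, not cookbook code (the real values live in the autoscaler script shown later):<\/p>

```ruby
# Illustrative sketch of the autoscaler's decision rule; the method name
# and thresholds are assumptions for this example, not cookbook code.
def scaling_action(active_conns, server_count,
                   min_conns: 10, max_conns: 20,
                   min_servers: 1, max_servers: 10)
  # With fewer servers than the minimum (including zero), always scale up.
  return :scale_up if server_count < min_servers

  conns_per_server = active_conns / server_count.to_f
  if conns_per_server > max_conns && server_count < max_servers
    :scale_up    # pool is too busy: boot another upstream node
  elsif conns_per_server < min_conns && server_count > min_servers
    :scale_down  # pool is idle: drain and delete a node
  else
    :hold
  end
end

puts scaling_action(50, 2)  # 25.0 conns per server > 20 -> scale_up
puts scaling_action(5, 3)   # ~1.7 conns per server < 10 -> scale_down
puts scaling_action(30, 2)  # 15.0 is within both bounds -> hold
```

<p>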
Here are the example roles:<\/p>\n<h3>NGINX Plus server role:<\/h3>\n<pre><code class=\"config\">name \"nginx_plus_autoscale\"<br \/>\ndescription \"An example role to install nginx plus\"<br \/>\nrun_list \"recipe[nginx]\",\"recipe[nginx::autoscale]\"<br \/>\ndefault_attributes \"nginx\" =&gt; { \"install_source\" =&gt; \"plus\",<br \/>\n                                \"plus_status_enable\" =&gt; true,<br \/>\n                                \"enable_upstream_conf\" =&gt; true,<br \/>\n                                \"plus_status_allowed_ips\" =&gt; ['104.245.19.144', '172.31.0.0\/16', '127.0.0.1'],<br \/>\n                                \"server_name\" =&gt; \"test.local\",<br \/>\n                                \"upstream\" =&gt; \"test\",<br \/>\n                                \"nginx_repo_key\" =&gt; \"-----BEGIN PRIVATE KEY-----nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCbYwum24BwEYDf4P4x0\/KZjkKN7\/EE\/gg0qAU3ebG5kY8gWb8NpQ2itj\/DfmwPAEnvI6In86c6YFokAZxeo6HbkKkeQKBgQDGQEHp2lCON9FLgRNMjtcp4S2VYjxdAMVinDLkIgVb9qgh6BvTDt5hRY\/Vhcx8xV70+BCnoMSzbvLWhZbpSrdmD9nOj1KkPcWn4ArSv6prlYItUwWbNtFLw\/E=n-----END PRIVATE KEY-----\",<br \/>\n  \"nginx_repo_crt\" =&gt; \"-----BEGIN CERTIFICATE-----nMIIDrDCCApSgAwIBAgICBs8wDQYJKoZIhvcNAQEFBQAwXjELMAkGA1UI2pLoSbonYiEvivb4Cg7POn+cQBwurcYUH\/jB9zLPPSwlqcUiG2hScuEeaBiEoK\/ixHIRuMV9nyp3xTi3b0ZKvOFjEZpBHB8WIdQVneTNRvaFLbiwznhiAe7D4uMaAEYqF96GTgX2XnbovinLlYPfdi7BhlXTI9u78+tqbo0YVsSBiDV49hcIA=n-----END CERTIFICATE-----\" }<\/pre>\n<p><\/code><\/p>\n<h3>Upstream application server role:<\/h3>\n<pre><code class=\"config\">name \"test-upstream\"<br \/>\ndescription \"An example role to install nginx plus hello demo\"<br \/>\nrun_list \"recipe[nginx::hello-demo]\"<br \/>\ndefault_attributes \"nginx\" =&gt; { \"application_port\" =&gt; \"80\"}<\/pre>\n<p><\/code><\/p>\n<h3>Autoscaler server role:<\/h3>\n<pre><code class=\"config\">name \"autoscaler\"<br \/>\ndescription \"An example role to install autoscaler script\"<br 
\/>\nrun_list \"recipe[nginx::autoscale_script]\"<br \/>\ndefault_attributes \"nginx\" =&gt; { \"server_name\" =&gt; \"test.local\",<br \/>\n\t\t\t\t\"upstream\" =&gt; \"test\",<br \/>\n\t\t\t\t\"cloud_provider\" =&gt; \"ec2\" }<\/pre>\n<p><\/code><\/p>\n<p>Here\u2019s a quick breakdown of the different attributes leveraged in the roles:<\/p>\n<ul>\n<li><code>install_source<\/code>&nbsp;&ndash; Tells the NGINX cookbook to install NGINX Plus instead of open source<\/li>\n<li><code>plus_status_enable<\/code>&nbsp;&ndash; Enables the NGINX&nbsp;Plus status page<\/li>\n<li><code>enable_upstream_conf<\/code>&nbsp;&ndash; Enables the dynamic reconfiguration API<\/li>\n<li><code>plus_status_allowed_ips<\/code>&nbsp;&ndash; List of IPs or IP ranges that are allowed to access the status page and reconfiguration API<\/li>\n<li><code>server_name<\/code>&nbsp;&ndash; Defines a server directive in the NGINX configuration <\/li>\n<li><code>upstream<\/code>&nbsp;&ndash; Defines an upstream group to be used with the <code>server_name<\/code> configuration above<\/li>\n<li><code>nginx_repo_key<\/code>&nbsp;&ndash; Defines a certificate key to be used to access the NGINX Plus repositories<\/li>\n<li><code>nginx_repo_crt<\/code>&nbsp;&ndash; Defines a certificate to be used to access the NGINX Plus repositories<\/li>\n<li><code>application_port<\/code>&nbsp;&ndash; The port that the upstream application servers will listen on<\/li>\n<li><code>cloud_provider<\/code>&nbsp;&ndash; Defines the cloud provider(ec2\/azure\/google\/openstack) to be used for the <code>autoscale_nginx<\/code> script\n<\/ul>\n<p>You\u2019ll also need to configure your <strong>knife.rb<\/strong> file to have the necessary credentials to access the different cloud providers you would like to leverage. 
Here\u2019s an example <strong>knife.rb<\/strong> with the different supported cloud provider details:<\/p>\n<pre><code class=\"config\">current_dir = File.dirname(__FILE__)<br \/>\nlog_level                :info<br \/>\nlog_location             STDOUT<br \/>\nnode_name                \"damiancurry\"<br \/>\nclient_key               \"#{current_dir}\/damiancurry.pem\"<br \/>\nchef_server_url          \"https:\/\/api.chef.io\/organizations\/nginx\"<br \/>\ncookbook_path            [\"#{current_dir}\/..\/cookbooks\"]<br \/>\n#AWS variables<br \/>\nknife[:aws_access_key_id] =<br \/>\nknife[:aws_secret_access_key] =<br \/>\n#azure variables<br \/>\nknife[:azure_tenant_id] =<br \/>\nknife[:azure_subscription_id] =<br \/>\nknife[:azure_client_id] =<br \/>\nknife[:azure_client_secret] =<br \/>\n#openstack variables<br \/>\nknife[:openstack_auth_url] =<br \/>\nknife[:openstack_username] =<br \/>\nknife[:openstack_password] =<br \/>\nknife[:openstack_tenant] =<br \/>\nknife[:openstack_image] =<br \/>\nknife[:openstack_ssh_key_id] = \"demo_key\"<\/pre>\n<p><\/code><\/p>\n<p>Now we can check out the few scripts that make the autoscaling happen. 
First is the script that runs on the NGINX&nbsp;Plus nodes to watch for changes in the set of nodes that are online:<\/p>\n<pre class=\"scrollable jq_custom_scroll_dark\"><code class=\"config\">#!\/bin\/bash<br \/>\nNGINX_NODES=\"$(mktemp)\"<br \/>\n\/usr\/bin\/curl -s \"http:\/\/localhost:\/upstream_conf?upstream=\"| \/usr\/bin\/awk '{print $2}' | \/bin\/sed -r 's\/;\/\/g' | \/usr\/bin\/sort &gt; $NGINX_NODES<br \/>\nCONFIG_NODES=\"$(mktemp)\"<br \/>\n\/bin\/grep -E '[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}' \/etc\/nginx\/conf.d\/-upstream.conf | \/usr\/bin\/awk '{print $2}' | \/bin\/sed -r 's\/;\/\/g' | \/usr\/bin\/sort &gt; $CONFIG_NODES<br \/>\nDIFF_OUT=\"$(mktemp)\"<br \/>\n\/usr\/bin\/diff $CONFIG_NODES $NGINX_NODES &gt; $DIFF_OUT<br \/>\nADD_NODE=`\/usr\/bin\/diff ${CONFIG_NODES} ${NGINX_NODES} | \/bin\/grep \"&lt;\" | \/usr\/bin\/awk '{print $2}'`<br \/>\nDEL_NODE=`\/usr\/bin\/diff ${CONFIG_NODES} ${NGINX_NODES} | \/bin\/grep \"&gt;\" | \/usr\/bin\/awk '{print $2}'`<\/p>\n<p>for i in $ADD_NODE; do<br \/>\n    echo \"adding node ${i}\";<br \/>\n    \/usr\/bin\/curl -s \"http:\/\/localhost:\/upstream_conf?add=&amp;upstream=&amp;server=${i}&amp;max_fails=0\"<br \/>\ndone<br \/>\nfor i in $DEL_NODE; do<br \/>\n    echo \"removing node ${i}\";<br \/>\n    #NODE_ID=`\/usr\/bin\/curl -s \"http:\/\/localhost:\/upstream_conf?upstream=\" | \/bin\/grep ${i} | \/usr\/bin\/awk '{print $4}' | \/bin\/sed -r 's\/id=\/\/g'`<br \/>\n    NODE_ID=`\/usr\/bin\/curl -s \"http:\/\/localhost:\/upstream_conf?upstream=\" | \/bin\/grep ${i} | \/bin\/grep -oP 'id=\\K\\d+'`<br \/>\n    NODE_COUNT=`\/usr\/bin\/curl -s \"http:\/\/localhost:\/upstream_conf?upstream=\" | \/bin\/grep -n ${i} | \/bin\/grep -oP '\\d+:server' | \/bin\/sed -r 's\/:server\/\/g'`<br \/>\n    JSON_NODE_NUM=$(expr $NODE_COUNT - 1)<br \/>\n    NODE_CONNS=`\/usr\/bin\/curl -s \"http:\/\/localhost:\/status\" | \/usr\/bin\/jq \".upstreams..peers[${JSON_NODE_NUM}].active\"`<br \/>\n    NODE_STATE=`\/usr\/bin\/curl -s \"http:\/\/localhost:\/status\" | \/usr\/bin\/jq \".upstreams..peers[${JSON_NODE_NUM}].state\"`<br \/>\n    if [[ ${NODE_STATE} == 
'\"up\"' ]] &amp;&amp; [[ ${NODE_CONNS} == 0 ]]; then<br \/>\n\techo \"nodes is up with no active connections, removing ${i}\"<br \/>\n\t\/usr\/bin\/curl -s \"http:\/\/localhost:\/upstream_conf?remove=&amp;upstream=&amp;id=${NODE_ID}\"<br \/>\n    elif [[ ${NODE_STATE} == '\"draining\"' ]] &amp;&amp; [[ ${NODE_CONNS} == 0 ]]; then<br \/>\n    echo \"nodes is draining with no active connections, removing ${i}\"<br \/>\n    \/usr\/bin\/curl -s \"http:\/\/localhost:\/upstream_conf?remove=&amp;upstream=&amp;id=${NODE_ID}\"<br \/>\n    elif [[ ${NODE_STATE} == '\"down\"' ]]; then<br \/>\n\techo \"node state is down, removing ${i}\":<br \/>\n\t\/usr\/bin\/curl -s \"http:\/\/localhost:\/upstream_conf?remove=&amp;upstream=&amp;id=${NODE_ID}\"<br \/>\n    elif [[ ${NODE_STATE} == '\"unhealthy\"' ]]; then<br \/>\n\techo \"node state is down, removing ${i}\":<br \/>\n\t\/usr\/bin\/curl -s \"http:\/\/localhost:\/upstream_conf?remove=&amp;upstream=&amp;id=${NODE_ID}\"<br \/>\n    elif [[ ${NODE_STATE} == '\"up\"' ]] &amp;&amp; [[ ${NODE_CONNS} != 0 ]]; then<br \/>\n\techo \"node has active connections, draining connections on ${i}\"<br \/>\n    fi<br \/>\ndone<\/p>\n<p>rm $NGINX_NODES $CONFIG_NODES $DIFF_OUT<\/pre>\n<p><\/code><\/p>\n<p>The script is a little harder to read because it\u2019s still in the Chef template version. It\u2019s comparing the running config from the extended status page to the upstream config file managed by Chef. 
Here\u2019s the logic used to generate the upstream configs:<\/p>\n<pre><code class=\"config\">upstream_node_ips = []<br \/>\nupstream_role = (node[:nginx][:upstream]).to_s<br \/>\nsearch(:node, \"role:#{node[:nginx][:upstream]}-upstream\") do |nodes|<br \/>\n  host_ip = nodes['ipaddress']<br \/>\n  unless host_ip.to_s.strip.empty?<br \/>\n    host_port = nodes['nginx']['application_port']<br \/>\n    upstream_node_ips &lt;&lt; \"#{host_ip}:#{host_port}\"<br \/>\n  end<br \/>\nend<br \/>\ntemplate \"\/etc\/nginx\/conf.d\/#{node[:nginx][:upstream]}-upstream.conf\" do<br \/>\n  source 'upstreams.conf.erb'<br \/>\n  owner 'root'<br \/>\n  group node['root_group']<br \/>\n  mode 0644<br \/>\n  variables(<br \/>\n    hosts: upstream_node_ips<br \/>\n  )<br \/>\n  # notifies :reload, 'service[nginx]', :delayed<br \/>\n  notifies :run, 'execute[run_api_update_script]', :delayed<br \/>\nend<\/code><\/pre>\n<p>You can see that we\u2019re using the Chef search functionality to find nodes that are currently assigned to the upstream role you defined for this application. It then extracts each node\u2019s IP address and application port, and passes them to the template as an array. Here\u2019s the templated version of the upstream configuration:<\/p>\n<pre><code class=\"config\">upstream  {<br \/>\n       zone  64k;<br \/>\n       <br \/>\n       server ;<br \/>\n       <br \/>\n   }<\/code><\/pre>\n<p>Finally, we can take a look at the actual script that will handle the autoscaling:<\/p>\n<pre class=\"scrollable jq_custom_scroll_dark\"><code class=\"config\">require 'chef\/api_client'<br \/>\nrequire 'chef\/config'<br \/>\nrequire 'chef\/knife'<br \/>\nrequire 'chef\/node'<br \/>\nrequire 'chef\/search\/query'<br \/>\nrequire 'net\/http'<br \/>\nrequire 'json'<br \/>\nclass MyCLI<br \/>\n  include Mixlib::CLI<br \/>\nend<\/p>\n<p>Chef::Config.from_file(File.expand_path(\"~\/.chef\/knife.rb\"))<br \/>\nnginx_node = \"\"<br \/>\ncloud_provider = \"\"<br \/>\nnginx_upstream = \"\"<br \/>\nnginx_server_zone = \"\"<br \/>\nif cloud_provider == \"ec2\"<br \/>\n  create_args = [\"#{cloud_provider}\", 'server', 'create', '-r', \"role[#{nginx_upstream}-upstream]\", '-S', 'chef-demo', '-I', 'ami-93d80ff3', '--region', 'us-west-2', '-f', 'm1.medium', '-g', 'chef-demo', '--ssh-user', 'ubuntu', '-i', '~\/.ssh\/chef-demo.pem']<br \/>\nelsif cloud_provider == \"openstack\"<br \/>\n  create_args = [\"#{cloud_provider}\", 'server', 'create', '-i', '~\/.ssh\/demo_key.pem', '--ssh-user', 'ubuntu', '-f', 'demo_flavor', '--openstack-private-network', '-Z', 'nova', '-r', \"role[#{nginx_upstream}-upstream]\"]<br \/>\nelse<br \/>\n  puts \"Please specify a valid cloud provider\"<br \/>\n  exit<br \/>\nend<br \/>\nsleep_interval_in_seconds = 10<br \/>\nmin_server_count = 1<br \/>\nmax_server_count = 10<br \/>\nmin_conns = 10<br \/>\nmax_conns = 20<br \/>\nnginx_status_url = \"http:\/\/#{nginx_node}:8080\/status\"<\/p>\n<p>def get_nginx_active_servers(nginx_status_data, nginx_upstream)<br \/>\n  active_nodes = Array.new<br \/>\n  peers = nginx_status_data[\"upstreams\"][\"#{nginx_upstream}\"][\"peers\"]<br \/>\n  peers.each do |node|<br \/>\n    if node[\"state\"] == \"up\"<br 
\/>\n      active_nodes.push node[\"server\"]<br \/>\n    end<br \/>\n  end<br \/>\n  return active_nodes<br \/>\nend<\/p>\n<p>def get_nginx_server_conns(nginx_status_data, nginx_server_zone)<br \/>\n  return nginx_status_data[\"server_zones\"][\"#{nginx_server_zone}\"][\"processing\"]<br \/>\nend<\/p>\n<p>def add_backend_node(create_args)<br \/>\n  #search for existing hostnames to pick a new one<br \/>\n  query = Chef::Search::Query.new<br \/>\n  #nodes = query.search('node', 'role:#{nginx_upstream}-upstream').first rescue []<br \/>\n  nodes = query.search('node', 'role:-upstream').first rescue []<br \/>\n  hosts = Array.new<br \/>\n  used_num = Array.new<br \/>\n  nodes.each do |node|<br \/>\n    node_name = node.name<br \/>\n    hosts.push node_name<br \/>\n    num = node_name.scan(\/\\d+\/)<br \/>\n    used_num.push num<br \/>\n  end<br \/>\n  used_num.sort!<br \/>\n  fixed1 = used_num.flatten.collect do |num| num.to_i end<br \/>\n  fixed_num = fixed1.sort!<br \/>\n  firstnum = fixed_num.first<br \/>\n  lastnum = fixed_num.last<br \/>\n  firsthost = hosts.sort[0].to_i<br \/>\n  lasthost = hosts.sort[-1].to_i<\/p>\n<p>  unless firstnum.nil? 
&amp;&amp; lastnum.nil?<br \/>\n    total = (1..lastnum).to_a<br \/>\n    missingnum = total-fixed_num<br \/>\n  end<br \/>\n  newhostname = \"\"<br \/>\n  if missingnum.nil?<br \/>\n    puts \"No existing hosts\"<br \/>\n    fixnum = \"1\"<br \/>\n    newnum = fixnum.to_i<br \/>\n    newhostname = \"-app-#{newnum}\"<br \/>\n  elsif missingnum.any?<br \/>\n    puts \"Missing numbers are #{missingnum}\"<br \/>\n    newnum = missingnum.first<br \/>\n    newhostname = \"-app-#{newnum}\"<br \/>\n  else<br \/>\n    newnum = lastnum + 1<br \/>\n    puts \"new number is n\"<br \/>\n    newhostname = \"-app-#{newnum}\"<br \/>\n  end<br \/>\n  new_create_args = create_args + ['--node-name', newhostname]<br \/>\n  knife = Chef::Knife.new<br \/>\n  knife.options=MyCLI.options<br \/>\n  Chef::Knife.run(new_create_args, MyCLI.options)<br \/>\n  #sleep to wait for chef run<br \/>\n  1.upto(10) do |n|<br \/>\n    puts \".\"<br \/>\n    sleep 1 # second<br \/>\n  end<br \/>\nend<\/p>\n<p>def del_backend_node(nginx_status_data, nginx_node, active_nodes, cloud_provider, nginx_upstream)<br \/>\n  #lookup hostnames\/ips and pick a backend at random<br \/>\n  query = Chef::Search::Query.new<br \/>\n  #nodes = query.search('node', 'role:#{nginx_upstream}-upstream').first rescue []<br \/>\n  nodes = query.search('node', 'role:-upstream').first rescue []<br \/>\n  hosts = Array.new<br \/>\n  nodes.each do |node|<br \/>\n    node_name = node.name<br \/>\n    node_ip = node['ipaddress']<br \/>\n    if active_nodes.any? 
{ |val| \/#{node_ip}\/ =~ val }<br \/>\n      hosts.push \"#{node_name}:#{node_ip}\"<br \/>\n    end<br \/>\n  end<br \/>\n  del_node = hosts.sample<br \/>\n  node_name = del_node.rpartition(\":\").first<br \/>\n  node_ip = del_node.rpartition(\":\").last<br \/>\n  puts \"Removing #{node_name}\"<br \/>\n  nginx_url = \"http:\/\/#{nginx_node}:8080\/upstream_conf?upstream=#{nginx_upstream}\"<br \/>\n  response = Net::HTTP.get(URI(nginx_url))<br \/>\n  node_id = response.lines.grep(\/#{node_ip}\/).first.split('id=').last.chomp<br \/>\n  drain_url = \"http:\/\/#{nginx_node}:8080\/upstream_conf?upstream=#{nginx_upstream}&amp;id=#{node_id}&amp;drain=1\"<br \/>\n  Net::HTTP.get(URI(drain_url))<br \/>\n  sleep(5)<br \/>\n  knife = Chef::Knife.new<br \/>\n  knife.options=MyCLI.options<br \/>\n  #delete_args = [\"#{cloud_provider}\", 'server', 'delete', \"#{node_name}\", '--purge', '-y']<br \/>\n  #Chef::Knife.run(delete_args, MyCLI.options)<br \/>\n  delete_args = \"#{cloud_provider} server delete -N #{node_name} -P -y\"<br \/>\n  `knife #{delete_args}`<br \/>\nend<\/p>\n<p>last_conns_count = -1<\/p>\n<p>while true<br \/>\n  response = Net::HTTP.get(URI(nginx_status_url))<br \/>\n  nginx_status_data = JSON.parse(response)<\/p>\n<p>  active_nodes = get_nginx_active_servers(nginx_status_data, nginx_upstream)<br \/>\n  server_count = active_nodes.length<br \/>\n  current_conns = get_nginx_server_conns(nginx_status_data, nginx_server_zone)<\/p>\n<p>  conns_per_server = current_conns \/ server_count.to_f<\/p>\n<p>  puts \"Current connections = #{current_conns}\"<br \/>\n  puts \"connections per server = #{conns_per_server}\"<\/p>\n<p>  if server_count &lt; min_server_count or conns_per_server &gt; max_conns<br \/>\n    if server_count &lt; max_server_count<br \/>\n      puts \"Creating new #{cloud_provider} Instance\"<br \/>\n      add_backend_node(create_args)<br \/>\n    end<br \/>\n  elsif conns_per_server &lt; min_conns<br \/>\n    if server_count &gt; min_server_count<br \/>\n      del_backend_node(nginx_status_data, nginx_node, active_nodes, cloud_provider, nginx_upstream)<br \/>\n   
 end<\/p>\n<p>  end<\/p>\n<p>  last_conns_count = current_conns<br \/>\n  sleep(sleep_interval_in_seconds)<br \/>\nend<\/pre>\n<p><\/code><\/p>\n<p>The primary roles of this script are to monitor the NGINX&nbsp;Plus status page and to add or remove nodes based on the statistics it retrieves from the NGINX&nbsp;Plus node. In its current state, the script makes decisions based on the number of active connections divided by the number of active servers in the load-balanced pool. You can easily modify this to use any of the other statistics available from the NGINX&nbsp;Plus status page.<\/p>\n<h2>Deploying an Autoscaling Stack<\/h2>\n<p>First, we\u2019re going to start an autoscaler instance. We\u2019ll use the <code>knife-ec2<\/code> plug-in:<\/p>\n<pre><code class=\"terminal\">Damians-MacBook-Pro:chef-repo damiancurry$ <span style=\"color:#66ff99;font-weight: bold\">knife ec2 server create -r \"role[autoscaler]\" -g sg-1f285866 -I ami-93d80ff3 -f m1.medium -S chef-demo --region us-west-2  --node-name autoscaler-test --ssh-user ubuntu -i ~\/.ssh\/chef-demo.pem<\/span><br \/>\nInstance ID: i-0c359f3a443d18d64<br \/>\nFlavor: m1.medium<br \/>\nImage: ami-93d80ff3<br \/>\nRegion: us-west-2<br \/>\nAvailability Zone: us-west-2a<br \/>\nSecurity Group Ids: sg-1f285866<br \/>\nTags: Name: autoscaler-test<br \/>\nSSH Key: chef-demo<\/p>\n<p>Waiting for EC2 to create the instance......<br \/>\nPublic DNS Name: ec2-35-164-35-19.us-west-2.compute.amazonaws.com<br \/>\nPublic IP Address: 35.164.35.19<br \/>\nPrivate DNS Name: ip-172-31-27-162.us-west-2.compute.internal<br \/>\nPrivate IP Address: 172.31.27.162<\/p>\n<p>Waiting for sshd access to become available<br \/>\nSSH Target Address: ec2-35-164-35-19.us-west-2.compute.amazonaws.com(dns_name)<br \/>\ndone<\/p>\n<p>SSH Target Address: ec2-35-164-35-19.us-west-2.compute.amazonaws.com()<br \/>\nCreating new client for autoscaler-test<br \/>\nCreating new node for autoscaler-test<br \/>\nConnecting to 
ec2-35-164-35-19.us-west-2.compute.amazonaws.com<br \/>\nec2-35-164-35-19.us-west-2.compute.amazonaws.com -----&gt; Installing Chef Omnibus (-v 12)<br \/>\n\u2026<br \/>\nec2-35-164-35-19.us-west-2.compute.amazonaws.com Chef Client finished, 6\/6 resources updated in 13 seconds<\/pre>\n<p><\/code><\/p>\n<p>Now let\u2019s take a look at the script that will actually handle the autoscaling that runs on this node, <strong>\/usr\/bin\/autoscale_nginx.rb<\/strong>:<\/p>\n<pre class=\"scrollable jq_custom_scroll_dark\"><code class=\"config\">require 'chef\/api_client'<br \/>\nrequire 'chef\/config'<br \/>\nrequire 'chef\/knife'<br \/>\nrequire 'chef\/node'<br \/>\nrequire 'chef\/search\/query'<br \/>\nrequire 'net\/http'<br \/>\nrequire 'json'<br \/>\nclass MyCLI<br \/>\n  include Mixlib::CLI<br \/>\nend<\/p>\n<p>Chef::Config.from_file(File.expand_path(\"~\/.chef\/knife.rb\"))<br \/>\nnginx_node = \"[]\"<br \/>\ncloud_provider = \"ec2\"<br \/>\nnginx_upstream = \"test\"<br \/>\nnginx_server_zone = \"test.local\"<br \/>\nif cloud_provider == \"ec2\"<br \/>\n  create_args = [\"#{cloud_provider}\", 'server', 'create', '-r', \"role[#{nginx_upstream}-upstream]\", '-S', 'damiancurry', '-I', 'ami-93d80ff3', '--region', 'us-west-2', '-f', 'm1.medium', '--ssh-user', 'ubuntu', '-i', '~\/.ssh\/damiancurry.pem']<br \/>\nelsif cloud_provider == \"openstack\"<br \/>\n  create_args = [\"#{cloud_provider}\", 'server', 'create', '-i', '~\/.ssh\/demo_key.pem', '--ssh-user', 'ubuntu', '-f', 'demo_flavor', '--openstack-private-network', '-Z', 'nova', '-r', \"role[#{nginx_upstream}-upstream]\"]<br \/>\nelse<br \/>\n  puts \"Please specify a valid cloud provider\"<br \/>\n  exit<br \/>\nend<br \/>\nsleep_interval_in_seconds = 10<br \/>\nmin_server_count = 1<br \/>\nmax_server_count = 10<br \/>\nmin_conns = 10<br \/>\nmax_conns = 20<br \/>\nnginx_status_url = \"http:\/\/#{nginx_node}:8080\/status\"<\/p>\n<p>def get_nginx_active_servers(nginx_status_data, nginx_upstream)<br \/>\n  active_nodes 
= Array.new<br \/>\n  peers = nginx_status_data[\"upstreams\"][\"#{nginx_upstream}\"][\"peers\"]<br \/>\n  peers.each do |node|<br \/>\n    if node[\"state\"] == \"up\"<br \/>\n      active_nodes.push node[\"server\"]<br \/>\n    end<br \/>\n  end<br \/>\n  return active_nodes<br \/>\nend<\/p>\n<p>def get_nginx_server_conns(nginx_status_data, nginx_server_zone)<br \/>\n  return nginx_status_data[\"server_zones\"][\"#{nginx_server_zone}\"][\"processing\"]<br \/>\nend<\/p>\n<p>def add_backend_node(create_args)<br \/>\n  knife = Chef::Knife.new<br \/>\n  knife.options=MyCLI.options<br \/>\n  Chef::Knife.run(create_args, MyCLI.options)<br \/>\n  #sleep to wait for chef run<br \/>\n  1.upto(10) do |n|<br \/>\n    puts \".\"<br \/>\n    sleep 1 # second<br \/>\n  end<br \/>\nend<\/p>\n<p>def del_backend_node(nginx_status_data, nginx_node, active_nodes, cloud_provider, nginx_upstream)<br \/>\n  #lookup hostnames\/ips and pick a backend at random<br \/>\n  query = Chef::Search::Query.new<br \/>\n  #nodes = query.search('node', 'role:#{nginx_upstream}-upstream').first rescue []<br \/>\n  nodes = query.search('node', 'role:test-upstream').first rescue []<br \/>\n  hosts = Array.new<br \/>\n  nodes.each do |node|<br \/>\n    node_name = node.name<br \/>\n    node_ip = node['ipaddress']<br \/>\n    if active_nodes.any? 
{ |val| \/#{node_ip}\/ =~ val }<br \/>\n      hosts.push \"#{node_name}:#{node_ip}\"<br \/>\n    end<br \/>\n  end<br \/>\n  del_node = hosts.sample<br \/>\n  node_name = del_node.rpartition(\":\").first<br \/>\n  node_ip = del_node.rpartition(\":\").last<br \/>\n  puts \"Removing #{node_name}\"<br \/>\n  nginx_url = \"http:\/\/#{nginx_node}:8080\/upstream_conf?upstream=#{nginx_upstream}\"<br \/>\n  response = Net::HTTP.get(URI(nginx_url))<br \/>\n  node_id = response.lines.grep(\/#{node_ip}\/).first.split('id=').last.chomp<br \/>\n  drain_url = \"http:\/\/#{nginx_node}:8080\/upstream_conf?upstream=#{nginx_upstream}&amp;id=#{node_id}&amp;drain=1\"<br \/>\n  Net::HTTP.get(URI(drain_url))<br \/>\n  sleep(5)<br \/>\n  knife = Chef::Knife.new<br \/>\n  knife.options=MyCLI.options<br \/>\n  #delete_args = [\"#{cloud_provider}\", 'server', 'delete', \"#{node_name}\", '--purge', '-y']<br \/>\n  #Chef::Knife.run(delete_args, MyCLI.options)<br \/>\n  delete_args = \"#{cloud_provider} server delete #{node_name} -P -y\"<br \/>\n  `knife #{delete_args}`<br \/>\nend<\/p>\n<p>last_conns_count = -1<\/p>\n<p>while true<br \/>\n  response = Net::HTTP.get(URI(nginx_status_url))<br \/>\n  nginx_status_data = JSON.parse(response)<\/p>\n<p>  active_nodes = get_nginx_active_servers(nginx_status_data, nginx_upstream)<br \/>\n  server_count = active_nodes.length<br \/>\n  current_conns = get_nginx_server_conns(nginx_status_data, nginx_server_zone)<\/p>\n<p>  conns_per_server = current_conns \/ server_count.to_f<\/p>\n<p>  puts \"Current connections = #{current_conns}\"<br \/>\n  puts \"connections per server = #{conns_per_server}\"<\/p>\n<p>  if server_count &lt; min_server_count or conns_per_server &gt; max_conns<br \/>\n    if server_count &lt; max_server_count<br \/>\n      puts \"Creating new #{cloud_provider} Instance\"<br \/>\n      add_backend_node(create_args)<br \/>\n    end<br \/>\n  elsif conns_per_server &lt; min_conns<br \/>\n    if server_count &gt; min_server_count<br \/>\n      del_backend_node(nginx_status_data, nginx_node, active_nodes, cloud_provider, nginx_upstream)<br \/>\n    
end<\/p>\n<p>  end<\/p>\n<p>  last_conns_count = current_conns<br \/>\n  sleep(sleep_interval_in_seconds)<br \/>\nend<\/p>\n<p>root@ip-172-31-27-162:~#<\/pre>\n<p><\/code><\/p>\n<p>You can see that the <code>nginx_node<\/code> variable at the top of the script does not have an IP address associated with it yet. This is because we haven\u2019t created an NGINX&nbsp;Plus server yet, but Chef will update the script with that information once it has been created.<\/p>\n<p>Now we can start our NGINX&nbsp;Plus server:<\/p>\n<pre class=\"scrollable jq_custom_scroll\"><code class=\"terminal\">Damians-MacBook-Pro:default damiancurry$ <span style=\"color:#66ff99;font-weight: bold\">knife ec2 server create -r \"role[nginx_plus_autoscale]\" -g sg-1f285866 -I ami-93d80ff3 -f m1.medium -S chef-demo --region us-west-2 --ssh-user ubuntu -i ~\/.ssh\/chef-demo.pem --node-name nginx-autoscale<br \/>\nInstance ID: i-0856ee80f54c8f3e6<br \/>\nFlavor: m1.medium<br \/>\nImage: ami-93d80ff3<br \/>\nRegion: us-west-2<br \/>\nAvailability Zone: us-west-2b<br \/>\nSecurity Group Ids: sg-1f285866<br \/>\nTags: Name: nginx-autoscale<br \/>\nSSH Key: chef-demo<\/p>\n<p>Waiting for EC2 to create the instance.......<br \/>\nPublic DNS Name: ec2-35-165-171-46.us-west-2.compute.amazonaws.com<br \/>\nPublic IP Address: 35.165.171.46<br \/>\nPrivate DNS Name: ip-172-31-38-163.us-west-2.compute.internal<br \/>\nPrivate IP Address: 172.31.38.163<\/p>\n<p>Waiting for sshd access to become available<br \/>\nSSH Target Address: ec2-35-165-171-46.us-west-2.compute.amazonaws.com(dns_name)<br \/>\ndone<\/p>\n<p>SSH Target Address: ec2-35-165-171-46.us-west-2.compute.amazonaws.com()<br \/>\nCreating new client for nginx-autoscale<br \/>\nCreating new node for nginx-autoscale<br \/>\nConnecting to ec2-35-165-171-46.us-west-2.compute.amazonaws.com<br \/>\nec2-35-165-171-46.us-west-2.compute.amazonaws.com -----&gt; Installing Chef Omnibus (-v 12)<br \/>\n\u2026<br 
\/>\nec2-35-165-171-46.us-west-2.compute.amazonaws.com Chef Client finished, 24\/34 resources updated in 43 seconds<\/pre>\n<p><\/code><\/p>\n<p>Now we can check our autoscaler script on the autoscaler instance to make sure the script was updated with the new node&#8217;s IP address:<\/p>\n<pre><code class=\"config\">root@ip-172-31-27-162:~# <span style=\"color:#66ff99;font-weight: bold\">grep 'nginx_node =' \/usr\/bin\/autoscale_nginx.rb<\/span><br \/>\nnginx_node = \"172.31.38.163\"<br \/>\nroot@ip-172-31-27-162:~#<\/pre>\n<p><\/code><\/p>\n<p>You should now be able to hit the status page for your NGINX&nbsp;Plus node, and it should look like the image below:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/cdn-1.wp.nginx.com\/wp-content\/uploads\/2017\/10\/Autoscaling1-1024x512.png\" alt=\"\" width=\"1024\" height=\"512\" class=\"alignnone size-large wp-image-54399\" style=\"border:2px solid #666666;padding:2px;margin:2px\" \/><\/p>\n<p>You can also hit the NGINX&nbsp;Plus server on port&nbsp;80, and you\u2019ll see a <span><code>502<\/code> <code>Bad<\/code> <code>Gateway<\/code><\/span> error page because you haven\u2019t started any backend application servers yet.<\/p>\n<p>Before we fire up the autoscaler script and get some application nodes started, let\u2019s take a look at the script that will add these new nodes to the running NGINX config, <strong>\/tmp\/api_update.sh<\/strong>:<\/p>\n<pre class=\"scrollable jq_custom_scroll_dark\"><code class=\"config\">#!\/bin\/bash<br \/>\nNGINX_NODES=\"$(mktemp)\"<br \/>\n\/usr\/bin\/curl -s \"http:\/\/localhost:8080\/upstream_conf?upstream=test\"| \/usr\/bin\/awk '{print $2}' | \/bin\/sed -r 's\/;\/\/g' | \/usr\/bin\/sort &gt; $NGINX_NODES<br \/>\nCONFIG_NODES=\"$(mktemp)\"<br \/>\n\/bin\/grep -E '[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}' \/etc\/nginx\/conf.d\/test-upstream.conf | \/usr\/bin\/awk '{print $2}' | \/bin\/sed -r 's\/;\/\/g' | \/usr\/bin\/sort &gt; $CONFIG_NODES<br
\/>\nDIFF_OUT=\"$(mktemp)\"<br \/>\n\/usr\/bin\/diff $CONFIG_NODES $NGINX_NODES &gt; $DIFF_OUT<br \/>\nADD_NODE=`\/usr\/bin\/diff ${CONFIG_NODES} ${NGINX_NODES} | \/bin\/grep \"&lt;\" | \/usr\/bin\/awk '{print $2}'`<br \/>\nDEL_NODE=`\/usr\/bin\/diff ${CONFIG_NODES} ${NGINX_NODES} | \/bin\/grep \"&gt;\" | \/usr\/bin\/awk '{print $2}'`<\/p>\n<p>for i in $ADD_NODE; do<br \/>\n    echo \"adding node ${i}\";<br \/>\n    \/usr\/bin\/curl -s \"http:\/\/localhost:8080\/upstream_conf?add=&amp;upstream=test&amp;server=${i}&amp;max_fails=0\"<br \/>\ndone<br \/>\nfor i in $DEL_NODE; do<br \/>\n    echo \"removing node ${i}\";<br \/>\n    #NODE_ID=`\/usr\/bin\/curl -s \"http:\/\/localhost:8080\/upstream_conf?upstream=test\" | \/bin\/grep ${i} | \/usr\/bin\/awk '{print $4}' | \/bin\/sed -r 's\/id=\/\/g'`<br \/>\n    NODE_ID=`\/usr\/bin\/curl -s \"http:\/\/localhost:8080\/upstream_conf?upstream=test\" | \/bin\/grep ${i} | \/bin\/grep -oP 'id=\\K\\d+'`<br \/>\n    NODE_COUNT=`\/usr\/bin\/curl -s \"http:\/\/localhost:8080\/upstream_conf?upstream=test\" | \/bin\/grep -n ${i} | \/bin\/grep -oP '\\d+:server' | \/bin\/sed -r 's\/:server\/\/g'`<br \/>\n    JSON_NODE_NUM=$(expr $NODE_COUNT - 1)<br \/>\n    NODE_CONNS=`\/usr\/bin\/curl -s \"http:\/\/localhost:8080\/status\" | \/usr\/bin\/jq \".upstreams.test.peers[${JSON_NODE_NUM}].active\"`<br \/>\n    NODE_STATE=`\/usr\/bin\/curl -s \"http:\/\/localhost:8080\/status\" | \/usr\/bin\/jq \".upstreams.test.peers[${JSON_NODE_NUM}].state\"`<br \/>\n    if [[ ${NODE_STATE} == '\"up\"' ]] &amp;&amp; [[ ${NODE_CONNS} == 0 ]]; then<br \/>\n\techo \"node is up with no active connections, removing ${i}\"<br \/>\n\t\/usr\/bin\/curl -s \"http:\/\/localhost:8080\/upstream_conf?remove=&amp;upstream=test&amp;id=${NODE_ID}\"<br \/>\n    elif [[ ${NODE_STATE} == '\"draining\"' ]] &amp;&amp; [[ ${NODE_CONNS} == 0 ]]; then<br \/>\n    echo \"node is draining with no active connections, removing ${i}\"<br \/>\n    \/usr\/bin\/curl -s \"http:\/\/localhost:8080\/upstream_conf?remove=&amp;upstream=test&amp;id=${NODE_ID}\"<br \/>\n    elif [[ ${NODE_STATE} == '\"down\"' ]]; then<br
\/>\n\techo \"node state is down, removing ${i}\":<br \/>\n\t\/usr\/bin\/curl -s \"http:\/\/localhost:8080\/upstream_conf?remove=&amp;upstream=test&amp;id=${NODE_ID}\"<br \/>\n    elif [[ ${NODE_STATE} == '\"unhealthy\"' ]]; then<br \/>\n\techo \"node state is down, removing ${i}\":<br \/>\n\t\/usr\/bin\/curl -s \"http:\/\/localhost:8080\/upstream_conf?remove=&amp;upstream=test&amp;id=${NODE_ID}\"<br \/>\n    elif [[ ${NODE_STATE} == '\"up\"' ]] &amp;&amp; [[ ${NODE_CONNS} != 0 ]]; then<br \/>\n\techo \"node has active connections, draining connections on ${i}\"<br \/>\n    fi<br \/>\ndone<\/p>\n<p>rm $NGINX_NODES $CONFIG_NODES $DIFF_OUT<br \/>\nubuntu@ip-172-31-38-163:~$<\/pre>\n<p><\/code><\/p>\n<p>This script will be called every time Chef runs, and it\u2019ll compare the existing running config to the upstream config file defined for the autoscaling group. As you can see from the recipe snippet below, Chef manages the config file, but doesn\u2019t reload NGINX when it is updated. Instead, it calls the <code>apt_update<\/code> script:<\/p>\n<pre><code class=\"config\">template \"\/etc\/nginx\/conf.d\/#{node[:nginx][:upstream]}-upstream.conf\" do<br \/>\n  source 'upstreams.conf.erb'<br \/>\n  owner 'root'<br \/>\n  group node['root_group']<br \/>\n  mode 0644<br \/>\n  variables(<br \/>\n    hosts: upstream_node_ips<br \/>\n  )<br \/>\n  # notifies :reload, 'service[nginx]', :delayed<br \/>\n  notifies :run, 'execute[run_api_update_script]', :delayed<br \/>\nend<\/pre>\n<p><\/code><\/p>\n<p>Now we can start the autoscaler script and get some application servers brought online. 
Because we need to utilize the Ruby binary shipped with the Chef client, we have to fully qualify the path to the Ruby binary for the script to run properly:<\/p>\n<pre class=\"scrollable jq_custom_scroll\"><code class=\"terminal\">ubuntu@ip-172-31-27-162:~$ <span style=\"color:#66ff99;font-weight: bold\">\/opt\/chef\/embedded\/bin\/ruby \/usr\/bin\/autoscale_nginx.rb<\/span><br \/>\nCurrent connections = 0<br \/>\nconnections per server = NaN<br \/>\nCreating new ec2 Instance<br \/>\nNo existing hosts<br \/>\ntest-app-1<br \/>\nInstance ID: i-0c671d851a1c5e6d0<br \/>\nFlavor: m1.medium<br \/>\nImage: ami-93d80ff3<br \/>\nRegion: us-west-2<br \/>\nAvailability Zone: us-west-2b<br \/>\nSecurity Group Ids: chef-demo<br \/>\nTags: Name: test-app-1<br \/>\nSSH Key: chef-demo<\/p>\n<p>Waiting for EC2 to create the instance...<br \/>\n\u2026<br \/>\nec2-35-165-4-158.us-west-2.compute.amazonaws.com Chef Client finished, 16\/26 resources updated in 34 seconds<br \/>\n\u2026<br \/>\nPrivate IP Address: 172.31.40.186<br \/>\nEnvironment: _default<br \/>\nRun List: role[test-upstream]<br \/>\n.<br \/>\n.<br \/>\n.<br \/>\n.<br \/>\n.<br \/>\n.<br \/>\n.<br \/>\n.<br \/>\n.<br \/>\n.<br \/>\nCurrent connections = 0<br \/>\nconnections per server = 0.0<br \/>\nCurrent connections = 0<br \/>\nconnections per server = 0.0<\/pre>\n<p><\/code><\/p>\n<p>Now that you have one application node up, you can go back to your NGINX&nbsp;Plus node. 
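Under the hood, a launch like the one above is just a knife cloud-plugin invocation. A hypothetical Ruby sketch of how an autoscaler might assemble the command (the flag values mirror the run output above, and swapping the <code>cloud</code> value is the whole provider abstraction; the variable names are ours, not the script's):

```ruby
# Hypothetical: build a knife cloud-plugin command line. Swapping the
# `cloud` value (ec2 / azure / openstack) selects a different plugin
# while the rest of the invocation stays the same, which is what keeps
# the autoscaler provider-agnostic.
cloud = 'ec2'
name  = 'test-app-1'
cmd = [
  'knife', cloud, 'server', 'create',
  '--node-name', name,
  '--run-list', 'role[test-upstream]',
  '--flavor', 'm1.medium',
  '--image', 'ami-93d80ff3'
].join(' ')
# system(cmd) would hand the launch off to the knife plugin
puts cmd
```

The new instance bootstraps with the <code>role[test-upstream]</code> run list, so the next Chef run on the NGINX&nbsp;Plus node discovers it automatically.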
You should see this demo page now, instead of the <code>502<\/code> error page:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/cdn-1.wp.nginx.com\/wp-content\/uploads\/2017\/10\/Autoscaling2.png\" alt=\"\" width=\"810\" height=\"621\" class=\"alignnone size-full wp-image-54400\" style=\"border:2px solid #666666;padding:2px;margin:2px\" \/><\/p>\n<p>And if you go back to the status page, you now have an upstream defined:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/cdn-1.wp.nginx.com\/wp-content\/uploads\/2017\/10\/Autoscaling3.png\" alt=\"\" width=\"584\" height=\"516\" class=\"alignnone size-full wp-image-54401\" \/><\/p>\n<p>Next, we can use a tool like <code>wrk<\/code> to generate some load against the site:<\/p>\n<pre><code class=\"terminal\">Damians-MacBook-Pro:wrk damiancurry$ <span style=\"color:#66ff99;font-weight: bold\">.\/wrk -c 25 -t 2 -d 10m http:\/\/ec2-35-165-171-46.us-west-2.compute.amazonaws.com\/<\/span><br \/>\nRunning 10m test @ http:\/\/ec2-35-165-171-46.us-west-2.compute.amazonaws.com\/<br \/>\n  2 threads and 25 connections<\/code><\/pre>\n<p>And on the autoscaler node, you can see the script catch the increase in connections and start a new instance:<\/p>\n<pre><code class=\"terminal\">Current connections = 0<br \/>\nconnections per server = 0.0<br \/>\nCurrent connections = 24<br \/>\nconnections per server = 24.0<br \/>\nCreating new ec2 Instance<br \/>\nnew number is<br \/>\n2<br \/>\ntest-app-2<br \/>\nInstance ID: i-07186f5451c7d9e77<br \/>\nFlavor: m1.medium<br \/>\nImage: ami-93d80ff3<br \/>\nRegion: us-west-2<br \/>\nAvailability Zone: us-west-2b<br \/>\nSecurity Group Ids: chef-demo<br \/>\nTags: Name: test-app-2<br \/>\nSSH Key: chef-demo<\/p>\n<p>Waiting for EC2 to create the instance......<br \/>\n\u2026.<br \/>\nec2-35-166-214-136.us-west-2.compute.amazonaws.com Chef Client finished, 16\/26 resources updated in 35 seconds<br \/>\nCurrent connections = 24<br \/>\nconnections per server = 12.0<br \/>\nCurrent 
connections = 24<br \/>\nconnections per server = 12.0<\/pre>\n<p><\/code><\/p>\n<p>You should now be able to see two upstream nodes in your dashboard. The script will hold steady at this point, because it is configured to scale up only when the nodes average more than 20 active connections each. If you go back and refresh your browser pointed at port&nbsp;80 of your NGINX server, you should see the data change as it switches between the different backend nodes. If we stop generating traffic, you can watch the script take one of the nodes offline, as it is configured to always keep a minimum of one server running.<\/p>\n<pre><code class=\"terminal\">Current connections = 24<br \/>\nconnections per server = 12.0<br \/>\nCurrent connections = 0<br \/>\nconnections per server = 0.0<br \/>\nRemoving test-app-2<br \/>\nno instance id is specific, trying to retrieve it from node name<br \/>\nWARNING: Deleted server i-0dcf4740c1b34417f<br \/>\nWARNING: Deleted node test-app-2<br \/>\nWARNING: Deleted client test-app-2<br \/>\nCurrent connections = 0<br \/>\nconnections per server = 0.0<\/pre>\n<p><\/code><\/p>\n<h2>Conclusion<\/h2>\n<p>This is a rather basic script, but it should provide a starting point for building a customized autoscaling solution that fits your environment. 
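As one concrete starting point, the scaling thresholds described above (scale up past an average of 20 active connections per server, never drop below one server) can be sketched as follows; the method and symbol names are illustrative, not the script's actual code:

```ruby
MAX_CONNS_PER_SERVER = 20  # scale-up threshold used in this demo
MIN_SERVERS = 1            # the script always keeps one server running

# Decide on a scaling action from the totals the autoscaler reads off
# the NGINX Plus status API.
def scaling_action(active_conns, server_count)
  per_server = active_conns.to_f / server_count
  return :scale_up   if per_server > MAX_CONNS_PER_SERVER
  return :scale_down if server_count > MIN_SERVERS && active_conns.zero?
  :no_change
end

scaling_action(24, 1)  # => :scale_up   (24.0 connections per server)
scaling_action(24, 2)  # => :no_change  (12.0 connections per server)
scaling_action(0, 2)   # => :scale_down (idle extra capacity)
```

This matches the run output above: one server at 24 connections triggers a launch, two servers at 12.0 apiece hold steady, and an idle pair is trimmed back to the one-server minimum.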
And if you ever want to migrate to a different cloud provider, it\u2019s as simple as changing one attribute in the Chef configuration.<\/p>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/www.nginx.com\/blog\/autoscaling-and-orchestration-with-nginx-plus-and-chef\/\">Autoscaling and Orchestration with NGINX Plus and Chef<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/www.nginx.com\">NGINX<\/a>.<\/p>\n<p>Source: <a href=\"https:\/\/www.nginx.com\/blog\/autoscaling-and-orchestration-with-nginx-plus-and-chef\/\" target=\"_blank\">Autoscaling and Orchestration with NGINX Plus and Chef<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<div class=\"mh-excerpt\"><p>Autoscaling and Orchestration with NGINX Plus and Chef Introduction There are many solutions for handling autoscaling in cloud environments, but they\u2019re usually dependent on the specific infrastructure of a given cloud provider. Leveraging the flexibility of NGINX&nbsp;Plus with the functionality of Chef, we can build an autoscaling system that can be used on most cloud providers. Chef has a tool, knife, which you can use at the command line to act on objects such as cookbooks, nodes, data bags, and more. Knife plugins help you extend knife. So we use knife plugins to help abstract out functionality specific to one specific cloud, enabling knife commands to work the same way across clouds. Requirements For this setup, we\u2019ll be leveraging our NGINX Chef cookbook. The installation and a basic overview of this cookbook can be found here. 
Also, we\u2019ll be utilizing <a class=\"mh-excerpt-more\" href=\"https:\/\/jirak.net\/wp\/autoscaling-and-orchestration-with-nginx-plus-and-chef\/\" title=\"Autoscaling and Orchestration with NGINX Plus and Chef\">[ more&#8230; ]<\/a><\/p>\n<\/div>","protected":false},"author":1,"featured_media":21394,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[169],"tags":[652],"class_list":["post-21393","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news","tag-nginx"],"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts\/21393","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/comments?post=21393"}],"version-history":[{"count":1,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts\/21393\/revisions"}],"predecessor-version":[{"id":21395,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts\/21393\/revisions\/21395"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/media\/21394"}],"wp:attachment":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/media?parent=21393"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/categories?post=21393"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/tags?post=21393"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}