{"id":30267,"date":"2019-04-03T02:27:29","date_gmt":"2019-04-02T17:27:29","guid":{"rendered":"https:\/\/jirak.net\/wp\/secure-distribution-of-ssl-private-keys-with-nginx\/"},"modified":"2019-04-03T02:34:20","modified_gmt":"2019-04-02T17:34:20","slug":"secure-distribution-of-ssl-private-keys-with-nginx","status":"publish","type":"post","link":"https:\/\/jirak.net\/wp\/secure-distribution-of-ssl-private-keys-with-nginx\/","title":{"rendered":"Secure Distribution of SSL Private Keys with NGINX"},"content":{"rendered":"<p>This blog post describes several methods for securely distributing the SSL private keys that NGINX uses when hosting SSL&#8209;encrypted websites. It explains:<\/p>\n<ul>\n<li>The <a href=\"#standard-config\">standard approach<\/a> for configuring SSL with NGINX, and the potential security limitations<\/li>\n<li>How to <a href=\"#encrypt-keys\">encrypt the keys<\/a> using passwords that are stored separately from the NGINX configuration<\/li>\n<li>How to <a href=\"#secure-distribution\">distribute the encryption passwords securely<\/a>, avoiding disk storage, and then revoke them when needed<\/li>\n<\/ul>\n<p>For many deployments, the standard approach is sufficient. The two more sophisticated approaches discussed in this post block other ways an attacker can obtain SSL private keys. 
We&#8217;ll also look at a couple more techniques in follow&#8209;up posts:<\/p>\n<ul>\n<li>Using third&#8209;party secret stores such as HashiCorp Vault to securely distribute passwords<\/li>\n<li>Automating the provisioning of certificates from Vault to NGINX&nbsp;Plus\u2019s <a target=\"_blank\" href=\"http:\/\/nginx.org\/en\/docs\/http\/ngx_http_keyval_module.html\" rel=\"noopener noreferrer\">key&#8209;value store<\/a>, so that private key material is never stored on disk<\/li>\n<\/ul>\n<p>The approaches presented in this post apply to users who need to manage their own keys and create their own secure key&#8209;distribution strategy. They are not necessary for users who are running NGINX in environments that already integrate with a secret store, such as <a target=\"_blank\" href=\"https:\/\/kubernetes.io\/docs\/concepts\/configuration\/secret\/\" rel=\"noopener noreferrer\">Kubernetes<\/a>.<\/p>\n<p>This post applies to both NGINX Open Source and NGINX&nbsp;Plus. For ease of reading, we&#8217;ll refer to NGINX throughout.<\/p>\n<h2>Why Protect the SSL Private Key?<\/h2>\n<p>SSL\/TLS is used to authenticate, encrypt, and verify the integrity of network transactions. Websites authenticate themselves using a <strong>public certificate<\/strong> signed by a Certificate Authority (CA), and demonstrate they own the certificate by performing calculations using the corresponding <strong>private key<\/strong> (which must be kept secret).<\/p>\n<p>If the private key is compromised (disclosed to another entity), there are two main risks.<\/p>\n<ul>\n<li><strong>Risk 1: Impersonation<\/strong>. An attacker who has the private key can intercept network traffic and then mount a <span>man-in-the-middle<\/span> (MITM) attack. This attack captures and decrypts all traffic, perhaps also modifying it, without clients or the website being aware.<\/li>\n<li><strong>Risk 2: Decryption<\/strong>. 
An attacker who has the private key and has recorded the network traffic is then able to decrypt the network traffic offline. Note that this attack cannot be used against connections that use a <a target=\"_blank\" href=\"https:\/\/en.wikipedia.org\/wiki\/Forward_secrecy\" rel=\"noopener noreferrer\">Perfect Forward Secrecy<\/a> (PFS) cipher.<\/li>\n<\/ul>\n<p>If the private key is compromised, your only recourse is to contact the CA and request that your certificate be revoked; you must then rely on clients to check and honor the revocation status.<\/p>\n<p>In addition, it is good practice to use certificates with short expiry times (for example, <a target=\"_blank\" href=\"https:\/\/letsencrypt.org\/2015\/11\/09\/why-90-days.html\" rel=\"noopener noreferrer\">Let&#8217;s&nbsp;Encrypt<\/a> certificates expire after 90 days). Shortly before a certificate expires, you need to generate a new private key and obtain a new certificate from the CA. This reduces your exposure in the event the private key is compromised.<\/p>\n<h2>The NGINX Security Boundary<\/h2>\n<p>Which people and processes can access SSL private keys in NGINX?  <\/p>\n<p>First of all, any user who gains <code>root<\/code> access to the server running NGINX is able to read and use all resources that NGINX itself uses. For example, there are known methods to extract the SSL private key from the memory of a running process. <\/p>\n<p>Therefore, no matter how the private key is stored and distributed, it\u2019s not possible to protect the private key from an attacker with <code>root<\/code> privileges on the host server.<\/p>\n<p>Next, any user who can modify and commit NGINX configuration can use that power in many ways \u2013 to open proxy access to internal services, to bypass authentication measures, etc. 
He or she can modify NGINX configuration to obtain <code>root<\/code> access (or equivalent) to the server, although tools like <a href=\"https:\/\/www.nginx.com\/blog\/using-nginx-plus-with-selinux\/\">SELinux<\/a> and <a target=\"_blank\" href=\"https:\/\/en.wikipedia.org\/wiki\/AppArmor\" rel=\"noopener noreferrer\">AppArmor<\/a> help mitigate that possibility.<\/p>\n<p>Therefore, it is generally not possible to protect the private key from an attacker who can modify and commit NGINX configuration.<\/p>\n<p>Fortunately, any competent organization has sound security processes to make it difficult for an attacker to gain <code>root<\/code> privileges or to modify NGINX configuration.<\/p>\n<p>However, there are two other ways that a less privileged attacker might obtain access to the private key:<\/p>\n<ul>\n<li>A user might have a legitimate reason to need to view the NGINX configuration, or might obtain access to a configuration database or backup. NGINX private keys are typically stored in the configuration.<\/li>\n<li>A user might obtain access to the filesystem of the NGINX server, perhaps through a hypervisor or system backup. 
Any data stored on the filesystem, including the private key material, is potentially accessible.<\/li>\n<\/ul>\n<p>The processes described in this document explain how to seal these two disclosure methods.<\/p>\n<h2 id=\"standard-config\">Standard NGINX Configuration<\/h2>\n<p>We begin by reviewing what a typical NGINX configuration with SSL\/TLS looks like:<\/p>\n<pre><code class=\"config\">server {\r\n    listen 443 ssl;\r\n\r\n    server_name a.dev0;\r\n\r\n    <strong>ssl_certificate         ssl\/a.dev0.crt;<\/strong>\r\n    <strong>ssl_certificate_key     ssl\/a.dev0.key;<\/strong>\r\n\r\n    location \/ {\r\n        return 200 \"Hello from service A\\n\";\r\n    }\r\n}<\/code><\/pre>\n<p>The SSL public certificate (<strong>a.dev0.crt<\/strong>) and private key (<strong>a.dev0.key<\/strong>) are stored in the filesystem, at <strong>\/etc\/nginx\/ssl\/<\/strong>. The private key is read only by the NGINX master process, which typically runs as <code>root<\/code>, so you can set the strictest possible access permissions on it:<\/p>\n<pre><code class=\"terminal\">root@web1:\/etc\/nginx\/ssl# <span style=\"color:#66ff99;font-weight: bold\">ls -l a.dev0.key<\/span>\r\n-r-------- 1 root root 1766 Aug 15 16:32 a.dev0.key<\/code><\/pre>\n<p>The private key must be available at all times; the NGINX master process reads it whenever the NGINX software starts, configuration is reloaded, or a syntax check is performed (<span><code>nginx<\/code> <code>-t<\/code><\/span>).<\/p>\n<p>For more information on configuring SSL\/TLS, see the <a target=\"_blank\" href=\"https:\/\/docs.nginx.com\/nginx\/admin-guide\/security-controls\/terminating-ssl-http\/\" rel=\"noopener noreferrer\">NGINX&nbsp;Plus Admin&nbsp;Guide<\/a>.<\/p>\n<h3>Security Implications of the Standard Configuration<\/h3>\n<p>As noted above, the SSL private key can be read by an attacker who gains <code>root<\/code> access to the running container, virtual machine, or server that is running the NGINX 
software.<\/p>\n<h2 id=\"encrypt-keys\">Encrypting SSL Private Keys<\/h2>\n<p>NGINX supports encrypted private keys, using secure algorithms such as AES256:<\/p>\n<pre><code class=\"terminal\">root@web1:\/etc\/nginx\/ssl# <span style=\"color:#66ff99;font-weight: bold\">mv a.dev0.key a.dev0.key.plain<\/span>\r\nroot@web1:\/etc\/nginx\/ssl# <span style=\"color:#66ff99;font-weight: bold\">openssl rsa -aes256 -in a.dev0.key.plain -out a.dev0.key<\/span>\r\nwriting RSA key\r\nEnter PEM pass phrase: <span style=\"color:#66ff99;font-weight: bold\"><em>secure password<\/em><\/span>\r\nVerifying - Enter PEM pass phrase: <span style=\"color:#66ff99;font-weight: bold\"><em>secure password again<\/em><\/span><\/code><\/pre>\n<p>When you then start NGINX, or reload or test NGINX configuration, NGINX requests the decryption password interactively:<\/p>\n<pre><code class=\"terminal\">root@web1:\/etc\/nginx# <span style=\"color:#66ff99;font-weight: bold\">nginx -t<\/span>\r\nEnter PEM pass phrase: <span style=\"color:#66ff99;font-weight: bold\"><em>secure password<\/em><\/span>\r\nnginx: the configuration file \/etc\/nginx\/nginx.conf syntax is ok\r\nnginx: configuration file \/etc\/nginx\/nginx.conf test is successful<\/code><\/pre>\n<h3>Using an SSL Password File<\/h3>\n<p>Entering passwords interactively is inconvenient and difficult to automate, but you can configure NGINX to use the passwords stored in a separate file named by the <a target=\"_blank\" href=\"https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_password_file\" rel=\"noopener noreferrer\"><code>ssl_password_file<\/code><\/a> directive. When NGINX needs to read a private key, it attempts to decrypt the key using each of the passwords in the file in turn. 
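The password file simply contains one plaintext password per line (the passwords shown here are placeholders):<\/p>\n<pre><code class=\"config\">password1\r\npassword2<\/code><\/pre>\n<p>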
If none of the passwords is valid, NGINX refuses to start.<\/p>\n<pre><code class=\"config\">ssl_password_file \/var\/lib\/nginx\/ssl_passwords.txt;<\/code><\/pre>\n<p>The <code>ssl_password_file<\/code> must be distributed separately from the configuration, and be readable only by the <code>root<\/code> user. You can regard it as an authorization token that is placed on trusted servers. NGINX can only decrypt the private keys when it is running on a server with the authorization token.<\/p>\n<h3>Security Implications of Encrypted Keys<\/h3>\n<p>This method reduces the attack surface by making the NGINX configuration alone useless to an attacker. The attacker must also obtain the contents of the <code>ssl_password_file<\/code>.<\/p>\n<p>If an attacker does gain <code>root<\/code> access to the filesystem where the <code>ssl_password_file<\/code> is stored (for example, from a backup or through the host system), he or she can read the file and use the passwords to decrypt SSL private keys.<\/p>\n<p>You can reduce this risk by storing the <code>ssl_password_file<\/code> on a RAM disk or <strong>tmpfs<\/strong>. This storage is generally less accessible to an external attacker (for example, it\u2019s cleared when the server is restarted) and can be excluded from system backups. You need to ensure that the password file is initialized on system boot.<\/p>\n<h2 id=\"secure-distribution\">Distributing SSL Password Lists More Securely<\/h2>\n<p>The process below describes a more secure way to distribute lists of SSL passwords, from a central distribution point. <\/p>\n<p>Whenever NGINX needs to decrypt an SSL key, it queries the central distribution point and uses the passwords without ever storing them on the local disk. 
To authenticate itself with the central password server, the NGINX instance uses a token, which you can revoke at any time to cut off access to the passwords.<\/p>\n<h3 id=\"pdp\">Creating a Central Password Distribution Point<\/h3>\n<p>Begin by creating a password distribution point (PDP). For this simple implementation, we&#8217;re using an HTTPS service to deliver the password list, authenticated by username and password:<\/p>\n<pre><code class=\"terminal\">$ <span style=\"color:#66ff99;font-weight: bold\">curl -u dev0:mypassword https:\/\/pdpserver.local\/ssl_passwords.txt<\/span>\r\npassword1\r\npassword2<\/code><\/pre>\n<p>You can then enable or revoke access by adding or removing authentication tokens at the PDP as needed. You can implement the password distribution server using a web server such as NGINX, and use whatever kind of authentication token is appropriate.<\/p>\n<p>Next, we need to set up NGINX to retrieve the passwords from the PDP. We start by creating a shell script called <strong>connector.sh<\/strong> with the following contents:<\/p>\n<pre><code class=\"config\">#!\/bin\/sh\r\n\r\n# Usage: connector.sh &lt;connector path&gt; &lt;credentials&gt; &lt;PDP URL&gt;\r\n\r\nCONNECTOR=$1\r\nCREDS=$2\r\nPDP_URL=$3\r\n\r\n# Remove any stale connector, then create the named pipe\r\n[ -e \"$CONNECTOR\" ] &amp;&amp; \/bin\/rm -f \"$CONNECTOR\"\r\n\r\nmkfifo \"$CONNECTOR\"; chmod 600 \"$CONNECTOR\"\r\n\r\n# Each time the pipe is read, fetch a fresh copy of the password list\r\n# (-k accepts the demo PDP's self-signed certificate)\r\nwhile true; do\r\n    curl -s -u \"$CREDS\" -k \"$PDP_URL\" -o \"$CONNECTOR\"\r\ndone<\/code><\/pre>\n<p>The script needs to run as a background process, invoked as follows:<\/p>\n<pre><code class=\"terminal\">root@web1:~# <span style=\"color:#66ff99;font-weight: bold\">.\/connector.sh \/var\/run\/nginx\/ssl_passwords \r\ndev0:mypassword https:\/\/pdpserver.local\/ssl_passwords.txt &amp;<\/span><\/code><\/pre>\n<p>The connector attaches to the specified local path (<strong>\/var\/run\/nginx\/ssl_passwords<\/strong>), and you use the <code>ssl_password_file<\/code> directive to configure NGINX to access that path:<\/p>\n<pre><code class=\"config\">ssl_password_file 
\/var\/run\/nginx\/ssl_passwords;<\/code><\/pre>\n<p>Test the connector by reading from the connector path:<\/p>\n<pre><code class=\"terminal\">root@web1:~# <span style=\"color:#66ff99;font-weight: bold\">cat \/var\/run\/nginx\/ssl_passwords<\/span>\r\npassword1\r\npassword2<\/code><\/pre>\n<p>Verify that NGINX can read the passwords and decrypt the SSL keys:<\/p>\n<pre><code class=\"terminal\">root@web1:~# <span style=\"color:#66ff99;font-weight: bold\">nginx -t<\/span>\r\nnginx: the configuration file \/etc\/nginx\/nginx.conf syntax is ok\r\nnginx: configuration file \/etc\/nginx\/nginx.conf test is successful<\/code><\/pre>\n<p>You can use the central PDP approach to securely distribute any resource that NGINX normally reads from disk, for example, individual private keys or other sensitive data.<\/p>\n<h3>Security Implications of a PDP<\/h3>\n<p>This solution has several benefits compared to storing SSL passwords on disk:<\/p>\n<ul>\n<li><strong>The SSL passwords are never stored on the server\u2019s filesystem<\/strong>, so an attacker who has access to the filesystem cannot access them directly.<\/li>\n<li><strong>Passwords are distributed from a central access point<\/strong>, making monitoring and auditing easier to perform.<\/li>\n<li><strong>Individual servers\u2019 access can be controlled centrally<\/strong>. For example, once a server is decommissioned, you revoke its access token.<\/li>\n<\/ul>\n<p>Note that a user who has access to the filesystem can potentially extract the credentials used to access the PDP. 
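<\/p>\n<p>As noted above, the PDP itself can be implemented with a web server such as NGINX. The following configuration is an illustrative sketch, not part of the setup above; the certificate paths, document root, and <strong>.htpasswd<\/strong> file are assumptions. It serves the password list over HTTPS, protected by HTTP basic authentication:<\/p>\n<pre><code class=\"config\">server {\r\n    listen 443 ssl;\r\n    server_name pdpserver.local;\r\n\r\n    ssl_certificate     \/etc\/nginx\/ssl\/pdpserver.crt;\r\n    ssl_certificate_key \/etc\/nginx\/ssl\/pdpserver.key;\r\n\r\n    location = \/ssl_passwords.txt {\r\n        # Each NGINX instance presents its own username:password;\r\n        # revoke a server by removing its .htpasswd entry\r\n        auth_basic           \"PDP\";\r\n        auth_basic_user_file \/etc\/nginx\/.htpasswd;\r\n        root                 \/var\/www\/pdp;\r\n    }\r\n}<\/code><\/pre>\n<p>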
It is important to revoke these credentials when they are no longer needed.<\/p>\n<h2>Summary<\/h2>\n<p>There are many ways to protect SSL private keys from disclosure, with increasing security and complexity.<\/p>\n<p>For the large majority of organizations, it is sufficient to restrict access to the environments running NGINX so that unauthorized users cannot gain <code>root<\/code> access and cannot look at NGINX configuration.<\/p>\n<p>For some environments, it might not be possible to fully restrict access to NGINX configuration, so an SSL password file can be used.<\/p>\n<p>In limited cases, organizations may wish to ensure that keys and passwords are never stored on disk. The <a href=\"#pdp\">password distribution point<\/a> process illustrates a proof of concept for this solution.<\/p>\n<p>The following blog posts in this series will show additional steps that can be taken:<\/p>\n<ol>\n<li>Using HashiCorp Vault as the password distribution point. Vault provides scalable, secure distribution of secrets with fine&#8209;grained access control.<\/li>\n<li>Using a hardware security module (HSM) to store private keys remotely, exposing an API that can be used on&#8209;demand to perform a key operation.<\/li>\n<li>Using NGINX&nbsp;Plus\u2019s key&#8209;value store to manage private keys without them ever touching a disk.<\/li>\n<\/ol>\n<p>The post <a rel=\"nofollow\" href=\"https:\/\/www.nginx.com\/blog\/secure-distribution-ssl-private-keys-nginx\/\">Secure Distribution of SSL Private Keys with NGINX<\/a> appeared first on <a rel=\"nofollow\" href=\"https:\/\/www.nginx.com\">NGINX<\/a>.<\/p>\n<p>Source: <a href=\"https:\/\/www.nginx.com\/blog\/secure-distribution-ssl-private-keys-nginx\/\" target=\"_blank\" rel=\"noopener noreferrer\">Secure Distribution of SSL Private Keys with NGINX<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<div class=\"mh-excerpt\"><p>Secure Distribution of SSL Private Keys with NGINX This blog post describes several methods for 
securely distributing the SSL private keys that NGINX uses when hosting SSL&#8209;encrypted websites. It explains: The standard approach for configuring SSL with NGINX, and the potential security limitations How to encrypt the keys using passwords that are stored separately from the NGINX configuration How to distribute the encryption passwords securely, avoiding disk storage, and then revoke them when needed For many deployments, the standard approach is sufficient. The two more sophisticated approaches discussed in this post block other ways an attacker can obtain SSL private keys. We&#8217;ll also look at a couple more techniques in follow&#8209;up posts: Using third&#8209;party secret stores such as Hashicorp Vault to securely distribute passwords Automating the provisioning of certificates from Vault to NGINX&nbsp;Plus\u2019s key&#8209;value store, so that private key material <a class=\"mh-excerpt-more\" href=\"https:\/\/jirak.net\/wp\/secure-distribution-of-ssl-private-keys-with-nginx\/\" title=\"Secure Distribution of SSL Private Keys with NGINX\">[ more&#8230; 
]<\/a><\/p>\n<\/div>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[169],"tags":[652],"class_list":["post-30267","post","type-post","status-publish","format-standard","hentry","category-news","tag-nginx"],"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts\/30267","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/comments?post=30267"}],"version-history":[{"count":1,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts\/30267\/revisions"}],"predecessor-version":[{"id":30268,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/posts\/30267\/revisions\/30268"}],"wp:attachment":[{"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/media?parent=30267"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/categories?post=30267"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jirak.net\/wp\/wp-json\/wp\/v2\/tags?post=30267"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}