2021-01-08 | 20 minutes reading

How to set up and secure a web server

A web server is one of the most basic services you can self-host. It is very simple to install and reasonably simple to configure for basic use. It is not that hard to set up for more robust usage either; the hardest part is running it in a secure way. That is also the reason why this episode is a bit longer than usual.

Previous articles of this series can be found here: episode 1, episode 2, episode 3 and episode 4.

Installation

In episode 4 we talked about apache, nginx and also some other, less common web server implementations. Today we will focus on nginx. Nginx, together with apache2, holds over two thirds of the market share, but over the last months and years apache has been losing its position while nginx is still on the rise. That is also the reason why I will focus on nginx today. All of the topics I cover in this article apply to Apache2 too; just google the exact syntax for each step.

Basic installation is as simple as 'sudo apt install nginx' in the case of Debian/Ubuntu, and comparably simple in other distros too. Your web server should now be up and running, serving its default welcome page when you type localhost into your browser of choice. Now, there are three important places to look at:

/var/www/

First is the /var/www/ directory, which is the default location for the web content that the server will serve. There you can find the default index.html page that was loaded in the browser. If you want to host a site, just create a new directory under /var/www and copy the site content there. Don't forget to apply the correct permissions: the web server runs under the www-data user, and this user needs to be able to access those files. You don't have to stick with the default directory; the web server can serve files from any location that has the right permissions. In some cases administrators even chroot the directories the web server hosts, so that if it is compromised, the attacker finds himself in a sandbox. But I am not going to cover that option here.
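Sketched as shell commands below. A scratch directory stands in for /var/www here so the sketch is runnable without root; on a real server you would use the real path and the chown that is left as a comment:

```shell
# scratch directory standing in for /var/www; use the real path on a server
WEBROOT="$(mktemp -d)/example.mizik.sk"
mkdir -p "$WEBROOT"
printf '<h1>hello</h1>\n' > "$WEBROOT/index.html"

# on a real host, hand the files to the web server user:
# sudo chown -R www-data:www-data "$WEBROOT"

# readable (and, for directories, enterable) by everyone, writable by owner only
chmod -R u=rwX,go=rX "$WEBROOT"
ls -l "$WEBROOT"
```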

/etc/nginx/sites-available/

This is the directory for virtual host configuration files. Best practice is to have a separate configuration file for each web page (or web service). The web server will then run each of them as a separate virtual server. Check the default configuration file to see what it looks like; in most distros it is full of comments. Check also the /etc/nginx/sites-enabled directory. You will see a symlink to the file in sites-available. That is because nginx serves only those configs that are enabled by creating a symlink in this directory. When you add a new symlink or remove one, you need to reload the web server service (service nginx reload), or use the old non-systemd way on some distros: /etc/init.d/nginx reload.
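Enabling a site then boils down to something like this (assuming a config file named 'example' and a systemd-based distro):

```shell
sudo ln -s /etc/nginx/sites-available/example /etc/nginx/sites-enabled/example
# always syntax-check the configuration before reloading
sudo nginx -t
sudo systemctl reload nginx    # or: service nginx reload
```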

/etc/nginx/nginx.conf

This is the main configuration file. It holds the global configuration of the web server itself and also applies to all configuration files in sites-available, though they can override these global settings by defining the same configuration option again in their own file. The main config imports everything in the sites-enabled directory at the end, which makes the relationship between them obvious, and also explains why only those configs from sites-available that are linked into sites-enabled are actively used.

Default config for a web page

Ok! Let's take a look at a basic virtual server configuration file now. Create it as /etc/nginx/sites-available/example :

server {
  listen          80;
  server_name     example.mizik.sk;
  charset         utf-8;
  root            /var/www/example.mizik.sk;
  index           index.html;

  location / {
    try_files $uri $uri/ =404;
  }
}

So what do we have here? We define a server that listens on port 80 (plain HTTP) and reacts only if the requested host is example.mizik.sk. The default charset will be UTF-8, the index file is called index.html (it needs to be defined because it could also be a PHP file or another file type) and the root directory is /var/www/example.mizik.sk. Inside the server section we can define multiple location sections. Configuration options declared inside a location section apply only to that location and recursively down from there. In our case one location section is enough, and we don't declare much: only that nginx should first try to serve the requested path as a file, then as a directory, and otherwise return 404. If we symlink this new file into the sites-enabled directory and reload nginx, we will be able to access the page defined in the root clause using the hostname in the server_name clause. (Of course, there should be a valid DNS 'A' record that points example.mizik.sk to the IP of our server.) Our web is now up and running!
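For completeness, the 'A' record mentioned above could look like this in a zone file (203.0.113.10 is a placeholder from the documentation IP range, not a real address):

```text
example.mizik.sk.   IN  A   203.0.113.10
```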

Hardening

Now let's move to a tougher but important topic: securing the default installation and the web pages we are going to host.

HTTPS

All mainstream browsers now all but force web pages to have a valid SSL certificate; otherwise your page may be flagged as not secure. From time to time there is some discussion about hosting webs only over HTTPS and thereby effectively blocking old computers from accessing them, as they don't support the new TLS versions or lack the computing power to use them. I don't consider this a reason to host over plain HTTP; we are talking about less than 0.1% of clients. I am all for using things as long as possible, but this is not that situation. Almost everyone can afford a secondhand Raspberry Pi for a single-digit amount of dollars or euros, which will have no problem loading and rendering a web page in the latest mainstream browser.

So let's add the 's' to our HTTP, shall we? Nowadays there are multiple certificate issuers that offer a free and automated way to generate and renew a valid certificate for your domain(s). The best known is letsencrypt.org. Just follow the official documentation, install certbot and run the command, and you will have a new certificate for your domain(s) in a couple of minutes. Then you have to enable SSL in your virtual host config file under the server section and point to the newly generated cert files like so:
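The certbot part could look like this on a Debian/Ubuntu system (package names differ elsewhere; see the official documentation for your distro):

```shell
sudo apt install certbot python3-certbot-nginx
# obtain a certificate and let certbot adjust the nginx config itself:
sudo certbot --nginx -d example.mizik.sk
# or only obtain the files, without touching the nginx config:
sudo certbot certonly --webroot -w /var/www/example.mizik.sk -d example.mizik.sk
```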

listen                443 ssl;
ssl_certificate       /etc/letsencrypt/live/mizik.sk/fullchain.pem;
ssl_certificate_key   /etc/letsencrypt/live/mizik.sk/privkey.pem;

Note that on current nginx versions SSL is enabled directly on the listen directive (the standalone 'ssl on;' directive is deprecated), so the listen line above also replaces the old 'listen 80;' line. Reload the web server and try it; you should get a valid HTTPS connection in the browser. For backwards compatibility, it is good to keep handling port 80 as well and automatically upgrade the connection to HTTPS using a redirect. Just add another server section above the one you already have:

server {
  listen              80;
  server_name         example.mizik.sk;
  return              301 https://$host$request_uri;
}

Disable old SSL/TLS versions

HTTPS is useless if it doesn't deliver what it promises because of security issues that compromise the encryption. That is why we will disable insecure versions of SSL and TLS, which by default can still be used to negotiate an encrypted connection. Attackers can masquerade as clients that only support an old protocol or cipher, forcing the web server to fall back to a less secure, outdated version. We can reconfigure it to fail in those cases rather than obey. In /etc/nginx/nginx.conf set 'ssl_protocols TLSv1.2 TLSv1.3;' to disable SSLv3, TLSv1.0 and TLSv1.1. All current browsers and mobile devices support v1.3, so if you don't care about older (unsupported) browsers and mobile OSes you can stick with 1.3 only, but at the time of writing this article, 1.2 is still considered a viable TLS version.
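In /etc/nginx/nginx.conf the directive goes into the http block, so it applies to every virtual server at once (a sketch; the rest of the block is omitted):

```nginx
http {
    # reject SSLv3, TLSv1.0 and TLSv1.1; allow only modern TLS
    ssl_protocols TLSv1.2 TLSv1.3;

    # ... the rest of the http block stays as it was ...
}
```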

Disable weak cipher suites

We have restricted TLS versions to secure ones only; now we need to do the same for the ciphers used for the encryption itself. Add/replace these lines in your /etc/nginx/nginx.conf:

ssl_ciphers               ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384;
ssl_ecdh_curve            secp384r1;
ssl_prefer_server_ciphers on;

Enable OCSP stapling

Online Certificate Status Protocol (OCSP) was created as an alternative to the Certificate Revocation List (CRL) protocol. Both protocols are used to check whether an SSL certificate has been revoked. OCSP stapling enhances the OCSP protocol by letting the hosting site be more proactive in improving the client (browsing) experience: it allows the certificate presenter (i.e. the web server) to query the OCSP responder directly and then cache the response. OCSP stapling addresses a privacy concern with OCSP, because the CA no longer receives the revocation requests directly from the client (browser). It also addresses concerns about OCSP SSL negotiation delays by removing the need for a separate network connection to a CA's responders. To turn on stapling, just add/update these two lines in /etc/nginx/nginx.conf:

ssl_stapling              on;
ssl_stapling_verify       on;
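For ssl_stapling_verify to actually work, nginx typically also needs a DNS resolver to reach the OCSP responder and the CA chain to verify the response against; with letsencrypt the chain file is generated next to the certificate. A sketch (the resolver IPs are just an example):

```nginx
# DNS resolver nginx uses to look up the OCSP responder
resolver                1.1.1.1 8.8.8.8 valid=300s;
# CA chain used to verify the stapled OCSP response
ssl_trusted_certificate /etc/letsencrypt/live/mizik.sk/chain.pem;
```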

Create random and strong Diffie-Hellman

Diffie-Hellman is a key exchange mechanism that allows two parties who have not previously met to securely establish a key which they can use to secure their communications. Don't use the pregenerated DH group: it is only 1024 bits and shared by millions of other servers that kept the original value, which makes them an optimal target for precomputation and potential eavesdropping. We will generate a custom 4096-bit one using openssl:

openssl dhparam -out dhparams.pem 4096

then create a new directory 'ssl' in /etc/nginx and move the file there. Don't forget to set the correct rights: the pem file itself should be writable only by root. Then add/modify this line in /etc/nginx/nginx.conf:

ssl_dhparam              /etc/nginx/ssl/dhparams.pem;
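The directory preparation from the paragraph above might look like this (a sketch; run from wherever dhparams.pem was generated):

```shell
sudo mkdir -p /etc/nginx/ssl
sudo mv dhparams.pem /etc/nginx/ssl/
# owned by root, writable only by root (the parameters are not secret)
sudo chown root:root /etc/nginx/ssl/dhparams.pem
sudo chmod 644 /etc/nginx/ssl/dhparams.pem
```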

Fight certificate mis-issuance using CAA

By using a CAA DNS record you are letting the world (certificate authorities in particular) know who should issue your domain's SSL/TLS certificates. It prevents mis-issuance, where an attacker would somehow manage to get a certificate for your domain signed by a trusted certificate authority. By setting CAA you restrict issuance to the specific CA you use. For our example, with letsencrypt as the issuer, the record would look like this:

example.mizik.sk.  CAA 0 issue "letsencrypt.org"

Many DNS admin web interfaces don't yet provide the ability to set a CAA record, because it is a relatively young specification (2017), but often if you ask the provider's support directly, they will set it for you.

Don't show web server version in responses

Every software has bugs, and a web server by default presents itself with its name and version. Based on this version, an attacker can find out which vulnerabilities apply to it, so let's stop sending the version altogether by adding 'server_tokens off;' in /etc/nginx/nginx.conf.

Fight buffer overflow attacks

There are some indications that by reducing client buffer and body sizes it becomes much harder to exploit a potential buffer overflow bug in the web server, simply because the attacker can send less data in a request. If you upload bigger data through forms, or if you proxy some service, these values won't suffice. But the general rule is: start with the most restrictive policy, then loosen it if necessary. So let's add/modify another four lines in /etc/nginx/nginx.conf.

client_body_buffer_size       1K;
client_header_buffer_size     1k;
client_max_body_size          1k;
large_client_header_buffers 2 1k;

Fight MITM attacks using HSTS

HTTP Strict Transport Security (HSTS) is a policy that a web server announces to browsers through a response header: the Strict-Transport-Security field. It forces all connections to the site over HTTPS encryption, disregarding any script's attempt to load a resource from that domain over plain HTTP. By setting the add_header parameter in the server section of our web configuration in sites-available, the web server will send this header with every response it makes:

add_header "Strict-Transport-Security" "max-age=31536000; includeSubDomains; preload";

Fight XSS attacks

The HTTP X-XSS-Protection response header is a feature of Internet Explorer, Chrome and Safari that stops pages from loading when they detect reflected cross-site scripting (XSS) attacks. Although these protections are largely unnecessary in modern browsers when sites implement a strong Content-Security-Policy that disables the use of inline JavaScript ('unsafe-inline'), they can still protect users of older web browsers that don't yet support CSP. We will talk about CSP later. For now, just add another automatic response header to your web page configuration, as in the case of HSTS:

add_header "X-XSS-Protection" "1; mode=block";

Fight clickjacking attacks

The X-Frame-Options HTTP response header can be used to indicate whether or not a browser should be allowed to render a page in a <frame>, <iframe>, <embed> or <object>. Sites can use this to avoid clickjacking attacks by ensuring that their content is not embedded into other sites. So let's add a third automatic response header:

add_header "X-Frame-Options" "DENY";

Fight MIME-sniffing attacks

The X-Content-Type-Options response header is a marker used by the server to indicate that the MIME types advertised in the Content-Type headers should be followed and not changed. It is a way to opt out of MIME type sniffing, or, in other words, to say that the MIME types are deliberately configured. So let's add another automatic response header:

add_header "X-Content-Type-Options" "nosniff";

Disable unused HTTP methods

If you serve only a normal web page and are not proxying some kind of REST API, it is safe to restrict which HTTP methods clients can use in their requests. The setting below allows only GET, HEAD and POST and rejects everything else (such as DELETE) as a possible attack vector; the non-standard code 444 tells nginx to close the connection without sending any response. Add it to the server section of your web page configuration in sites-available.

if ($request_method !~ ^(GET|HEAD|POST)$) {
  return 444;
}

Setup CSP restrictions

Content Security Policy (CSP) is an added layer of security that helps to detect and mitigate certain types of attacks, including Cross Site Scripting (XSS) and data injection attacks. These attacks are used for everything from data theft to site defacement to distribution of malware. The implementation takes the form of a response HTTP header or a <meta> tag. In its value we can define where the browser may load specific page resources from: scripts, CSS, images and so on. The header below disables all 3rd party resources and allows only those hosted together with the HTML files. It also disables inline definitions of scripts and CSS, which are potentially insecure. This should be our go-to configuration. If you have a specific reason to allow some 3rd party resource, or an inline chunk of CSS or script, you can compute a hash of that inline chunk, or define a custom nonce, and add it to the header value. For more information, check developer.mozilla.org. In our case, let's try to stick with full restrictions:

add_header "Content-Security-Policy" "default-src 'self'; font-src 'self'; script-src 'self'; style-src 'self'; img-src 'self'; frame-ancestors 'self'; form-action 'self'; base-uri 'none'; upgrade-insecure-requests; block-all-mixed-content;";

Fight DoS attacks using Fail2ban

It is possible to mitigate DoS attacks too, by joining forces with fail2ban and the firewall. The implementation consists of three steps: first, configure the web server to log all suspicious requests to a specific log file; second, let fail2ban read this file; third, fail2ban uses a preconfigured hook (jail) to block the IP of the offending party using the firewall. There are already two nice articles with example configurations, so if you are interested, check this and this link.
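One common pattern (an illustrative sketch, not taken from the linked articles) is to let nginx's limit_req module log over-limit requests into the error log and then have a fail2ban jail ban the offenders:

```nginx
# /etc/nginx/nginx.conf, http block: allow each client IP 10 requests/second;
# requests exceeding the burst are rejected and logged to the error log
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
```

Inside the server section of the site you would then add 'limit_req zone=perip burst=20;', and the fail2ban jail would watch /var/log/nginx/error.log for the "limiting requests" lines that nginx writes when the limit is hit.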

Check your results using ssllabs

Finally, ssllabs.com is a really great tool to check whether our SSL setup and hardening were successful and correct. Definitely use it.

Summary

Last but not least, keeping everything up to date is crucial too, but we have that covered by the automated updates discussed in episode 3. So here we are, with the web server up and running, properly secured and prepared for adventures much bigger than hosting a couple of static sites. The hardening above is sufficient for most professional use cases.


tags: VPS, Linux, Self-host