How to Set Up Multiple SSLs on One IP With Nginx

Reading Time: 6 minutes

With the shortage of available address space in IPv4, IPs are becoming increasingly difficult to come by, and in some cases, increasingly expensive. However, in most instances, this is not a drawback. Servers are perfectly capable of hosting multiple websites on one IP address, as they have for years.

But there was a time when using an SSL certificate to secure traffic to your site required a separate IPv4 address for each secured domain. This was not because SSLs were bound to IPs, or even to servers, but because the request for SSL certificate information did not specify which domain was being loaded, so the server could only respond with one certificate. A name mismatch caused an insecure certificate warning, and therefore a server owner was required to have unique IPs for all SSL hosts.

Luckily, IPv4 limitations have brought new technologies and usability to the forefront, most notably, Server Name Indication (SNI).

 

Why Do I Need an SSL?

Secure Sockets Layer (SSL) certificates allow two-way encrypted communication between a client and a server. This protects any transmitted data from prying eyes, including sensitive information like credit card numbers or passwords. SSLs are optionally signed by a well-known, third-party signing authority, such as GlobalSign. The most common use of such certificates is to secure web traffic over HTTPS.

Rather than displaying a positive indicator for sites served over HTTPS, modern browsers now show a negative indicator for any site that is not using an SSL. So, websites that don’t have an SSL raise a red flag right off the bat for new visitors. Sites that want to maintain their reputation are therefore all but required to get an SSL.

Luckily, it is so easy to get and install an SSL, even for free, that this is reduced to a basic formality. We’ll cover the specifics of this below.

 

What is SNI?

Server Name Indication is a browser and web server capability in which an HTTPS request includes the requested hostname, in a TLS extension field called server_name, at the start of the handshake. The server can then respond with the appropriate SSL certificate. This allows a single IP address to host hundreds or thousands of domains, each with their own SSL!

SNI technology is available on all modern browsers and web server software, so some 98+% of web users, according to W3, will be able to support it.

 

Pre-Flight Check

We’ll be working on a CentOS 7 server that uses Nginx and PHP-FPM to host websites without any control panel (cPanel, Plesk, etc.). This is commonly referred to as a “LEMP” stack, which substitutes Nginx for the Apache of the “LAMP” stack. These instructions will be similar on most other flavors of Linux, though the installation of Let’s Encrypt on Ubuntu 18.04 differs. I’ll include side-by-side instructions for both CentOS 7 and Ubuntu 18.04.

For the remainder of the instructions, we’ll assume you have Nginx installed and set up to host multiple websites, including firewall configuration to open necessary ports (80 and 443). We are connected over SSH to a shell on our server as root.

Note
If you have SSLs for each domain, but they are just not yet installed, you should use Step 3a to add them manually. If you do not have SSLs and would like to use the free Let’s Encrypt service to order and automatically configure them, you should use Step 3b.

 

Step 1: Enabling SNI in Nginx

Our first step is already complete! Modern repository versions of Nginx are compiled with OpenSSL support to serve SNI information by default. We can confirm this on the command line with:

nginx -V

This will output a bunch of text, but we are interested in just this line:

...
TLS SNI support enabled
...

If you do not have a line like this one, then Nginx will have to be re-compiled manually to include this support. This would be a very rare instance, such as an outdated version of Nginx or one already manually compiled from source with a different OpenSSL library. The Nginx version installed by the CentOS 7 EPEL repository (1.12.2) and the one included with Ubuntu 18.04 (1.14.0) will support SNI.
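As a quick sanity check, the relevant line can be pulled out with grep. On a real server you would pipe the actual `nginx -V` output; here the output is simulated with printf so the pipeline can be seen end to end:

```shell
# On a real server: nginx -V 2>&1 | grep 'TLS SNI support'
# Simulated here with sample output so the pipeline is visible:
printf 'nginx version: nginx/1.14.0\nTLS SNI support enabled\n' |
  grep -o 'TLS SNI support enabled'
```

If grep prints nothing (and exits non-zero), SNI support is missing from your build.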

Step 2: Configuring Nginx Virtual Hosts

Since you have already set up more than one domain in Nginx, you likely have server configuration blocks set up for each site in a separate file. Just in case you don’t, let’s first ensure that our domains are set up for non-SSL traffic. If they are, you can skip this step. We’ll be working on domain.com and example.com.

vim /etc/nginx/sites-available/domain.com

Note
If you don’t happen to have sites-enabled or sites-available folders, and you want to use them, you can create /etc/nginx/sites-available and /etc/nginx/sites-enabled with the mkdir command. Afterward, inside /etc/nginx/nginx.conf, add this line anywhere inside the main http{} block (we recommend putting it right after the include line that references conf.d):

include /etc/nginx/sites-enabled/*;

Otherwise, you can make your configurations in /etc/nginx/conf.d/*.conf.

At the very least, insert the following options, replacing the document root with the real path to your site files, and adding any other variables you require for your sites:

server {
    listen 80;
    server_name domain.com;
    root /var/www/domain.com;
    ...
}

A similar file should be set up for example.com, and any other domains you wish to host. Once these files are created, we can enable them with a symbolic link:

ln -s /etc/nginx/sites-available/domain.com /etc/nginx/sites-enabled/

ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/

Now, we reload Nginx:

systemctl reload nginx

This reloads the configuration files without restarting the application. We can confirm that the two we just made are loaded using:

nginx -T

You should see your server_name line for both domain.com and example.com.

Note
The listen line included in the server block above will allow the site to listen on any IP that is on the server. If you would like to specify an IP instead, you can use the IP:port format, like this:

server {
    listen 123.45.67.89:80;
    ...
}

Step 3a: Add Existing SSLs to Nginx Virtual Hosts

Now that we have valid running configurations, we can add the SSLs we have for these domains as new server blocks in Nginx. First, save your SSL certificate and the (private) key to a global folder on the server, with names that indicate the relevant domain. Let’s say that you chose the global folder of /etc/ssl/. Our names, in this case, will be /etc/ssl/domain.com.crt (which contains the certificate itself and any chain certificates from the signing authority), and /etc/ssl/domain.com.key, which contains the private key. Edit the configuration files we created:

vim /etc/nginx/sites-available/domain.com

Add a brand new server block underneath the end of the existing one (outside of the last curly brace) with the following information:

server {
    listen 443 ssl;
    server_name domain.com;
    root /var/www/domain.com;
    ssl_certificate /etc/ssl/domain.com.crt;
    ssl_certificate_key /etc/ssl/domain.com.key;
    ...
}

Note the change of the listening port to 443 (for HTTPS) and the addition of the ssl_certificate and ssl_certificate_key lines. Instead of rewriting the whole block, you could copy the original server block and then add these extra lines, while changing the listen port. Save this file and reload the Nginx configuration.
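For reference, a fuller sketch of an HTTPS server block might look like the following. The ssl_protocols and ssl_prefer_server_ciphers lines are illustrative hardening choices, not requirements from this tutorial; adjust them for your clients:

```nginx
server {
    listen 443 ssl;
    server_name domain.com;
    root /var/www/domain.com;

    ssl_certificate     /etc/ssl/domain.com.crt;
    ssl_certificate_key /etc/ssl/domain.com.key;

    # Illustrative hardening: restrict to modern TLS versions
    # and prefer the server's cipher ordering.
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
}
```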

systemctl reload nginx

We again confirm the change is in place using:

nginx -T

For some setups, you’ll now see two server_name lines each for domain.com and example.com: one in a block listening on port 80 and one in a block listening on port 443. If you do, you can skip to Step 4; otherwise, continue to the next step.

Step 3b: Install and Configure Let’s Encrypt

Let’s next set up the free SSL provider Let’s Encrypt to automatically sign certificates for all of the domains we just set up in Nginx. On Ubuntu 18.04, add the PPA and install the certificate scripts with apt-get:

add-apt-repository ppa:certbot/certbot

apt-get update

apt-get install certbot python-certbot-nginx

In CentOS 7, we install the EPEL repository and install the certificate helper from there.

yum install epel-release

yum install certbot python2-certbot-nginx

On both systems, we can now read the Nginx configuration and ask Certbot to issue us some certificates.

certbot --nginx

This will ask you some questions about which domains you would like to use (you can leave the option blank to select all domains) and whether you would like Nginx to redirect traffic to your new SSL (we would!). After it finishes its signing process, Nginx should automatically reload its configuration, but in case it doesn’t, reload it manually:

systemctl reload nginx

You can now check the running configuration with:

nginx -T

You should now see two server_name lines each for domain.com and example.com: one in a block listening on port 80 and one in a block listening on port 443.

Let’s Encrypt certificates are only valid for 90 days from issuance, so we want to ensure that they are automatically renewed. Edit the cron file for the root user by running:

crontab -e

The cron should look like this:

45 2 * * 3,6 certbot renew && systemctl reload nginx

Once you save this file, every Wednesday and Saturday at 2:45 AM, the certbot command will check for any needed renewals, automatically download and install the certs, and then reload the Nginx configuration.
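For reference, the five scheduling fields of that cron entry break down as follows (shown here as comments; the command itself is unchanged):

```
# minute hour day-of-month month day-of-week  command
# 45     2    *            *     3,6          (3 = Wednesday, 6 = Saturday)
45 2 * * 3,6 certbot renew && systemctl reload nginx
```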

Step 4: Verify Installation and Validity

We should now check the validity of our SSLs and ensure that browsers see the certificates properly. Visit https://sslcheck.liquidweb.com/ and type in your domain names to check the site’s SSL on your server. You should see four green checkmarks, indicating SSL protection.

We hope you’ve enjoyed our tutorial on how to install SSLs on multiple sites within one server. Liquid Web customers have access to our support team 24/7. We can help with ordering a signed SSL or a new server for an easy transfer over to Liquid Web.

Top 4 Lessons Learned Using Ubuntu

Reading Time: 5 minutes

When choosing a server operating system, there are a number of factors to weigh and choices to make. An often-discussed and referenced OS, Ubuntu, is a popular choice and offers great functionality with a vibrant and helpful community. However, if you’re unfamiliar with Ubuntu and have not worked with either the server or desktop versions, you may find that common tasks and functionality differ from the operating systems you’ve worked with before. I’ve been a system administrator running my own servers for a number of years, almost all of them on Ubuntu. Here are the top four lessons I’ve learned while running Ubuntu on my servers.

  1. Understanding the “server” vs. “desktop” model
  2. Deciphering the naming scheme/release schedule
  3. Figuring out the package manager
  4. Networking considerations on Ubuntu

 

Understanding “Server” vs. “Desktop” Model

Ubuntu comes in a number of images and released versions. One of the first choices to consider is whether you want the “desktop” or “server” release. When running an application or project on a server, the choice to make is the “server” image. You may be wondering what the difference is, and it comes down to two main things:

  • The “desktop” image has a graphical user interface (GUI), whereas the “server” image does not.
  • The “desktop” image ships default applications more suited to a user’s workstation, whereas the “server” image ships default applications more suited to a web server.

It is really that simple. Currently, the majority of the differences relate to the default applications offered when installing Ubuntu and whether or not you need a GUI. When I first started using Ubuntu, there were a number of differences between the two (such as a slightly tweaked Linux kernel on the “server” image designed to perform better on server-grade hardware), but today it is a much simpler decision.

The choice comes down to saving time for the individual by either adding or removing applications (also called “packages”) that they do or don’t need. At Liquid Web we offer “server” images since it’s best designed for operating on a web server. You do have the choice of the Ubuntu version, which brings me to my next lesson learned:

 

Deciphering the Naming Scheme/Release Schedule

Both Ubuntu and CentOS are based on what is called an “upstream” provider: another operating system that CentOS and Ubuntu tweak and re-publish. For Ubuntu, this upstream provider is Debian, whereas CentOS is based on Red Hat (which is in turn based on Fedora). It was confusing at first for me to try to understand what the names for Ubuntu releases meant: “Vivid Vervet”, “Xenial Xerus”, “Zesty Zapus”; but that’s just a cute “name” the developers give a particular version until it is released.

The company that oversees Ubuntu, called Canonical, has a regular and set release schedule for Ubuntu. This is a distinct difference from CentOS, which relies on Red Hat, which in turn relies on Fedora. Every six months a release of Ubuntu is published, and every fourth release (or once every two years) a “long-term support” or LTS release is published. The numbering they use for releases is straightforward: YY.MM, where YY is the two-digit year and MM is the month of the release. The latest release is 19.04, meaning it was released in April of 2019.

This was tricky for me to navigate initially, and it created headaches because different versions receive different levels of support and updates. For some people, this is not an issue: as a developer, you may want to build your application on the latest and greatest OS that takes advantage of new technologies, and receiving long-term support may not matter because you won’t be using the server for very long. Others, however, need to know that their OS will remain stable and keep receiving patches.

Which version do you choose? How do you know whether your OS will be stable and receive patches/updates? I ran into these issues when I accidentally created a server that lost support just a few months later; however, I learned an easy-to-use trick to ensure that in the future I choose an LTS release:

  • LTS releases start with an even number and end in 04
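That rule is mechanical enough to sketch as a tiny shell helper (is_lts is a hypothetical function name, not an Ubuntu tool):

```shell
# Hypothetical helper applying the rule above: LTS releases have an
# even two-digit year and an ".04" (April) month.
is_lts() {
  year=${1%%.*}
  month=${1#*.}
  [ "$month" = "04" ] && [ $((year % 2)) -eq 0 ]
}

is_lts 18.04 && echo "18.04 is an LTS release"
is_lts 19.04 || echo "19.04 is not an LTS release"
```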

Liquid Web offers a variety of Ubuntu releases within our Core-Managed and Self-Managed support tiers. All of the Ubuntu versions that we offer are .04 LTS releases, ensuring that you receive the longest possible lifespan and vital security updates for your OS.

 

Ubuntu’s Package Manager

Prior to working with Ubuntu, I had the most experience with CentOS. Moving to Ubuntu meant having to learn how to install packages since the CentOS package manager, yum, was no longer available. I discovered that the general process is the same with a few minor differences.

The first difference I encountered is that Ubuntu uses the apt-get command. This is part of the Apt package manager and follows a very similar syntax to the yum command for the most basic command: installing packages. To install a package I had to use:

apt-get install $package

Where $package is the name of the package. I didn’t always know the name of the package I wanted, though, so I assumed I could search much like I could with yum search $package. However, Apt uses a separate utility for searching: apt-cache search $package

If you forget and try to search with apt-get, you’ll get an error that you’ve run an invalid operation.

One similarity to yum that is super useful: the -y flag to bypass any prompts about installing the package. This was my favorite “trick” on CentOS to speed up installing packages and improve my efficiency. Knowing that it works the same on Ubuntu (apt-get install -y $package) was a major relief.

Of course, installing packages meant I had to add the repositories I wanted. This was a significant change, since I was used to the CentOS location of /etc/yum.repos.d/ for adding .repo files. With the Apt system on Ubuntu, I learned that I needed to add a .list file pointing to the repository in question in the /etc/apt/sources.list.d/ folder. Take Nginx, for example; it was quite easy to add once I realized what had to go where.
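For example, the official Nginx repository for Ubuntu 18.04 (codename bionic) can be added with a file like this (the filename nginx.list is a conventional choice, not a requirement):

```
# /etc/apt/sources.list.d/nginx.list
deb http://nginx.org/packages/ubuntu/ bionic nginx
deb-src http://nginx.org/packages/ubuntu/ bionic nginx
```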

 

From here it was as straightforward as installing Nginx with the apt-get install nginx command.

 

There was a time when adding a repository did not immediately work for me; however, it was quickly resolved by telling Apt that it needed to refresh/update the list of repos it uses with the apt-get update command.

Note
Don’t worry: the apt-get update command does not update any packages. It simply runs through the configured repositories and ensures that your system is aware of them, as well as the packages they offer.

 

Networking Considerations on Ubuntu

The last major lesson I want to talk about was one learned painfully. Networking on Ubuntu, by default, is handled from the /etc/network/ directory.

There is a flat file called interfaces, and it contains all the networking information for your host (configured IPs/gateways, DNS servers, etc.).
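A minimal interfaces file might look like the following sketch; the addresses are placeholders, and your real values will differ:

```
# /etc/network/interfaces (illustrative example; addresses are placeholders)
auto lo
iface lo inet loopback

# Primary interface
auto eth0
iface eth0 inet static
    address 123.45.67.89
    netmask 255.255.255.0
    gateway 123.45.67.1
    dns-nameservers 8.8.8.8 8.8.4.4

# Sub-interface carrying an additional IP
auto eth0:1
iface eth0:1 inet static
    address 123.45.67.90
    netmask 255.255.255.0
```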

Notice that both the primary eth0 interface and any sub-interfaces (such as eth0:1) are configured in the same file (/etc/network/interfaces). This caught me in a bad spot once when I made a configuration syntax mistake, restarted networking, and suddenly lost all access to my host. That interfaces file contained all the networking components for my host, and when I restarted networking while it contained a mistake, it prevented ANY networking from coming back up. The only way to correct the issue was to console into the server, correct the mistake, and restart networking again.

Though this is the default configuration for Ubuntu, there is now a way to split the configuration into individual files per interface so that a simple mistake does not break all of the networking: /etc/network/interfaces.d/

To properly use this folder and have individual configuration files for each interface, you need to modify the /etc/network/interfaces file to tell it to look in the interfaces.d/ folder. This is a simple change of adding one line:

source /etc/network/interfaces.d/*

This will tell the main interfaces configuration file to also review /etc/network/interfaces.d/ for any additional configuration files and use those as well. This would have saved me a lot of hassle had I known about it earlier.

Hopefully, you can use my mistakes and struggles to your advantage. Ubuntu is a great operating system, is supported by a fantastic community, and is covered by “the most helpful humans in hosting” at Liquid Web. If you have any questions about Ubuntu, our support, or how you might best use it in your environment, feel free to contact us at support@liquidweb.com or via phone at 1-800-580-4985.

 

 

How to Redirect URLs Using Nginx

Reading Time: 3 minutes

What is a Redirect?

A redirect is a web server function that sends traffic from one URL to another. Redirects are an important tool to have when the need arises. There are several different types of redirects, but the most common forms are temporary and permanent. In this article, we will provide some examples of redirecting through the vhost file, forcing a secure HTTPS connection, and redirecting to www and non-www, as well as explain the difference between temporary and permanent redirects.

Note
As this is an Nginx server, any .htaccess rules will not apply. If you’re using the other popular web server, Apache, you’ll find this article useful.

Common Methods for Redirects

Temporary redirects (response code: 302 Found) are helpful if a URL is temporarily being served from a different location. For example, these are helpful when performing maintenance and can redirect users to a maintenance page.

However, permanent redirects (response code: 301 Moved Permanently) inform the browser there was an old URL that it should forget and not attempt to access anymore. These are helpful when content has moved from one place to another.
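Besides the rewrite examples below, Nginx can issue either response code directly with the return directive; a quick sketch (the location paths are placeholders):

```nginx
# Temporary (302) redirect, e.g. while a section is under maintenance
location /shop {
    return 302 /maintenance.html;
}

# Permanent (301) redirect for content that has moved for good
location /oldpage {
    return 301 /newpage;
}
```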

 

How to Redirect

When it comes to Nginx, redirects are handled within a .conf file, typically named for the site it serves, such as /etc/nginx/sites-available/domain.com.conf. (The document root directory, where your site’s files live, is a separate location; it is often /var/www/html if you have one site on the server, or /var/www/domain.com if your server hosts multiple sites.) In the /etc/nginx/sites-available/ directory you’ll find the default file, which you can copy or append your redirects to, or you can create a new file named html.conf or domain.com.conf.

Note
If you choose to create a new file, be sure to update your symbolic links in /etc/nginx/sites-enabled/ with the command:

ln -s /etc/nginx/sites-available/domain.com.conf /etc/nginx/sites-enabled/domain.com.conf

The first example we’ll cover is redirection of a specific page/directory to the new page/directory.

Temporary Page to Page Redirect

server {
    # Temporary redirect to an individual page
    rewrite ^/oldpage$ http://www.domain.com/newpage redirect;
}

Permanent Page to Page Redirect

server {
    # Permanent redirect to an individual page
    rewrite ^/oldpage$ http://www.domain.com/newpage permanent;
}

Permanent www to non-www Redirect

server {
    # Permanent redirect to non-www
    server_name www.domain.com;
    rewrite ^/(.*)$ http://domain.com/$1 permanent;
}

Permanent Redirect to www

server {
    # Permanent redirect to www
    server_name domain.com;
    rewrite ^/(.*)$ http://www.domain.com/$1 permanent;
}

Sometimes the need will arise to change the domain name of a website. In this case, a redirect from the old site’s URL to the new site’s URL will be very helpful in letting users know the domain has moved to a new URL.

The next example we’ll cover is redirecting an old URL to a new URL.

Permanent Redirect to New URL

server {
    # Permanent redirect to new URL
    server_name olddomain.com;
    rewrite ^/(.*)$ http://newdomain.com/$1 permanent;
}

We’ve added the redirect using the rewrite directive we discussed earlier. The ^/(.*)$ regular expression captures everything after the / in the URL. For example, http://olddomain.com/index.html will redirect to http://newdomain.com/index.html. To make the redirect permanent, we add permanent after the rewrite directive, as you can see in the example code.
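To see what that capture group does, here is a quick illustration of the same pattern using sed (standing in for Nginx’s rewrite engine; the request path is a placeholder):

```shell
# Apply the rewrite pattern ^/(.*)$ to a request path with sed,
# substituting the captured group into the new domain.
echo "/index.html" | sed -E 's|^/(.*)$|http://newdomain.com/\1|'
# prints: http://newdomain.com/index.html
```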

When it comes to HTTPS and being fully secure, it is ideal to force everyone to use https:// instead of http://.

Redirect to HTTPS

server {
    # Redirect to HTTPS
    listen      80;
    server_name domain.com www.domain.com;
    return      301 https://domain.com$request_uri;
}

After these rewrite rules are in place, testing the configuration prior to running a restart is recommended. Nginx syntax can be checked with the -t flag to ensure there is not a typo present in the file.

Nginx Syntax Check

nginx -t

If the syntax is correct, Nginx reports that the configuration test is successful, and Nginx must then be reloaded for the redirects to take effect.

Restarting Nginx

service nginx reload

For CentOS 7, which unlike CentOS 6 uses systemd:

systemctl reload nginx

Redirects on Managed WordPress/WooCommerce

If you are on our Managed WordPress/WooCommerce products, redirects can be added through the /home/s#/nginx/redirects.conf file. Each site has its own s#, which is the FTP/SSH user for that site. A plugin called ‘Redirection’ can be installed to help with simple page-to-page redirects; otherwise, the redirects.conf file can be used to add more specific redirects using the examples explained above.

Due to the nature of a managed platform, after you have the rules in place within the redirects.conf file, please reach out to support and ask for Nginx to be reloaded. If you are uncomfortable with performing the steps outlined above, contact our support team via chat, ticket, or phone. With Managed WordPress/WooCommerce, you get 24/7 support available and ready to help you!

Configure Nginx to Read PHP on Ubuntu 16.04

Reading Time: 4 minutes

Nginx is an open source Linux web server that accelerates content while utilizing low resources. Known for its performance and stability, Nginx has many other uses, such as load balancing, reverse proxying, mail proxying, and HTTP caching. Nginx, by default, does not execute PHP scripts and must be configured to do so. In this tutorial, we will show you how to enable and test PHP capabilities with your server.

Continue reading “Configure Nginx to Read PHP on Ubuntu 16.04”

Install Nginx on Ubuntu 16.04

Reading Time: 2 minutes

Nginx is an open source Linux web server that accelerates content while utilizing low resources. Known for its performance and stability, Nginx has many other uses, such as load balancing, reverse proxying, mail proxying, and HTTP caching. With all these qualities, it makes a definite competitor for Apache. To install Nginx, follow our straightforward tutorial. Continue reading “Install Nginx on Ubuntu 16.04”

Configuring NGINX for Managed WordPress

Reading Time: 2 minutes

Running a WordPress site can be incredibly simple and used right out of the box, but you may need to customize or add specific files in order to get the most out of your site. Our Managed WordPress customers can include custom NGINX configurations for individual sites because we know that adding simple redirects or adjusting browser cache settings are actions that many of our Managed WordPress users do on a regular basis. Read on to learn how you can use this functionality for your own site. Continue reading “Configuring NGINX for Managed WordPress”

How To Boost Web Server Performance with Nginx and Apache

Reading Time: 3 minutes

Apache is the most commonly used web server software on the Internet, and for good reason. Its long history, general reliability, thorough documentation and active support community have helped it grow to include support for nearly anything a web developer can think to throw at it. But Apache’s universal support has not come without some trade-offs.

Continue reading “How To Boost Web Server Performance with Nginx and Apache”

How to Install Nginx on Ubuntu 15.04

Reading Time: 1 minute

nginx is a free, open source, high-performance web server. Need HTTP and HTTPS but don’t want to run Apache? Then nginx may be your next go-to, at least for Linux.

Pre-Flight Check

  • These instructions are intended specifically for installing nginx on Ubuntu 15.04.
  • I’ll be working from a Liquid Web Self Managed Ubuntu 15.04 server, and I’ll be logged in as root.

Continue reading “How to Install Nginx on Ubuntu 15.04”

How to Install Nginx on Ubuntu 14.04 LTS

Reading Time: 1 minute

nginx is a free, open source, high-performance web server. Need HTTP and HTTPS but don’t want to run Apache? Then nginx may be your next go-to, at least for Linux.

Pre-Flight Check

  • These instructions are intended specifically for installing nginx on Ubuntu 14.04 LTS.
  • I’ll be working from a Liquid Web Self Managed Ubuntu 14.04 server, and I’ll be logged in as root.

Continue reading “How to Install Nginx on Ubuntu 14.04 LTS”