How to Configure Apache Virtual Hosts on CentOS 7

Reading Time: 4 minutes

Today, we will be reviewing how to configure Apache virtual hosts on a CentOS 7 server. If you host websites, chances are you are hosting more than one. If so, knowing how and why virtual hosts work will help you understand why they are needed.
By default, Apache serves a single document root for all requests, which likely isn’t what you want.

We can use VirtualHost blocks to translate named domains into their appropriate document roots, with new settings per-block as needed. But, what goes into a valid VirtualHost? Where should it be stored?

Continue reading “How to Configure Apache Virtual Hosts on CentOS 7”

How To Setup Let’s Encrypt on CentOS 7

Reading Time: 4 minutes

Securing Your Site

In this tutorial, we will outline a handy way of getting HTTPS enabled on all of your domains, using SSL certificates as the first step in that process.

Domains secured with SSLs are expected more and more every day. If you don’t yet have an SSL on your site to encrypt the data passing over the net, you should reconsider that decision. Rather than showing an extra indicator of security on protected sites, modern browsers now display a warning when a website does not have an SSL, which essentially requires sites to add one to maintain a positive image.

Let’s Encrypt has become a very popular solution for businesses of every size concerned with securing connections to their websites. To aid in implementing it, we recommend using Certbot. Certbot is a free, open source software tool for automatically installing and renewing SSL certificates. Certbot implements these SSLs by working closely with Let’s Encrypt, the well-known SSL provider, to create the certificates for the server. Best news of all? Let’s Encrypt is completely free!

Continue reading “How To Setup Let’s Encrypt on CentOS 7”

How To Sync Two Apache Web Servers

Reading Time: 8 minutes

Load balancing and replicating multiple servers offers a great array of benefits, though orchestrating the servers and keeping them in sync can be very tricky. Here, we will walk through some of the load balancing options available, as well as setting up a very basic one-way replication sync between two or more servers behind a load balancer.

What is server replication?

Load balancing is a way to increase the processing power and redundancy of your web application by spreading the traffic among multiple servers. Traffic is orchestrated by a load balancer, while the web nodes are kept in sync by a separate data replication mechanism. That is to say, the load balancer itself has nothing to do with data replication; it only routes traffic to the web nodes. Something else is necessary to keep the web nodes’ data and configuration in sync, and that something is server replication.

There are a variety of methods for syncing files between web servers. These fall into four overlapping categories: synchronous or asynchronous, and one-way or two-way. Most synchronous sync types are two-way, while asynchronous sync types can be either one-way or two-way.

Synchronous sync types instantly share files between servers, via a shared storage node (such as a SAN or object store), and/or by coordinating a shared local or remote file system (OCFS2). These methods are complicated to configure, since beyond mounting the file systems at the same time, all servers must also communicate about when they are ready to write files and write to all locations at the same time. Liquid Web offers Managed Replication in our Enterprise Hosting offerings, removing the planning and maintenance burden from your shoulders so you can focus on your application.

Asynchronous sync types are simpler to set up but do not share files instantaneously. After files are completely written to one location, they are pushed out to another location by a service running on that server (lsyncd) or by a regularly timed sync cron. These are generally set up in one direction, so that a master server replicates out to slave servers.

What are the advantages and drawbacks of server replication?

As I mentioned, load balancing multiple servers serving the same set of data will increase the processing power behind your website, as well as introduce some redundancy. If one of your replicated nodes fails, the other (or others) can continue to serve traffic while the failed node is repaired or replaced. Load balanced systems can also be scaled easily; nodes can be added when more traffic is expected, and taken down if they are no longer needed.
But, with more servers comes more configuration. Keeping all of the server nodes in sync with each other requires additional applications to be set up and running, as well as additional hardware for load balancing. It is also a good idea to dedicate the web nodes to Apache only, so database information should be offloaded to a separate server as well. Further, though the software may be freely available, multiple servers and appliances cost more money to run than a single server, so load balanced clusters are inherently more expensive.

What are the requirements for setting up replication?

To set up server replication, you need two servers and one load balancer at bare minimum. But it is recommended to have a separate database server or cluster as well, to further increase redundancy.

You should also plan for the type of replication you want to use. Some replication types require additional hardware or configuration beyond what is covered here. If you are interested in those types of replication, chat with our architects about our Managed Replication products.

For the purposes of this article, we will use Liquid Web’s Cloud Load Balancer, along with two core-managed VPS web servers, and one core-managed VPS database server, connected to each other via the Cloud Private Network with private IPs. We will call the web nodes web01 and web02, and the database node db01. The web01 node is set up with password-less SSH keys into web02. All servers have one public IP.

We will also assume that Apache is set up on each web server for name-based virtual hosting, with its configuration under /etc/httpd/. PHP is also installed on both machines, and their configuration files are static and matched. Finally, our database server has MariaDB installed and running, and the firewall is open for external MySQL connections.

Note
Why aren’t we using cPanel? Account replication in cPanel requires rather more replication work than we can cover in this article, since cPanel account updates and creations also need to be synced. But, our Managed Replication services are built on cPanel servers, making your hosting management easier. We will use core-managed servers here for simplicity.

Step 1: Set Up Apache

In order to sync the Apache VirtualHost files, we will set up a single folder with single configuration files for each domain. In our main Apache config file at /etc/httpd/conf/httpd.conf, we will add the following line at the very end:

IncludeOptional vhosts/*.conf

This will allow Apache to load any configuration files found in the /etc/httpd/vhosts/ folder, which we will make next. From the command line, run:

mkdir /etc/httpd/vhosts

Perform these steps on both web01 and web02.

In this folder, we can set up our individual domains’ configuration files. Make sure they end in .conf so that Apache will load them. In our example, we will make two files: domain.com.conf and domain.net.conf. Both will be set up with valid VirtualHost blocks, which we won’t get into the details of here. For this exercise, the docroots of the domains are at /var/www/domain.com/ and /var/www/domain.net/. We only need to make these configuration files on web01 for now, since we will sync them later.
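That said, as a minimal sketch, a file like /etc/httpd/vhosts/domain.com.conf might contain something along these lines (your real blocks will likely add logging and other directives):

<VirtualHost *:80>
    ServerName domain.com
    ServerAlias www.domain.com
    DocumentRoot /var/www/domain.com
</VirtualHost>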

Step 2: Set Up Databases

If your application requires it, set up databases on your dedicated database server, and add grants so that all of the web nodes can connect. For instance, after configuring the database user with a good strong password, you might run the following grant statement if your web nodes had private IPs of 192.168.0.11 and 192.168.0.12:

mysql -e "grant all privileges on your_db.* to your_user@'192.168.0.11'; grant all privileges on your_db.* to your_user@'192.168.0.12';"

To introduce elasticity, and if you are sure that all of your private IPs will have the same prefix, you might consider running this instead:

mysql -e "grant all privileges on your_db.* to your_user@'192.168.0.%';"

This allows any IP that starts with 192.168.0 to access this database, provided it presents valid credentials. Now, if you add another web node, it will also be able to access the database without additional grants being made.
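Before touching your application, you can confirm a grant works by testing the connection from one of the web nodes. Here we assume, purely for illustration, that db01’s private IP is 192.168.0.10; substitute your own:

mysql -h 192.168.0.10 -u your_user -p your_db

If you land at a MySQL prompt, the grant and the firewall rule are both working.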

We can now connect the web node to the database using its configuration file. On WordPress, for instance, this file is wp-config.php. Enter the appropriate connection credentials, using the private IP of the database server as the DB_HOST. We use the private IP so that we don’t waste public bandwidth on MySQL communication. Setting up the host connection to a hostname or IP like this will help ensure that the wp-config.php file will work on all your web servers the same way.
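As a sketch, the relevant wp-config.php lines might look like the following, again assuming a hypothetical db01 private IP of 192.168.0.10:

define( 'DB_NAME', 'your_db' );
define( 'DB_USER', 'your_user' );
define( 'DB_PASSWORD', 'your_password' );
define( 'DB_HOST', '192.168.0.10' );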

Note
Not using a database server? If you don’t have a separate node for databases, and are hosting them on web01, resist using ‘localhost’ in your configuration file! Once copied to your other web nodes, the connection won’t work. Use ‘web01’ or the exact private IP for web01 instead.

Step 3: Install and Configure LSyncD

For our asynchronous one-way replication, we will use Live Sync Daemon (lsyncd). This is a freely available daemon which can watch a folder for activity, and then replicate that activity with rsync in another local or remote location. We need to add the EPEL repository to install it via yum:

yum -y install epel-release
yum -y install lsyncd

Now that lsyncd is installed, we can configure it for each folder we want to sync to other nodes. In this case, we will be syncing the vhost directory we made earlier, as well as the docroots for each domain. In the /etc/lsyncd.conf file (which is written in Lua syntax), delete the example sync command and set up the following block:

sync {
    default.rsyncssh,
    source = "/var/www/domain.com",
    host = "192.168.0.12",
    targetdir = "/var/www/domain.com",
    rsync = {
        binary = "/usr/bin/rsync",
        archive = true,
        hard_links = true,
        update = true
    }
}

Notice the required commas after all but the last entry in each block. The configuration is a Lua table, so its entries are comma-separated (the whole sync block could even be written on one line). Make sure you don’t add a comma after the final element of any curly-brace table.

Based on this block, we can set up similar blocks for /var/www/domain.net and any other running sites. Just add them all one after the other in the /etc/lsyncd.conf file.
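For instance, the matching block for domain.net changes only the paths:

sync {
    default.rsyncssh,
    source = "/var/www/domain.net",
    host = "192.168.0.12",
    targetdir = "/var/www/domain.net",
    rsync = {
        binary = "/usr/bin/rsync",
        archive = true,
        hard_links = true,
        update = true
    }
}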

Lastly, enable and start lsyncd:

systemctl enable lsyncd
systemctl start lsyncd

Note
This final command should return no output, unless there was a problem. If there was, run systemctl status lsyncd and double check your config file syntax.

You can test that it’s working by checking out the contents of the directory on web02. Everything should be there! Make an update to a file, and check the target to see how long it takes to arrive. It should take just 5-10 seconds for lsyncd to pick up the change and copy the file.
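A quick hand test might look like this (the file name is just an arbitrary placeholder):

touch /var/www/domain.com/sync-test.html
ssh 192.168.0.12 "ls -l /var/www/domain.com/sync-test.html"

If the ssh check complains that the file doesn’t exist, wait a few seconds and run it again.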

Step 4: Set Up Apache Configuration Replication

All of the above is sufficient for servers that do not regularly change the number of domains they host. But, if you configure new domains or update your Apache configuration frequently, you need to set up a script to help you sync this and reload Apache on the other servers. We can call this from lsyncd as the rsync execution binary and have it run post-sync tasks. Create a file called /root/vhostsync.sh that looks like this:

#!/bin/bash
/usr/bin/rsync "$@"
[ $? -eq 0 ] && ssh 192.168.0.12 "systemctl reload httpd"

Exit your editor, then add execute permissions to the file:

chmod 700 /root/vhostsync.sh

This wrapper will perform the rsync task for you with the arguments passed by lsyncd and, if that is successful, connect to web02 by IP and reload the Apache configuration. Now, head back into /etc/lsyncd.conf and add this sync block:

sync {
    default.rsyncssh,
    source = "/etc/httpd/vhosts",
    host = "192.168.0.12",
    targetdir = "/etc/httpd/vhosts",
    rsync = {
        binary = "/root/vhostsync.sh"
    }
}

Since we declare our script as the rsync binary, this script will execute instead of running rsync by itself. Restart lsyncd to add this sync to the running config:

systemctl restart lsyncd

Once it restarts, lsyncd should copy the folder over to web02 and reload Apache. And, since we already set up the folder to be included, web02 should now also be serving the two VirtualHost blocks we made for domain.com and domain.net.

Step 5: Route Traffic

That should just about do it! The final step is to test the configuration by connecting directly to web01 and web02 using hosts file modification, making sure the content being served from both machines is the same. Also ensure that both servers can properly reach their database on db01.
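Hosts file testing means pointing the domain at each web node’s public IP in turn from your workstation. The IP below is a documentation placeholder; use web01’s (then web02’s) real public IP:

# /etc/hosts on your local workstation
203.0.113.11 domain.com www.domain.com

Browse to the site, verify it, then swap in the other node’s IP and check again.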

Now, create the Cloud Load Balancer to route traffic to the IPs for web01 and web02. Use your hosts file again to connect to the load balancer’s VIP to ensure you can reach the nodes.
After testing, public DNS for the hosted domains can be changed to the VIP of the load balancer, allowing traffic to be routed into your cluster. Done!

Caveats

As you may have noticed, this only sets up replication from web01 to web02, not vice versa. Therefore, if you upload content or make changes to your website, you should always do so from web01. It is possible in some Load Balancer solutions to route traffic for your website’s administrative panel, such as /wp-admin, to only one of your nodes, i.e. web01. This will allow content creators to properly upload data only to the master web node for correct replication. This is not an option on the Cloud Load Balancer, though it is available in our Shared and Dedicated Load Balancer offerings.

If you decide to add a new domain to the servers, you will need to do all the following:

    • Create the document root on web01
    • Create any databases on db01 and set up SQL grants
    • Add a new virtualhost file to the /etc/httpd/vhosts/ folder
    • Add a new sync statement to /etc/lsyncd.conf for the new docroot
    • Restart lsyncd and Apache on web01

Additionally, when adding or removing a server, the sync blocks inside /etc/lsyncd.conf will need to be updated manually for that host, along with a new vhostsync wrapper for reloading Apache on any new node after a configuration sync.

Note
None of this is an issue with our Managed Load Balancing solutions, which will be scaled as sites or servers are connected.

Finally, though this does provide some redundancy, your web01 server is still the master node, since it is running lsyncd. If web01 goes down, your website will still work, but you should not add to or modify your sites until it can be repaired or replaced.

Do I Still Need Backups?

Absolutely!

Server replication is not a stand-in for server backups. If a hard disk crashes or your RAM goes bad, the other web node will still be able to serve traffic. However, if your website is compromised, or you accidentally delete files, these changes will be replicated over to the other web node, destroying your “backups”. Make sure you have local and full-server backups for disaster recovery.

If you are interested in setting up your own load balanced server replication, or would like more information about our managed products to help you with synchronous or asynchronous replication, chat with a hosting advisor today!

How to Set Up Multiple SSLs on One IP With Nginx

Reading Time: 6 minutes

With the shortage of available address space in IPv4, IPs are becoming increasingly difficult to come by, and in some cases, increasingly expensive. However, in most instances, this is not a drawback. Servers are perfectly capable of hosting multiple websites on one IP address, as they have for years.

But, there was a time when using an SSL certificate to secure traffic to your site required having a separate IPv4 address for each secured domain. This is not because SSLs were bound to IPs, or even to servers, but because the request for SSL certificate information did not specify what domain was being loaded, and thus the server was forced to respond with only one certificate. A name mismatch caused an insecure certificate warning, and therefore, a server owner was required to have unique IPs for all SSL hosts.

Luckily, IPv4 limitations have brought new technologies and usability to the forefront, most notably, Server Name Indication (SNI).


Why Do I Need an SSL?

Secure Socket Layer (SSL) certificates allow two-way encrypted communication between a client and a server. This protects any data in transit from prying eyes, including sensitive information like credit card numbers or passwords. SSLs are optionally signed by a well-known, third-party signing authority, such as GlobalSign. The most common use of such certificates is to secure web traffic over HTTPS.

Rather than displaying a positive indicator for sites served over HTTPS, modern browsers now show a negative indicator for sites that are not using an SSL. So, websites that don’t have an SSL raise a red flag right off the bat for any new visitors. Sites that want to maintain their reputation are therefore all but forced to get an SSL.

Luckily, it is so easy to get and install an SSL, even for free, that this is reduced to a basic formality. We’ll cover the specifics of this below.


What is SNI?

Server Name Indication is a browser and web server capability in which an HTTPS request includes an extra field, server_name, in the TLS handshake, to which the server can respond with the appropriate SSL certificate. This allows a single IP address to host hundreds or thousands of domains, each with their own SSL!

SNI technology is available on all modern browsers and web server software, so some 98+% of web users, according to W3, will be able to support it.
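If you’d like to see SNI in action, you can ask a server for a specific certificate by name with openssl. This is just a sketch; substitute your server’s IP and a domain it actually hosts:

openssl s_client -connect 123.45.67.89:443 -servername domain.com </dev/null 2>/dev/null | openssl x509 -noout -subject

The subject printed should match the domain passed via -servername.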


Pre-Flight Check

We’ll be working on a CentOS 7 server that uses Nginx and PHP-FPM to host websites without any control panel (cPanel, Plesk, etc.). This is commonly referred to as a “LEMP” stack, which substitutes Nginx for Apache in the “LAMP” stack. These instructions will be similar to most other flavors of Linux, though the installation of Let’s Encrypt for Ubuntu 18.04 will be different. I’ll include side-by-side instructions for both CentOS 7 and Ubuntu 18.04.

For the remainder of the instructions, we’ll assume you have Nginx installed and set up to host multiple websites, including firewall configuration to open necessary ports (80 and 443). We are connected over SSH to a shell on our server as root.

Note
If you have SSLs for each domain, but they are just not yet installed, you should use Step 3a to add them manually. If you do not have SSLs and would like to use the free Let’s Encrypt service to order and automatically configure them, you should use Step 3b.


Step 1: Enabling SNI in Nginx

Our first step is already complete! Modern repository versions of Nginx will be compiled with OpenSSL support to serve SNI information by default. We can confirm this on the command line with:

nginx -V

This will output a bunch of text, but we are interested in just this line:

...
TLS SNI support enabled
...

If you do not see a line like this one, then Nginx will have to be re-compiled manually to include this support. This would be a very rare instance, such as an outdated version of Nginx or one already manually compiled from source with a different OpenSSL library. The Nginx version installed by the CentOS 7 EPEL repository (1.12.2) and the one included with Ubuntu 18.04 (1.14.0) will support SNI.
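If you’d rather not scan the full output by eye, a quick filter works too (nginx -V prints to stderr, hence the redirect):

nginx -V 2>&1 | grep 'TLS SNI'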

Step 2: Configuring Nginx Virtual Hosts

Since you have already set up more than one domain in Nginx, you likely have server configuration blocks set up for each site in a separate file. Just in case you don’t, let’s first ensure that our domains are set up for non-SSL traffic. If they are, you can skip this step. We’ll be working on domain.com and example.com.

vim /etc/nginx/sites-available/domain.com

Note
If you don’t happen to have sites-enabled or sites-available folders, and you want to use them, you can create /etc/nginx/sites-available and /etc/nginx/sites-enabled with the mkdir command, as shown below. Afterward, inside /etc/nginx/nginx.conf, add this line anywhere inside the main http{} block (we recommend putting it right after the include line that references conf.d):

include /etc/nginx/sites-enabled/*;

Otherwise, you can make your configurations in /etc/nginx/conf.d/*.conf.
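Creating both of the folders mentioned in the note takes one command:

mkdir -p /etc/nginx/sites-available /etc/nginx/sites-enabled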

At the very least, insert the following directives, replacing the document root with the real path to your site files, and adding any other directives you require for your sites:

server {
    listen 80;
    server_name domain.com;
    root /var/www/domain.com;
    ...
}

A similar file should be set up for example.com, and any other domains you wish to host. Once these files are created, we can enable them with a symbolic link:

ln -s /etc/nginx/sites-available/domain.com /etc/nginx/sites-enabled/

ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/

Now, we reload Nginx:

systemctl reload nginx

This reloads the configuration files without restarting the application. We can confirm that the two we just made are loaded using:

nginx -T

You should see your server_name line for both domain.com and example.com.
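To pick those lines out of the configuration dump quickly (a convenience, not a requirement):

nginx -T | grep server_name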

Note
The listen line included in the server block above will allow the site to listen on any IP that is on the server. If you would like to bind to a specific IP instead, you can use the IP:port format, like this:

server {
    listen 123.45.67.89:80;
    ...
}

Step 3a: Add Existing SSLs to Nginx Virtual Hosts

Now that we have valid running configurations, we can add the SSLs we have for these domains as new server blocks in Nginx. First, save your SSL certificate and the (private) key to a global folder on the server, with names that indicate the relevant domain. Let’s say that you chose the global folder of /etc/ssl/. Our names, in this case, will be /etc/ssl/domain.com.crt (which contains the certificate itself and any chain certificates from the signing authority), and /etc/ssl/domain.com.key, which contains the private key.
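If your signing authority delivered the server certificate and the chain as separate files, you can combine them yourself, server certificate first (the file names here are placeholders):

cat domain.com.crt intermediate-chain.crt > /etc/ssl/domain.com.crt

With the files in place, edit the configuration files we created: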

vim /etc/nginx/sites-available/domain.com

Add a brand new server block underneath the end of the existing one (outside of the last curly brace) with the following information:

server {
    listen 443 ssl;
    server_name domain.com;
    root /var/www/domain.com;
    ssl_certificate /etc/ssl/domain.com.crt;
    ssl_certificate_key /etc/ssl/domain.com.key;
    ...
}

Note the change of the listen directive to port 443 with the ssl parameter (for HTTPS) and the addition of the ssl_certificate and ssl_certificate_key lines. Instead of rewriting the whole block, you could copy the original server block and then add these extra lines, while changing the listen directive. Save this file and reload the Nginx configuration.

systemctl reload nginx

We again confirm the change is in place using:

nginx -T

For some setups, you’ll see two server_name lines each for domain.com and example.com, one using port 80 and one using port 443. If you do, you can skip to Step 4; otherwise, continue to the next step.

Step 3b: Install and Configure Let’s Encrypt

Let’s next set up the free SSL provider Let’s Encrypt to automatically sign certificates for all of the domains we just set up in Nginx. On Ubuntu 18.04, add the PPA and install the certificate scripts with apt:

add-apt-repository ppa:certbot/certbot

apt-get update

apt-get install certbot python-certbot-nginx

On CentOS 7, we install the EPEL repository and then install the certificate helper from there:

yum install epel-release

yum install certbot python2-certbot-nginx

On both systems, we can now read the Nginx configuration and ask Certbot to assign us some certificates:

certbot --nginx

This will ask you some questions about which domains you would like to use (you can leave the option blank to select all domains) and whether you would like Nginx to redirect traffic to your new SSL (we would!). After it finishes its signing process, Nginx should automatically reload its configuration, but in case it doesn’t, reload it manually:

systemctl reload nginx

You can now check the running configuration with:

nginx -T

You should now instead see two server_name lines each for domain.com and example.com, one using port 80 and one using port 443.
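Incidentally, if you prefer to script this step rather than answer prompts interactively, Certbot accepts the same answers as flags. A sketch, assuming your own domains and contact email in place of these placeholders:

certbot --nginx --non-interactive --agree-tos -m admin@domain.com --redirect -d domain.com -d example.com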

Let’s Encrypt certificates are only valid for 90 days from issuance, so we want to ensure that they are automatically renewed. Edit the cron file for the root user by running:

crontab -e

The cron should look like this:

45 2 * * 3,6 certbot renew && systemctl reload nginx

Once you save this file, every Wednesday and Saturday at 2:45 AM, the certbot command will check for any needed renewals, automatically download and install the certs, and then reload the Nginx configuration.
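To make sure the renewal will succeed when it is needed, you can run a safe test pass ahead of time:

certbot renew --dry-run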

Step 4: Verify Installation and Validity

We should now check the validity of our SSLs and ensure that browsers see the certificates properly. Visit https://sslcheck.liquidweb.com/ and type in your domain names to check the site’s SSL on your server. You should see four green checkmarks, indicating SSL protection.

We hope you’ve enjoyed our tutorial on how to install SSLs on multiple sites within one server. Liquid Web customers have access to our support team 24/7. We can help with installing signed SSLs or with ordering a new server for an easy transfer over to Liquid Web.

How to Backup, Delete and Restore a PostgreSQL Database in CentOS 7 or Ubuntu 16

Reading Time: 5 minutes

Listing databases
Dump a database
Dumping all databases
Dump Grants
Delete or Drop a Database
Delete a Grant
Restore a Database
Restore Grant
Continue reading “How to Backup, Delete and Restore a PostgreSQL Database in CentOS 7 or Ubuntu 16”

Install and Configure ownCloud on Ubuntu 16.04

Reading Time: 7 minutes

What is ownCloud?

Have you ever used an online collaboration tool or shared files with a co-worker, family member, or friend? You might have used email to send those files, or an online editor to work on a spreadsheet or text document at the same time.

But have you considered the security behind these platforms? Who is safeguarding your data, and who else might have access to it? How can you be certain that content is properly encrypted so that only the intended recipients see it, away from the prying eyes of disgruntled employees, rogue agents, third party data miners, or government agencies? Many people want a certain level of control over exactly who is able to see their sensitive data, and this is where ownCloud comes into play.

Continue reading “Install and Configure ownCloud on Ubuntu 16.04”

Transfer an SSL to Ubuntu 16.04 or CentOS 7

Reading Time: 7 minutes

SSL certificates have become a de facto part of every website. If you don’t yet have an SSL on your site to encrypt data, you should. Rather than showing an extra indicator of security on sites protected by an SSL, modern browsers now display a warning when a website does not have one, essentially requiring sites to add an SSL to maintain their positive image.

When moving from one server to another, what needs to happen to your SSL to maintain your secure status? We’ll cover the basics for transferring traditional and Let’s Encrypt SSLs to Ubuntu 16.04 and CentOS 7.

Note:
This article will address SSLs in Apache specifically, but the same concepts apply to any service that supports SSL encryption.

Can SSLs be transferred between servers?

Continue reading “Transfer an SSL to Ubuntu 16.04 or CentOS 7”

Useful Command Line for Linux Admins

Reading Time: 11 minutes

The command line terminal, or shell on your Linux server, is a potent tool for deciphering activity on the server, performing operations, or making system changes. But with several thousand executable binaries installed by default, what tools are useful, and how should you use them safely? Continue reading “Useful Command Line for Linux Admins”

Free Website Migration Service

Reading Time: 4 minutes

How To Request Free Website Migrations from Liquid Web

The Migration team at Liquid Web is dedicated to providing you with as efficient and uneventful a migration as possible. Whether you are migrating from a current Liquid Web server (internal migration) or from another host into Liquid Web (external migration), it is important that we work together to ensure an effective transfer of information.

Continue reading “Free Website Migration Service”