How To Secure Your WordPress Site

Reading Time: 4 minutes

WordPress is one of the most popular Content Management Systems on the Internet. Due to its popularity, it is also the target of many hackers. We’re here to show you our top five recommendations on how to secure your WordPress site, based on issues we’ve come across.

1. Keep WordPress Up to Date!

Our number one and top recommendation is to keep WordPress up to date! WordPress is a very active platform and updates come out regularly for it. The updates include many new features and changes in the backend, but also patch many bugs and exploits that the WordPress team finds. Just take a look at the releases and patch notes on https://wordpress.org/news/ sometime to get an idea of how much work gets put into finding and fixing these problems!

Being even just one or two versions behind can leave your site open to hackers who analyze the updates, create exploits, and go looking for outdated sites across the Internet. The longer a site goes without updates, the more exploits and vulnerabilities are out there, and the greater the likelihood that your site could be compromised.

The same rule applies to your plugins and themes: make sure those are all properly updated for the same reasons! Which brings us to item two on our list!

2. Review your Plugins and Themes!

WordPress is great because plugins can quickly and easily add new features and customization to your site, and themes can give it a professional appearance in a matter of seconds. But if they are not properly maintained, they can lead to problems down the road.

First, remove any plugins and themes you don’t need. You could leave them disabled, but outright removing them is the safer option, as the files won’t be sitting on your server. Even a disabled plugin or theme could still potentially be reached if an exploit can gain access to its files.

A side effect of removing plugins is that it could actually speed up your site as well!

After removing any plugins and themes you don’t need, make sure to keep the ones you have left updated. WordPress can generally check for updates right in the admin area, but if you bought a plugin from a third-party source, make sure to check with them for any updates. It’s also a good idea to visit the website of each plugin and theme, or even check reviews on other sites, to make sure development is still active and that there are no known vulnerabilities.
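
If you manage your site from the command line and WP-CLI happens to be installed on your server, these updates can be applied in one pass. A minimal sketch, run from the WordPress root directory:

wp core update
wp plugin update --all
wp theme update --all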

3. Protect your logins!

Having your site publicly accessible on the Internet means customers and potential clients can reach it, but so can bots and people with malicious intentions! By default, going to yourdomain.com/wp-login.php or yourdomain.com/wp-admin brings up a login page. If you try that on any of your sites and you get the login page, it’s highly recommended you use a plugin to hide where you go to log in.

WordPress can be set up to block logins after a few failed attempts, but if thousands of bots are all trying to log in and guess passwords, why even give them the chance to try? Make sure you use strong passwords, don’t reuse the same username and password in multiple locations, and go through your WordPress users to make sure they are all still valid. For example, if you gave a developer access years ago, you probably don’t need that user sitting around anymore.
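
If WP-CLI is available on your server, it also offers a quick way to audit your user list from the command line. A small sketch (the user ID 42 and the reassignment target are hypothetical):

wp user list --fields=ID,user_login,user_email,roles
wp user delete 42 --reassign=1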

4. Install protection plugins

I know I said to reduce your plugins earlier, but I would recommend keeping a few plugins to block malicious connections, monitor suspicious activity, and scan site files for malware. We recommend iThemes Security, as it offers a lot of different features in a single plugin, but you can look up what’s popular and read reviews to help you decide what would be the best fit for your site. For example, if you have a site where users can upload files, it would be a good idea to have a malware-scanning plugin check those files as they are uploaded and block, or at least report, any that trigger warnings.

Depending on how much protection you need, paid options are generally recommended over free ones, as they tend to offer more features and more frequently updated virus definitions, increasing the chances that newer exploits are blocked as well.

5. Make sure you have good backups

Having good backups isn’t exactly a proactive step for protecting your site, but it is a reactive step that can greatly help if the need arises, and it’s a good idea for any business that relies on the data on its site. Think about all the orders, profiles, records, logs, and other important information stored on your site, then imagine something causing the whole site to be deleted: a hard drive crash, or malicious code injected somewhere that causes the data to be lost. If you have no backups at all to restore from, then depending on the nature of your business, this could mean hundreds of work hours for a team to rebuild the site, lost revenue, lost customers, and a major hit to your site’s reputation.

If you did have backups, depending on the frequency of the backups and how quickly the problem was noticed and rolled back, there may be little to no data loss, customers may not notice, and the site can quickly bounce back.

We highly recommend keeping multiple backups taken over a period of time. The more backups you have, the more restore points you have to choose from. Having only one daily backup could cause problems if an issue isn’t noticed until three days after it happens. An active site may need continuous backups, whereas a static site that hasn’t changed in months may not.

Storing your backups in different locations also spreads out your available copies. If a dedicated backup hard drive failed, for example, you would still have remote backups saved on a different service that wouldn’t be affected. Think of it as not putting all your eggs in one basket! For more information on good backup practices, see Best Practices: Developing a BackUp Plan.
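
As an illustration only, here is a minimal shell sketch of a nightly backup that captures both the site files and the database and then copies them off-server. The paths, database name, and remote host are placeholders you would replace with your own:

#!/bin/bash
STAMP=$(date +%F)
# Archive the site files (placeholder document root)
tar -czf /backups/site-files-$STAMP.tar.gz /var/www/html
# Dump the WordPress database (placeholder database name)
mysqldump --single-transaction wordpress_db > /backups/wordpress-db-$STAMP.sql
# Keep a second copy somewhere else entirely (placeholder remote host)
rsync -az /backups/ backupuser@offsite.example.com:backups/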

Hopefully, you gained something useful from this article! If you or a friend are in the market for a web host, feel free to talk to a Liquid Web tech by phone or in a chat 24 hours a day! Thanks for reading!

How to Set Up Multiple SSLs on One IP With Nginx

Reading Time: 6 minutes

With the shortage of available address space in IPv4, IPs are becoming increasingly difficult to come by, and in some cases, increasingly expensive. However, in most instances, this is not a drawback. Servers are perfectly capable of hosting multiple websites on one IP address, as they have for years.

But, there was a time when using an SSL certificate to secure traffic to your site required having a separate IPv4 address for each secured domain. This is not because SSLs were bound to IPs, or even to servers, but because the request for SSL certificate information did not specify what domain was being loaded, and thus the server was forced to respond with only one certificate. A name mismatch caused an insecure certificate warning, and therefore, a server owner was required to have unique IPs for all SSL hosts.

Luckily, IPv4 limitations have brought new technologies and usability to the forefront, most notably, Server Name Indication (SNI).

 

Why Do I Need an SSL?

Secure Socket Layer (SSL) certificates allow two-way encrypted communication between a client and a server. This protects any data, including sensitive information like credit card numbers or passwords, from prying eyes. SSLs are optionally signed by a well-known, third-party signing authority, such as GlobalSign. The most common use of such certificates is to secure web traffic over HTTPS.

Rather than displaying a positive indicator for an HTTPS site, modern browsers now show a negative indicator for a site that is not using an SSL. So, websites without an SSL raise a red flag right off the bat for any new visitors, and sites that want to maintain their reputation are effectively forced to get one.

Luckily, it is so easy to get and install an SSL, even for free, that this is reduced to a basic formality. We’ll cover the specifics of this below.

 

What is SNI?

Server Name Indication is a browser and web server capability in which the client includes the requested hostname in the TLS handshake, via the server_name extension, so the server can respond with the appropriate SSL certificate. This allows a single IP address to host hundreds or thousands of domains, each with its own SSL!

SNI technology is available on all modern browsers and web server software, so some 98+% of web users, according to W3, will be able to support it.

 

Pre-Flight Check

We’ll be working on a CentOS 7 server that uses Nginx and PHP-FPM to host websites without any control panel (cPanel, Plesk, etc.). This is commonly referred to as a “LEMP” stack, which substitutes Nginx for Apache in the “LAMP” stack. These instructions will be similar to most other flavors of Linux, though the installation of Let’s Encrypt for Ubuntu 18.04 will be different. I’ll include side-by-side instructions for both CentOS 7 and Ubuntu 18.04.

For the remainder of the instructions, we’ll assume you have Nginx installed and set up to host multiple websites, including firewall configuration to open necessary ports (80 and 443). We are connected over SSH to a shell on our server as root.

Note
If you have SSLs for each domain, but they are just not yet installed, you should use Step 3a to add them manually. If you do not have SSLs and would like to use the free Let’s Encrypt service to order and automatically configure them, you should use Step 3b.

 

Step 1: Enabling SNI in Nginx

Our first step is already complete! Modern repository versions of Nginx will be compiled with OpenSSL support to serve SNI information by default. We can confirm this on the command line with:

nginx -V

This will output a bunch of text, but we are interested in just this line:

...
TLS SNI support enabled
...

If you do not have a line like this one, then Nginx will have to be re-compiled manually to include this support. This would be a very rare instance, such as an outdated version of Nginx or one manually compiled from source against a different OpenSSL library. The Nginx version installed by the CentOS 7 EPEL repository (1.12.2) and the one included with Ubuntu 18.04 (1.14.0) both support SNI.

Step 2: Configuring Nginx Virtual Hosts

Since you have already set up more than one domain in Nginx, you likely have server configuration blocks set up for each site in a separate file. Just in case you don’t, let’s first ensure that our domains are set up for non-SSL traffic. If they are, you can skip this step. We’ll be working on domain.com and example.com.

vim /etc/nginx/sites-available/domain.com

Note
If you don’t happen to have sites-enabled or sites-available folders, and you want to use them, you can create /etc/nginx/sites-available and /etc/nginx/sites-enabled with the mkdir command. Afterward,  inside /etc/nginx/nginx.conf, add this line anywhere inside the main http{} block (we recommend putting it right after the include line that talks about conf.d):

include /etc/nginx/sites-enabled/*;

Otherwise, you can make your configurations in /etc/nginx/conf.d/*.conf.

At the very least, insert the following options, replacing the document root with the real path to your site files, and adding any other variables you require for your sites:

server {
    listen 80;
    server_name domain.com;
    root /var/www/domain.com;
    ...
}

A similar file should be set up for example.com, and any other domains you wish to host. Once these files are created, we can enable them with a symbolic link:

ln -s /etc/nginx/sites-available/domain.com /etc/nginx/sites-enabled/

ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/

Now, we reload Nginx:

systemctl reload nginx

This reloads the configuration files without restarting the application. We can confirm that the two we just made are loaded using:

nginx -T

You should see your server_name line for both domain.com and example.com.

Note
The listen line included in the server block above will allow the site to listen on any IP that is on the server. If you would like to bind to a specific IP instead, you can use the IP:port format, like this:

server {
    listen 123.45.67.89:80;
    ...
}

Step 3a: Add Existing SSLs to Nginx Virtual Hosts

Now that we have valid running configurations, we can add the SSLs we have for these domains as new server blocks in Nginx. First, save your SSL certificate and the (private) key to a global folder on the server, with names that indicate the relevant domain. Let’s say that you chose the global folder of /etc/ssl/. Our names, in this case, will be /etc/ssl/domain.com.crt (which contains the certificate itself and any chain certificates from the signing authority), and /etc/ssl/domain.com.key, which contains the private key. Edit the configuration files we created:

vim /etc/nginx/sites-available/domain.com

Add a brand new server block underneath the end of the existing one (outside of the last curly brace) with the following information:

server {
    listen 443 ssl;
    server_name domain.com;
    root /var/www/domain.com;
    ssl_certificate /etc/ssl/domain.com.crt;
    ssl_certificate_key /etc/ssl/domain.com.key;
    ...
}

Note the change of the listen directive to 443 ssl (the ssl parameter tells Nginx to serve HTTPS on this port) and the addition of the ssl_certificate and ssl_certificate_key lines. Instead of rewriting the whole block, you could copy the original server block and then add these extra lines while changing the listen directive. Save this file and reload the Nginx configuration.

systemctl reload nginx

We again confirm the change is in place using:

nginx -T

For some setups you’ll see two server_name lines each for domain.com and example.com, one using port 80 and one using port 443. If you do, you can skip to Step 4, otherwise continue to the next step.

Step 3b: Install and Configure Let’s Encrypt

Let’s next set up the free SSL provider Let’s Encrypt to automatically sign certificates for all of the domains we just set up in Nginx. On Ubuntu 18.04, add the PPA and install the certificate scripts with apt:

add-apt-repository ppa:certbot/certbot

apt-get update

apt-get install certbot python-certbot-nginx

On CentOS 7, we install the EPEL repository and then install the certificate helper from there:

yum install epel-release

yum install certbot python2-certbot-nginx

On both systems, we can now read the Nginx configuration and ask Certbot to assign us some certificates:

certbot --nginx

This will ask you some questions about which domains you would like to use (you can leave the option blank to select all domains) and whether you would like Nginx to redirect traffic to your new SSL (we would!). After it finishes its signing process, Nginx should automatically reload its configuration, but in case it doesn’t, reload it manually:

systemctl reload nginx

You can now check the running configuration with:

nginx -T

You should now instead see two server_name lines each for domain.com and example.com, one using port 80 and one using port 443.

Let’s Encrypt certificates are only valid for 90 days from issuance, so we want to ensure that they are automatically renewed. Edit the cron file for the root user by running:

crontab -e

The cron should look like this:

45 2 * * 3,6 certbot renew && systemctl reload nginx

Once you save this file, every Wednesday and Saturday at 2:45 AM, the certbot command will check for any needed renewals, automatically download and install the certificates, and then reload the Nginx configuration.
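
If you’d like to confirm the renewal will succeed before the cron job ever fires, Certbot includes a dry-run mode that exercises the renewal process against Let’s Encrypt’s staging environment without issuing real certificates:

certbot renew --dry-run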

Step 4: Verify Installation and Validity

We should now check the validity of our SSLs and ensure that browsers see the certificates properly. Visit https://sslcheck.liquidweb.com/ and type in your domain names to check the SSLs on your server. You should see four green checkmarks, indicating SSL protection.
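
As an optional command-line cross-check, openssl can confirm that SNI is returning the right certificate for each domain (substitute your own domain names):

echo | openssl s_client -connect domain.com:443 -servername domain.com 2>/dev/null | openssl x509 -noout -subject -issuer

echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -subject -issuer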

We hope you’ve enjoyed our tutorial on how to install SSLs on multiple sites within one server. Liquid Web customers have access to our support team 24/7. We can help with installing a signed SSL or ordering a new server for an easy transfer over to Liquid Web.

How to Set Up A Firewall Using Iptables on Ubuntu 16.04

Reading Time: 5 minutes

This guide will walk you through the steps for setting up a firewall using iptables in Ubuntu 16.04. We’ll show you some common commands for manipulating the firewall, and teach you how to create your own rules.

 

What are Iptables in Ubuntu?

The utility iptables is a Linux-based firewall that comes pre-installed on many Linux distributions, and it is a leading solution for software-based firewalls. It’s a critical tool for Linux system administrators to learn and understand. Any publicly facing server on the Internet should have some form of firewall enabled for security reasons. In a typical configuration, you would only open ports for the services that you wish to be accessible via the Internet; all other ports would remain closed and inaccessible. For example, on a typical server, you may want to open ports for your web services, but you probably would not want to make your database accessible to the public!

 

Pre-flight

Working with iptables requires root privileges on your Linux box. The rest of this guide assumes you have logged in as root. Please exercise caution, as commands issued to iptables take effect immediately. You will be manipulating how your server is accessible to the outside world, so it’s possible to lock yourself out of your own server!

Note
If you’re a Liquid Web customer, check out our VPN + IPMI remote management solutions. These solutions can help you restore access to your server even if you’ve blocked out the outside world. We have a VPN configuration guide to get you started. Of course, our support staff is also standing by 24×7 in the event you get locked out.

 

How Do Iptables Work?

Iptables works by inspecting predefined firewall rules. Incoming server traffic is compared against these rules, and if iptables finds a match, it takes action. If iptables is unable to find a match, it will apply a default policy action. Typical usage is to set iptables to allow matched rules, and deny all others.

 

How Can I See Firewall Rules in Ubuntu?

Before making any changes to your firewall, it is best practice to view the existing rule set and understand what ports are already open or closed. To list all firewall rules, run the following command.

iptables -L

If this is a brand new Ubuntu 16.04 installation, you may see there are no rules defined! Here is an example “empty” output with no rules set:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

If you’re running Ubuntu 16.04 on a Liquid Web VPS, you’ll see we’ve already configured a basic firewall for you. There are usually three essential sections to look at in the iptables ruleset. In iptables, rulesets are organized into “chains”, particularly “Chain INPUT”, “Chain FORWARD”, and “Chain OUTPUT”. The input chain handles traffic coming into your server, while the output chain handles traffic leaving your server. The forward chain handles server traffic that is not destined for local delivery; as you can surmise, this traffic is forwarded by our server to its intended destination.
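
While reviewing rules, it can help to ask iptables for numeric output, packet counters, and rule line numbers, which makes individual rules easier to reference later:

iptables -L -n -v --line-numbers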

 

Common Firewall Configurations

The default action is listed as the chain’s “policy”. If traffic doesn’t match any of the chain’s rules, iptables will perform this default policy action. You can see that with an empty iptables configuration, the firewall is accepting all connections and not blocking anything! This is not ideal, so let’s change it. Here is an example firewall configuration allowing some common ports and denying all other traffic.

iptables -A INPUT -i lo -j ACCEPT

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

iptables -A INPUT -p icmp -j ACCEPT

iptables -A INPUT -p tcp --dport 22 -j ACCEPT

iptables -A INPUT -p tcp --dport 80 -j ACCEPT

iptables -A INPUT -p tcp --dport 443 -j ACCEPT

iptables -A INPUT -p udp --dport 1194 -j ACCEPT

iptables -A INPUT -s 192.168.0.100 -j ACCEPT

iptables -A INPUT -s 192.168.0.200 -j DROP

iptables -P INPUT DROP

iptables -P FORWARD DROP

iptables -P OUTPUT ACCEPT

We will break down these rules one at a time.

iptables -A INPUT -i lo -j ACCEPT

This first command tells the INPUT chain to accept all traffic on your loopback interface. We specify the loopback interface with -i lo. The -j ACCEPT portion is telling iptables to take this action if traffic matches our rule.


iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

Next, we’ll allow connections that are already established or related. This can be especially helpful for traffic like SSH, where you may initiate an outbound connection and wish to accept incoming traffic of the connection you intentionally established.


iptables -A INPUT -p icmp -j ACCEPT

This command tells your server not to block ICMP (ping) packets. This can be helpful for network troubleshooting and monitoring purposes. Note that the -p icmp portion is telling iptables the protocol for this rule is ICMP.


How Do I Allow a Port in Ubuntu?

iptables -A INPUT -p tcp --dport 22 -j ACCEPT

TCP port 22 is commonly used for SSH. This command allows TCP connections on port 22. Change this if you are running SSH on a different port. Notice since SSH uses TCP, we’ve specified the protocol using -p tcp in this rule.


iptables -A INPUT -p tcp --dport 80 -j ACCEPT

iptables -A INPUT -p tcp --dport 443 -j ACCEPT

These two commands allow web traffic. Regular HTTP uses TCP port 80, and encrypted HTTPS traffic uses TCP port 443.


iptables -A INPUT -p udp --dport 1194 -j ACCEPT

This is a less commonly used port, but here is an example of how to open port 1194 utilizing the UDP protocol instead of TCP. Note that in this example we’ve specified UDP by using -p udp.


How Do I Allow an IP Address in Ubuntu?

iptables -A INPUT -s 192.168.0.100 -j ACCEPT

You can configure iptables to always accept connections from an IP address, regardless of what port the connections arrive on. This is commonly referred to as “whitelisting”, and can be helpful in certain circumstances. We’re whitelisting 192.168.0.100 in this example. Typically you would want to be very restrictive with this action and only allow trusted sources.


How Do I Block an IP Address in Ubuntu?

iptables -A INPUT -s 192.168.0.200 -j DROP

You can also use iptables to block all connections from an IP address or IP range, regardless of what port they arrive on. This can be helpful if you need to block specific known malicious IPs. We’re using 192.168.0.200 as our IP to block in this example.


How Do I Block All Other Ports?

iptables -P INPUT DROP

Next, we set the default policy for the INPUT chain to DROP, which blocks all other incoming traffic. We’re only allowing a few specific ports in our example, so if you need other ports open, be sure to add those ACCEPT rules before setting this policy.
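
If you later find you need another port open, you can still append an ACCEPT rule at any time; because the catch-all here is the chain’s default policy rather than a trailing rule, new rules take effect immediately. As an illustration (TCP port 25 for mail is just a hypothetical example):

iptables -A INPUT -p tcp --dport 25 -j ACCEPT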


How Do I Forward Traffic in Ubuntu?

iptables -P FORWARD DROP

Likewise, we’re going to drop forwarded packets. Iptables is very powerful, and you can use it to configure your server as a network router. Our example server isn’t acting as a router, so we won’t be using the FORWARD chain.


How Do I Allow All Outbound Traffic?

iptables -P OUTPUT ACCEPT

Finally, we want to allow all outgoing traffic originating from our server. We’re mostly worried about outside traffic hitting our server, and not blocking our own box from accessing the outside world.


How Do I Permanently Save Iptables Rules?

To make your firewall rules persist after a reboot, we need to save them. The iptables-save command prints the current ruleset:

/sbin/iptables-save
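
One common way to persist the ruleset on Ubuntu, sketched here as just one option, is the iptables-persistent package, which restores a saved ruleset at boot:

apt-get install iptables-persistent

# Save the current rules where the package expects to find them
iptables-save > /etc/iptables/rules.v4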


How Do I Reset My Iptables?

To wipe out all existing firewall rules and return to a blank slate, you can flush the rules. Remember that an empty iptables configuration with ACCEPT policies allows all traffic to your server, so you typically would not want to leave your server unprotected in this state for very long. Also note that flushing removes rules but does not change the default chain policies; if you have set a default policy of DROP, switch it back to ACCEPT before flushing, or you may lock yourself out of a remote server. Nevertheless, a flush can be very helpful when configuring new firewall rulesets and you need to revert to a blank slate.

iptables -F
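
If you have already changed the default policies, a safe way to return to an open, empty configuration is to reset the policies to ACCEPT and then flush:

iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F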

 

We’ve covered a lot of ground in this article! Configuring iptables can seem like a daunting process when first looking at an extensive firewall ruleset, but if you break down the rules one at a time, it becomes much easier to understand the process. When used correctly, iptables is an indispensable tool for hardening your server’s security. Liquid Web customers enjoy our highly trained support staff, standing by 24×7, if you have questions on iptables configurations. Have fun configuring your firewall!

SSL Checker Tool

Reading Time: 4 minutes

The security of your website is vital to the success of your Internet business. One way you can protect your data (and your customers) is through the use of encrypted communication protocols. Secure Socket Layer (or SSL) was the original method of providing basic encryption between servers and clients. The industry mostly uses Transport Layer Security (or TLS) protocols now, but the process is basically the same, and most users still refer to this kind of encryption by the old name: SSL. As part of our Web Hosting Toolkit, Liquid Web provides an SSL Tool to help you verify that your SSL is installed correctly and up to date. Below is a look at how to use this tool, as well as some core concepts and certificate types to know when dealing with SSLs.

 

SSL Certificate Checker

You’ll want to confirm that everything is functioning correctly on the server once you’ve successfully ordered and installed your SSL. At this point, you’ll want to check your domains’ SSLs to confirm expiration dates, covered subdomains, and other information. While you can use various third-party SSL checkers on the Internet, Liquid Web makes gathering this information about your domain simple. Just go to the Liquid Web Internet Webhosting Toolkit page and click on SSL Tool.

How Do I Check If My SSL Certificate is Valid?

Enter your domain name in the box provided and click on Submit. You can enter either your primary domain name (like mydomain.com) or any of the subdomains you may have created SSL certificates for (like blog.mydomain.com). If an SSL certificate is installed on the server for the domain, the page will display the status of the certificate and additional information.

In this example, you can see that the certificate is valid and trusted by browsers and that the tested domain matches the certificate.

You can also see which Certificate Authority issued the certificate and the dates for which the certificate is valid.

Finally, you can see which signing algorithm was used to generate the certificate (indicating how complex and secure the certificate is) and which domains and subdomains are covered by the certificate.
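
If you prefer the command line, the same details can be pulled directly from the server with openssl (replace mydomain.com with your own domain):

echo | openssl s_client -connect mydomain.com:443 -servername mydomain.com 2>/dev/null | openssl x509 -noout -subject -issuer -dates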

How SSLs Work

SSL connections work through a series of tools that exist on your server and on a client’s web browser. At the simplest level, the server and a client computer exchange information and agree on a secret “handshake” that allows each computer to trust the other computer. This handshake is established through the use of private and public SSL certificate keys. The private key resides on the server, and the public key is available to a client computer. All information passed between the computers is encoded and can only be decoded if the keys match. These keys are generated by a Certificate Authority (like GlobalSign) and can vary in complexity and expiration date. These matched keys exist to prevent what are known as “man in the middle” attacks, in which a third party intercepts the Internet traffic for the purpose of stealing valuable data (like passwords or credit card information). Because the third party doesn’t possess the matching keys, they will be unable to read any of the intercepted information.

By using a trusted certificate, your website users can enter their information with full confidence that their data is safe. Certificate Authorities only grant SSL certificates to operators who can prove that they are the legitimate owner of a domain and that the domain is hosted on the server for which the certificate is being issued. This proof is usually obtained by modifying the DNS records for the domain during the verification process of the certificate order. To learn more about how to order an SSL through your Liquid Web account, see How To Order or Renew an SSL Certificate in Manage.

 

Types of SSL Certificates

While SSL certificates all provide the same essential functions, there are several different types of certificates to choose from. You’ll want to establish which certificate meets your needs before you decide to order one for your domain. The types we’ll discuss here are Self-Signed Certificates, Standard Domain Certificates, Wildcard Certificates, and Extended Validation Certificates.

Self-Signed Certificates

Most servers have the capability of generating a self-signed SSL certificate. These certificates provide the same kind of encrypted communication that certificates provided by Certificate Authorities do. However, because they are self-signed, there is no third-party proof that the server is the “real” server associated with a website. Many control panels use self-signed certificates because the owner of the server knows the IP address of the server and can trust that they are connecting to the correct site when using that IP address. The advantage of self-signed certificates is that they are easy to generate and free to use for as long as you want.

Standard Domain Certificates

If you only need to secure a single domain or subdomain, a standard domain SSL certificate is appropriate. Standard certificates are generally the least expensive option from Certificate Authorities and are designed to cover one domain or subdomain (generally both domain.com and www.domain.com are covered by a standard certificate).

Wildcard Certificates

If you have multiple subdomains, you may be able to save time and money by getting a wildcard SSL certificate. Wildcard certificates cover a domain and all of its subdomains. For instance, if your domain also has a mail subdomain, a blog, a news site, and a staging site that you want protected by SSL communication, a single wildcard certificate would protect all of them.

Note
A wildcard certificate will only protect one level of subdomains. So, blog.mydomain.com is covered, but new.blog.mydomain.com would not be covered.
Extended Validation Certificates

SSL certificates are generally issued to companies that can prove they have the right to use a domain name on the Internet (normally because they can modify the DNS records for that domain). While that level of verification is sufficient for most companies, you may need to have additional evidence that your company is a reliable entity for business purposes. Organizational SSL certificates require additional vetting by a Certificate Authority, including checks about the physical location of your company and your right to conduct business. Organizational SSL details can be visible on your website if you install a Secure Site Seal. Additional vetting is available for companies that choose Extended Validation SSL certificates. Extended Validation processes are often used by banks and financial institutions to provide extra reassurance to their customers that their website is legitimate. EV SSLs will turn the address bar of the client’s browser green and display the company’s name on the right side of the address bar.

If you need help determining which type of SSL is right for your business, chat with our Solutions team for additional information.

Now that you’ve checked the details of your SSL certificate and confirmed that all of the information is correct, you’ll be sure that the communications between your server and your customer’s computers are secure as that information travels over the Internet. For more information about improving the overall security of your server, see Best Practices: Protecting Your Website from Compromise.

 

How to Setup Let’s Encrypt on Ubuntu 18.04

Reading Time: 3 minutes

Sites with SSL are needed more and more every day. The push toward ubiquitous website encryption is an effort that even Google has taken up. Certbot and Let’s Encrypt are popular solutions for big and small businesses alike because of their ease of implementation. Certbot is a software client that can be downloaded to a server, like our Ubuntu 18.04 server, to install and auto-renew SSLs. It obtains these SSLs by working with the well-known SSL provider Let’s Encrypt. In this tutorial, we’ll show you a swift way of getting HTTPS enabled on your site. Let’s get started! Continue reading “How to Setup Let’s Encrypt on Ubuntu 18.04”

How to Install MariaDB on Ubuntu 18.04

Reading Time: 1 minute

MariaDB is a drop-in replacement for MySQL, and its popularity means many other applications work in conjunction with it. If you’re interested in a MariaDB server without the maintenance, check out our high-availability platform. Otherwise, we’ll be installing MariaDB 10 onto our Liquid Web Ubuntu server. Let’s get started! Continue reading “How to Install MariaDB on Ubuntu 18.04”

Things to Do After Installing a Ubuntu Server

Reading Time: 5 minutes

After spinning up a new Ubuntu server, you may find yourself looking for a guide on what to do next. Many times the default settings do not provide the level of security your server should have. Throughout this article, we provide security tips and pose questions to help you determine the best kind of setup for your environment.

 

1. Secure the Root User

This should be the very first thing you do when setting up a fresh install of Ubuntu server. Typically, setting a password for the root user is done during the installation process. However, if you ever find yourself in a position where you have assumed responsibility for an existing Ubuntu server, it’s best to reset the password, keeping in mind the best practices for passwords:

  • Don’t use English words
  • Use a mixture of symbols and alphanumeric characters
  • Length – longer passwords are simply harder to guess or crack. More than ten characters is good practice, and even longer passwords with complex characters are safer still.

You can also lock the root user password to effectively keep anything from running as root.

Warning:
Please be sure you already have another administrative user on the system with root or “sudo” privileges before locking the root user.

Depending on your version of Ubuntu, the root account may be disabled. Simply setting or changing the password for root will enable it:

sudo passwd root

Now we can lock the root account by locking the password with the “-l” flag like the following. This will prevent the root user from being used.

sudo passwd -l root

To unlock the root account, again, just change the password for root to enable it.

sudo passwd root

 

2. Secure SSH Access

Many times, once a server is up and running, the default configuration for SSH remote logins allows root to log in directly. We can make the server more secure than this.

You only need the root user to run root- or administration-level commands on the server. This can still be accomplished by logging into the server over SSH as a regular user and then switching to the root user once you are logged in:

ssh spartacus@myawesomeserver.com

Once logged in, you can switch from the user “spartacus” to the root user:

su -

You can disable SSH login for the root user by making some adjustments in the sshd_config file. Be sure to run all of the following commands as root or with a user with sudo privileges.

vim /etc/ssh/sshd_config

Within this file find the Authentication section and look for the following line:

PermitRootLogin yes

Just change that to:

PermitRootLogin no

For the changes to take effect you will need to restart the SSH service with:

/etc/init.d/ssh restart

You can now test this by logging out of the server and then trying to log in again over SSH as the root user. It should deny your attempts to do so. This provides much more security, as it requires a different user (one that others won’t know and probably cannot guess) to log in to the server over SSH. An attacker now needs to know two values, a valid username and its password, instead of one, since most hackers already know that the root user exists on a Linux server.

Also, the following setting can be changed to make SSH access more secure.

vim /etc/ssh/sshd_config

PermitEmptyPasswords no

Make sure that directive is set to “no” so that users without a password can’t log in. Otherwise, an attacker would only need one piece of information, a valid username, to get in, and they could simply keep guessing usernames until they found one without a password.

A final caution: adjust any router or firewall settings so that remote SSH access is forwarded from a higher, non-standard port to port 22, rather than exposing port 22 directly. This will eliminate a lot of bots or scripts that try to log in over SSH directly on port 22 with random usernames and passwords. You may need to refer to your router or firewall documentation on forwarding a higher port to port 22.
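
An alternative, if you manage the server directly rather than a router, is to have SSH itself listen on a non-standard port. A minimal sketch, assuming port 2222 is free and that your firewall allows it (see the next section for UFW):

# Open the new port first so the change does not lock you out (UFW shown; adjust for your firewall)
ufw allow 2222/tcp
# Point sshd at the new port (assumes the default '#Port 22' line in sshd_config)
sed -i 's/^#\?Port 22$/Port 2222/' /etc/ssh/sshd_config
/etc/init.d/ssh restart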

 

3. Install a Firewall

By default, later versions of Ubuntu should come with Uncomplicated Firewall or UFW. You can check to see if UFW is installed with the following:

sudo ufw status

That will return a status of active or inactive. If it is not installed you can install it with:

sudo apt-get install ufw

It’s a good idea to think through a list of components that will require access to your server. Is SSH access needed? Is web traffic needed? You will want to allow the needed services through the firewall so that incoming traffic can reach the server in the way you intend.

In our example let’s allow SSH and web access.

sudo ufw allow ssh

sudo ufw allow http

Those commands will also open up the ports. You can alternatively use the port method to allow services through that specific port.

sudo ufw allow 80/tcp

That will essentially be the same as allowing the HTTP service. Once you have the services you want listed, you can enable the firewall:

sudo ufw enable

Enabling the firewall may interrupt your current SSH connection if that is how you are logged in, so be sure your rules are correct before enabling it, or you may be locked out.

Also, ensure you have a good grasp on who really needs access to the server and only add users to the Linux operating system that really need access.

 

4. Understand What You Are Trying to Accomplish

It’s important to think through what you will be using your server for. Is it going to be just a file server? Or a web server? Or a web server that needs to send an email out through forms?

You will want to make a clear outline of what you will be using the server for so you can build it to suit those specific needs. It’s best to build the server with only the services it will require. When you add extra services that are not needed, you run the risk of having outdated software, which only adds more vulnerability to the server.

Every component and service you run will need to be secured to its best practices. For example, if you’re strictly running a static site, you don’t want to expose vulnerabilities due to an outdated email service.

 

5. Keep the System Up-To-Date

You will want to make sure your server stays up to date with the latest security patches. While a server can run for a while without much maintenance and things will “just work,” you don’t want to adopt a “set it and forget it” mentality.

Regular updates keep an Ubuntu server secure and up to date. You can use the following to refresh the package lists and apply the available upgrades:

sudo apt-get update && sudo apt-get upgrade
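
If you would rather not depend on remembering to run this by hand, one option (sketched here) is Ubuntu’s unattended-upgrades package, which can apply security updates automatically:

sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades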

While installing an Ubuntu server is a great way to learn how to work with Linux, it’s a good idea to learn in an environment that is safe. Furthermore, it’s best not to expose the server to the Internet until you are ready.

A great way to get started is at home where you can access the server from your own network without allowing access to the server through the Internet or your home router.

If and when you do deploy a Ubuntu server, you’ll want to keep the above five things in mind. It’s important to know the configuration of the server once it’s deployed so you know what the public can access and what still needs to be hardened.

Enjoy learning and don’t be afraid to break something in your safe environment, as the experience can be a great teacher when it’s time to go live.

How Do I Secure My Linux Server?

Reading Time: 6 minutes

Our last article on Ubuntu security suggestions touched on the importance of passwords, user roles, console security, and firewalls. We continue on from that article here; while the recommendations below are not unique to Ubuntu specifically (nearly all of them are considered best practice for any Linux server), they should be an important consideration in securing your server. Continue reading “How Do I Secure My Linux Server?”

Best Practices for Security on Your New Ubuntu Server: Users, Console and Firewall

Reading Time: 4 minutes

Thank you for taking the time to review this important information. You will find this guide broken down into six major sections that coincide with Ubuntu’s security policy guide. The major topics we cover throughout these articles are as follows:

User Management

User management is one of the most important aspects of any security plan. Balancing your users’ access requirements and everyday needs against the overall security of the server demands a clear view of those goals, ensuring users have the tools they need to get the job done while protecting other users’ privacy and confidentiality. We have three types, or levels, of user access:

  1. Root: This is the main administrator of the server. The root account has full access to everything on the server. The root user can lock down or loosen user roles, set file permissions and ownership, limit folder access, install and remove services or applications, repartition drives, and essentially modify any area of the server’s infrastructure. The phrase “with great power comes great responsibility” comes to mind in reference to the root user.
  2. A sudoer (user): This is a user who has been granted special access to a Linux application called sudo. The “sudoer” user has elevated rights to run a function or program as another user, and is included in a specific user group called the sudo group. The commands this user may run are defined in the sudoers file (edited with the visudo command), which limits their access and can initially be modified only by the root user.
  3. A user: This is a regular user who has been set up using the adduser command, given access to, and ownership of, the files and folders within their /home/user/ directory, as defined by the basic settings in the /etc/skel/.profile file.

Linux can add an extreme level of granularity to defined user security levels. This allows the server’s administrator (the root user) to outline and delineate as many roles and user types as needed to meet the requirements set forth by the server owner and the server’s assigned tasks.
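
As a quick illustration of the second level above, here is how you might create a regular user and then grant it sudo access on Ubuntu (the username “deploy” is just an example):

adduser deploy
usermod -aG sudo deploy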

 

Enforce Strong Passwords

Because passwords are one of the mainstays of a user’s security arsenal, enforcing strong passwords is a must. The file responsible for password requirements on Ubuntu is located at /etc/pam.d/common-password.

To enforce regular password changes as well, we can use the ‘chage’ command:

chage -M 90 username

This command simply states that the user’s password must be changed every 90 days.

Back in /etc/pam.d/common-password, a line like the following (using the pam_cracklib module) enforces a minimum length and a mix of character classes:

/lib/security/$ISA/pam_cracklib.so retry=3 minlen=8 lcredit=-1 ucredit=-2 dcredit=-2 ocredit=-1

 

Restrict Use of Old Passwords

Open the '/etc/pam.d/common-password' file under Ubuntu/Debian/Linux Mint:

vi /etc/pam.d/common-password

Add the following line to the 'auth' section:

auth        sufficient  pam_unix.so likeauth nullok

Then add the following line to the 'password' section to disallow a user from re-using their last five passwords:

password    sufficient    pam_unix.so nullok use_authtok md5 shadow remember=5

Only the last five passwords are remembered by the server. If you try to use any of the five old passwords, you will get an error like:

Password has been already used. Choose another.

 

Checking Accounts for Empty Passwords

Any account with an empty password is open to unauthorized access by anyone on the web, and checking for such accounts is part of securing a Linux server. Make sure all accounts have strong passwords and that no one has unauthorized access. Accounts with empty passwords are security risks and can be easily hacked. To check whether any accounts have an empty password, use the following command.

cat /etc/shadow | awk -F: '($2==""){print $1}'

 

What is Console Security?

Console security simply means that limiting access to the physical server itself is key to ensuring that only those with proper access can reach it. Anyone who has physical access to the server can gain entry, reboot it, remove hard drives, disconnect cables, or even power it down! To curtail malicious actors with harmful intent, we can make sure that servers are kept in a secure location. Another step we can take is to disable the Ctrl+Alt+Delete function. To accomplish this, run the following commands:

systemctl mask ctrl-alt-del.target
systemctl daemon-reload

This forces attackers to take more drastic measures to access the server and also limits accidental reboots.

What is UFW?

Ubuntu provides a default firewall frontend called UFW (Uncomplicated Firewall). UFW is simply a front end for a program called iptables, which is the actual firewall itself; UFW provides an easy means to set up and design the needed protection. This is another line of defense to keep unwanted or malicious traffic from actually breaching the internal processes of the server.

 

Firewall Logs

The firewall log is a log file that stores information about connection attempts and other traffic to the server. Monitoring these logs for unusual activity and/or malicious attempts to access the server will aid in securing it.

When using UFW, you can enable logging by entering the following command in a terminal:

ufw logging on

To disable logging, simply run the following command:

ufw logging off
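
UFW also supports several logging levels (low, medium, high, and full) if the default is too quiet or too noisy for your needs; for example:

ufw logging medium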

To learn more about firewalls, visit our Knowledge Base articles.

We’ve covered the importance of passwords, user roles, console security and firewalls all of which are imperative to protecting your Linux server. Let’s continue onto the next article where we’ll cover AppArmor, certificates, eCryptfs and Encrypted LVM.

 

How to Secure a Site in IIS

Reading Time: 6 minutes

When investigating site infections or defacements on Windows servers, the most common root cause is poor file security or poor configuration choices in how IIS is allowed to access file content. The easiest way to prevent this is to start with a secure site.

Setting up a website in IIS is exceedingly easy, but several of the default settings are not optimal when it comes to security or ease of management. Further, some practices that used to be considered necessary or standard are no longer necessary, or never were to begin with. As such, we recommend that you follow these steps to set up a website to ensure that it is set up correctly and securely. And while some of these setting or permission changes may seem nitpicky, they go a long way on systems that host multiple domains or multiple tenants, as they prevent any cross-site file access.

 

Add the Site to IIS

To add a website in IIS (Internet Information Services), open up the IIS Manager, right-click on Sites, and select Add Website. When adding a site to IIS, we typically recommend using the domain name as the “Site name” for easy identification. Next, under “Physical path”, you will need to supply the path to where your website content is located, or use the browse (“…”) button to navigate to and select the folder. Configuration options under “Connect as…” and “Test Settings…” do not need to be modified.

When it comes to configuring site bindings, popular belief suggests that you should select a specific IP from the “IP address” dropdown; however, that is based on out-of-date practices, typically in relation to how SSLs used to require dedicated IPs. This is no longer necessary and can actually cause issues when getting into any replicated or highly available configuration, so it is best to leave the IP address set to All Unassigned and type the domain name you plan to host in the “Host name” field. Do note that you can only supply one value here; additional host names can be added after creating the site by right-clicking on the site and going to Bindings. Further, depending on your needs, you may opt to select “https” instead of “http”. To host a site with an SSL, please visit our article on the subject after setting up the site to add an SSL and configure it.

 

Set the Anonymous User

Technically that is all you need to do to set up a site in IIS; however, the site may or may not work, and the security settings on the site are not optimum. The next step in securing your site is to configure the IIS user that will access your files. To do this, you will need to change the associated Anonymous user and make a few security changes on the website’s content folder.

In IIS, select your new site on the left, in the main window double click on Authentication, select Anonymous Authentication, and then click “Edit…” on the right action bar.

 

What is IUSR in IIS?

By default, a new site in IIS utilizes the IUSR account for accessing files. This account is a built-in shared account typically used by IIS to access file content across sites.

It may be okay to leave this configured if you only plan on hosting one domain; however, when it comes to hosting multiple domains, this is not secure, as it would then be possible for any site using the same account to access files from another site. As such, and as a standard practice, we recommend switching away from using the IUSR account for sites and instead selecting “Application pool identity” and clicking OK. Alternately, you could manually create a user on the system for each site; however, then you need to manage credentials for an additional user, configure permissions for two users (the anonymous user and the application pool user), and deal with possible complications from password complexity and rotation requirements your server or organization may have.

There is nothing further you need to configure in IIS in terms of security; however, for reference, let’s take a quick look at the application pool settings. To check the settings on the application pool, in IIS, select Application Pools on the left menu, select the application pool for the site you created (typically the same name as the site), and then click “Advanced Settings…” on the right action bar.

In here, the related setting is the identity, which by default is “ApplicationPoolIdentity”. This means that to access file content, IIS and the associated application pool will use a hidden, dynamic user based on the name of the application pool. This user has no associated password, can only be used by IIS, and only has access to files specifically granted to it. As such, it removes the requirement of managing system users and credentials.

 

Set Folder Permissions in IIS

Now, as mentioned, the “ApplicationPoolIdentity” user has very few permissions, so the next and last step is to ensure that the website files have proper security settings set on them. Browse through your file system and find the folder where you plan on hosting your site’s files. Right-click on the folder and go to properties. In the properties interface, select the Security tab.

By default, there are a number of security permissions set up on the folder that are unnecessary and potentially insecure (there may be more than shown here). To best secure a site, we recommend removing all but the “SYSTEM” and “Administrators” groups and adding the “ApplicationPoolIdentity” user (and possibly any other user you may require, such as an FTP user); however, doing so requires disabling inheritance. To do this, click on “Advanced”, then click on “Disable inheritance”.

 

Here you will get a popup asking if you want to copy the current settings or start with no settings. Either option can work; however, it is easier to copy the current settings and then remove the unnecessary permissions. So select “Convert inherited permissions into explicit permissions on this object” and then click OK.

At this point, to remove the unnecessary permissions, click Edit and remove everything other than the “SYSTEM” and “Administrators” groups. Next, you need to add the “ApplicationPoolIdentity” user to this folder. To do this, click “Add…”. Now, depending on your server configuration, you may get a pop-up asking you to authenticate to an Active Directory domain. Simply click the cancel button a few times until you get the Select Users or Groups screen shown below.

On this screen, you will want to make sure that the “Location” selected is your computer. If it is not, click “Locations…” and select your computer (it should be at the top; you may also need to click cancel on some authentication windows here as well).

The “ApplicationPoolIdentity” user is a hidden user, so it is not possible to search for this user.  As such, you will have to type the username to add it. The username you will need to type is “IIS AppPool\<applicationpoolname>“.  Please see the following example and fill yours out accordingly:

Once you type the user name, click OK.  Now that you’ve added the user, which is by default only granted read permissions, you will want to verify your security settings look similar to the following image, and then click OK.

And with that, you’re done and have a secure site ready to be viewed by the masses without needing to fear that hackers will deface it.

 

Securing within Powershell

As a bonus, if you’re looking to get your feet wet with some PowerShell, the steps covered in this article can also be accomplished on a Windows Server 2012 or newer server through PowerShell. Simply fill out the first two variables with your domain name and the path to your content, and then run the rest of the PowerShell commands to set up the site in IIS and configure folder permissions.

[String]$Domain = '<domain_Name>'

[String]$Root = '<path_to_your_content>'

Import-Module WebAdministration

#Create App pool & Website
New-WebAppPool -Name $Domain
New-Website -Name $Domain -HostHeader $Domain -PhysicalPath $Root -ApplicationPool $Domain
Set-WebConfigurationProperty -Filter system.webServer/security/authentication/anonymousAuthentication -Location $Domain -PSPath MACHINE/WEBROOT/APPHOST -Name userName -Value ''

#Optionally add www. Binding
New-WebBinding -Name $Domain -HostHeader www.$Domain -ErrorAction SilentlyContinue

#Remove inheritance (copy)
$ACL = Get-ACL $Root
$ACL.SetAccessRuleProtection($True,$True) | Out-Null
$ACL.Access | ?{ !(($_.IdentityReference -eq 'NT AUTHORITY\SYSTEM') -or ($_.IdentityReference -eq 'BUILTIN\Administrators')) } | %{ $ACL.RemoveAccessRule( $_ ) } | Out-Null
$ACL | Set-ACL

#Add IIS user permissions
$ACL = Get-ACL $Root
$acl.SetAccessRuleProtection($False, $True)
$Rule = New-Object System.Security.AccessControl.FileSystemAccessRule("IIS AppPool\$Domain", "ReadAndExecute", "ContainerInherit, ObjectInherit", "None", "Allow")
$acl.AddAccessRule($Rule)
$acl | Set-Acl

Additional Notes: In some cases, sites may need additional write or modify permissions on specific files or folders for file uploads, cache files, or other content. It is important that you do not apply modified permissions to the entire site. Instead, modify specific directories or files as needed. To apply these settings, go to the file or folder that needs modification, right-click on it, and select Properties. Switch to the Security tab and click Edit. In there, select the user that has the name of the website (liquidweb.com in my example above), check Modify under the Allow column, and then click OK. This will give the ApplicationPoolIdentity, and thus IIS, the ability to write to or modify the file(s) or folder(s).

Still need additional protection for your Liquid Web server? Our Server Protection packages provide a suite of security tools especially for Windows servers. You’ll get routine vulnerability scans, hardened server configurations, anti-virus, and even malware cleanup should your site get hacked. Don’t wait another vulnerable minute; check out how we can protect you.