Managed Dedicated Server Overview

Are you looking for hosting? You may be asking yourself what the best solution is for your website or online application, one that provides the uptime and reliability you need with the best support in the industry. Let’s go through a breakdown of our traditional Dedicated Server options to get a full picture of what these servers offer.

A traditional Dedicated Server means pure hardware. It’s customized and built to the specifications you select on our website. All resources (CPU, RAM, Disk) are “dedicated” to only you and no other customer.

What does it mean to be a Fully Managed customer?

Tired of struggling with server configuration and maintenance? That is where our Fully Managed Heroic Support comes in. Full Management offers a variety of support options for your server:

  1. Fully Managed (CentOS – WHM/cPanel and/or Windows – Plesk)
  2. Heroic Support Accessible 24/7/365
  3. Fully Managed Network Infrastructure
  4. Fully Managed Hardware
  5. Server housed in our wholly owned Liquid Web Data Centers
  6. Level 3 Technicians On-Site 24/7/365
  7. System Level Monitoring Alerts & Notifications
  8. System Level Health Monitoring and Graphing
  9. 100% Uptime SLA on These Items
  10. Installation and Full Support of Core Software Package
  11. Core Operating System Updates & Patches
  12. Security Enhancements
  13. Virus and Spam Protection
  14. Full Control Panel Support
  15. Control Panel Updates and Patches
    To see a full support comparison, see the Support Comparison Chart.



Experiencing performance issues? Large growth? These are early signs that you may need a Dedicated Server to handle your processes. Let’s talk about the key indicators that call for a Dedicated Server.

Performance Issues
Have an e-commerce site and experiencing slow load times when your site is busy? Consider upgrading to a Dedicated Server to really kick your website into high gear. An influx of traffic coming to your site demands more from your server’s resources.

Each time a user submits an order via your e-commerce site, a read/write request to the MySQL database is performed. Multiply this process by 1,500+ users performing the same action, and your hard disk read/write activity spikes. On a regular VPS, this spike may leave your server struggling to serve visitors. Dedicated Servers using Solid State Drives (SSDs) can deliver far faster read/write speeds.
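To see the difference on your own hardware, a quick sequential-write test with `dd` gives a rough throughput number. This is a sketch; the 64MB size and /tmp path are arbitrary choices for illustration:

```shell
# Rough sequential write test. conv=fdatasync flushes the data to disk so
# the reported speed reflects the drive itself, not the OS page cache.
dd if=/dev/zero of=/tmp/dd_write_test bs=1M count=64 conv=fdatasync
rm -f /tmp/dd_write_test
```

On an SSD-backed server the reported MB/s figure will typically be several times higher than on a 7,200 RPM SATA drive.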

 

Increased Traffic
Having a popular site is a great problem to have! Traffic spikes can happen for various reasons. A Dedicated or Cloud Dedicated Server is your first step toward ensuring you are ready for increased traffic. If you require scalability, our Cloud products give you the flexibility of resizing, upgrading, or downgrading your server with a simple click of a button. We don’t stop there: you may be in a position to explore our Enterprise options to handle high-demand traffic!

Server Errors
Pulling up your website only to see a “500 Internal Server Error” can be disastrous for your online business. The most common errors come from a lack of resources to keep your website online and stable. To avoid this type of error, placing yourself on Dedicated hardware is the first step. This goes hand in hand with increased traffic: a larger flow of visitors can also cause issues when Dedicated hardware is lacking.

Server Specifications
Choosing a Dedicated Server can be quite the task if you are not familiar with the hardware. Let’s break down each server component and talk about how it can benefit your application or website.

Server Central Processing Unit (CPU) and Cores
On our Dedicated Servers page, you’ll see that our initial offering is a Single CPU, Quad Core (Intel E3-1230 v5). This means you have a single CPU socket on the motherboard with 4 cores at 3.4GHz. Each server CPU provides a different clock speed, which is important for performance. The higher the clock speed, the faster the CPU can process data for your website.
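You can check the socket count, core count, and model information on any Linux server from the shell; a quick sketch using standard util-linux tools:

```shell
# Sockets, cores per socket, and CPU model/clock information:
lscpu | grep -E 'Model name|Socket\(s\)|Core\(s\) per socket'

# Total logical processors visible to the OS:
nproc
```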

Server Memory (RAM)
Liquid Web provides you with 16GB of DDR4 memory on traditional Dedicated Servers. The majority of large websites built around a Content Management System use Apache, PHP, and a MySQL database. Ensuring your server has enough RAM is crucial to maintaining performance, as all of these software components require RAM.

For example, a small WordPress site with less than 1,000 visitors per day would be fine to start off with 16GB of RAM. Yet a large e-commerce site with 5,000+ users per day will need extra RAM to handle simultaneous requests.
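To see how much of your RAM those components are actually using, `free` gives a quick summary; a sketch that works on any Linux host:

```shell
# Total, used, and available memory in human-readable units:
free -h
```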

Hard Disk Drives and Solid State Drives
Configuring a Dedicated Server on our website? You’ll notice that we provide Solid State Drives for your primary hard disk. We also provide you with a free backup drive of 1TB (SATA, 7,200 RPM)! You’ll be placing your Linux/Apache/PHP/MySQL stack onto SSDs for quicker response times. With SSDs, your CPU and RAM spend far less time waiting on disk I/O. If a SATA drive were used to run your LAMP stack, performance would decrease because a physical platter has to spin at 7,200 revolutions per minute (RPM). Comparing SSD to SATA, our SSDs deliver roughly 10x the performance of our SATA drives. SATA drives are ideal for massive amounts of storage and backups: files that do not need frequent access or reading/writing.
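Not sure which of your drives are SSDs? `lsblk` reports a ROTA (rotational) flag for each disk; a sketch using standard util-linux tooling:

```shell
# ROTA=0 indicates a solid state drive; ROTA=1 a spinning (SATA) disk.
lsblk -d -o NAME,ROTA,SIZE,TYPE
```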

RAID
A Redundant Array of Independent Disks (RAID) combines several physical disks into a single array. There are different levels of RAID, each offering its own balance of fault tolerance, performance, and capacity, built from storage techniques such as striping, mirroring, and parity. If your server already has Solid State Drives, adding a RAID array further increases performance and redundancy.

For example, 2x 250GB SSD drives in RAID1 is a great start. If there is high demand from your users, you can step up to a RAID10 array of 4x 250GB SSD drives. This increases both storage size and performance, with more drives mirroring and striping together on top of adding redundancy.
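The usable capacity for those two layouts works out as follows; a quick arithmetic sketch (mirrored layouts give you half the raw total):

```shell
# RAID1: two mirrored 250GB drives, usable space equals one drive.
# RAID10: four 250GB drives in striped mirrors, usable space is half the raw total.
raid1=$((250 * 2 / 2))
raid10=$((250 * 4 / 2))
echo "RAID1 usable:  ${raid1}GB"
echo "RAID10 usable: ${raid10}GB"
```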

Bandwidth
One of the largest benefits of a Liquid Web Dedicated Server is the included 5 Terabytes of bandwidth. This comes at no added cost, and only outgoing bandwidth counts against you! There are two types of bandwidth transfer: incoming and outgoing. When you upload files via FTP to your server, that is inbound bandwidth. When a user visits your website or application, the data your server sends back in response is outbound. We offer higher bandwidth packages, along with unlimited options if required.
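The kernel keeps per-interface inbound (RX) and outbound (TX) byte counters that you can read directly. A sketch, using the loopback interface `lo` so it runs anywhere; substitute your public interface (for example `eth0`):

```shell
# Inbound vs. outbound traffic counters, straight from sysfs:
IFACE=lo
echo "RX (inbound) bytes:  $(cat /sys/class/net/$IFACE/statistics/rx_bytes)"
echo "TX (outbound) bytes: $(cat /sys/class/net/$IFACE/statistics/tx_bytes)"
```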

What Kind Of Dedicated Server Is Right For Me?
The big question: how do I know what I need? A few recommendations can help you get an idea of your needs. If your website is with another hosting provider, contact that provider and request your current server’s specifications. Not all companies will give you the details you need, but this can be very helpful for comparing apples-to-apples services across hosts.
If you are about to start a new online project or want to switch to a Dedicated Server, it’s recommended you chat or speak to one of our Hosting Advisors. They can assist you with a fitted package recommendation or answer any technical questions. Check out our Dedicated or Cloud Dedicated solutions to order yours today!

 

How to Use IPMI

IPMI (Intelligent Platform Management Interface) is a great way to manage your server remotely. Having IPMI combined with a Liquid Web VPN is similar to having a remote Kernel-based Virtual Machine (KVM) attached to your server. You’ll be able to perform actions remotely that traditionally could only be accomplished while physically present at the machine. This includes viewing the startup process, changing BIOS settings, installing the OS, and even power cycling your server. This guide is intended to walk you through the IPMI web interface and explain the various pages. If you need help accessing IPMI, try this Knowledge Base article instead!

Note:
Some functionality of the IPMI portal has been locked down by Liquid Web. As a customer, you have “Operator” level permissions. Only IPMI “Administrators” can perform specific actions in the web portal. This article covers what is primarily available to IPMI Operators!

This view is the first page displayed when you log into the IPMI web portal. There are a few important pieces of information on this page, including your IPMI IP address, the firmware revision of the IPMI BMC, and your system’s MAC addresses. The “Remote Console Preview” page gives you a small thumbnail display of what the video display would look like if directly connected to your server. Also note that you can perform some power cycling actions from this page, including “Power On,” “Power Down,” and “Reset.”

System Info within IPMI

 

While there is not much to look at on this page, it is one of the most important pages on the web portal! Clicking the “Launch Console” button will allow you to remotely connect to your server as if you had a KVM installed. When you click the button, your browser will prompt you to download a new file called “launch.jnlp.”

Note:
You will need Java installed to run this application.

Console Redirection page shows the "Launch Console" button

 

The “Event Log” page displays some fundamental logging information from the IPMI console. This page will keep a record of IPMI logins, and some other information on who accessed the system.

Note:
IPMI Operators will only be able to view these logs. Only IPMI Administrators maintain the ability to clear the logs.

Event Log shows who has accessed the server.

 

On this page, you can mount a CD-ROM ISO stored remotely on a Windows share, which can be useful if you would like to install a custom operating system remotely.

Note:
Installing a custom operating system may hamper Liquid Web’s ability to assist you! We have many officially supported operating systems available, ask your sales representative for more info.

IPMI gives you the ability to add your own OS.

 

The Virtual Media page allows you to upload a small binary image (1.44MB max size) directly to the IPMI controller in your server, allowing you to boot from legacy “floppy disk” images. While mostly unnecessary in today’s tech landscape, this option can still be helpful to some users.

IPMI gives you the ability to add a binary image through a floppy disk.

 

The Server Health page displays a small amount of information, mostly version details for the IPMI product itself.

Note:
Under normal circumstances, many of these fields will be blank, and there is limited information available on this page.

Check the version of your IPMI instance.

 

This page displays information gathered by sensors on the motherboard. You can see information on many physical aspects of your server here. For example, some data here includes fan speed, component temperatures, voltage readings on the CPU and RAM, and more.

Sensor Readings show fan speed, temps, CPU and RAM.

 

The “SOL” Console (Serial Over LAN Console) is a serial console connection to your server. Its use cases are narrow: it is only useful for redirecting serial input/output over the LAN.

The serial console connection, useful for redirecting serial input/output over LAN.

 

That covers the functionality available to IPMI Operators. When appropriately used, IPMI can be a valuable tool for maintaining your server, providing a similar level of access as if you were physically present in front of it. This capability used to be possible only by purchasing additional, expensive KVM hardware. Liquid Web Dedicated Servers include this functionality as standard at no extra cost! Give us a call if you have any questions, or if you would like to discuss getting an IPMI-capable server.

 

SSL vs TLS

You may have first heard about TLS because your Apache service needed to be secured using TLS for a PCI scan (Payment Card Industry: PCI scans are a standard to ensure server security for credit card transactions). Or maybe you noticed that your SSL also mentions TLS when you are ordering the certificate. Beyond where you heard the names, the question is, what is this mysterious TLS in relation to SSL and which of the two should you be using?

So what is the difference between SSL and TLS? Surprisingly, not much. Most of us are familiar with SSL (Secure Socket Layer) but not TLS (Transport Layer Security), yet they are both protocols used to send data online securely. SSL is older than TLS, but all SSL certificates can use both SSL and TLS encryption. Indeed, SSL certificates are more accurately called SSL/TLS certificates, but that becomes a mouthful, so the industry has stuck with calling them SSLs. From here on out, I will break from convention and call the actual certificate an “SSL certificate” to distinguish the encryption type from the certificate. SSL has its origins in the early 1990s; the mention of Netscape and AOL should date how old these protocols are, as Netscape coined the term SSL.

 

Green Lock On Webpage

If you look up to the upper left corner of this webpage, you may see a very tiny lock and the word “Secure” written in green. While that doesn’t look like much, it plays a critical part in security. The SSL is what your web browser uses to show that data sent from your computer is safe. SSL certificates create a secure tunnel for HTTPS communication. HTTPS stands for Hyper Text Transfer Protocol Secure, differentiating it from HTTP (Hyper Text Transfer Protocol), which has no SSL present. If you see a red lock or a caution sign in the corner of your web browser, the connection is not encrypted, meaning a malicious third party could read any data sent on that webpage.

A secure connection happens via what is called a “handshake” between your browser and the web server. A simplified explanation of this is that the server and your browser agree on a literal “secret” handshake between each other based upon the type of encryption (SSL/TLS) and the SSL certificate itself. This handshake forms its encoding from the interaction of the public and private certificate key. From that point onward they use this secret handshake to confirm the information sent back and forth is from the authentic source.

This handshake and the accompanying SSL certificate help prevent a man in the middle attack between customers and a server-side business. A man in the middle attack is where a malicious entity intercepts communication between a server and your computer. The man in the middle receives requests from the user and passes the information along to the server and back again, reading the data between the end user and the server in transit, hence the “man in the middle” phrase. A successful Man in the Middle attack can expose passwords and other sensitive information. As terrifying as that sounds, this attack is only practical when there is no SSL certificate on the site.

As you may have heard, Google and Firefox are phasing out non-SSL/TLS encrypted websites. The change will soon show an explicit warning in the browser for any site not covered by an SSL certificate. The browsers will force you to acknowledge that you want to proceed to an insecure website before showing any content.

For business owners who accept online payments, it is even more critical to not only have an SSL certificate but also enforce the latest TLS versions on the server. A PCI compliance scan requires that the domain only use specific TLS versions.

SSL and TLS each have specific versions which relate to the type of encryption that the SSL certificate will use in the previously mentioned handshake.

The SSL versions are:

  • SSL v1
  • SSL v2
  • SSL v3

SSL v1 was never released to the public but is still noted; SSL v2 was an improvement upon SSL v1 but still problematic; SSL v3 fixed some of these initial bugs but is open to attacks through vulnerabilities like POODLE or DROWN (read more on those vulnerabilities here). SSL v3 reached end of life in 2015, forever ago in internet terms.

Modern TLS encryptions cover:

  • TLS v1.0
  • TLS v1.1
  • TLS v1.2
  • TLS v1.3

Each addresses flaws from the version before it. The newer encryptions are just that: more modern and more secure ways to encrypt data. The later the release, the stronger the encryption and the more difficult it is for malicious third parties to decrypt. Conversely, the older versions, as with SSL, have vulnerabilities that can be exploited to collect private data. In many ways, you can think of TLS as the newer version of SSL. Some refer to TLS v1.0 as TLS v1.0/SSL v3.1.

For the interested technophile, here is how the first connection relates to the handshake example. The browser opens with a “ClientHello,” the first exchange in the handshake, stating the versions of TLS it accepts, say, for example, everything up to TLS v1.1. The server then replies with a “ServerHello,” the second exchange in the handshake, stating the encryption version that will be used for the rest of the interaction based on that first exchange.

This interaction should force the newest version of SSL/TLS that both the server and browser are capable of handling. Some outdated browsers do not use the latest versions of TLS. The server can also disable specific TLS/SSL versions, ensuring all connections to the server are safer. In this way, new servers should disable all SSL versions and even some TLS versions. For example, as of September 2018, PCI certification requires all SSL versions and TLS v1.0 to be disabled.
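On an Apache server with mod_ssl, that lockdown is a one-line directive. A sketch; exactly which versions you disable depends on your compliance requirements:

```apache
# Allow all protocols except the ones known to be weak:
SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1
```

Restart Apache after making the change (for example `systemctl restart apache2` or `service httpd restart`, depending on your distribution) for it to take effect.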

So back to our original question, what is the difference between SSL and TLS? In sum, TLS is the logical progression of SSL and the safer of the two by that fact. Beyond this, they work in the same fashion, but the newer versions use stronger types of encryption.

If you are interested in getting an SSL for your website, check out our blog to help you make an informed decision on what kind of SSL to purchase.

Editing DNS Zone Files in WHM/cPanel

When using custom name servers, it is essential to update the DNS in cPanel/WHM; doing so is a component of hosting your own DNS. To use custom name servers, you must update the nameservers at your domain’s registrar to match your Liquid Web server’s hostname. If you are unsure how to do this, see our article Setting Up Private Name Servers in WHM/cPanel. It is also critical to have created a cPanel account and added the domain to your WHM panel; if you haven’t already, follow our article, How To: Create a cPanel Account in WHM. Additionally, access to your registrar’s control panel is necessary to update the name servers. If you are unsure who your registrar is, learn how to locate where your domain’s DNS is hosted by following the instructions in our article, Where Is My DNS Hosted?

Knowing your DNS provider is imperative in guaranteeing that you’re pointing your name servers to your Liquid Web server. There is not much use in updating the records you see in WHM if your name servers are not pointing to the Liquid Web server. Any updates you want to take effect must be made on the authoritative DNS server (in this case, Liquid Web), as this is the server actually responding to DNS requests.

Once you’ve set up your custom nameservers and created a cPanel account, the final step is to edit the DNS in your WHM/cPanel account.
If you’re setting your records through WHM/cPanel and your WHOIS information reflects the correct name servers, then you are ready to make changes to your DNS. There are several different kinds of DNS records you can set up, but the most essential, and the ones this article focuses on, are the A and CNAME records.

After logging into WHM, navigate to “Edit DNS Zone” under “DNS Functions,” and select the domain whose DNS you want to edit. Once it is highlighted, select “Edit” to update that domain’s records.

Edit DNS in WHM

Once you enter the zone file, none of the changes you make will take effect until you save them, so you can back out at any time and start over, a good tactic if you think you have messed up the syntax. Many parts of the zone file will never change, so we will focus on the three fields you are most likely to edit:

  • Domain
  • TTL (Time to Live)
  • Record Type

These are significant fields for the function of the DNS, and within each area specific nuances tend to raise questions, but the fundamentals remain simple. The Domain field should be the domain name followed by a trailing period (.). Anywhere a Fully Qualified Domain Name (FQDN) is used, never forget the trailing period. If you are not using an FQDN in the domain field, you can use the sub-domain, which does not require the trailing period.

This image shows several different sub-domain names and how their syntax differs from FQDNs:

DNS Zone in WHM

The TTL column controls how long the record remains cached before the general public must re-request the DNS record from the source. Caching is convenient during migrations: lowering the TTL ahead of time means cached records expire sooner, effectively minimizing downtime! The IN field always needs to be set to IN, so it is best not to change this field.

Lastly, we have the record type, which involves two steps: selecting the record type and filling in the data field. For example, if you add an A record, an IP address must follow in the adjacent field. If using a CNAME, you’ll put an FQDN in its adjoining field, and again, don’t forget the trailing period!
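In zone-file syntax, the two record types look like this; a sketch in which `example.com` and the 192.0.2.10 address are placeholders:

```
; An A record maps a name to an IP; a CNAME aliases one name to another.
; Note the trailing period on every fully qualified name.
example.com.    14400    IN    A        192.0.2.10
www             14400    IN    CNAME    example.com.
```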

The WHM editor is broken into two sections: one for modifying existing records and one for adding a record. In this example, we are going to add an A record for a new sub-domain, “files.domain.com”; if propagation is a factor, we can lower the TTL:

TTL field in WHM

In this example, we’ve added a new record to the zone file using a TTL of 300 seconds and pointed the sub-domain to its IP. It will take at least 300 seconds, and up to 24 hours in some cases, before you can reach that domain from your browser or through a DNS lookup. Once you have added this info, you can save your changes! For the sake of brevity, we will skip the mail exchanger settings.

So far, we have only discussed editing DNS in WHM. Editing in cPanel is much more straightforward, offering fewer record options. cPanel does not give the panel user full permission to edit the zone file, though they can add A and CNAME records, which is necessary for adding such elements as CDNs (Content Delivery Networks) or sub-domains.

If you need to add an A record through the cPanel you will want to search for and click on “Zone Editor” through the cPanel interface:

cPanel Zone Editor

Once in the Zone Editor, you will have the option of adding an A, CNAME, or MX record. You will see the options next to a plus sign:

Add A Record in Zone Editor

This becomes useful when setting up services like CDN or the like. CDN services typically require a CNAME record to be added. In the following example, we are adding a CNAME to dnsexample.com for the sub-domain cdn.dnsexample.com:

CNAME Record for a Sub Domain Record in cPanel

Once complete, select “add” and the record will be saved. One advantage of the cPanel view versus the WHM view of the DNS record is the ability to filter your search by record type:

Filtering Records in cPanel

 

In closing, when hosting DNS on your server, you can edit records through WHM >> Edit DNS Zone. Altering records will have no effect if the name servers are not pointed at the appropriate server. In general, you won’t need to change most fields in the zone file, except for Domain, TTL, and Record Type. Through WHM you can edit existing records or add new ones. To get started with editing your DNS records and locating your DNS provider, our easy-to-follow DNS article can assist you.

 

Getting Started with Ubuntu 16.04 LTS

A few configuration changes are needed as part of the basic setup with a new Ubuntu 16.04 LTS server. This article will provide a comprehensive list of those basic configurations and help to improve the security and usability of your server while creating a solid foundation to build on.

Root Login

First, we need to get logged into the server. To log in, you will need the Ubuntu server’s public IP address and the password for the “root” user account. If you are new to server administration, you may want to check out our SSH tutorial.
Start by logging in as the root user with the command below (be sure to enter your server’s public IP address):
ssh root@server_ip

Enter the root password mentioned earlier and hit “Enter.” You may be prompted to change the root password upon first logging in.

 

Root User

The root user is the default administrative user within a Linux (Ubuntu) environment and has extensive privileges. Regular use of the root account is discouraged, because part of the power inherent in root is its ability to make very adverse changes, even by accident.
The solution is to set up an alternative user account with reduced privileges and make it a “superuser.”

 

Create a New User

Once you are logged in as root, we need to add a new user account to the server. Use the below example to create a new user on the server. Replace “test1” with a username that you like:

adduser test1

You will be asked a few questions, starting with the account password.
Be sure to enter a strong password and fill in any of the additional information you wish. This information is optional; you can just hit ENTER on any field to skip it.

 

Root Privileges

We should now have a new user account with regular account privileges. That said, there may be a time when we need to perform administrative level tasks.
Rather than continuously switching back and forth with the root account, we can set up what is called a “superuser,” or root privileges, for a regular account. Granting a regular user administrative rights will allow this user to run commands with administrative (root) privileges by putting the word “sudo” before each command.
To give these privileges to the new user, we need to add the new user to the sudo group. On Ubuntu 16.04, users that belong to the sudo group are allowed to use the sudo command by default.
While logged in as root, run the below command to add the newly created user to the sudo group:

usermod -aG sudo test1

That user can now run commands with superuser privileges using the sudo command!

 

Public Key Authentication

Next, we recommend that you set up public key authentication for the new user. Setting up a public key will configure the server to require a private SSH key when you try to log in, adding another layer of security to the server. To set up public key authentication, please follow the steps outlined in our “Using SSH Keys” article.
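If you just need the short version, generating a key pair and installing it looks roughly like this. A sketch run from your local machine: the key filename `id_ed25519_example` and the `test1`/`server_ip` placeholders are illustrative, and `-N ""` skips the passphrase purely for brevity (use a passphrase in practice):

```shell
# Generate an ed25519 key pair on your LOCAL machine:
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -f "$HOME/.ssh/id_ed25519_example" -N "" -q

# Then install the public half for the new user on the server:
#   ssh-copy-id -i ~/.ssh/id_ed25519_example.pub test1@server_ip
```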

 

Disable Password Authentication

Following the steps outlined in the previously mentioned “Using SSH Keys” article gives the new user the ability to log in with an SSH key. Once you have confirmed the SSH key is working, we can proceed with disabling password authentication to increase the server’s security even further. Doing so restricts SSH access to public key authentication only, limiting entry to your Ubuntu server to the keys installed on your computer.

Note
You should only disable password authentication if you successfully installed and tested the public key as recommended. Otherwise, you have the potential of being locked out of your server.

To disable password authentication on the server, start with the sshd configuration file. Log into the server as root and make a backup of the sshd_config file:

cp /etc/ssh/sshd_config /etc/ssh/sshd_config.backup

Now open the SSH daemon configuration using nano:

nano /etc/ssh/sshd_config

Find the line for “PasswordAuthentication” and delete the preceding “#” to uncomment the line. Change its value from “yes” to “no” so that it looks like this:

PasswordAuthentication no

The below settings are important for key-only authentication and are set by default. Be sure to double-check that they are configured as shown:

PubkeyAuthentication yes
ChallengeResponseAuthentication no

Once done, save and close the file with CTRL-X, then Y, then ENTER.
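As an aside, the same edit can be made non-interactively with sed instead of nano. This sketch operates on a scratch copy in /tmp so it is safe to try anywhere; on a real server you would target /etc/ssh/sshd_config after making the backup described above:

```shell
# Scratch copy standing in for /etc/ssh/sshd_config:
printf '#PasswordAuthentication yes\n' > /tmp/sshd_config.test

# Uncomment the directive (if needed) and force its value to "no":
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /tmp/sshd_config.test

grep '^PasswordAuthentication' /tmp/sshd_config.test   # PasswordAuthentication no
```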

We need to reload/restart the SSH daemon to recognize the changes with the below command:

systemctl reload sshd

Password authentication is now disabled, and access restricted to SSH key authentication.

Set Up a Basic Firewall

The default firewall management tool on Ubuntu is iptables. Iptables offers powerful functionality, but its complex syntax can be confusing for a lot of users. A more user-friendly interface can make managing your firewall much easier.
Enter Uncomplicated Firewall (UFW), the recommended alternative to iptables for managing firewall rules on Ubuntu 16.04. Most standard Ubuntu installations include UFW by default, and a few simple commands can install it where it is not present.

 

Install UFW

Before performing any new install, it is always best practice to run a package update; you’ll need root SSH access to the server. Updating helps ensure you get the latest version of each software package. Use the below commands to update the server packages, then we can proceed with the UFW install:

apt update

apt upgrade

With the packages updated, it’s time for us to install UFW:

apt install ufw

Once the above command completes, you can confirm the UFW install with a simple version command:

ufw --version

UFW is essentially a wrapper for iptables and netfilter, so there is no need to enable or restart the service with systemd. Though UFW is installed, it is not “on” by default. The firewall still needs to be enabled with the below command:

ufw enable

Note
UFW does not import pre-existing iptables rules; any you need must be recreated. It is best to set up your basic firewall rules (especially SSH access) and then enable UFW, to ensure you are not accidentally locked out while working via SSH.

 

Using UFW

UFW is easy to learn! Various programs can provide support for UFW in the form of app profiles which are pretty straightforward. Using the app profiles, you can allow or deny access for specific applications. Below are a few examples of how to view and manage these profiles:

  • List all the profiles provided by currently installed packages:

ufw app list

Available applications:
Apache
Apache Full
Apache Secure
OpenSSH

  • Allow “full” access to Apache on port 80 and 443:

ufw allow "Apache Full"

Rule added
Rule added (v6)

  • Allow SSH access:

ufw allow "OpenSSH"

Rule added
Rule added (v6)

  • View the detailed status of UFW:

ufw status verbose

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action From
--                         ------ ----
22/tcp (OpenSSH)           ALLOW IN Anywhere           
22/tcp (OpenSSH (v6))      ALLOW IN Anywhere (v6)

As you can see, the App profiles feature in UFW makes it easy to manage services in your firewall. Newer servers will not have many profiles to start with. As you continue to install more applications, any that support UFW are included in the list of profiles shown when you run the ufw app list command.

If you have completed all of the configurations outlined above, you now have a solid foundation to start installing any other software you need on your new Ubuntu 16.04 server.

 

Understanding the DNS Process

Do you ask yourself, “What is DNS?” “Do I need to use DNS?” Do you feel confused? In some cases, DNS can seem convoluted and complicated. Let’s talk about Domain Name System (DNS) services. When you need to access a website, you type the domain name, such as www.google.com, into the web browser instead of typing an IP address. A conversion happens between www.google.com and 172.217.12.46, an IP address assigned to a device on the Internet. This conversion is a DNS query, an integral part of how devices connect with each other to communicate over the internet. To understand the DNS query process, let’s talk about how a DNS query routes through the different components involved.

Step 1: Requesting Website Information

First, you visit a website by typing a domain name into a web browser.  Your computer will start resolving the hostname, such as www.liquidweb.com. Your computer will look for the IP address associated with the domain name in its local DNS cache, which stores DNS information that your computer has recently saved.  If it is present locally, then the website will be displayed. If your computer does not have the data stored, then it will perform a DNS query to retrieve the correct information.

Step 2: Contact the Recursive DNS Servers

If the information is not in your computer’s local DNS cache, it will query the recursive DNS servers from your Internet service provider (ISP). Recursive DNS servers have their own local DNS cache, much like your computer. Given that many of the ISP’s customers use the same recursive DNS servers, there is a good chance that common domain names are already in the cache. If the domain is cached, the DNS query ends here and the website is displayed to the user.

Step 3: Query the Authoritative DNS Servers

If a recursive DNS server or servers do not have the information stored in its cache memory, the DNS query continues to the authoritative DNS server that has the data for a specific domain. These authoritative name servers are responsible for storing DNS records for their respective domain names.

Step 4: Access the DNS Record

For our example, to find out the IP address for www.liquidweb.com, we query the authoritative name server for the address record (A record). The recursive DNS server accesses the A record for www.liquidweb.com from the authoritative name servers and stores the record in its local DNS cache. If other DNS queries request the A record for www.liquidweb.com, the recursive server already has the answer and will not have to repeat the DNS lookup process. All DNS records have a time-to-live (TTL) value, which indicates when a DNS record expires. After the TTL passes, the recursive DNS server will ask for an updated copy of the DNS record.

Step 5: Final DNS Step

The recursive DNS server has the information and returns the A record to your computer. Your computer stores the DNS record in its local DNS cache, reads the IP address from the record, and passes this information to your browser. The web browser then connects to the web server associated with the A record’s IP and displays the website.

The entire DNS lookup process, from start to finish, takes only milliseconds to complete. For a more profound understanding let’s break down the previously mentioned DNS components that are relevant to the DNS lookup process.
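The cache-first behavior described in Steps 1–5 can be sketched in a few lines of Python. The resolver below is a hypothetical stand-in, not a real DNS client: the upstream callable plays the role of the full recursive/authoritative lookup, and the TTL controls when the cache entry expires.

```python
import time

class CachingResolver:
    """Toy model of a recursive resolver's cache-first lookup."""

    def __init__(self, upstream):
        self.upstream = upstream  # callable: hostname -> (ip, ttl_seconds)
        self.cache = {}           # hostname -> (ip, expires_at)

    def resolve(self, hostname, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(hostname)
        if entry and entry[1] > now:       # cache hit, record not yet expired
            return entry[0]
        ip, ttl = self.upstream(hostname)  # cache miss: perform the full lookup
        self.cache[hostname] = (ip, now + ttl)
        return ip

calls = []
def upstream(host):
    calls.append(host)                     # count how often we hit "the internet"
    return ("172.217.12.46", 300)          # hypothetical A record, 300s TTL

r = CachingResolver(upstream)
r.resolve("www.google.com", now=0)    # miss: queries upstream
r.resolve("www.google.com", now=100)  # hit: answered from cache
r.resolve("www.google.com", now=400)  # TTL expired: queries upstream again
print(len(calls))  # 2
```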

The DNS Process

Authoritative DNS Server

An authoritative name server is a DNS server that stores DNS records (A, CNAME, MX, TXT, etc.) for domain names. These servers will only respond to DNS queries for locally stored DNS zone files.  For example, if a DNS server in my network has a stored A record for example.com, then that DNS server is the authoritative server for the example.com domain name.

Recursive Nameserver

A recursive name server is a DNS server that receives DNS queries and answers them on behalf of clients. These types of DNS servers do not store authoritative DNS records. When a DNS query is received, the server searches its cache memory for the IP address tied to the hostname in the query. If the recursive name server has the information, it returns a response to the query sender. If it does not have the record, the query is forwarded on to other name servers until it reaches an authoritative DNS server that can supply the IP address.

A DNS zone is an administrative space within the Domain Name System (DNS). A DNS zone forms one part of the DNS namespace delegated to administrators or specific entities. Each zone contains the resource records for all of its domain names.

A DNS zone file is a text file stored on a DNS server that contains all the DNS records for every domain within that zone. It is mandatory for the zone file to have the TTL (Time to Live) listed before any other information. The TTL specifies how long a DNS record is in the DNS server’s cache memory. The zone file can only list one DNS record per line and will have the Start of Authority (SOA) record listed first. The SOA record contains essential domain name information including the primary authoritative name server for the DNS Zone.
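The layout described above (the TTL listed first, the SOA record leading, and one record per line) can be sketched with a toy parser. The zone contents below are hypothetical examples, not a real production zone.

```python
# Hypothetical zone file text following the layout described above.
ZONE = """\
$TTL 86400
example.com.  IN SOA   ns1.example.com. admin.example.com. (2024010101 7200 3600 1209600 86400)
example.com.  IN NS    ns1.example.com.
example.com.  IN A     203.0.113.10
www           IN CNAME example.com.
"""

def parse_zone(text):
    """Return (default_ttl, records), enforcing TTL-first and SOA-first."""
    lines = [line for line in text.splitlines() if line.strip()]
    assert lines[0].startswith("$TTL"), "TTL must be listed before any records"
    default_ttl = int(lines[0].split()[1])
    # Each record line: name, class, type, data (one record per line).
    records = [line.split(None, 3) for line in lines[1:]]
    assert records[0][2] == "SOA", "SOA record must be listed first"
    return default_ttl, records

ttl, records = parse_zone(ZONE)
print(ttl, len(records))  # 86400 4
```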

DNS Records

Stored on authoritative DNS servers, DNS records provide information about a domain, including the IP address associated with it. Every domain requires a few necessary DNS records for a website to be accessible via its domain name.

Below is a list of the most common types and frequently utilized DNS records. Let’s dive into each kind of record.

A (Address) Record
An A record points a domain name to an IP address. For example, when you type www.google.com in a web browser, it translates to 172.217.12.46. This record links your website’s domain name to the IP address that points to where the website’s files live.

Example of A record
CNAME (Canonical Name) Record
A CNAME record forwards one domain name to another domain name. This record does not contain an IP address. Utilize this type of record only when there are no other records on that domain name; otherwise, any other records will conflict with it. For example, a CNAME can point www.google.com to google.com, but not to an additional domain name such as gmail.com.

Example of CNAME record

MX (Mail Exchanger)
This type of record routes email messages sent to a recipient’s domain to a designated mail server. MX records use a priority number when more than one MX record is entered for a single domain name that uses more than one mail server. The priority number specifies the order in which the listed mail servers are tried. Counterintuitively, the lower number is the higher priority. For example, the MX record with a priority of 10 receives email messages first. The MX record with a priority of 20 serves as a backup if the MX record with the priority of 10 is unavailable.

Example of MX records
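The priority behavior described above amounts to a simple sort: the lowest priority number is tried first. A short sketch, with hypothetical hostnames:

```python
# Hypothetical MX records as (priority, mail host) pairs.
mx_records = [
    (20, "backup-mail.example.com"),
    (10, "mail.example.com"),
]

# Sort by priority number; the lowest number is the primary mail server.
delivery_order = [host for priority, host in sorted(mx_records)]
print(delivery_order[0])  # mail.example.com
```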

TXT (Text) Record
Utilized for information and verification purposes, the TXT record discloses information about your domain to other services, such as which services the domain is using. Sender Policy Framework (SPF) records are added as TXT records to help identify whether email messages are coming from a trusted source.

Example of TXT record

NS (Name Server) Record
Name servers are servers usually owned by a web hosting company, such as Liquid Web, that are used to manage domain names associated with their web hosting customers. The NS records are created to identify the name servers for each domain name in a given DNS zone. Example of NS records

SOA (Start of Authority) Record

The SOA record is a resource record which stores information regarding all the DNS records in a given DNS zone.  An SOA record contains properties for a zone such as:

  • The name of the primary DNS server
  • Email address of the responsible person for that zone
  • The serial number that is used by a secondary DNS server to check if the zone has changed
    • If a zone has changed on the primary DNS server, then the changes are copied to the secondary DNS server which changes the serial number.
  • Refresh Interval
    • This shows how frequently the secondary DNS servers check the primary server for changes to any of the records.
  • Retry Interval
    • The retry interval displays how frequently the secondary DNS servers should retry checking if any changes are made to the zone if the first refresh fails.
  • Expire Interval
    • Shows how long the zone will be valid after a refresh.
  • Minimum (default) TTL (Time to Live)
    • The default time-to-live applied to the zone’s resource records.

SOA records are outlined in https://www.ietf.org/rfc/rfc1035.txt under “Domain Names – Implementation and Specification”.

Example of SOA record

SRV (Service) Record

SRV records are created to establish connections between services and hostnames. For example, if an application is searching for the location of a service it needs, it will look for an SRV record with that information. When the app finds the correct SRV record, it filters through the list of services to find the following information:

  • Hostname
  • Ports
  • Priority and Weight
  • IP Addresses

Here is an example of two SRV records.

_sip._tcp.example.com.   3600 IN SRV 10 50 5060 serviceone.example.com.

_sip._tcp.example.com.   3600 IN SRV 10 30 5060 servicetwo.example.com.

Note: _sip is the name of the service and _tcp is the transport protocol.

The content of the SRV record defines a priority of 10 for both records. The first record has a weight of 50 and the second a weight of 30. The priority and weight values promote the use of specific servers over others.  The final two values in the record describe the port and hostname to connect to for accessing any services.
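The two records above can be broken into their named fields with a short sketch; the field order follows the example records exactly (TTL, class, type, priority, weight, port, target).

```python
# The two SRV record lines from the example above.
RECORDS = [
    "_sip._tcp.example.com.   3600 IN SRV 10 50 5060 serviceone.example.com.",
    "_sip._tcp.example.com.   3600 IN SRV 10 30 5060 servicetwo.example.com.",
]

def parse_srv(line):
    """Split one SRV record line into its named fields."""
    name, ttl, _cls, _rtype, priority, weight, port, target = line.split()
    return {"name": name, "ttl": int(ttl), "priority": int(priority),
            "weight": int(weight), "port": int(port), "target": target}

parsed = [parse_srv(record) for record in RECORDS]
# Equal priority (10), so the weights (50 vs 30) bias selection
# toward serviceone roughly 5 times out of 8.
print(parsed[0]["weight"], parsed[1]["weight"])  # 50 30
```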

PTR (Pointer) Record
A PTR record (Reverse DNS record) does the opposite of an A record. It resolves an IP address to a domain name. The purpose of this record is mainly administrative to verify that an IP address links to a domain name. Not all DNS hosting providers offer this type of DNS record.
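As an illustration of how an IPv4 reverse DNS query name is formed, the octets of the address are reversed and suffixed with in-addr.arpa; that name is then looked up for a PTR record. A minimal sketch:

```python
def ptr_name(ip):
    """Build the reverse-DNS lookup name for an IPv4 address."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(ptr_name("172.217.12.46"))  # 46.12.217.172.in-addr.arpa
```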

Now that we have talked about the DNS services and the DNS components, we can troubleshoot any DNS issues which may have arisen. Below is a list of common DNS troubleshooting tips.  

  • If your website is displaying that a “server IP address could not be found,” then it’s possible that the A record is missing. You will need to add an A record to your DNS zone.

Error Page "IP Address Not Found"

  • Check to see if you have any improperly configured DNS records.
  • When you change your name servers for your domain name, you will need to wait for the name servers to propagate. The propagation can take up to 24 hours to complete.
  • Check to see if you have high TTL (Time to Live) values. For example, if an A record has a TTL value of 86400 seconds (24 hours) and you update the domain’s A record to point to a new IP address, the change can take up to 24 hours to propagate. It is better to lower the TTL value to 300 seconds (5 minutes) before making the change. We have a great article that talks more about TTL values.
  • If you are using a third-party proxy server for your website and your website is not displaying, you can use your computer’s host file to see where the issue is occurring. For example, I have the website dnswebtest.com using a third-party proxy server, and it is displaying a connection error. I need to find out if the issue is with the web hosting company or the third-party proxy server. I will access my local host file, add my website dnswebtest.com as an entry and point it to the web hosting company’s IP address, for example, 98.129.229.4. If I then go to my site in the browser and it displays correctly, then I know the issue is with the third-party proxy server. Here is an excellent article on How to Edit Your Host File.

Although DNS can be a complex issue, with a better understanding of the process and a few troubleshooting tips, you will be much more confident when working with it or troubleshooting problems. The following third-party tools are also quite useful when checking for DNS propagation or finding what types of DNS records a domain name has:

  1. https://www.whatsmydns.net/  for DNS propagation
  2. https://www.whoishostingthis.com/ to show what IP address a website is resolving to

 

DNS Zones Explained

DNS Zones

A DNS Zone is a portion of the DNS namespace that is managed by an organization or administrator. It serves as an administrative space with granular control of DNS components and records, such as authoritative nameservers. There is a common misconception that a DNS zone is associated with only a single domain name or a single DNS server. In actuality, a DNS zone can contain multiple domains and subdomains, and multiple zones can exist on the same server. Information stored for a DNS zone lives within a text file called a DNS zone file.

 

DNS Zone Files

A DNS Zone file is a plain text file stored on a controlling DNS server that contains all the records for every domain within a given zone. Zone files can include many different record types, but must always begin with what is called an SOA record (Start of Authority).

 

Types of Records

 

As mentioned, there are a handful of different types of records used within a DNS Zone, all of which serve a unique purpose. Below are some examples of the most commonly used record types and a brief description of each.

Start of Authority (SOA)

The first record in any zone file is the SOA resource record. This record is an essential part of the DNS zone file. It indicates the domain’s zone and the fundamental properties of the domain name server. Each zone file can contain only one SOA record.

Name Server (NS)

NS records tell recursive name servers which name servers are authoritative for a zone. Recursive name servers look at the authoritative NS records to determine which server to ask next when resolving a name.

 

Note
The only zone file that matters is the one located at the authoritative name server for the domain. You can find which name servers the internet will look at through a whois lookup on the domain.

Mail Exchange (MX)

MX records, usually two, specify which mail server is in charge of receiving email messages on behalf of a domain. The sending mail server tries to make an SMTP connection to the primary mail server listed in the zone file. The records are ranked by priority from lowest to highest, with the lowest being the primary. If the primary server is not available, the next listed mail server is tried. MX records must point to a domain, not an IP.

Address (A)

The A record is used to find the IP associated with a domain name. This record routes info from the server to the end client’s web browser.

 

AAAA

The quadruple A record has the same function as the A record but is used specifically for the IPv6 protocol.

Canonical Name (CNAME)

This record aliases one site name to another. The DNS lookup will then route domain name requests to the name that holds the A record. These records must point to a fully qualified domain name (FQDN).

 

Alias Record (ALIAS)

The ALIAS record is functionally similar to a CNAME record in that it points one name to another. That said, while CNAME records are for subdomains, an ALIAS record can be used at the apex domain name (example.com) to point it to a hostname such as host.example.com. The authoritative nameservers for the apex domain subsequently resolve the IP of the hostname to direct traffic.

Text (TXT)

TXT records hold free-form text of any type. Initially, these were for human-readable information about the server, such as its location or data center. Presently, the most common uses for TXT records are SPF and DomainKeys Identified Mail (DKIM).
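As a sketch, an SPF policy published in a TXT record is just a space-separated list of mechanisms following the v=spf1 version tag. The record below is a hypothetical example:

```python
# Hypothetical SPF policy as it would appear in a TXT record.
SPF = "v=spf1 ip4:203.0.113.0/24 include:_spf.example.com ~all"

def parse_spf(record):
    """Return the list of SPF mechanisms after the version tag."""
    terms = record.split()
    assert terms[0] == "v=spf1", "not an SPF record"
    return terms[1:]

mechanisms = parse_spf(SPF)
print(mechanisms[-1])  # ~all  (the catch-all softfail mechanism)
```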

 

Service Locator (SRV)

A generalized service location record, used for newer protocols instead of creating protocol-specific records such as MX. This type of record, while helpful, is not commonly used.

Pointer (PTR)

Pointer records point an IP to a canonical name and are used specifically in reverse DNS. It is important to note that a reverse DNS record needs to be set up on the authoritative nameservers of the party that owns the IP, not the party that owns the canonical name.

Migration to Managed WooCommerce

Liquid Web is here to support your migration needs into our Managed WooCommerce Hosting platform. Whether you are migrating from an external or internal source, our in-house team of migration experts transforms the data migration process into a simple task. To ensure the smoothest and best possible data transfer, we have a quick overview and a few points for your consideration.

 

Our first step includes taking a copy of your live site (known as the origin site) and migrating it over to our Managed WooCommerce Hosting platform. Rest assured, when performing the migration, the only changes made to the site will be to assist in the movement. Within this timeframe, it is advised to avoid making changes or updates to the site as it will extend the migration timeline and could result in data loss. Changes and updates are included but not limited to themes, designs, contents, products, blog posts or WordPress versions. The initial sync process should result in no downtime for your live site.

Once the initial sync is complete, our Migration Specialists perform a series of basic tests to the site. During this time, our team will send information on ways to test out your new site to ensure that all aspects have carried over correctly and are in working order. Before going live, it is essential to take the time to thoroughly review your site and if at any point you do find a discrepancy our specialist is there to assist.

The third and most exciting step is the push to go live. We will coordinate the best date and time for the final sync of your site. This last sync will ensure the latest data on orders, products, and customers transfers to your new server. Upon completion of the final sync, you will be asked to update the staging domain’s name and DNS record. With a little DNS propagation time, you will begin to see the new site populate!

With the updating of DNS and the site name, you are now entirely done with the migration process. In subsequent steps, we will create a ticket with our Product Team to connect your store to our partnered applications, Glew and Jilt. Credentials to these valued applications will be sent out in an email, after which, our product team can suggest performance optimization methods to get the most out of your eCommerce store.

 

Knowing the details behind the migration process aligns us with our next step in creating a migration request from your Liquid Web control panel! Once completed, our Migration Specialists will be in touch to schedule the migration and answer any questions you may have.

 

How to Remove (Delete) a User on Ubuntu 16.04

User management includes removing users who no longer need access; deleting a username and any associated root privileges is necessary for securing your server. Removing a user’s access to your Linux server is a typical operation that can easily be performed with a few commands.

Pre-flight Check

  • We are logged in as root on an Ubuntu 16.04 VPS powered by Liquid Web!

Step 1: Remove the User

Insert the username you want to delete by placing it after the userdel command. In our example, I’ll be deleting our user, Tom.

userdel tom

You can simultaneously delete the user and the files owned by this user with the -r flag. Be careful that these files are not needed to run any application on your server.

userdel -r tom

If the above command produces the message below, don’t be alarmed; it is not an error, but rather means /home/tom existed while /var/mail/tom did not.

userdel: tom mail spool (/var/mail/tom) not found

 

Step 2: Remove Root Privileges

By removing Tom’s username from our Linux system we are halfway complete, but we still need to remove their root privileges.

visudo

Navigate to the following section:

## Allow root to run any commands anywhere
root ALL=(ALL:ALL) ALL
tom ALL=(ALL:ALL) ALL

Or:

## User privilege specification
root ALL=(ALL:ALL) ALL
tom ALL=(ALL:ALL) ALL

With either result, remove access for your user by deleting the corresponding entry:

tom ALL=(ALL:ALL) ALL

Save and exit this file by typing :wq and pressing the Enter key.

To add a user, see our frequently used article, How to Add a User and Grant Root Privileges on Ubuntu 16.04. Are you using a different Ubuntu version? We’ve got you covered, check out our Knowledge Base to find your version.

Cloud vs. Dedicated Hosting

Cloud and dedicated servers are two types of hosting solutions that you will find across professional web hosting companies. Whether you’re a small business or a thriving enterprise, the question remains: what is the difference between Cloud and Dedicated hosting, and which one is the best solution for you? Voted top in server-side performance by Cloud Spectator for the third year in a row, we are here to give you a comprehensive breakdown of these two types of hosting environments and their most impactful aspects: performance through speed and uptime, and most importantly, cost.

Simply put, the cloud is virtual space spread across multiple servers. A typical product of the cloud environment is a Cloud VPS. We can compare Cloud hosting to the service of a restaurant. The restaurant represents the physical server; within that server lives the Cloud VPS, in this case represented by the restaurant’s tables. Each waiter or waitress represents the resources that each table, or Cloud VPS, can pull from. If your party is growing, you can add another table (Cloud VPS) and thereby increase your waitstaff (resources). If your party is leaving, the same concept applies, quickly adjusting to your needs. A dedicated server, on the other hand, is the whole restaurant to yourself: one table, with the entire waitstaff attending to your needs.

Note:
Both Dedicated and Cloud servers have the option to be managed or self-managed. Unmanaged means you, rather than your hosting provider, are responsible for running routine software updates, which are essential to security. The choice to go managed or unmanaged is better left to a broader discussion, but as you can guess, unmanaged is usually the cheaper option.

Cloud hosting wins on price, as there is no hardware to buy and you usually pay only for what you use. While your business grows, you’ll be able to scale up by adding more file space for more or larger websites. Because dedicated servers are actual physical machines, they sometimes have a setup fee associated with them. The freedom of having your own server means you’ll be paying for maximum power even when you are not utilizing the whole server. Your hosting provider can maintain both Cloud VPS and Dedicated servers, but dedicated servers often need an additional team with a deeper understanding of resource monitoring and network setup. Lastly, entry-level Cloud VPS servers start around $60 per month, while entry-level dedicated servers start around $199 per month.

Cloud Costs
  • Average entry level $60 per month
  • No hardware to buy
  • Unlimited resource scaling
  • Pay for what you use
Dedicated Costs
  • Average entry level $199 per month
  • Initial setup fees

Uptime is defined as the amount of time your server is online and available to your users. For most, uptime is of the utmost importance because businesses rely on the revenue or information that their site provides. Cloud environment uptimes are steadily improving, though there is always the chance of downtime due to resource abuse from other customers on the same server. Dedicated servers face similar issues, but for different reasons, such as hardware failures. You’ll also find cloud environments are resilient and redundant in their setup and are ideal for minimal downtime when scaling up. Scaling up a dedicated server has more impact on uptime due to its intricacies. Regardless, known for their superb uptime, dedicated servers are favorites of larger e-commerce businesses with mission-critical websites.

Cloud Uptime
  • More for small to medium business because uptime can sometimes be an issue
  • Resilient and Redundant
  • Can have occasional downtime due to shared resources
Dedicated Uptime
  • Great uptime for larger websites, such as e-commerce businesses and high-traffic sites
  • One point of failure

The speed of a network or website is most often the number one concern among website developers and users. When deciding whether the cloud or a dedicated server is faster, the short answer is that dedicated will usually be faster. It’s difficult to compare the two environments because not all websites are created equal; front-end and back-end development play into speed. In this instance, we will assume the code is sufficient for optimal performance. If we think of our restaurant analogy from earlier, resources (the waitstaff) are limited in a Cloud environment. These resources get pulled by the website’s processes, so you may reach a limit that your Cloud VPS can’t handle as quickly as a dedicated server. (Though there are some free caching services you can implement to stay SEO competitive.) Remember, the resources of a dedicated server are all yours, and this equals a significant increase in speed.

Cloud Speed
  • Average page load times
  • Extra work needed to implement caching
Dedicated Speed
  • Quicker website load times
  • High performance due to unshared resources
You’ll find that determining the current needs of your business will help ease the choice of hosting environments. Dedicated servers remain the best choice where performance is critical, if you have the money to spend, but they can be rigid to scale up. Most small to medium sites run optimally on the cloud, can withstand the occasional downtime, and are best for growing businesses. For a side-by-side comparison, visit our products to see how our Dedicated Servers and Cloud VPS win over big-name hosting providers.