Install Rsync and Lsyncd on CentOS, Fedora, or Red Hat

Have you ever needed to copy files from your local computer over to your web server? You may have previously used File Transfer Protocol (FTP) applications for this task, but FTP is prone to being insecure and can be challenging to work with over the command line. What if there was a better way? In this tutorial, we’ll be covering two popular utilities in the Linux world to securely assist in file transfers, rsync and lsyncd. We’ll show you how to install and use both in this article. Let’s dig in!

 

What is Rsync?

The first utility we’ll look at is called rsync, and this command is a real powerhouse! Most simply, it is used for copying files between folders, but it has some extra features that make it very useful. We’ll start with the most basic usage for rsync, and work into more complicated examples to show you how versatile it can be.

 

Install Rsync on CentOS, Fedora, or Red Hat

If you are using CentOS, Fedora, or Red Hat, you can use the yum package manager to install:

Note:
You’ll need to be the “root” user to install packages!

yum install rsync

 

Install Rsync on Ubuntu or Debian

If you are using Ubuntu or Debian, you can use the apt-get package manager to install:

Note:
You’ll need to be the “root” user to install packages!

apt-get install rsync

 

How to Use Rsync

The syntax for running rsync looks like this:

rsync -options <source> <destination>

You are not required to specify “options”, but you’ll always need to tell rsync a source and destination.  In this example, we’re using rsync to copy “file.txt” into the “/path/of/destination/” folder:

rsync /path/of/source/file.txt /path/of/destination/file.txt

Important
When you run rsync, remember always to put the “source” first, and “destination” second!

Now that you have the basics, let’s try another common task. In this example, we’re going to use rsync to copy a directory from our local computer over to our web server “192.168.0.100”.

rsync -avH /path/of/source/folder root@192.168.0.100:/path/to/destination/folder

Notice how just as before, we specified our source first, and destination second. One of the great things about rsync is that it performs remote transfers of data securely, through SSH. Using SSH is fantastic from a security point of view, and it allows you to use SSH keys to avoid typing passwords.

As one last example, let’s try copying something from your remote server “192.168.0.100” over to your local machine. Once again, rsync is the tool for the job!

rsync -avH  root@192.168.0.100:/path/of/remote/folder /path/of/destination/folder

We used some special options in that last example. Let’s break them down.

-a = archive mode (includes several commonly used options: -rlptgoD, check the rsync man page for detailed info.)

-v = print verbose messages to screen, (very helpful!)

-H = preserve hard links when copying

One of the great things about rsync is that it intelligently copies files. If only the last few bits of a file have changed, rsync copies just those changes rather than the whole file. Transferring only the changed parts of a file can be a huge time saver, especially when copying files remotely as in that last example.

 

Introducing Lsyncd

Finally, we’ll talk about lsyncd. The utility lsyncd is somewhat similar to rsync in that it is used to synchronize two folders. Unlike rsync, which has to be run manually, lsyncd is a daemon. That sounds scary, but in computer terminology, daemons are merely applications that run as background processes. You usually don’t have to run daemons manually every time you want to use them, as they are typically configured to start automatically when your server boots. When you configure lsyncd correctly, it can automatically synchronize folders between two computers. Imagine not having to manually back up your website on your web server every time you make a small change. That could be a real time saver! Let’s dig in.

 

Install Lsyncd on CentOS, Fedora, or Red Hat

If you are using CentOS, Fedora, or Red Hat, you can use the yum package manager to install:

Note
You’ll need to be the “root” user to install packages!

yum install lsyncd

 

Install Lsyncd on Ubuntu or Debian

If you are using Ubuntu or Debian, you can use the apt-get package manager to install:

Note
You’ll need to be the “root” user to install packages!

apt-get install lsyncd

 

How to Use Lsyncd

Unlike rsync, lsyncd runs as a daemon. You don’t run it directly. Instead, it starts automatically with your server at boot time, and runs silently in the background. It’s a great automated way to sync folders on your server! All we need to do is configure lsyncd, so it knows what folders to sync.

First, we’ll create some files and folders.

Create the configuration folder location
mkdir /etc/lsyncd

Create a folder to sync FROM, feel free to name it what you would like
mkdir /source_folder

Create the folder to sync TO
mkdir /destination_folder

Create the log folder location
mkdir /var/log/lsyncd

Make the log file
touch /var/log/lsyncd/lsyncd.log

Create the status file
touch /var/log/lsyncd/lsyncd.status

This is an example file for our sync tutorial
touch /source_folder/hello_world.txt

Next, we need to configure lsyncd to use our newly created files. Open a new file for editing using your favorite Linux text editor.

vi /etc/lsyncd/lsyncd.conf.lua

Paste in the following configuration, and save the file.

settings = {
    logfile = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status"
}
sync {
    default.rsync,
    source = "/source_folder",
    target = "/destination_folder",
}
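lsyncd can also push changes to a remote server over SSH using its default.rsyncssh layer. Here is a hedged sketch of such a configuration; the host and paths are placeholders, and SSH keys must already be set up so rsync can connect without a password:

```lua
settings = {
    logfile    = "/var/log/lsyncd/lsyncd.log",
    statusFile = "/var/log/lsyncd/lsyncd.status"
}
-- Push local changes to a remote server instead of a local folder.
sync {
    default.rsyncssh,
    source    = "/source_folder",
    host      = "root@192.168.0.100",   -- placeholder remote host
    targetdir = "/destination_folder",
}
```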

Now all that remains is to start the program! First, we’ll tell lsyncd to start automatically when your server boots:

systemctl enable lsyncd

Next, we’ll start lsyncd manually (this only needs to be done once):

systemctl start lsyncd

We’ve successfully installed lsyncd, but it is good practice to double check your work. We’ll check to see if it is running using this command:

systemctl status lsyncd

If you see a line that reads: active (running), it is running correctly!

Finally, we check the contents of /destination_folder to make sure that it contains our “hello_world.txt” file.

ls -l  /destination_folder
total 0
-rw-r--r-- 1 root root 0 Nov 17 07:15 hello_world.txt

 

You should see that “hello_world.txt” has been automatically synchronized over to “/destination_folder”. Everything is working! In practice, you can set “/source_folder” and “/destination_folder” to any folders you need synchronizing.

As you can see, these two utilities rsync and lsyncd are great tools for copying files and folders in Linux. Have questions about how to use these tools on your Liquid Web web server? Reach out to the Most Helpful Humans in Hosting. We’re here to help, 24×7!

 

Setup a Development Environment for CentOS using cPanel

Editing a website’s code is often needed to update a site, but doing this on the live website could create downtime and other unwanted effects. Instead, it’s ideal to create an environment especially for developing new ideas. In this tutorial, we will explore creating a development site specifically for CentOS servers.

As a warning, this is advanced technical work. It’s possible to make mistakes and cause downtime on your live domain. If you are not 100% confident, it may be a good idea to hire a system admin or developer to copy the domain for you.

Pre-flight

Step 1: Continue on to back up the cPanel account, just in case you have any issues while creating your development environment:

/scripts/pkgacct [username] --backup /home/temp/

Step 2: After creating a backup, you will create a copy of the database of the main domain, otherwise known as the primary domain.

mysqldump [database_name] > /home/temp/backup.[database_name].sql

Step 3: Create the new dev domain in WHM. This domain name will be a subdomain of the primary domain. Creating a subdomain is one of the first steps in designing your development environment. We prefer to use dev.[domain].com, the same domain name but with “dev” in front of it, for clarity. Do be sure to note all the information, like the username and password. If you are not familiar with how to create a new account, see the following tutorial.

Step 4: Once you’ve created the subdomain within your cPanel, you’ll copy the files from the main document root to the newly created dev document root. The document root is the location where your website’s files are stored.

Use the following command to find the document root for either domain. Replace “exampledomain.com” with the primary and development domains for determining the location of document root for each.

whmapi1 domainuserdata domain=[exampledomain.com] | grep -i documentroot

Step 5: After locating the document roots we will copy the files from the primary domain over to the development environment. Insert the document roots into this next command.

rsync -avh /document/root/of/the/primary/domain/ /document/root/of/the/new/dev/domain

Step 6: Next, you will need to set the correct ownership on the dev domain’s files and directories, as the previous username will still be in place. The ‘dev_username’ will be the username given or chosen when you created the new account. The following command will change the ownership for you.

chown -R [dev_username]: /home/[dev_username]

Step 7: After changing file ownership, create a new database and database user for the dev domain. Be sure to notate this information including the password set. Our documentation on the creating a new database will walk you through this necessary process.

Step 8: Once you’ve created the new database and its user, you can copy the original database into the newly created database.

mysql [new_database_name] < /home/temp/backup.[database_name].sql

Step 9: Copying over the database is the bulk of the work, but you’ll still need to edit the configuration files for your domain. Typically, some files need to access the database and do so via the database user and password. The file that contains these credentials needs to be updated with the database, database user, and password you created in step 7 of this tutorial. If you are unsure of the location of these files, talking with a developer may be helpful. If you are working with a WordPress site, you can continue on to the next section. Otherwise, once you have updated your dev configuration files with your new database info, continue on to step 10.

Editing WordPress Configurations
Fortunately, WordPress is one of the most commonly used content management systems, and it is easy to configure, so we’ll provide a short tutorial on how to change the database and database user in wp-config.php. First, move to the new document root.

cd /document/root/to/the/dev/domain

There you will edit wp-config.php with your favorite text editor, such as vim or nano.

nano wp-config.php

In the wp-config.php file, you will see a section that looks like the below. Edit the values with the database name, database user, and password you created earlier in this tutorial.

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'new_database_name');
/** MySQL database username */
define('DB_USER', 'new_database_user');
/** MySQL database password */
define('DB_PASSWORD', 'password');

In the database, clear all mentions of the original domain and replace them with the dev domain. For example, with WordPress you need to change the ‘home’ and ‘siteurl’ entries in the _options table. These can be quickly changed using WP-CLI, a command line tool for interacting with and managing WordPress sites. To install WP-CLI, follow these instructions and continue on to the next step. If you have a WordPress website, once you have installed WP-CLI you will want to run the following commands:

su - [dev_username]
cd public_html
wp option update siteurl https://dev.domain.com
wp option update home https://dev.domain.com
exit

Sometimes plugins or themes mention the original domain in the database. If some parts of the dev domain are not working, particularly plugins or themes, you may need to contact a developer to see if the original domain name is still active in the database. After replacing the names using WP-CLI, you’ll have officially created a dev domain.

Step 10: To complete this tutorial, you have two choices: add an A record to your DNS to view your dev site online, or edit your local hosts file to view it solely on your computer. For our Liquid Web customers, feel free to contact The Most Helpful Humans™ with any questions you may have in setting up a development environment.
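If you choose the hosts file route, a single line added to the hosts file on your local computer (/etc/hosts on Linux or macOS) will point the dev domain at your server. The IP address and domain below are placeholders; substitute your server’s actual IP and your actual dev domain:

```text
192.0.2.10    dev.domain.com
```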

 

 

Troubleshooting: Cron Jobs

Cron is a service for Linux servers that automatically executes scheduled commands. A cron job can be a series of shell commands, scripts, or other programs. Cron tasks or jobs can perform a variety of functions, and once run, can send out an e-mail message to inform you of completion or errors. If you receive an error, there are many ways to troubleshoot the cron task. Use this article for troubleshooting assistance or a tutorial on the basics of cron jobs. If you would like to learn more about creating a cron job, check out our Knowledge Base tutorials on the subject.

Checking Configurations with Crontab

From the command line, you can review the scheduled cron jobs by listing the crontab for the user. This command outputs the contents of the user’s crontab to the terminal.

As the user you can run:
crontab -l

As root, you can see any user’s crontab, by specifying the username.
crontab -l -u username

You can find detailed information about how to format cron jobs in the /etc/crontab file. Below is the example from that file. Each asterisk can be replaced with a number appropriate to its field, or left in place to represent all possible values for that position. For example, if all asterisks are left in place, the cron job will run every minute, all the time.

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
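As a concrete illustration, here is a hypothetical /etc/crontab entry (the script path is a placeholder, not a real program) that runs a backup script every day at 2:30 AM as root:

```text
# m   h   dom mon dow user  command
30    2   *   *   *   root  /usr/local/bin/backup.sh
```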

 

Altering the E-mail Address of a Cron

Once initiated, a cron sends a notification to an email address, set within the MAILTO line of the crontab.

MAILTO="user@domain.com"

To edit the crontab, you can run the following commands as the user:

crontab -e

Or, if logged in as root, you can specify the username of any of your users to edit the scheduled tasks that they have created.

crontab -e -u username

These open the crontab of the user in the default editor, typically vim or nano. Be aware that this is similar to editing any other text file: you’ll need to save before closing.

The MAILTO line indicates where the execution status of a cron should be sent. The sending address will typically be the cron task creator’s username along with the server’s hostname. So an email’s sender address would follow this syntax, unixuser@serverhostname.com. If you don’t see an email right away, it may be a good idea to check your spam box.

Silenced Crons

Sometimes cron jobs are configured to produce no output, or to have their output silenced, even if set with a MAILTO address. If you see a cron job listed with either of the following at the end, it is a sign that the cron’s output has been silenced. These send any output to the null device (the black hole of a Linux server). In cases like this, you’ll need to remove the redirection from the cron job to generate output.

&> /dev/null

> /dev/null 2>&1

Some cron jobs are disabled entirely. These will have a “#” in front of the command, which causes the line to be ignored. Remove the “#” to re-activate the cron job.
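A quick way to see what these redirections do is to try them in a shell; here is a minimal demonstration using echo as a stand-in for a cron command:

```shell
# Without redirection, output reaches the terminal (or the cron email).
echo "cron output"

# With both stdout and stderr pointed at the null device, nothing is
# emitted, so cron has nothing to mail.
echo "cron output" > /dev/null 2>&1
```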

Verifying the Crond Service

Once you’ve confirmed the correct settings, it’s time to verify that the cron system is enabled and running. The three following commands can each be used to verify if the crond (the cron service) is running.

/etc/init.d/crond status

service crond status

systemctl status crond

After running any of the above commands, if you find the crond service is not running, you can start it with one of the following.

/etc/init.d/crond start

service crond start

systemctl start crond

/var/log/cron

Once you know that the cron is enabled, not silenced, and crond is running, then it’s time to check the cron log, located in the path of /var/log/cron.

cat /var/log/cron

Example Output:

Oct 2 23:45:01 host CROND[3957]: (root) CMD (/usr/local/lp/apps/kernelupdate/lp-kernelupdate.pl > /dev/null 2>&1)
Oct 2 23:50:01 host CROND[4143]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Oct 2 23:50:01 host CROND[4144]: (root) CMD (/usr/local/maldetect/maldet --mkpubpaths >> /dev/null 2>&1)
In the log, you will see if, when, and which user ran the cron. If initiated, you’ll see the date and time of execution followed by the individual cron number in brackets. This timestamp does not confirm that the script ran normally, or at all; it only signifies when the cron system last ran the task. Beyond that, you may need to investigate the cron script itself, or application-level configurations and their respective logs, to ensure the code is executing correctly.

Other Cron Services

This article is merely an overview of the main crond service, as there are many other cron services. The anacron system is a commonly used cron service that configures daily or hourly jobs, and can even be set to run at reboot. The logs for these kinds of tasks are also within /var/log/cron, even though the tasks are not executed by crond.

Other scheduled tasks, although also referred to as cron jobs, are not executed from the crond system. These cron jobs are often configured within the code or configuration of a website. To determine if executed, you’ll need to investigate other configurations and logs that the cron script interacts with.

As with all cron services, automated jobs can be manipulated to execute numerous daily tasks, so you don’t have to. Cron tasks can occasionally go awry even without being altered for years, but knowing where to look is half the battle in troubleshooting a cron job.

 

Useful Command Line for Linux Admins

The command line terminal, or shell on your Linux server, is a potent tool for deciphering activity on the server, performing operations, or making system changes. But with several thousand executable binaries installed by default, what tools are useful, and how should you use them safely?

We recommend coming to terms with the terminal over SSH. Learn how to connect to your server over SSH, and get started with a few basic shell commands. This article will expand on those basic commands and show you even more useful and practical tools.

Warning:
These commands can cause a great deal of harm to your server if misused. Computers do precisely what you tell them to do. If you command your server to delete all files, it will remove every single file without question, and feasibly crash because it deleted itself. Please take precautions when working on your server, and ensure you have good local and remote backups available.

In the basic shell commands tutorial, you learned about basic navigation and file manipulation commands like ls, rm, mv, and cd. Below are a few essential commands for learning about your Linux system. (Display a user manual for each command by using man before each command, like so: man ps)

pipe

The pipe (the | character placed between two or more commands) is possibly the most useful tool in the shell language. It allows the output of one command to be fed directly into the input of another, without temporary files. The pipe is useful when you are dealing with a huge command output that you would like to format further, or have processed by some other program, without writing a temporary file.

The basic tutorial showed the commands w and grep. Let’s connect them using a pipe to format the output. The w command allows us to view users logged into the server, while grep filters the output to the ‘root’ user:

# w
08:56:43 up 27 days, 22:17, 2 users, load average: 0.00, 0.00, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 10.1.1.206 08:52 0.00s 0.06s 0.00s w
jeff pts/1 191.168.1.6 09:02 1:59 0.07s 0.06s -bash

# w | grep root
root pts/0 10.1.1.206 08:52 0.00s 0.06s 0.00s w

The format of the last command is much more digestible and becomes much more important with the output from commands like ps.

ps

The ps command shows a ‘process snapshot’ of all currently running programs on the server. It is particularly useful in conjunction with the grep command to pare its verbose results down to a certain keyword. For instance, let’s see if the Apache process, ‘httpd’, is running:

# ps faux | grep httpd
root 27242 0.0 0.0 286888 700 ? Ss Aug29 1:40 /usr/sbin/httpd -k start
nobody 77761 0.0 0.0 286888 528 ? S Sep17 0:03 \_ /usr/sbin/httpd -k start
nobody 77783 0.0 1.6 1403008 14416 ? Sl Sep17 0:03 \_ /usr/sbin/httpd -k start
...

We can see that there are several ‘httpd’ processes running here. The one owned by ‘root’ is the core one (the ‘forest’ branches, \_, help identify child processes, too). If we did not see any httpd processes, we could safely assume Apache is not running, and we should restart it to serve website requests again.

The common flags used for ps are ‘faux’: display processes for all users (a) in a user-oriented format (u), including those run without a terminal (x), paired with a process tree or ‘forest’ (f). The ‘aux’ flags ensure a view of every single process on the server, while the ‘f’ helps to determine which processes are parents and which are children.

top

Like the ps command, the top command helps to determine which processes are running on a server, but top has an advantage in its ability to display in real-time while filtering by several different factors. Simply, it dynamically shows the ‘top’ resource utilizers and is executed by running:

# top

Once inside of top, you will see a lot of process threads moving around. By default, the processes at the top are the ones using the most CPU at the moment. Hold shift and press ‘M’ to sort by the processes using the most memory, and hold shift and press ‘P’ to change the sort back to CPU. When you want to quit, simply press ‘q’.

Since top displays information live, its output is not easily parsed by grep, so it is seldom used in conjunction with a pipe. Top is most useful for discovering what is causing a server to run out of memory, or what is causing high load. For instance, on a server with high load, if the first process is using 100% CPU and its name is php-fpm, then we can assume that an inefficient PHP script is causing the load. In this case, php-fpm should be restarted (on cPanel, this is achieved with /scripts/restartsrv_apache_php_fpm).

netstat

netstat is another tool to show what service is running on a server, but in particular, it shows processes that are listening for traffic on any particular network port. It can also display other interface statistics. Here is how you would display all publicly listening processes:

# netstat -tunlp

The command flags ‘-tunlp’ will show program names listening for UDP or TCP traffic, with numeric addresses. This can be further scoped down by grep to see, for instance, what program is listening on port 80:

# netstat -tunlp | grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 27242/httpd
tcp 0 0 :::80 :::* LISTEN 27242/httpd

There are two listeners on port 80, one each for all IPv4 addresses (0.0.0.0) and all IPv6 addresses (::) on the local machine. Both show the same PID and program name, 27242/httpd, which matches the core Apache process we saw earlier with ps. We can use this information to confirm that Apache is running and listening on the expected port.

ip

The ip command shows network devices, their routes, and a means of manipulating their interfaces. LiquidWeb IP addresses are statically assigned, so you will not need to make any changes to the IPs on your server, but you can use the ip command to read the information on the interfaces:

# ip a

This command is short for ‘ip address show’, and shows you the active interfaces on the server:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.3/24 brd 192.168.0.255 scope global eth0
inet 192.168.0.1/24 brd 192.168.0.255 scope global eth0:cp1
inet 192.168.0.2/24 brd 192.168.0.255 scope global secondary eth0:cp2
inet6 fe80::5054:ff:face:b00c/64 scope link
valid_lft forever preferred_lft forever

In our case, there are two interfaces, numbered 1 and 2: lo (the localhost loopback interface) and eth0. eth0 has three IPv4 addresses assigned to it, on eth0, eth0:cp1, and eth0:cp2: 192.168.0.3, 192.168.0.1, and 192.168.0.2. We can also see that the MAC address for eth0 is 52:54:00:00:00:00, which can be helpful for troubleshooting connections to other devices like firewalls and switches. This interface also supports IPv6, and its link-local address is fe80::5054:ff:face:b00c.

lsof

lsof stands for ‘list open files,’ and it does just that: it lists the files that are in use by the system. Listing open files is very helpful for determining which files a complex script is actually using, or for finding a file that is in the middle of being written.

Let’s use PHP as an example. We want to figure out the location of the PHP default error logs, but Apache’s configuration is a large group of nested folders. The ps command only tells us if PHP is running, not which files are being written. lsof will show us this handily:

# lsof -c php | grep error
php-fpm 13366 root mem REG 252,3 16656 264846 /lib64/libgpg-error.so.0.5.0
php-fpm 13366 root 2w REG 252,3 185393 3139602 /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log
php-fpm 13366 root 5w REG 252,3 185393 3139602 /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log
php-fpm 13395 root mem REG 252,3 16656 264846 /lib64/libgpg-error.so.0.5.0
php-fpm 13395 root 2w REG 252,3 14842 2623528 /opt/cpanel/ea-php56/root/usr/var/log/php-fpm/error.log
php-fpm 13395 root 7w REG 252,3 14842 2623528 /opt/cpanel/ea-php56/root/usr/var/log/php-fpm/error.log

The ‘-c’ flag lists only processes that match a certain command name, in our case ‘php’. We pipe this output into grep to search for files whose names match ‘error’, and we see that there are two open error logs: /opt/cpanel/ea-php56/root/usr/var/log/php-fpm/error.log and /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log. Check each of these files (with tail or cat) to see recently logged errors.

If using the rsync command to transfer large folders, in this case /backup, we can search for open rsync processes inside:

# lsof -c rsync | grep /backup
rsync 48479 root cwd DIR 252,3 4096 4578561 /backup
rsync 48479 root 3r REG 252,3 5899771606 4578764 /backup/2018-09-12/accounts/jeff.tar.gz
rsync 48480 root cwd DIR 252,3 4096 4578562 /backup/temp
rsync 48481 root cwd DIR 252,3 4096 4578562 /backup/temp
rsync 48481 root 1u REG 252,3 150994944 4578600 /backup/temp/2018-09-12/accounts/.jeff.tar.gz.yG6Rl2

The process has two regular files open in the /backup directory: /backup/2018-09-12/accounts/jeff.tar.gz and /backup/temp/2018-09-12/accounts/.jeff.tar.gz.yG6Rl2. Even with quiet output on rsync, we can see that it is currently working on copying the jeff.tar.gz file.

df

df is a swift command that displays how much space is used on the mounted partitions of a system. It only reads data from the partition tables, so it is slightly less accurate if you are actively moving files around, but it beats enumerating and adding up every file.

# df -h

This ‘-h’ flag gets human readable output in nice round numbers (it can be omitted to print output in KB):

Filesystem Size Used Avail Use% Mounted on
/dev/vda3 72G 49G 20G 72% /
tmpfs 419M 0 419M 0% /dev/shm
/dev/vda1 190M 59M 122M 33% /boot
/usr/tmpDSK 3.1G 256M 2.7G 9% /tmp

Some of the information we see: the primary partition mounted on / is at 72% used space, with 20GB free. Since we’re not planning on adding any more sites to our server right now, this is not a problem. But some of the information we don’t see is also telling. There is no separate /backup partition mounted on this server, so cPanel backups are filling up the primary partition. If we want to retain more backups, we should consider adding another physical or networked disk to store them.

df can also show inode (file and folder) count of mounted filesystems from the same partition table information:

# df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/vda3 4.6M 496K 4.1M 11% /
tmpfs 105K 2 105K 1% /dev/shm
/dev/vda1 50K 44 50K 1% /boot
/usr/tmpDSK 201K 654 201K 1% /tmp

Our main partition has 496,000 inodes used and just over 4 million inodes free, which is plenty for general use. If we stored a lot of small files, like emails, the inode count could be much higher for the same disk usage in bytes. If you run out of inodes on a partition, it won’t be able to record the location of any more files or folders; even if you have free disk space, the system will behave as though the disk is full.

du

Like the df command, du will tell you disk usage, but it works by recursively counting folders and files that you specify. This command can take a long time on large folders, or those with a lot of inodes.

# du -hs /home/temp/
2.4M /home/temp/

The ‘-hs’ flags give human-readable output and display only the summary of the enumeration, rather than each nested folder. Another useful flag is --max-depth, which defines how deep you would like to list folder summaries; it works like increasing the depth of the -s flag (-s is essentially --max-depth=0, while --max-depth=1 also lists each first-level sub-directory):

# du -hs public_html/
5.5G public_html/

# du -h public_html/ --max-depth=0
5.5G public_html/

# du -h public_html/ --max-depth=1
8.0K public_html/_vti_txt
8.0K public_html/_vti_cnf
257M public_html/storage
64K public_html/cgi-bin
8.0K public_html/_vti_log
5.0G public_html/images
64K public_html/scripts
8.0K public_html/.well-known
8.0K public_html/_private
5.0M public_html/forum
56K public_html/_vti_pvt
24K public_html/_vti_bin
360K public_html/configs
5.5G public_html/

These commands help to find out whether any specific folders inside of public_html are significantly larger than others. We can add grep to the pipeline to show only folders that are 1GB or larger:

# du -h public_html/ --max-depth=1 | grep G
5.0G public_html/images
5.5G public_html/

Clearly, we have some pictures to delete or compress if we need more disk space.
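Note that grep G also matches a literal “G” anywhere in a folder name, not just in the size column. A more robust approach (assuming GNU coreutils, where sort understands human-readable size suffixes via -h) is to sort the du output directly:

```shell
# Sort first-level folders by size, largest first, and show the top five.
du -h public_html/ --max-depth=1 | sort -hr | head -5
```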

free

The free command shows an instant reading of free memory on your system. This information is also displayed by top, but when you only need total memory information, free is a lot faster.

# free -m
total used free shared buffers cached
Mem: 837 750 86 5 66 201
-/+ buffers/cache: 482 354
Swap: 1999 409 1590

Our free command with the -m flag displays output in megabytes. Without it, the output defaults to kilobytes (-k), but you can also pass -g for gigabytes (though the output is rounded and thus less accurate).

In our output, the total RAM on the system is 837MB, or about 1GB. Of this, 750MB is ‘used,’ but 66MB is in buffers and 201M is cached data, so subtracting those, the total ‘free’ RAM on the server is around 354MB. Because the calculations are made in KB and rounded for output, the numbers won’t always accurately add up (750 plus 86 is not 837).

The final line shows swap usage, which you want to avoid using. My output says that there is 409MB used in the on-disk swap space, but since there is free RAM at the moment, my swap usage was in the past, and the system stopped using swap space.

If there were an amount in the ‘used’ column for swap, and 0 free RAM after accounting for buffers and cache, then the system would be very sluggish. We call this ‘being in swap.’ Reading from and writing to swap is very slow compared to RAM, and you should avoid going into swap space by tuning your programs to use memory appropriately. If you run out of both RAM and swap space, your server is out of memory (OOM), and processes will be killed or the system may freeze.
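The buffers-and-cache arithmetic above can be scripted. A minimal sketch, assuming a Linux system with /proc/meminfo (the same data free reads, with values in kB):

```shell
# Sum MemFree + Buffers + Cached from /proc/meminfo to approximate
# the "truly free" RAM we calculated by hand above.
awk '/^(MemFree|Buffers|Cached):/ { total += $2 }
     END { printf "Approx. available: %d MB\n", total / 1024 }' /proc/meminfo
```

Newer kernels also expose a MemAvailable field in /proc/meminfo that makes this estimate for you.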

Advanced Commands

Useful Pipelines

Now that we have a few advanced commands under our belt, let’s look at how we can use pipes to our advantage by building useful command strings, aka ‘one-liners’ or ‘pipelines.’ These use several formatting commands, such as sed, awk, sort, uniq, or column, whose full descriptions fall outside the scope of this article (you can learn more about them using the man command).

Disk Usage Formatting

This command uses du and awk, an output-manipulation tool, to nicely format the output of a du command in the current working directory, sorted by size. First, change directory (cd) to the folder you want to analyze, and run:

# du -sk ./* | sort -nr | awk 'BEGIN{ pref[1]="K"; pref[2]="M"; pref[3]="G";} { total = total + $1; x = $1; y = 1; while( x > 1024 ) { x = (x + 1023)/1024; y++; } printf("%g%s\t%s\n",int(x*10)/10,pref[y],$2); } END { y = 1; while( total > 1024 ) { total = (total + 1023)/1024; y++; } printf("Total: %g%s\n",int(total*10)/10,pref[y]); }'

The above command will add dynamic suffixes to the on-disk sizes, so you can see output in GB, MB, and KB, instead of just one of those powers. The top listed folder will be your largest in that directory.
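If your server has a reasonably recent coreutils (sort gained the -h flag in version 7.5 — an assumption about your system), you can get a similar largest-first listing without the awk program:

```shell
# Sort du's human-readable sizes directly; largest directories first.
du -h --max-depth=1 | sort -hr | head
```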

Check Connection Count

This string of commands checks active connections to the server using netstat, pares the output down to HTTP and HTTPS connections using grep, then formats and sorts the results with a series of other commands. The output shows how many times each listed IP address has connected to the server.

# netstat -tn 2>/dev/null | grep -E '(:80|:443)' | awk '{print $5}' | cut -f1 -d: | sort | uniq -c | sort -rn
2 46.229.168.149
1 46.229.168.147
1 46.229.168.145
1 46.229.168.144
1 46.229.168.142
1 46.229.168.141
1 46.229.168.138
...

In our case, a subnet hitting us with scraping requests was causing a lot of load on the server. Adding this IP range to the firewall’s deny list stops the attack for now.

You can also get a quick summary of just the total number of connections with this command:

# netstat -tan | grep -E '(:80|:443)' | wc -l
8

Format error_log Output

Here is some advanced usage for grep output. The sed command is similar to awk in that it edits the output stream as it is printed. In this case, we want to look at logged ModSecurity errors, adding some whitespace between them so they’re easier to read:

# grep -i modsec /usr/local/apache/logs/error_log | tail -n100 | sed -e "s/$/\\n/"

This command gives us the last 100 logged modsec, ModSec, or ModSecurity lines (the ‘-i’ flag tells grep to ignore case) and appends a newline to the end of each line.
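As a further sketch, if your ModSecurity error lines include [id "NNN"] tags (true for most default rule sets, but verify against your own log format), you can count which rules fire most often:

```shell
# Extract the rule IDs from matching lines, then tally and sort them.
grep -i modsec /usr/local/apache/logs/error_log | grep -oE 'id "[0-9]+"' | sort | uniq -c | sort -rn
```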

Memory Usage By Account

This one-liner adds up the memory-usage percentages reported by ps for each user’s running processes and prints a sorted total per user.

# tmpvar=""; for each in `ps aux | grep -v COMMAND | awk '{print $1}' | sort -u`; do tmpvar="$tmpvar\n`ps aux | egrep ^$each | awk 'BEGIN{total=0};{total += $4};END{print total, $1}'`"; done; echo -e $tmpvar | grep -v ^$ | sort -rn | head; unset tmpvar

Usage Count In /var/tmp

When counting files per user, if you encounter a numeric ID as the file owner, you can conclude that the owning user has been removed and the file should be deleted.

# find /var/tmp/ ! -user root ! -user mysql ! -user nobody ! -group root ! -group mysql | xargs ls -lh | awk '{print $3, $5, $9}' | sort | awk '{print $1}' | uniq -c | sort -rh

Top Processes By Memory Usage

This command outputs the processes using the highest memory, sorting the 4th column of ps and displaying the top 10 commands with head.

# ps aux | sort -rnk 4 | head

Whether you are brushing up for a Linux admin interview or just want to get more familiar with the command line, these commands are sure to be a useful addition to your repertoire.

Setup a Development Environment in Ubuntu

Often we want to edit our domain’s code, but on a production website, this can be dangerous. Making changes to the production site would not only allow all of the Internet to see unfinished changes but could also cause errors to display. As a workaround, we’ll create a testing domain or “dev” domain to work out any bugs and changes to the site.

As a warning, this is advanced technical work. It’s possible to make mistakes and cause downtime on your live domain. If you are not 100% confident, it may be a good idea to hire a system admin or developer to copy the domain for you.

Pre-flight

Step 1: After logging in as root, back up the domain using the command below to save an archive copy. It’s essential to have a copy of your site in case you run into any issues. Replace ‘[domainname]’, brackets included, with your site’s name.

tar cfv backup.[domainname].tar /document/root/of/the/domain/
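As a side note, the command above creates an uncompressed archive; adding the z flag gzip-compresses the backup as it is written, trading some CPU time for disk space:

```shell
# Same backup, but gzip-compressed (note the .tar.gz extension).
tar czfv backup.[domainname].tar.gz /document/root/of/the/domain/
```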

 

Step 2: Move the backup .tar file somewhere outside the document root. This way, we can still access the data while keeping it safe.

mv backup.[domainname].tar /another/location/on/the/server

 

Step 3: The tar command backed up only half of the site: the files. Next, you will want to copy the database for the primary site.

mysqldump [database_name] > /another/location/on/the/server/backup.[database_name].sql

 

Step 4: Create the new dev account, just like you were adding a new domain to your server. However, you will be creating a subdomain which you can name as dev.[domain].com.

 

Step 5: Once you’ve created the dev subdomain, copy the files from the main document root to the newly created dev document root. The document root is the location of the files housing your domain, also known as the “path”.

To find the document root location, run the following command once for each domain, replacing “exampledomain.com” first with the live domain and then with the dev domain.

conf=$(apache2ctl -S | grep exampledomain.com | perl -n -e '/\((\/.*)\:/ && print "$1\n"'); grep -i documentroot $conf

 

Step 6: After you have found the document roots for both the domain and dev domain, you will need to copy the files. With the output of the last command, you’ll insert the path for the document roots.

rsync -avh /document/root/of/the/domain/ /document/root/of/the/new/dev/domain

 

Step 7: Give the dev domain’s files the correct ownership, since they still belong to the previous user. The ‘dev_username’ is the default user generated when you created the new account. The following command changes the ownership for you.

chown -R [dev_username]: /document/root/of/the/new/domain

 

Step 8: After you have done this you will need to create a new database and database user for the dev domain, be sure to keep all this information including the password you set. First, enter the mysql command prompt by running the command:

mysql

Next, create the database itself.

CREATE DATABASE [new_database_name];

In our next command, we create the user and their password. Be sure this is a unique username, one not already in use.

CREATE USER '[select_new_username]'@'localhost' IDENTIFIED BY '[password]';

 

Step 9: Grant privileges for the user on the database.

GRANT ALL PRIVILEGES ON [new_database_name].* TO '[new_username]'@'localhost' WITH GRANT OPTION;

Then exit out of mysql by typing “quit” and hitting enter:

quit

 

Step 10: Once back at the root command prompt, you can start copying the original database into the new database.

mysql [new_database_name] < /another/location/on/the/server/backup.[database_name].sql

With the database imported, you will still need to edit the configuration files for your dev domain. Typically, there are files that access the database and reference the database name, database user, and password. These credentials need updating to the ones created in step 8 of this tutorial. If you are unsure where these files are, contact a developer. If you are working with a WordPress site, continue on to the next section. Otherwise, once you have updated your dev configuration files with the new database info, continue to step 11.

Editing WordPress Configurations

Fortunately, WordPress is one of the most commonly used content management systems, and it is easy to configure, so we’ll provide a short tutorial on how to change the database and database user in wp-config.php. First, move to the new document root.

cd /document/root/to/the/dev/domain

There you will edit the wp-config.php with your favorite text editor such as vim or nano.

nano wp-config.php

In the wp-config.php file, you will see a section that looks like the below. From there you will edit the highlighted characters with the information you used to create the database in the tutorial.

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'new_database_name');
/** MySQL database username */
define('DB_USER', 'new_database_user');
/** MySQL database password */
define('DB_PASSWORD', 'password');

In the database, clear all mentions of the original domain and replace them with the dev domain. For example, with WordPress, you need to change the ‘home’ and ‘siteurl’ entries in the _options table. These can be quickly changed using WP-CLI, a command line tool for interacting with and managing WordPress sites. To install WP-CLI, follow these instructions and continue to the next step.

If you do have a WordPress website, once you have installed WP-CLI you will want to run the following commands:

su - [dev_username]
cd public_html
wp option update siteurl https://dev.domain.com
wp option update home https://dev.domain.com
exit

Sometimes plugins or themes mention the original domain in the database. If some parts of the dev domain are not working, particularly plugins or themes, you may need to contact a developer to see if the original domain name is still active in the database.

After replacing the names using WP-CLI, you’ll have officially created a dev domain.

Step 11: To complete this tutorial you have two choices: add an A record to your DNS to view your dev site online or edit your local hosts file to view solely on your computer. For our Liquid Web customers feel free to contact The Most Helpful Humans™ with questions you may have in setting up a development environment.

Install Multiple PHP Versions on Ubuntu 16.04

By default, Ubuntu 16.04 LTS servers ship with PHP 7.0. Though PHP 5.6 reaches end of life in December of 2018, some applications may not be compatible with PHP 7.0. In this tutorial, we show how to switch between PHP 7.0 and PHP 5.6 for Apache and how to change the overall default PHP version for Ubuntu.

Step 1: Update Apt-Get

As always, we update and upgrade our package manager before beginning an installation. If you are currently running PHP 7.X, after updating apt-get, continue to step 2 to downgrade to PHP 5.6.

apt-get update && apt-get upgrade

Step 2: Install PHP 5.6
Install PHP 5.6 from its PPA repository with these commands.

apt-get install -y software-properties-common
add-apt-repository ppa:ondrej/php
apt-get update
apt-get install -y php5.6

Step 3: Switch PHP 7.0 to PHP 5.6
Switch from PHP 7.0 to PHP 5.6 while restarting Apache to recognize the change:

a2dismod php7.0 ; a2enmod php5.6 ; service apache2 restart

Note
Optionally you can switch back to PHP 7.0 with the following command: a2dismod php5.6 ; a2enmod php7.0 ; service apache2 restart

Verify that PHP 5.6 is running on Apache by putting up a PHP info page. To do so, save the code below in a file named infopage.php and upload it to the /var/www/html directory.

<?php phpinfo(); ?>
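A quick way to create and place that file in one step from the shell (assuming the /var/www/html document root mentioned above):

```shell
# Write the PHP info page directly into Apache's document root.
echo '<?php phpinfo(); ?>' > /var/www/html/infopage.php
```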

By visiting http://xxx.xxx.xxx.xxx/infopage.php (replacing the x’s with your server’s IP address), you’ll see a PHP info banner similar to this one, confirming the PHP Version for Apache:

Example of PHP Info page

Continue onto the section PHP Version for Ubuntu to edit the PHP binary from the command line.

Step 4: Edit PHP Binary

The update-alternatives command maintains the symbolic links under /etc/alternatives that determine the default version of a command:

update-alternatives --config php

Output:
There are 2 choices for the alternative php (providing /usr/bin/php).

  Selection    Path             Priority   Status
------------------------------------------------------------
* 0            /usr/bin/php7.0   70        auto mode
  1            /usr/bin/php5.6   56        manual mode
  2            /usr/bin/php7.0   70        manual mode

Press <enter> to keep the current choice[*], or type selection number:

Select the php5.6 version to set it as the default; in this case, it’s option 1.

You can now verify that PHP 5.6 is the default by running:
php -v

Output:
PHP 5.6.37-1+ubuntu16.04.1+deb.sury.org+1 (cli)
Copyright (c) 1997-2016 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies
with Zend OPcache v7.0.6-dev, Copyright (c) 1999-2016, by Zend Technologies

Install Nginx on Ubuntu 16.04

Nginx is an open source Linux web server that accelerates content while utilizing low resources. Known for its performance and stability, Nginx has many other uses, such as load balancing, reverse proxying, mail proxying, and HTTP caching. With all these qualities, it is a definite competitor to Apache. To install Nginx, follow our straightforward tutorial.

Pre-Flight Check

  • Log in as root on an Ubuntu 16.04 LTS server powered by Liquid Web! If using a different user with admin privileges, add sudo before each command.

Step 1: Update Apt-Get

As always, we update and upgrade our package manager.

apt-get update && apt-get upgrade

Step 2: Install Nginx

One simple command to install Nginx is all that is needed:

apt-get -y install nginx

Step 3: Verify Nginx Installation

When correctly installed, Nginx’s default file appears in /var/www/html as index.nginx-debian.html. If you see the Apache default page instead, rename the index.html file. Much like Apache, Nginx listens on port 80 by default, which means that if you already have an A record set for your server’s hostname, you can visit the IP to verify the installation of Nginx. Run the following command to get your server’s IP if you don’t have it at hand.

ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'

Take the IP given by the previous command and visit it via HTTP (http://xxx.xxx.xxx.xxx). You will be greeted with a similar screen, verifying the installation of Nginx!

Nginx Default Page

Note
Nginx, by default, does not execute PHP scripts and must be configured to do so.
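As a sketch of what that configuration involves (the socket path below is an assumption and varies by PHP version and distribution), a typical Nginx server block passes .php requests to a PHP-FPM backend:

```nginx
# Hypothetical example: forward PHP requests to a PHP-FPM socket.
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.0-fpm.sock;   # adjust to your PHP version
}
```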

If Apache is already bound to port 80, you may see the Apache default page when visiting your host IP. You can change Apache’s port to make way for Nginx to take over port 80 by editing the Apache port configuration file:

vim /etc/apache2/ports.conf

Change “Listen 80” to any other open port number, for our example we will use port 8090.

Listen 8090

Restart Apache for the changes to be recognized:

service apache2 restart

All things Apache can now be reached on the new port, replacing the x’s with your IP. For example: http://xxx.xxx.xxx.xxx:8090

How To Install Oracle Java 8 in Ubuntu 16.04

Pre-Flight Check

  1. Open the terminal and log in as root.  If you are logged in as another user, you will need to add sudo before each command.
  2. Working on a Linux Ubuntu 16.04 server
  3. No installations of previous Java versions

Step 1:  Update & Upgrade

It is advised to update your system by copying and pasting the command below. Be sure to accept the update by typing Y when asked to continue:

apt-get update && apt-get upgrade

 

Step 2: Install the Repository

The WebUpd8 Team Personal Package Archive (PPA), a third-party repository, provides the package necessary for the Java 8 installation. Press Enter to continue when prompted.

add-apt-repository ppa:webupd8team/java

Once again, update your package list.

apt-get update

 

Step 3: Install Java 8

Use the apt-get command to install Oracle’s Java 8 via their installer:

apt-get install oracle-java8-installer

 

Type Y to continue, and press Enter to agree to the licensing agreement.

 

Select Yes and hit the Enter key.

 

Step 4: Verify Java 8 is Installed

java -version

Output:

java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

 

It’s essential to know the path of our Java installation for our applications to function. Where is Java installed? Run this command to find its path:

update-alternatives --config java

Output:

~# update-alternatives --config java
There is 1 choice for the alternative java (providing /usr/bin/java).

  Selection    Path                                       Priority   Status
------------------------------------------------------------
  0            /usr/lib/jvm/java-8-oracle/jre/bin/java    1081       auto mode
* 1            /usr/lib/jvm/java-8-oracle/jre/bin/java    1081       manual mode

 

Copy the highlighted path from the second row: /usr/lib/jvm/java-8-oracle/jre/bin/java. After copying, open the file /etc/environment and add the path of your Java installation to the end of the file.

vim /etc/environment

JAVA_HOME="/usr/lib/jvm/java-8-oracle/jre/bin/java"

 

Save the file by pressing the ESC key and typing :wq, then run the command below so the shell recognizes the changes to the file:

source /etc/environment

 

You should now see the path of the installation when using the $JAVA_HOME variable:

echo $JAVA_HOME

Output:

~# echo $JAVA_HOME
/usr/lib/jvm/java-8-oracle/jre/bin/java

 

VPS Server Space/Disk Quota

The term “server space” refers to the amount of disk space that is available on your server’s hard disk drive. This space varies according to server type, hosting plan and possibly by additional services that are set up and available on your Liquid Web account.

Some of the largest hard disk drives on the market now can hold up to 100TB of data. To better visualize this, 100 terabytes of data is approximately equivalent to:

  • 42,000,000,000 single-spaced typewritten pages
  • 8,000,000 phone books
  • 160,000 regular compact discs
  • 20,000 DVDs
  • 200 average-sized hard disks (500GB)
  • 80 human brains (the capacity of a human being’s functional memory is estimated to be 1.25 terabytes by futurist Raymond Kurzweil in The Singularity Is Near)

Your disk space can hold many types of data including file types like HTML, TXT, CSS, PHP, PDF, JPG, MP3, MP4, compressed (tar.gz) backups, SQL databases and more. These files are in specific folders which are defined by the applications configuration files or locations you determine.

How do I locate the folders containing a particular set of data?

The location of a file depends primarily on the type of file. On a Linux server, your typical cPanel account is set up under the /home/username folder, and your cPanel account username specifies the username folder. This folder is sometimes called the top-level or root folder of your cPanel account. The root folder is not publicly accessible on the web but contains folders which are accessible via a web browser. It also holds other cPanel-specific system folders used for a variety of functions.

When uploading files to your account, you’ll likely want them in public_html so they are accessible on the web. Uploading an image.jpg file to the public_html folder makes it available at domain.com/image.jpg. Additionally, if you create a folder inside of the public_html directory and add the same image there, it would be accessible at yourdomain.com/foldername/image.jpg.

To see the location of a file, you have several options:

  1. Log into your cPanel and open the File Manager under the Files section
    cPanel >> Home >> Files >> File Manager. Here you can view all of the files and folders in your account’s root directory.
cPanel File Manager
  2. When a cPanel account is initially set up, it also creates the main FTP user. You can use the server’s FTP functionality to access folders from a remote location and view the file listings. Several software titles like Filezilla, Cyberduck, and WinSCP are available for this type of connection.
  3. Lastly, you can connect to the server via SSH and get access to folders/files on the server.

How do I see how much space I’m using?

Disk Usage Graph

Let’s start by reviewing a few command line examples; mainly the “du” and “df” commands.

Note
The ‘du’ command sums up the total space of files that exist on the filesystem, while the ‘df’ command shows blocks available in the file system.

The ‘df’ command (abbreviation for disk free) simply lists the space used per partition:

df Command Output

The ‘du’ command (abbreviation for disk usage) reports the sizes of directories including all of their contents and the sizes of individual files:

du Command Output
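A quick side-by-side you can run yourself (the /var/log path is just an example target):

```shell
df -h /            # free and used blocks on the root filesystem
du -sh /var/log    # total size of everything under /var/log
```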

Note
There are times when the ‘du’ and ‘df’ commands show different usage amounts. A previously removed file can cause this discrepancy when a running process still holds that file open; the open handle causes ‘df’ to report the space as still in use. The solution is to restart the offending service so it closes the open file.
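On Linux, you can usually spot these deleted-but-open files yourself. A sketch using /proc (if you have lsof installed, `lsof | grep -i deleted` reports the same thing):

```shell
# File descriptors whose target was deleted still appear under /proc,
# marked "(deleted)" in the symlink listing.
ls -l /proc/[0-9]*/fd 2>/dev/null | grep '(deleted)'
```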

You can also use cPanel to determine the amount of space used and where it’s located. Log in to cPanel and go to cPanel >> Home >> Files >> Disk Usage to get a graphical view of your disk usage.

cPanel Disk Usage

Lastly, to view your server’s disk usage in your Manage account server resource graph:

  1. Log into your Manage portal.
  2. Navigate to the Servers section and then click on the Plus sign (+) next to the server of focus.
  3. Click on the Dashboard button and click the link next to the Disk Usage text as seen below

graphical statistics

This view provides a graphical representation of your disk space and the used amount.

How do I prevent disk space overages?

Disk space overages can result in lost emails, backups or even websites or the server going down! Just like your car, your server requires regular server maintenance. Attention to server maintenance reduces lost data. One way to prevent disk space overages is to use cPanel’s built-in tools.

cPanel possesses the ability to send “Disk Quota Warning” emails that denote when your server is using too much space. They contain the specific locations to check and the space used. The settings for these email notifications are in WHM (Web Hosting Manager) under Home » Server Configuration » Tweak Settings.

Email Notifications Area

Other areas of server maintenance to check on regularly include:

  • Pruning backups
  • Verifying logs rotate correctly (including Domlogs, Apache2, MySQL, and Chkservd)
  • Regularly archiving email
  • Using the /home directory for large user accounts

What are the dangers of being too close to Disk Quota?

When a server gets close to or is at its max disk space capacity, strange errors and problems can manifest themselves in many ways including:

  • Services (like MySQL or Apache) can error out or stop
  • Websites can become very sluggish
  • The server’s overall responsiveness can become slower
  • The server may exhibit a high load
  • You may see degraded disk performance
  • The server may display an increase in I/O wait
  • The server may demonstrate an increase in CPU usage
  • The file system can go into “read-only” mode
  • The server can run out of inodes
  • Files can become corrupted
  • Decreased swap space may occur causing issues

So what do I do if I’m running out of space?

As Benjamin Franklin stated, “By failing to prepare, you are preparing to fail.” In light of this knowledge, taking steps in advance to prevent these issues is always the best course of action. Directly monitoring your server disk space on a weekly or monthly basis prevents most space issues from turning into actual problems.

If you have already reached the point where immediate action needs to be taken to bring a server back in line with normal space expectations, you have several options. Using the “du” and “df” commands are your primary weapons in tracking down used server space.

The primary steps needed are:

    1. Log into your server
    2. Run a df -h command to locate which partitions are using the most space
df Command Output
    3. Change directories into the affected folders using the most space.
    4. Run the following command:

du-sk Command Output

(This is an advanced du command that sorts the contents of a directory by size. Use it to drill down into a folder and see where the space is used.)

  5. Move files (to a backup drive or folder), or remove files that are no longer needed using the ‘rm’ command.
  6. Repeat steps 2 through 5 as needed until reaching the desired space level.

Final Thoughts

Over time, any operating system can become overcrowded with the addition and removal of programs or accounts. Actively monitoring your server’s disk space is the most effective method to prevent space issues. If you do run into issues, the du and df command line tools, or the graphical interfaces in your account, let you track down the files responsible. As always, if you have further thoughts or questions about this topic, please contact our Linux Support department for more information.

 

How To Install Apache Tomcat 8 on Ubuntu 16.04

Apache Tomcat is used to deploy and serve JavaServer Pages and Java servlets. It is an open source technology developed by the Apache Software Foundation.

Pre-Flight Check

  • This document assumes you are installing Apache Tomcat on Ubuntu 16.04.
  • Be sure you are logged in as root user.

Installing Apache Tomcat 8

Step 1: Create the Tomcat Folder

Logged in as root, make a directory called tomcat within the /opt folder, and cd into that folder when done.

mkdir /opt/tomcat

cd /opt/tomcat

 

Step 2: Install Tomcat Through Wget

Click this link to the Apache Tomcat 8 download site. Place your cursor under 8.5.32 Binary Distributions, right-click on the tar.gz file, and select copy link address (as shown in the picture below). At the time of this article, 8.5.32 is the newest Tomcat 8 release, but feel free to pick whichever version is more up-to-date.

Tomcat 8's Download Page

Next, from your server, use the wget command to download the tar file to the tomcat folder from the URL you copied in the previous step:

wget http://apache.spinellicreations.com/tomcat/tomcat-8/v8.5.32/bin/apache-tomcat-8.5.32.tar.gz

Note
You can download the file to your local desktop instead, but you’ll then want to transfer it to your Liquid Web server. If assistance is needed, check out this article: Using SFTP and SCP Instead of FTP

After the download completes, decompress the file in your tomcat folder:

tar xvzf apache-tomcat-8.5.32.tar.gz

 

Step 3: Install Java

Before you can use Tomcat you’ll have to install the Java Development Kit (JDK). Beforehand, check to see if Java is installed:

java -version

If that command returns the following message then Java has yet to be installed:
The program 'java' can be found in the following packages:

To install Java, simply run the following command (and at the prompt enter Y to continue):
apt-get install default-jdk

 

Step 4: Configure .bashrc file

Set the environment variables in .bashrc with the following command:

vim ~/.bashrc

Add this information to the end of the file:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64
export CATALINA_HOME=/opt/tomcat/apache-tomcat-8.5.32

Note
Verify your file paths! If you downloaded a different version or already have Java installed, you may have to edit the file path or name. Older versions of Java may use java-7-openjdk-amd64 instead of java-1.8.0-openjdk-amd64. Likewise, if you installed Tomcat in a folder other than /opt/tomcat (as suggested), you’ll need to adjust the paths in the lines above.

Save your edits and exit from the .bashrc file, then run the following command to register the changes:

. ~/.bashrc
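You can confirm the variables took effect before moving on (the paths shown assume the locations used in this tutorial):

```shell
# Both variables should print the paths you set in ~/.bashrc.
echo $JAVA_HOME
echo $CATALINA_HOME
```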

 

Step 5: Test Run

Tomcat and Java should now be installed and configured on your server. To activate Tomcat, run the following script:

$CATALINA_HOME/bin/startup.sh

You should get a result similar to:

Using CATALINA_BASE: /opt/tomcat
Using CATALINA_HOME: /opt/tomcat
Using CATALINA_TMPDIR: /opt/tomcat/temp
Using JRE_HOME: /usr/lib/jvm/java-7-openjdk-amd64/
Using CLASSPATH: /opt/tomcat/bin/bootstrap.jar:/opt/tomcat/bin/tomcat-juli.jar
Tomcat started

 

To verify that Tomcat is working, visit your server’s IP address on port 8080 in a web browser. For example: http://127.0.0.1:8080.

Apache Tomcat 8 Verification Page