How to Install PHP 7.2 on Ubuntu 16.04

Using PHP 7.2 on an Ubuntu server is highly recommended over previous PHP versions for several reasons, the first being security. Active support for PHP 7.2 runs until November 30th, 2019, and security support until November 30th, 2020. Older versions like 7.0, and anything 5.6 and below, no longer receive any support and can leave open security holes on a server if they are not replaced. Another main reason to upgrade is the big performance increase over previous versions when PHP 7.2 is installed and using the OPcache module. This can greatly decrease the time it takes for your webpage to load! If you are developing a site locally or launching it on one of Liquid Web’s Ubuntu VPS or Dedicated Servers, using PHP 7.2 or newer is the way to go.

Continue reading “How to Install PHP 7.2 on Ubuntu 16.04”

Plesk to Plesk Migration

Migrating from one Plesk installation to another is easy with the Plesk Migrator Tool! The Plesk team has done a great job creating an easy-to-use interface for migrating entire installations of Plesk to a new server.

If you need to move files, users, subscriptions, FTP accounts, mail, and DNS settings set up through Plesk, this guide will help you successfully navigate the process and come out victorious!

We will be splitting this tutorial into three sections:

Continue reading “Plesk to Plesk Migration”

Install and Connect to PostgreSQL 10 on Ubuntu 16.04

PostgreSQL (pronounced “post-gress-Q-L”) is a household name among open source relational database management systems. It’s object-relational, meaning you’ll be able to use objects and classes in database schemas and in the query language. In this tutorial, we will show you how to install and connect to your PostgreSQL database on Ubuntu 16.04.

Step 1: Install PostgreSQL

First, we’ll obtain the authentication key needed to validate packages from the PostgreSQL repository.

wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -sc)-pgdg main" > /etc/apt/sources.list.d/PostgreSQL.list'

As a best practice, we will update our server’s package lists before installing PostgreSQL.

apt-get -y update

After the update is complete, we’ll run the following command to install PostgreSQL:

apt-get install postgresql-10
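
If you’d like to confirm that the new database cluster came up before moving on, the pg_lsclusters utility from Ubuntu’s postgresql-common package gives a quick overview (the version, port, and data directory shown will vary with your setup):

pg_lsclusters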

Step 2: Logging into PostgreSQL

Once installed, PostgreSQL creates a default user named “postgres”.  This user works differently from users in other popular databases like MySQL.  PostgreSQL users can change the method of authentication, but by default it uses a mode called ident, which takes your OS username and compares it with the allowed database usernames.

You must first switch to the default postgres user:

su - postgres

You’ll see that you are now logged in as that user via the prompt change:

postgres@host2:~$

Afterward, you can then enter the PostgreSQL terminal by typing:

psql

You’ll know you are connected by a message like the one below (the version number shown reflects the psql client you installed, so it should report a 10.x release):

psql (10.5)
Type "help" for help.
postgres=#
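
As an aside, if you only need to run a single query, you can skip the interactive terminal entirely and pass the statement on the command line while still logged in as the postgres user (SELECT version() is just an illustrative query):

psql -c 'SELECT version();'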

Step 3: Logging out of PostgreSQL

To exit your PostgreSQL environment, use the following command:

\q

Now that you’ve created your PostgreSQL world, it’s time to stretch your legs!  Let’s start creating and listing databases using our world-renowned Cloud VPS servers.

Listing and Switching Databases in PostgreSQL

PostgreSQL (pronounced “post-gress-Q-L”) is a household name among open source relational database management systems. It’s object-relational, meaning you’ll be able to use objects and classes in database schemas and in the query language. As part of our PostgreSQL series, we’ll show you how to list and switch between databases quickly.

Pre-flight

Log into your Ubuntu 16.04 server

Step 1: Log in to your Database
su - postgres

Step 2: Enter the PostgreSQL environment
psql

With the psql command, you’ll be greeted by its current version and command prompt.

psql (9.5.14)
Type "help" for help.
postgres=#

Step 3: List Your PostgreSQL databases
Often, you’ll need to switch from database to database, but first we will list the available databases in PostgreSQL:

postgres=# \list

By default, PostgreSQL has 3 databases: postgres, template0, and template1

Step 4: Switching Between Databases in PostgreSQL
Switching between databases is another way of saying you are closing one connection and opening another. When you need to change between databases, you’ll use the “connect” command, which is conveniently shortened to \c, followed by the database name.

\connect dbname

Or:

\c dbname
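
For example, connecting to one of the default template databases looks like the following; psql confirms each switch with a short message (your username may differ):

postgres=# \c template1
You are now connected to database "template1" as user "postgres".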

Which Version of Drupal Am I Running?

Which version of Drupal are you running? Good news! There are several different methods to find out.

Are you a Drupal admin? Can you log in? If so, you’re in luck! Finding your Drupal version is as easy as navigating a simple menu system…

For Drupal 6.0 or later…

Manage -> Reports -> Status report

For Drupal 5.x and earlier…

Administer -> Logs -> Status report

If you can’t log in, there are several files you can check for the Drupal version. They include…

  • Version 4.7.x, 5.x, 6.x: modules/system/system.module
  • Version 7.x: includes/bootstrap.inc
  • Version 8.x: core/lib/Drupal.php

In each file, you’re looking for a constant named ‘VERSION’. Your result should look something like:

const VERSION = '8.6.2';

In the above case, the Drupal version is 8.6.2. Easy!
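
If you have shell access, you can also pull that constant straight out of the relevant file with grep. For example, for a Drupal 8 site, run the following from the site’s document root (the file path comes from the list above):

grep "VERSION = " core/lib/Drupal.php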

Install Rsync and Lsync on CentOS, Fedora or Red Hat

Have you ever needed to copy files from your local computer over to your web server? You may have previously used File Transfer Protocol (FTP) applications for this task, but FTP is insecure and can be challenging to work with from the command line. What if there was a better way? In this tutorial, we’ll be covering two popular utilities in the Linux world that securely assist with file transfers: rsync and lsyncd. We’ll show you how to install and use both in this article. Let’s dig in!

Continue reading “Install Rsync and Lsync on CentOS, Fedora or Red Hat”

Setup a Development Environment for CentOS using cPanel

Editing a website’s code is often needed to update a site, but doing this on the live website could create downtime and other unwanted effects. Instead, it’s ideal to create an environment especially for developing new ideas.  In this tutorial, we will explore creating a development site specifically on CentOS servers.

As a warning, this is advanced technical work. It’s possible to make mistakes and cause downtime on your live domain. If you are not 100% confident, it may be a good idea to hire a system admin or developer to copy the domain for you.

Pre-flight

Step 1: First, back up the cPanel account, just in case you have any issues while creating your development environment:

/scripts/pkgacct [username] --backup /home/temp/

Step 2: After creating a backup, you will create a copy of the database of the main domain, otherwise known as the primary domain.

mysqldump [database_name] > /home/temp/backup.[database_name].sql
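
A quick way to sanity-check the export is to look at the last line of the file; a successful mysqldump normally ends with a “Dump completed” comment:

tail -n 1 /home/temp/backup.[database_name].sql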

Step 3: Create the new dev domain in WHM. This domain name will be a subdomain of the primary domain, and creating it is one of the first steps in designing your development environment. We prefer to use dev.[domain].com, the same domain name but with “dev” in front of it, for clarity. Be sure to note all the account information, like the username and password. If you are not familiar with how to create a new account, see the following tutorial.

Step 4: Once you’ve created the subdomain within your cPanel, you’ll copy the files from the main document root to the newly created dev document root. The document root is the location where your website’s files are stored.

Use the following command to find the document root for either domain. Replace “exampledomain.com” with the primary and development domains in turn to determine the location of the document root for each.

whmapi1 domainuserdata domain=[exampledomain.com] | grep -i documentroot

Step 5: After locating the document roots, we will copy the files from the primary domain over to the development environment. Insert the document roots into this next command.

rsync -avh /document/root/of/the/primary/domain/ /document/root/of/the/new/dev/domain

Step 6: Next you will need to set the correct ownership on the dev domain’s files and directories, as the previous username will still be in place. The ‘dev_username’ will be the one given/chosen when you created the new account. The following command will change the ownership for you.

chown -R [dev_username]: /home/[dev_username]

Step 7: After changing file ownership, create a new database and database user for the dev domain. Be sure to note this information, including the password set. Our documentation on creating a new database will walk you through this necessary process.
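
If you prefer the command line over cPanel’s interface, a minimal sketch of this step looks like the following (the bracketed names are the same placeholders used throughout this tutorial; note that cPanel normally prefixes database and user names with the account username):

mysql -e "CREATE DATABASE [new_database_name];"
mysql -e "CREATE USER '[new_database_user]'@'localhost' IDENTIFIED BY '[password]';"
mysql -e "GRANT ALL PRIVILEGES ON [new_database_name].* TO '[new_database_user]'@'localhost';"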

Step 8: Once you’ve created the new database and its user, you can start copying the original database into the newly created database.

mysql [new_database_name] < /home/temp/backup.[database_name].sql

Step 9: Copying over the database is the bulk of the work, but you’ll still need to edit the configuration files for your domain. Typically, some files need to access the database and will do so via the database user and password. The file that contains these credentials needs to be updated with the database, database user, and password you created in step 7 of this tutorial. If you are unsure of the location of these files, talking with a developer may be helpful. If you are working with a WordPress site, you can continue on to the next section. Otherwise, once you have updated your dev configuration files with your new database info, continue on to step 10.

Editing WordPress Configurations
Fortunately, WordPress is one of the most commonly used content management systems, and it is easy to configure, so we’ll provide a short tutorial on how to change the database and database user in the wp-config.php file. First, move to the new document root:

cd /document/root/to/the/dev/domain

There you will edit the wp-config.php file with your favorite text editor, such as vim or nano:

nano wp-config.php

In the wp-config.php file, you will see a section that looks like the one below. Edit the placeholder values with the information you used to create the database earlier in this tutorial.

// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'new_database_name');
/** MySQL database username */
define('DB_USER', 'new_database_user');
/** MySQL database password */
define('DB_PASSWORD', 'password');

In the database, clear all mentions of the original domain and replace them with the dev domain. For example, with WordPress you need to change two entries, ‘home’ and ‘siteurl’, in the _options table. These can be quickly changed using WP-CLI, which is a command line tool for interacting with and managing WordPress sites. To install WP-CLI, follow these instructions and continue on to the next step. If you do have a WordPress website, once you have installed WP-CLI you will want to run the following commands:

su - [dev_username]
cd public_html
wp option update siteurl https://dev.domain.com
wp option update home https://dev.domain.com
exit

Sometimes plugins or themes mention the original domain in the database. If some parts of the dev domain are not working, particularly plugins or themes, you may need to contact a developer to see if the original domain name is still active in the database. After replacing the names using WP-CLI, you’ll have officially created a dev domain.
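
If you suspect lingering references, WP-CLI’s search-replace command can report every occurrence of the old domain in the database without changing anything when given the --dry-run flag (the domain names here are the same placeholders used above):

wp search-replace 'domain.com' 'dev.domain.com' --dry-run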

Step 10: To complete this tutorial you have two choices: add an A record to your DNS to view your dev site online, or edit your local hosts file to view it solely on your computer. For our Liquid Web customers, feel free to contact The Most Helpful Humans™ with any questions you may have in setting up a development environment.
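
If you choose the hosts file route, a minimal sketch of the entry on your own computer would look like the following (203.0.113.10 is a placeholder; substitute your server’s actual IP address, and note that on Windows the file is C:\Windows\System32\drivers\etc\hosts rather than /etc/hosts):

203.0.113.10 dev.domain.com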

Troubleshooting: Cron Jobs

Cron is a service for Linux servers that automatically executes scheduled commands. A cron job can be a series of shell commands, scripts, or other programs. Cron tasks or jobs can perform a variety of functions, and once run, can send out an email message to inform you of their completion or any errors. If you receive an error, there are many ways to troubleshoot the cron task.  Use this article for troubleshooting assistance or as a tutorial on the basics of cron jobs. If you would like to learn more about creating a cron job, check out our Knowledge Base tutorials on the subject.

Checking Configurations with Crontab

From the command line, you can review the scheduled cron jobs by listing the crontab for the user. This command outputs the contents of the user’s crontab to the terminal.

As the user you can run:
crontab -l

As root, you can see any user’s crontab by specifying the username.
crontab -l -u username

You can find some detailed information about how to format cron jobs in the /etc/crontab file.  Below is the example within that file.  Each asterisk can be replaced with a value appropriate to its corresponding field, or left in place to represent all possible values for that position.  For example, if all asterisks are left in place, the cron job will run every minute, all the time.

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
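
Putting that format to use, a complete entry in /etc/crontab might look like the following (the script path is hypothetical); it runs the script at 2:30 AM every day as the root user:

30 2 * * * root /usr/local/bin/nightly-backup.sh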

Altering the E-mail Address of a Cron

Once initiated, a cron sends a notification to an email address set within the MAILTO line of the crontab.

MAILTO="user@domain.com"

To edit the crontab, you can run the following command as the user:

crontab -e

Or, if logged in as root, you can specify the username of any of your users to edit the scheduled tasks they have created.

crontab -e -u username

These commands open the crontab of the user in the default editor, typically vim or nano. Be aware that this is similar to opening any other text file: you’ll need to save the file before closing it.

The MAILTO line indicates where the execution status of a cron should be sent. The sending address will typically be the cron task creator’s username along with the server’s hostname, so an email’s sender address would follow this syntax: unixuser@serverhostname.com. If you don’t see an email right away, it may be a good idea to check your spam folder.

Silenced Crons

Sometimes cron jobs are configured to produce no output or to have their output silenced, even if set with a MAILTO address. If you see a cron job listed with either of the following at the end, it is a sign that the cron’s output has been silenced. These redirect any output to the null device (the black hole of a Linux server). In cases like this, you’ll need to remove the redirection from the cron job to generate output.

&> /dev/null

> /dev/null 2>&1

Some cron jobs are disabled entirely. These will have a “#” in front of the command, causing the line to be ignored when the crontab is read. Remove the “#” to re-activate the cron job.
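
For illustration, here is how the same hypothetical job might appear in a user’s crontab in each state: mailing its output normally, silenced, and disabled entirely (the script path is made up):

*/5 * * * * /usr/local/bin/report.sh
*/5 * * * * /usr/local/bin/report.sh > /dev/null 2>&1
#*/5 * * * * /usr/local/bin/report.sh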

Verifying the Crond Service

Once you’ve confirmed the correct settings, it’s time to verify that the cron system is enabled and running. The following three commands can each be used to verify whether crond (the cron service) is running.

/etc/init.d/crond status

service crond status

systemctl status crond

After running any of the above commands, if you find the crond service is not running, you can start it with one of the following.

/etc/init.d/crond start

service crond start

systemctl start crond

/var/log/cron

Once you know that the cron is enabled, not silenced, and crond is running, it’s time to check the cron log, located at /var/log/cron.

cat /var/log/cron

Example Output:

Oct 2 23:45:01 host CROND[3957]: (root) CMD (/usr/local/lp/apps/kernelupdate/lp-kernelupdate.pl > /dev/null 2>&1)
Oct 2 23:50:01 host CROND[4143]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Oct 2 23:50:01 host CROND[4144]: (root) CMD (/usr/local/maldetect/maldet --mkpubpaths >> /dev/null 2>&1)

In the log, you will see whether, when, and as which user a cron ran. For each execution, you’ll see the date and time followed by the individual cron’s process number in brackets. This timestamp does not confirm that the script ran normally, or at all; it only signifies when the cron system last launched the task. Beyond that, you may need to investigate the cron script itself, or application-level configurations and their respective logs, to ensure the code is executing correctly.

Other Cron Services

This article is merely an overview of the main crond service, as there are many other cron services. The anacron system is a commonly used cron service that handles daily or hourly jobs, and it can even be set to run jobs at reboot. The logs for these kinds of tasks are also within /var/log/cron, even though the jobs are not executed by crond.

Other scheduled tasks, although also referred to as cron jobs, are not executed by the crond system. These cron jobs are often configured within the code or configuration of a website. To determine whether they executed, you’ll need to investigate the other configurations and logs that the cron script interacts with.

As with all cron services, automated jobs can be set up to execute numerous daily tasks so you don’t have to. Cron tasks can occasionally go awry even when they haven’t been altered for years, but knowing where to look is half the battle in troubleshooting a cron job.

Useful Command Line for Linux Admins

The command line terminal, or shell on your Linux server, is a potent tool for deciphering activity on the server, performing operations, or making system changes. But with several thousand executable binaries installed by default, what tools are useful, and how should you use them safely?

We recommend coming to terms with the terminal over SSH. Learn how to connect to your server over SSH, and get started with a few basic shell commands. This article will expand on those basic commands and show you even more useful and practical tools.

Warning: These commands can cause a great deal of harm to your server if misused. Computers do precisely what you tell them to do. If you command your server to delete all files, it will remove every single file without question, and feasibly crash because it deleted itself. Please take precautions when working on your server, and ensure you have good local and remote backups available.

In the basic shell commands tutorial, you learned about basic navigation and file manipulation commands like ls, rm, mv, and cd. Below are a few essential commands for learning about your Linux system. (Display a user manual for each command by using man before each command, like so: man ps)

pipe

The pipe operator (the | between two or more commands) is possibly the most useful tool in the shell. It allows the output of one command to be fed directly into the input of another, without temporary files. The pipe is useful if you are dealing with a huge command output that you would like to format further, or that should be processed by some other program, without using a temporary file.

The basic tutorial introduced the commands w and grep. Let’s connect them using a pipe to format the output. The w command allows us to view users logged into the server, while piping its output to the grep command filters the list down to the ‘root’ user:

# w
08:56:43 up 27 days, 22:17, 2 users, load average: 0.00, 0.00, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 10.1.1.206 08:52 0.00s 0.06s 0.00s w
jeff pts/1 191.168.1.6 09:02 1:59 0.07s 0.06s -bash

# w | grep root
root pts/0 10.1.1.206 08:52 0.00s 0.06s 0.00s w

The filtered output is much more digestible, and this becomes even more important with the output from commands like ps.

ps

The ps command shows a ‘process snapshot’ of all currently running programs on the server. It is particularly useful in conjunction with the grep command to pare its verbose results down to a certain keyword. For instance, let’s see if the Apache process, ‘httpd’, is running:

# ps faux | grep httpd
root 27242 0.0 0.0 286888 700 ? Ss Aug29 1:40 /usr/sbin/httpd -k start
nobody 77761 0.0 0.0 286888 528 ? S Sep17 0:03 \_ /usr/sbin/httpd -k start
nobody 77783 0.0 1.6 1403008 14416 ? Sl Sep17 0:03 \_ /usr/sbin/httpd -k start
...

We can see that there are several ‘httpd’ processes running here. The one owned by ‘root’ is the core one (the ‘forest’ nodes, \_, help identify child processes, too). If we did not see any httpd processes, we could safely assume Apache is not running and should restart it to serve website requests again.

The common flags used for ps are ‘faux’, which display processes for all users in a user-oriented format, run from any source (terminal or not, which is signified by the x), paired with a process tree (forest). The ‘aux’ flags ensure a view of every single process on the server, while the ‘f’ helps determine which processes are parents and which are children.

top

Like the ps command, the top command helps to determine which processes are running on a server, but top has the advantage of displaying in real time while filtering by several different factors. Simply put, it dynamically shows the ‘top’ resource utilizers, and it is executed by running:

# top

Once inside of top, you will see a lot of process threads moving around. The ones at the top, by default, are the processes using the most CPU at the moment. Hold shift and press ‘M’ to change the sort to processes using the most memory; hold shift and press ‘P’ to change the sort back to CPU. When you want to quit, simply press ‘q’.

Since top writes information live to the terminal, its output cannot easily be parsed by grep and is thus seldom used in conjunction with a pipe. Top is most useful for discovering what is causing a server to run out of memory, or what is causing high load. For instance, on a server with high load, if the first process is using 100% CPU and its name is php-fpm, then we can assume that an inefficient PHP script is causing the load. In this case, php-fpm should be restarted (on cPanel this is achieved with /scripts/restartsrv_apache_php_fpm).
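
That said, if you ever do need to capture top’s output in a script or pipeline, it offers a non-interactive batch mode; for instance, the following takes a single snapshot and shows just the header and top processes (-b is batch mode, -n 1 limits it to one iteration):

# top -bn1 | head -n15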

netstat

netstat is another tool for showing what services are running on a server; in particular, it shows processes that are listening for traffic on any particular network port. It can also display other interface statistics. Here is how you would display all publicly listening processes:

# netstat -tunlp

The command flags ‘-tunlp’ will show program names listening for UDP or TCP traffic, with numeric addresses. This can be further scoped down by grep to see, for instance, what program is listening on port 80:

# netstat -tunlp | grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 27242/httpd
tcp 0 0 :::80 :::* LISTEN 27242/httpd

There are two listeners shown, one for all IPv4 addresses (0.0.0.0) and one for all IPv6 addresses (::) on the local machine. Both belong to PID 27242, the root-owned httpd process we saw earlier with ps, confirming that Apache is the program serving port 80. You can use this kind of information to verify that services are listening on the ports your configurations expect.
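
On newer distributions where netstat is no longer installed by default, the ss utility from the iproute2 package accepts largely the same flags and produces similar output:

# ss -tunlp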

ip

The ip command shows network devices, their routes, and provides a means of manipulating their interfaces. Liquid Web IP addresses are statically assigned, so you will not need to make any changes to the IPs on your server, but you can use the ip command to read information about the interfaces:

# ip a

This command is short for ‘ip address show’, and shows you the active interfaces on the server:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.3/24 brd 192.168.0.255 scope global eth0
inet 192.168.0.1/24 brd 192.168.0.255 scope global eth0:cp1
inet 192.168.0.2/24 brd 192.168.0.255 scope global secondary eth0:cp2
inet6 fe80::5054:ff:face:b00c/64 scope link
valid_lft forever preferred_lft forever

In our case, there are two interfaces, numbered 1 and 2: lo (the localhost loopback interface) and eth0. eth0 has three IP addresses assigned to it, on eth0, eth0:cp1, and eth0:cp2, which are 192.168.0.3, 192.168.0.1, and 192.168.0.2, respectively. We can also see that the MAC address for eth0 is 52:54:00:00:00:00, which can be helpful for troubleshooting connections to other devices like firewalls and switches. This interface also supports IPv6, and its link-local address is fe80::5054:ff:face:b00c.

lsof

lsof stands for ‘list open files,’ and it does just that: it lists the files that are in use by the system. Listing open files is very helpful for determining what an especially complex script is doing, or for finding a file that is in the middle of being written.

Let’s use PHP as an example. We want to find the location of PHP’s default error logs, but Apache’s configuration is a large group of nested folders. The ps command only tells us whether PHP is running, not which file is being written. lsof will show us this handily:

# lsof -c php | grep error
php-fpm 13366 root mem REG 252,3 16656 264846 /lib64/libgpg-error.so.0.5.0
php-fpm 13366 root 2w REG 252,3 185393 3139602 /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log
php-fpm 13366 root 5w REG 252,3 185393 3139602 /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log
php-fpm 13395 root mem REG 252,3 16656 264846 /lib64/libgpg-error.so.0.5.0
php-fpm 13395 root 2w REG 252,3 14842 2623528 /opt/cpanel/ea-php56/root/usr/var/log/php-fpm/error.log
php-fpm 13395 root 7w REG 252,3 14842 2623528 /opt/cpanel/ea-php56/root/usr/var/log/php-fpm/error.log

The ‘-c’ flag only lists processes that match a certain command name, in our case ‘php’. We pipe this output into grep to search for the files that match the name ‘error’, and we see that there are two open error logs: /opt/cpanel/ea-php56/root/usr/var/log/php-fpm/error.log and /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log. Check each of these files (with tail or cat) to see recently logged errors.

If the rsync command is being used to transfer a large folder, in this case /backup, we can search for the files rsync has open inside it:

# lsof -c rsync | grep /backup
rsync 48479 root cwd DIR 252,3 4096 4578561 /backup
rsync 48479 root 3r REG 252,3 5899771606 4578764 /backup/2018-09-12/accounts/jeff.tar.gz
rsync 48480 root cwd DIR 252,3 4096 4578562 /backup/temp
rsync 48481 root cwd DIR 252,3 4096 4578562 /backup/temp
rsync 48481 root 1u REG 252,3 150994944 4578600 /backup/temp/2018-09-12/accounts/.jeff.tar.gz.yG6Rl2

The process has two regular files open in the /backup directory: /backup/2018-09-12/accounts/jeff.tar.gz and /backup/temp/2018-09-12/accounts/.jeff.tar.gz.yG6Rl2. Even with quiet output on rsync, we can see that it is currently working on copying the jeff.tar.gz file.

df

df is a swift command that displays how much space is used on the mounted partitions of a system. It only reads data from the partition tables, so it is slightly less accurate if you are actively moving files around, but it beats enumerating and adding up every file.

# df -h

The ‘-h’ flag gets human-readable output in nice round numbers (it can be omitted to print output in KB):

Filesystem Size Used Avail Use% Mounted on
/dev/vda3 72G 49G 20G 72% /
tmpfs 419M 0 419M 0% /dev/shm
/dev/vda1 190M 59M 122M 33% /boot
/usr/tmpDSK 3.1G 256M 2.7G 9% /tmp

Some of the information we can see: the primary partition, mounted on /, is at 72% usage with 20GB free. Since we’re not planning on adding any more sites to our server right now, this is not a problem. But some of the information we don’t see is also telling. There is no separate /backup partition mounted on the server, so cPanel backups are filling up the primary partition. If we want to retain more backups, we should consider adding another physical or networked disk to store them.

df can also show inode (file and folder) count of mounted filesystems from the same partition table information:

# df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/vda3 4.6M 496K 4.1M 11% /
tmpfs 105K 2 105K 1% /dev/shm
/dev/vda1 50K 44 50K 1% /boot
/usr/tmpDSK 201K 654 201K 1% /tmp

Our main partition has 496,000 inodes used and just over 4 million inodes free, which is plenty for general use. If we stored a lot of small files, like emails, the inode count could be much higher for the same disk usage in bytes. If you run out of inodes on your partition, it won’t be able to record the location of any more files or folders; even if you have free disk space, the system will function as if your disk is full.

du

Like the df command, du will tell you disk usage, but it works by recursively counting folders and files that you specify. This command can take a long time on large folders, or those with a lot of inodes.

# du -hs /home/temp/
2.4M /home/temp/

The flags ‘-hs’ give human-readable output and display only the summary of the enumeration, rather than each nested folder. Another useful flag is --max-depth, which defines how deep you would like to list folder summaries; it is like increasing the depth of the -s flag (-s is basically --max-depth=0, while --max-depth=1 adds the first level of sub-directories, --max-depth=2 the second, and so on):

# du -hs public_html/
5.5G public_html/

# du -h public_html/ --max-depth=0
5.5G public_html/

# du -h public_html/ --max-depth=1
8.0K public_html/_vti_txt
8.0K public_html/_vti_cnf
257M public_html/storage
64K public_html/cgi-bin
8.0K public_html/_vti_log
5.0G public_html/images
64K public_html/scripts
8.0K public_html/.well-known
8.0K public_html/_private
5.0M public_html/forum
56K public_html/_vti_pvt
24K public_html/_vti_bin
360K public_html/configs
5.5G public_html/

These commands help to find out whether any specific folders inside public_html are significantly larger than others. We can add grep to the pipe to show only entries whose sizes are measured in gigabytes:

# du -h public_html/ --max-depth=1 | grep G
5.0G public_html/images
5.5G public_html/

Clearly, we have some pictures to delete or compress if we need more disk space.
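
As an alternative to grepping for the G suffix, GNU sort can order human-readable sizes directly with its -h flag, which catches large folders no matter the unit:

# du -h public_html/ --max-depth=1 | sort -rh | head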

free

The free command shows an instant reading of free memory on your system. This information is also displayed by top, but when you only need the memory totals, free is a lot faster.

# free -m
total used free shared buffers cached
Mem: 837 750 86 5 66 201
-/+ buffers/cache: 482 354
Swap: 1999 409 1590

Our free command with the megabytes flag displays output in MB. Without it, the output defaults to -k (kilobytes), but you can also pass -g for gigabytes (though the output is rounded and thus less accurate).

In our output, the total RAM on the system is 837MB, or about 1GB. Of this, 750MB is ‘used,’ but 66MB of that is buffers and 201MB is cached data, so the total ‘free’ RAM on the server is actually around 354MB. Because the calculations are made in KB and rounded for output, the numbers won’t always add up exactly (750 plus 86 is not quite 837).

The final line shows swap usage, which you want to avoid. Our output says that 409MB of the on-disk swap space is used, but since there is free RAM at the moment, the swap usage happened in the past, and the system has since stopped relying on swap.

If there is an amount in the ‘used’ column for swap, and there is 0 free RAM after accounting for the buffers and cache, then the system will be very sluggish. We call this ‘being in swap.’ Reading and writing to swap is very slow compared to RAM, and you should avoid going into swap space by tuning your programs to use memory appropriately. If you run out of both RAM and swap space, your server will be out of memory (OOM) and can freeze outright.

Advanced Commands

Useful Pipelines

Now that we have a few advanced commands under our belt, let’s learn more about how we can use pipes to our advantage by making useful command strings or scripts, aka ‘one-liners’ or ‘pipelines.’ These use several formatting commands, such as sed, awk, sort, uniq, or column, whose descriptions fall outside the scope of this article (you can learn more about them using the man command).

Disk Usage Formatting

This command uses du and awk, an output manipulation tool, to nicely format and sort the output of a du command in the current working directory by size. First, change directory (cd) to your intended folder for analysis, and run:

# du -sk ./* | sort -nr | awk 'BEGIN{ pref[1]="K"; pref[2]="M"; pref[3]="G";} { total = total + $1; x = $1; y = 1; while( x > 1024 ) { x = (x + 1023)/1024; y++; } printf("%g%s\t%s\n",int(x*10)/10,pref[y],$2); } END { y = 1; while( total > 1024 ) { total = (total + 1023)/1024; y++; } printf("Total: %g%s\n",int(total*10)/10,pref[y]); }'

The above command will add dynamic suffixes to the on-disk sizes, so you can see output in GB, MB, and KB, instead of just one of those powers. The top listed folder will be your largest in that directory.

Check Connection Count

This string of commands checks active connections to the server using netstat, pares the output down to HTTP and HTTPS connections using grep, then formats and sorts the output using a series of other commands. This example shows how many times each listed IP address has connected to the server.

# netstat -tn 2>/dev/null | grep -E '(:80|:443)' | awk '{print $5}' | cut -f1 -d: | sort | uniq -c | sort -rn
2 46.229.168.149
1 46.229.168.147
1 46.229.168.145
1 46.229.168.144
1 46.229.168.142
1 46.229.168.141
1 46.229.168.138
...

In our case, a subnet hitting us with requests to scrape data was causing a lot of load on the server. Adding this IP range to the firewall’s deny list stopped the attack for now.

You can also get a quick summary of just the total number of connections with this command:

# netstat -tan | grep -E '(:80|:443)' | wc -l
8

Format error_log Output

Here is some advanced usage for grep output. The sed command is like awk in that it edits the output stream as it is printed. In this case, we want to look at logged modsec errors, adding some whitespace between the entries so they are easier to read:

# grep -i modsec /usr/local/apache/logs/error_log | tail -n100 | sed -e "s/$/\\n/"

This command gives us the last 100 logged modsec, ModSec, or ModSecurity lines (since the ‘-i’ flag for grep ignores case sensitivity) and appends a newline to the end of each line.

Memory Usage By Account

This one-liner adds up all of the percentages of memory usage by user for running programs, as reported by ps, and gives you a sorted output.

# tmpvar=""; for each in `ps aux | grep -v COMMAND | awk '{print $1}' | sort -u`; do tmpvar="$tmpvar\n`ps aux | egrep ^$each | awk 'BEGIN{total=0};{total += $4};END{print total, $1}'`"; done; echo -e $tmpvar | grep -v ^$ | sort -rn | head; unset tmpvar

Usage Count In /var/tmp

When searching for a file count per user, if you encounter a number as the file owner, you can conclude that the user has been removed, and the file should likely be deleted.

# find /var/tmp/ ! -user root ! -user mysql ! -user nobody ! -group root ! -group mysql | xargs ls -lh | awk '{print $3, $5, $9}' | sort | awk '{print $1}' | uniq -c | sort -rh

Top Processes By Memory Usage

This command outputs the processes using the most memory, sorting by the 4th column of ps output and displaying the top 10 with head.

# ps aux | sort -rnk 4 | head

Whether you are brushing up for a Linux admin interview or just want to become more familiar with the shell, these commands are sure to be a useful addition to your repertoire.

Setup a Development Environment in Ubuntu

Often we want to edit our domain’s code, but on a production website this can be dangerous. Making changes to the production site would not only allow the entire Internet to see unfinished changes but could also cause errors to display. As a workaround, we’ll create a testing or “dev” domain to work out any bugs and changes to the site.

As a warning, this is advanced technical work. It’s possible to make mistakes and cause downtime on your live domain. If you are not 100% confident, it may be a good idea to hire a system admin or developer to copy the domain for you.

Continue reading “Setup a Development Environment in Ubuntu”