Troubleshooting: Cron Jobs

Cron is a service for Linux servers that automatically executes scheduled commands. A cron job can be a series of shell commands, a script, or another program. Cron jobs can perform a variety of functions, and once run, can send out an e-mail message to inform you of completion or errors. If you receive an error, there are many ways to troubleshoot the cron task. Use this article for troubleshooting assistance or as a tutorial on the basics of cron jobs. If you would like to learn more about creating a cron job, check out our Knowledge Base tutorials on the subject.

Checking Configurations with Crontab

From the command line, you can review the scheduled cron jobs by listing the crontab for the user. This command outputs the contents of the user’s crontab to the terminal.

As the user you can run:
crontab -l

As root, you can see any user’s crontab by specifying the username.
crontab -l -u username

You can find detailed information about how to format cron jobs in the /etc/crontab file. Below is the example within that file. Each asterisk can be replaced with a value valid for its corresponding field, or left in place to represent all possible values for that position. For example, a job defined with all five asterisks runs every minute, all the time.

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
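
For instance, a hypothetical /etc/crontab entry (the script path is an example, not a real file) that runs a cleanup script as root every day at 2:30 AM would look like:

30 2 * * * root /usr/local/bin/cleanup.sh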

 

Altering the E-mail Address of a Cron

Once a cron job runs, a notification is sent to the email address set within the MAILTO line of the crontab.

MAILTO="user@domain.com"

To edit the crontab, you can run the following commands as the user:

crontab -e

Or, if logged in as root, you can specify the username of any of your users to edit the scheduled tasks they have created.

crontab -e -u username

These commands open the user’s crontab in the default editor, typically vim or nano. Be aware that this is like opening any other text file: you’ll need to save before closing.

The MAILTO line indicates where the execution status of a cron job should be sent. The sending address will typically be the cron task creator’s username combined with the server’s hostname, so an email’s sender address would follow the syntax unixuser@serverhostname.com. If you don’t see an email right away, it’s a good idea to check your spam box.

Silenced Crons

Sometimes cron jobs are configured to produce no output or to have their output silenced, even if set with a MAILTO address. If you see a cron job listed with either of the following at the end, it is a sign that the cron’s output has been silenced. These redirections send any output to the null device (the black hole of a Linux server). In cases like this, you’ll need to remove the redirection from the cron job to generate output.

&> /dev/null

> /dev/null 2>&1

Some cron jobs are disabled entirely. These will have a “#” in front of the command, causing the line to be ignored when the crontab is read. Remove the “#” to re-activate the cron job.
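
As a sketch, here is a silenced entry, its unsilenced counterpart, and a disabled entry (report.sh is a hypothetical script path):

*/5 * * * * /usr/local/bin/report.sh > /dev/null 2>&1
*/5 * * * * /usr/local/bin/report.sh
#*/5 * * * * /usr/local/bin/report.sh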

Verifying the Crond Service

Once you’ve confirmed the correct settings, it’s time to verify that the cron system is enabled and running. Any of the following three commands can be used to verify that crond (the cron daemon) is running.

/etc/init.d/crond status

service crond status

systemctl status crond

After running any of the above commands, if you find the crond service is not running, you can start it with one of the following.

/etc/init.d/crond start

service crond start

systemctl start crond
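
On systemd-based systems, you can also ensure crond starts automatically at boot:

systemctl enable crond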

/var/log/cron

Once you know that the cron is enabled, not silenced, and crond is running, it’s time to check the cron log, located at /var/log/cron.

cat /var/log/cron

Example Output:

Oct 2 23:45:01 host CROND[3957]: (root) CMD (/usr/local/lp/apps/kernelupdate/lp-kernelupdate.pl > /dev/null 2>&1)
Oct 2 23:50:01 host CROND[4143]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Oct 2 23:50:01 host CROND[4144]: (root) CMD (/usr/local/maldetect/maldet --mkpubpaths >> /dev/null 2>&1)
In the log, you can see whether, when, and as which user a cron job ran. For each execution, you’ll see the date and time followed by the process ID in brackets. This timestamp does not confirm that the script ran normally, or at all; it only signifies when the cron system last launched the task. Beyond that, you may need to investigate the cron script itself or application-level configurations and their respective logs to ensure the code is executing correctly.
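
For example, to check when a particular job last ran, grep the log for the script’s name (here, the kernel-update job from the sample output above):

grep lp-kernelupdate /var/log/cron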

Other Cron Services

This article is merely an overview of the main crond service, as there are many other cron services. The anacron system is a commonly used cron service that handles daily or hourly jobs, and can even be set to run at reboot. The logs for these kinds of tasks are also within /var/log/cron, even though they are not executed by crond.

Other scheduled tasks, although also referred to as cron jobs, are not executed from the crond system. These cron jobs are often configured within the code or configuration of a website. To determine if executed, you’ll need to investigate other configurations and logs that the cron script interacts with.

As with all cron services, automated jobs can be employed to execute numerous daily tasks so you don’t have to. Cron tasks can occasionally go awry even without being altered for years, but knowing where to look is half the battle in troubleshooting a cron job.

 

Useful Command Line for Linux Admins

The command line terminal, or shell on your Linux server, is a potent tool for deciphering activity on the server, performing operations, or making system changes. But with several thousand executable binaries installed by default, what tools are useful, and how should you use them safely?

We recommend coming to terms with the terminal over SSH. Learn how to connect to your server over SSH, and get started with a few basic shell commands. This article will expand on those basic commands and show you even more useful and practical tools.

Warning:
These commands can cause a great deal of harm to your server if misused. Computers do precisely what you tell them to do. If you command your server to delete all files, it will remove every single file without question, and feasibly crash because it deleted itself. Please take precautions when working on your server, and ensure you have good local and remote backups available.

In the basic shell commands tutorial, you learned about basic navigation and file manipulation commands like ls, rm, mv, and cd. Below are a few essential commands for learning about your Linux system. (Display a user manual for each command by using man before each command, like so: man ps)

pipe

The pipe operator (the | placed between two or more commands) is possibly the most useful tool in the shell language. It allows the output of one command to be fed directly into the input of another command, without temporary files. Pipes are useful when you are dealing with a huge command output that you would like to format further, or have processed by some other program, without using a temporary file.

The basic tutorial showed the commands w and grep. Let’s connect them using pipe to format the output. Using the w command allows us to view users logged into the server while passing the output for the grep command to filter by the ‘root’ user type:

# w
08:56:43 up 27 days, 22:17, 2 users, load average: 0.00, 0.00, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
root pts/0 10.1.1.206 08:52 0.00s 0.06s 0.00s w
jeff pts/1 191.168.1.6 09:02 1:59 0.07s 0.06s -bash

# w | grep root
root pts/0 10.1.1.206 08:52 0.00s 0.06s 0.00s w

The format of the last command is much more digestible and becomes much more important with the output from commands like ps.

ps

The ps command shows a ‘process snapshot’ of all currently running programs on the server. It is particularly useful in conjunction with the grep command to pare its verbose results down to a certain keyword. For instance, let’s see if the Apache process, ‘httpd’, is running:

# ps faux | grep httpd
root 27242 0.0 0.0 286888 700 ? Ss Aug29 1:40 /usr/sbin/httpd -k start
nobody 77761 0.0 0.0 286888 528 ? S Sep17 0:03 \_ /usr/sbin/httpd -k start
nobody 77783 0.0 1.6 1403008 14416 ? Sl Sep17 0:03 \_ /usr/sbin/httpd -k start
...

We can see that there are several ‘httpd’ processes running here. The one owned by ‘root’ is the core one (the ‘forest’ markers, \_, help identify child processes, too). If we did not see any httpd processes, we could safely assume Apache is not running, and we should restart it to serve website requests again.

The common flags used for ps are ‘faux’: ‘a’ and ‘u’ display processes for all users in a user-oriented format, ‘x’ includes processes run from any source (terminal or not), and ‘f’ draws the process tree (forest). The ‘aux’ flags ensure you see every single process on the server, while the ‘f’ helps to determine which processes are parents and which are children.
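
One quirk worth knowing: grep usually matches its own process in the ps output. A common trick is to wrap one character of the search term in brackets so the grep command line no longer matches its own pattern:

ps faux | grep [h]ttpd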

top

Like the ps command, the top command helps to determine which processes are running on a server, but top has an advantage in its ability to display in real-time while filtering by several different factors. Simply, it dynamically shows the ‘top’ resource utilizers and is executed by running:

# top

Once inside top, you will see a lot of process threads moving around. The ones at the top, by default, are the processes using the most CPU at the moment. Hold Shift and press ‘M’ to change the sort to the processes using the most memory. Hold Shift and press ‘P’ to change the sort back to CPU. When you want to quit, simply press ‘q’.

Since top displays information live, its interactive output cannot be parsed by grep, so it is seldom used in conjunction with a pipe. Top is most useful for discovering what is causing a server to run out of memory, or what is causing load. For instance, on a server with high load, if the first process is using 100% CPU and its name is php-fpm, then we can assume that an inefficient PHP script is causing the load. In this case, php-fpm should be restarted (this is achieved on cPanel with /scripts/restartsrv_apache_php_fpm).
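
That said, top does offer a batch mode intended for non-interactive use; a quick sketch for capturing a single snapshot that can be piped or saved:

top -bn1 | head -20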

netstat

netstat is another tool to show what service is running on a server, but in particular, it shows processes that are listening for traffic on any particular network port. It can also display other interface statistics. Here is how you would display all publicly listening processes:

# netstat -tunlp

The command flags ‘-tunlp’ will show program names listening for UDP or TCP traffic, with numeric addresses. This can be further scoped down by grep to see, for instance, what program is listening on port 80:

# netstat -tunlp | grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 27242/httpd
tcp 0 0 :::80 :::* LISTEN 27242/httpd

Two listeners are shown here, one for all IPv4 addresses (0.0.0.0) and one for all IPv6 addresses (::) on the local machine. Both belong to PID 27242, the httpd process, confirming that Apache is the program listening on port 80. You can use this information to verify that the correct services are bound to the ports your configurations expect.

ip

The ip command shows network devices and their routes, and provides a means of manipulating their interfaces. LiquidWeb IP addresses are statically assigned, so you will not need to make any changes to the IPs on your server, but you can use the ip command to read the information on the interfaces:

# ip a

This command is short for ‘ip address show’, and shows you the active interfaces on the server:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 52:54:00:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.3/24 brd 192.168.0.255 scope global eth0
inet 192.168.0.1/24 brd 192.168.0.255 scope global eth0:cp1
inet 192.168.0.2/24 brd 192.168.0.255 scope global secondary eth0:cp2
inet6 fe80::5054:ff:face:b00c/64 scope link
valid_lft forever preferred_lft forever

In our case, there are two interfaces, numbered 1 and 2: lo (the localhost loopback interface) and eth0. eth0 has three IP addresses assigned to it, on eth0, eth0:cp1, and eth0:cp2, which are 192.168.0.3, 192.168.0.1, and 192.168.0.2 respectively. We can also see that the MAC address for eth0 is 52:54:00:00:00:00, which can be helpful for troubleshooting connections to other devices like firewalls and switches. This interface also supports IPv6, and its link-local address is fe80::5054:ff:face:b00c.
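
The routing table mentioned above can be read with the same tool:

ip route show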

lsof

lsof stands for ‘list open files,’ and it does just that: it lists the files that are in use by the system. Listing open files is very helpful in determining which files a complex script has open, or for finding a file that is actively being written.

Let’s use PHP as an example. We want to find the location of the PHP default error logs, but Apache’s configuration is a large group of nested folders. The ps command only tells us whether PHP is running, not which files it is writing to. lsof shows this handily:

# lsof -c php | grep error
php-fpm 13366 root mem REG 252,3 16656 264846 /lib64/libgpg-error.so.0.5.0
php-fpm 13366 root 2w REG 252,3 185393 3139602 /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log
php-fpm 13366 root 5w REG 252,3 185393 3139602 /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log
php-fpm 13395 root mem REG 252,3 16656 264846 /lib64/libgpg-error.so.0.5.0
php-fpm 13395 root 2w REG 252,3 14842 2623528 /opt/cpanel/ea-php56/root/usr/var/log/php-fpm/error.log
php-fpm 13395 root 7w REG 252,3 14842 2623528 /opt/cpanel/ea-php56/root/usr/var/log/php-fpm/error.log

The ‘-c’ flag will only list processes that match a certain command name, in my case, ‘php’. I pipe this output into grep to search for the files that match the name ‘error’, and I see that there are two open error logs: /opt/cpanel/ea-php56/root/usr/var/log/php-fpm/error.log and /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log. Check each of these files (with tail or cat) to see recently logged errors.
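
For example, to view recent entries in one of the error logs found above:

tail -n 20 /opt/cpanel/ea-php70/root/usr/var/log/php-fpm/error.log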

If using the rsync command for the transfer of large folder(s), in this case, /backup, we can search for open rsync processes inside:

# lsof -c rsync | grep /backup
rsync 48479 root cwd DIR 252,3 4096 4578561 /backup
rsync 48479 root 3r REG 252,3 5899771606 4578764 /backup/2018-09-12/accounts/jeff.tar.gz
rsync 48480 root cwd DIR 252,3 4096 4578562 /backup/temp
rsync 48481 root cwd DIR 252,3 4096 4578562 /backup/temp
rsync 48481 root 1u REG 252,3 150994944 4578600 /backup/temp/2018-09-12/accounts/.jeff.tar.gz.yG6Rl2

The process has two regular files open in the /backup directory: /backup/2018-09-12/accounts/jeff.tar.gz and /backup/temp/2018-09-12/accounts/.jeff.tar.gz.yG6Rl2. Even with quiet output on rsync, we can see that it is currently working on copying the jeff.tar.gz file.

df

df is a swift command that displays how much space is used on the mounted partitions of a system. It only reads usage totals from the filesystem metadata, so it is slightly less accurate if you are actively moving files around, but it beats enumerating and adding up every file.

# df -h

This ‘-h’ flag gets human readable output in nice round numbers (it can be omitted to print output in KB):

Filesystem Size Used Avail Use% Mounted on
/dev/vda3 72G 49G 20G 72% /
tmpfs 419M 0 419M 0% /dev/shm
/dev/vda1 190M 59M 122M 33% /boot
/usr/tmpDSK 3.1G 256M 2.7G 9% /tmp

From this output, we can see that the primary partition mounted on / is 72% used, with 20GB free. Since we’re not planning on adding any more sites to our server right now, this is not a problem. But some of the information we don’t see is also telling: there is no separate /backup partition mounted on this server, so cPanel backups are filling up the primary partition. To retain more backups, consider adding another physical or networked disk to store them.

df can also show inode (file and folder) count of mounted filesystems from the same partition table information:

# df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/vda3 4.6M 496K 4.1M 11% /
tmpfs 105K 2 105K 1% /dev/shm
/dev/vda1 50K 44 50K 1% /boot
/usr/tmpDSK 201K 654 201K 1% /tmp

Our main partition has 496,000 inodes used and just over 4 million inodes free, which is plenty for general use. If we stored a lot of small files, like emails, the inode count would be much higher for the same disk usage in bytes. If you run out of inodes on a partition, it won’t be able to record the location of any more files or folders; even if you have free disk space, the system will behave as if the disk is full.
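
If you ever need to hunt down where inodes are being consumed, a small sketch like this counts files per directory (assuming user data lives under /home; adjust the path as needed):

for d in /home/*/; do echo "$(find "$d" 2>/dev/null | wc -l) $d"; done | sort -rn | head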

du

Like the df command, du will tell you disk usage, but it works by recursively counting folders and files that you specify. This command can take a long time on large folders, or those with a lot of inodes.

# du -hs /home/temp/
2.4M /home/temp/

The flags ‘-hs’ give human-readable output and display only the summary of the enumeration, rather than each nested folder. Another useful flag is --max-depth, which defines how deep you would like to list folder summaries. It effectively extends the -s flag (-s is equivalent to --max-depth=0, while --max-depth=1 adds each first-level subdirectory, and so on):

# du -hs public_html/
5.5G public_html/

# du -h public_html/ --max-depth=0
5.5G public_html/

# du -h public_html/ --max-depth=1
8.0K public_html/_vti_txt
8.0K public_html/_vti_cnf
257M public_html/storage
64K public_html/cgi-bin
8.0K public_html/_vti_log
5.0G public_html/images
64K public_html/scripts
8.0K public_html/.well-known
8.0K public_html/_private
5.0M public_html/forum
56K public_html/_vti_pvt
24K public_html/_vti_bin
360K public_html/configs
5.5G public_html/

These commands help to find out whether any specific folders inside public_html are significantly larger than others. Adding grep to the pipeline shows only the folders that are 1GB or larger:

# du -h public_html/ --max-depth=1 | grep G
5.0G public_html/images
5.5G public_html/

Clearly, we have some pictures to delete or compress if we need more disk space.

free

The free command shows an instant reading of the free memory on your system. This information is also displayed by top, but when you only need the memory totals, free is a lot faster.

# free -m
total used free shared buffers cached
Mem: 837 750 86 5 66 201
-/+ buffers/cache: 482 354
Swap: 1999 409 1590

The free command with the -m flag displays output in megabytes. Without it, output defaults to kilobytes (-k), but you can also pass -g for gigabytes (though that output is rounded and thus less accurate).

In our output, the total RAM on the system is 837MB, or about 1GB. Of this, 750MB is ‘used,’ but 66MB of that is buffers and 201MB is cached data, so subtracting those, the effective ‘free’ RAM on the server is around 354MB. Because the calculations are made in KB and rounded for output, the numbers won’t always add up exactly (750 plus 86 is not 837).

The final line shows swap usage, which you want to avoid relying on. This output says that 409MB of the on-disk swap space is used, but since there is free RAM at the moment, the swap usage occurred in the past, and the system has since stopped actively using swap.

If there were an amount in the ‘used’ column for swap and 0 free RAM after accounting for buffers and cache, the system would be very sluggish. We call this ‘being in swap.’ Reading and writing to swap is very slow compared to RAM, and you should avoid going into swap space by tuning your programs to use memory appropriately. If you run out of both RAM and swap space, your server will be out of memory (OOM) and can freeze outright.
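
To see which processes are currently holding swap, you can read the VmSwap field from /proc (a sketch; VmSwap appears on reasonably modern kernels, and values are in kB):

grep VmSwap /proc/[0-9]*/status 2>/dev/null | sort -k2 -rn | head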

Advanced Commands

Useful Pipelines

Now that we have a few advanced commands under our belt, let’s learn more about how we can use pipes to our advantage in building useful command strings or scripts, aka ‘one-liners’ or ‘pipelines.’ These use several formatting commands, such as sed, awk, sort, uniq, or column, whose full description falls outside the scope of this article (you can learn more about them using the man command).

Disk Usage Formatting

This command will use du and awk, an output manipulation tool, to nicely format and sort the output of a du command in the current working directory by size. First, change directory (cd) to your intended folder for analysis, and run:

# du -sk ./* | sort -nr | awk 'BEGIN{ pref[1]="K"; pref[2]="M"; pref[3]="G";} { total = total + $1; x = $1; y = 1; while( x > 1024 ) { x = (x + 1023)/1024; y++; } printf("%g%s\t%s\n",int(x*10)/10,pref[y],$2); } END { y = 1; while( total > 1024 ) { total = (total + 1023)/1024; y++; } printf("Total: %g%s\n",int(total*10)/10,pref[y]); }'

The above command will add dynamic suffixes to the on-disk sizes, so you can see output in GB, MB, and KB, instead of just one of those powers. The top listed folder will be your largest in that directory.

Check Connection Count

This string of commands checks active connections to the server using netstat, pares the output down to HTTP and HTTPS connections using grep, then formats and sorts the output using a series of other commands. This example shows how many times each listed IP address has connected to the server.

# netstat -tn 2>/dev/null | grep -E '(:80|:443)' | awk '{print $5}' | cut -f1 -d: | sort | uniq -c | sort -rn
2 46.229.168.149
1 46.229.168.147
1 46.229.168.145
1 46.229.168.144
1 46.229.168.142
1 46.229.168.141
1 46.229.168.138
...

In our case, a subnet was hitting us with requests to scrape data, causing a lot of load on the server. Adding this IP range to the firewall’s deny list stops the attack for now.
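
As a sketch, the offending range above could be dropped with a plain iptables rule; on managed servers a firewall manager such as CSF may maintain the rules instead, so use its deny list where applicable:

iptables -I INPUT -s 46.229.168.0/24 -j DROP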

You can also get a quick summary of just the total number of connections with this command:

# netstat -tan | grep -E '(:80|:443)' | wc -l
8

Format error_log Output

Here is some advanced usage for grep output. The sed command is like awk in that it edits the output stream as it is printed. In this case, we want to look at logged ModSec errors, but with some whitespace added between the entries so they are easier to read:

# grep -i modsec /usr/local/apache/logs/error_log | tail -n100 | sed -e "s/$/\\n/"

This command will give us the last 100 logged modsec, ModSec, or ModSecurity lines (since the ‘-i’ flag for grep ignores case) and append a newline after each line, adding whitespace between entries.

Memory Usage By Account

This pipeline adds up the percentages of memory usage per user for running programs as reported by ps and gives you a sorted output.

# tmpvar=""; for each in `ps aux | grep -v COMMAND | awk '{print $1}' | sort -u`; do tmpvar="$tmpvar\n`ps aux | egrep ^$each | awk 'BEGIN{total=0};{total += $4};END{print total, $1}'`"; done; echo -e $tmpvar | grep -v ^$ | sort -rn | head; unset tmpvar

Usage Count In /var/tmp

When searching for a file count per user, if you encounter a numeric file owner, you can conclude that the owning user has been removed, and the file should likely be deleted.

# find /var/tmp/ ! -user root ! -user mysql ! -user nobody ! -group root ! -group mysql | xargs ls -lh | awk '{print $3, $5, $9}' | sort | awk '{print $1}' | uniq -c | sort -rh

Top Processes By Memory Usage

This command outputs the processes using the most memory by sorting on the 4th column of ps output (memory percentage) and displaying the top 10 entries with head.

# ps aux | sort -rnk 4 | head

Whether you are brushing up for a Linux admin interview or just want to get more familiar with the shell, these commands are sure to be a useful addition to your repertoire.

SQL Databases Migration with Command Line

What if you have dozens of SQL databases and manually backing up/restoring each database is too time-consuming for your project? No problem! We can script out a method that will export and import all databases at once without manual intervention. For help with transferring SQL logins, stored procedures, and views, take a look at our MSSQL Migration with SSMS article.

1. Open SSMS (Microsoft SQL Server Management Studio) on the source server, log in to the SQL instance and open a New Query window. Run the following query:

SELECT name FROM master.sys.databases

This query outputs a list of all MSSQL databases on your server. To copy this list, click anywhere in the results and use the keyboard shortcut CTRL+A (Command+A for Mac users) to select all databases. After highlighting all the databases, right click and select Copy.

2. Open Notepad, paste in your results, and delete all databases (in the newly copied Notepad text) you do NOT wish to migrate, as well as the following entries:

  • master
  • tempdb
  • model
  • msdb

These entries are the system’s databases, and copying them is not necessary. Make sure to delete everything except the databases you need to migrate. You should now have a list of all required databases, one per line, i.e.:

  • AdventureWorks2012
  • AdventureWorks2014
  • AdventureWorks2016

3. Save this result on the computer as C:\databases.txt.
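
Alternatively, if you would rather skip the copy/paste, a single sqlcmd invocation can generate the file directly. This is a sketch: database_id values 1 through 4 are the four system databases excluded above, and the -h -1 flag with SET NOCOUNT ON suppresses headers and row counts:

sqlcmd -S localhost -E -h -1 -Q "SET NOCOUNT ON; SELECT name FROM master.sys.databases WHERE database_id > 4" -o C:\databases.txt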

4. Create a new Notepad window, copy/paste the following into the document and save it as C:\db-backup.bat

mkdir %systemdrive%\dbbackups
for /F "tokens=*" %%a in (databases.txt) do ( sqlcmd.exe -Slocalhost -Q"BACKUP DATABASE %%a TO DISK ='%systemdrive%\dbbackups\%%a.bak' WITH STATS" )

5. Now that you’ve saved the file as C:\db-backup.bat, navigate to the Start menu, type cmd, and right click on Command Prompt to select Run as Administrator. Type the following command:

cd C:\

and hit Enter. Afterward, type db-backup.bat and hit Enter once again.

At this point, your databases have begun exporting, and you will see the percentage progress of each database’s export (pictured below).

[Image: The command line shows the export progress of each database.]

Take note of any failed databases, as you can re-run the batch file when it’s done, using only the databases that may have failed. If the databases are failing to back up, take note of the error message displayed in the command prompt, address the error by modifying the existing C:\databases.txt file to include only the failed databases and re-run db-backup.bat until all databases are successfully exported.

 

By now you have the folder C:\dbbackups, which contains .bak files for each database you want to migrate. You will need to copy this folder and your C:\databases.txt file to the destination server. There are numerous ways to move your data to the destination server: USB, Robocopy, or FTP. The folder on the C drive of the destination server should be called C:\dbbackups; it’s important to name the folder accurately, as our script will be looking for the .bak files there. Be sure that the destination server has your C:\databases.txt file as well, as our script will be looking for the database names there.
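
For example, with Robocopy you might push both to the destination server’s administrative share (DEST-SERVER is a placeholder for your destination server’s name):

robocopy C:\dbbackups \\DEST-SERVER\c$\dbbackups /E
copy C:\databases.txt \\DEST-SERVER\c$\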

 

1. Open Notepad and copy/paste the following into the document:

for /F "tokens=*" %%a in (C:\databases.txt) do (
sqlcmd.exe -E -Slocalhost -Q"RESTORE DATABASE %%a FROM DISK='%systemdrive%\dbbackups\%%a.bak' WITH RECOVERY"
)

2. Save the file as C:\db-restore.bat 

3. Navigate to the Start menu and type cmd.

4. Right click on Command Prompt and select Run as Administrator. Type the following command:

cd C:\

and hit Enter. Now type db-restore.bat and hit Enter.

Your databases have now begun importing. You will see the percentage progress of each database’s restoration and the message “RESTORE DATABASE successfully processed” for each database restored successfully.

Take note of any failed databases, as you can re-run the batch file when it’s done using only the databases that failed. If databases are failing to restore, take note of the error message displayed in the command prompt, address the error (you can change the batch file as necessary), modify C:\databases.txt to include only the failed databases, and re-run db-restore.bat until all databases are successfully imported.

Congratulations, you have now backed up and restored all of your databases to the new server. If you have any login issues while testing the SQL connections on the destination server, refer to the Migrating Microsoft SQL Logins section of this article and follow the steps therein. To migrate views or stored procedures, please refer to the Migrating Views and Stored Procedures section. Every SQL server will have its own configurations and obstacles to overcome, but we hope this article has given you a strong foundation for your Microsoft SQL Server migration.

 

Install Xfce Desktop Environment on Ubuntu 16.04

Since 1996, the Xfce Desktop has given users the ability to have a graphical user interface (GUI) environment, visually turning your Linux server into something more like your desktop computer. With its no-frills look, Xfce does not weigh heavily on the server’s hardware and is faster than GNOME and KDE to boot. Once you’ve completed this small tutorial, you’ll be able to share and connect to the Xfce GUI by continuing to the next tutorial on How To Install VNC.

Pre-flight

  • These instructions are intended for installing Xfce Desktop Environment on an Ubuntu 16.04 LTS server.
  • Log in as the root user; non-root users should precede all commands with sudo.

Step 1: Update apt-get
With best practices in mind, we will update before proceeding to install Xfce:

apt-get update

Step 2: Install XFCE4 Desktop Environment
With one command we can install Xfce itself and some useful utilities that come with Xfce:
apt-get install -y xfce4 xfce4-goodies

Uninstalling Xfce

Step 1:
If you later decide to remove Xfce, first run each of these commands to prepare apt-get for the purge:
apt-get -f install
apt-get clean
apt-get autoclean
apt-get update

Step 2:
Purge Xfce from your Ubuntu server:
apt-get purge xfce4
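
Optionally, you can also remove packages that were installed as Xfce dependencies and are no longer needed:

apt-get autoremove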

As mentioned in our opening paragraph, the next step is to configure VNC (virtual network computing). Installing VNC is necessary to open the newly installed Xfce interface. It’s optional but advisable to set up an SSH tunnel that connects to VNC for a secure connection. Check out our Knowledge Base on the subject of VNC to find your choice of articles.

 

How to Remove (Delete) a User on Ubuntu 16.04

User management includes removing users who no longer need access: removing their username and any associated root privileges is necessary for securing your server. Deleting a user’s access to your Linux server is a typical operation which can easily be performed using a few commands.

Pre-flight Check

  • We are logged in as root on an Ubuntu 16.04 VPS powered by Liquid Web!

Step 1: Remove the User

Insert the username you want to delete after the userdel command. In our example, we’ll be deleting our user, Tom.

userdel tom

You can simultaneously delete the user and the files owned by this user with the -r flag. Be careful that these files are not needed to run any application on your server.

userdel -r tom

If the above command produces the message below, don’t be alarmed. It is not an error; it simply means that /home/tom existed but /var/mail/tom did not.

userdel: tom mail spool (/var/mail/tom) not found
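
If you deleted the user without the -r flag, their files remain on disk, owned by the now-orphaned numeric UID. A quick way to locate them afterward (the -xdev flag keeps find on the root filesystem):

find / -xdev -nouser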

 

Step 2: Remove Root Privileges

By removing Tom’s username from our Linux system, we are halfway complete, but we still need to remove their root privileges. Open the sudoers file with:

visudo

Navigate to the following section:

## Allow root to run any commands anywhere
root ALL=(ALL:ALL) ALL
tom ALL=(ALL:ALL) ALL

Or:

## User privilege specification
root ALL=(ALL:ALL) ALL
tom ALL=(ALL:ALL) ALL

With either result, remove access for your user by deleting the corresponding entry:

tom ALL=(ALL:ALL) ALL

Save and exit this file by typing :wq and pressing the Enter key.
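
You can confirm the entry is gone by searching the sudoers file; no output means the privileges were removed:

grep tom /etc/sudoers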

To add a user, see our frequently used article, How to Add a User and Grant Root Privileges on Ubuntu 16.04. Are you using a different Ubuntu version? We’ve got you covered, check out our Knowledge Base to find your version.

Whitelisting in ModSecurity

This article is broken into two parts. The first section covers how to whitelist IPs or URIs, for people who are somewhat familiar with ModSecurity but want to know more about the process. The second section examines why we configure ModSecurity and how to prevent the server’s security from getting in the way of our work. If you have a Fully Managed Liquid Web server, reach out to our Heroic Support team for assistance with whitelisting!

How to Whitelist IPs or URIs

“ModSecurity is a toolkit for real-time web application monitoring, logging, and access control” (modsecurity.org). In simple terms, this means that ModSec, also called mod_security or ModSecurity, is a web application firewall that can actively look for attacks on the system and stop malicious activity. However, sometimes these rules trigger when legitimate work is taking place, blocking your IP and stopping you or your developers until you can remove the IP block. The way around being blocked is known as whitelisting, which essentially allows a specific IP to access the server. There are a few ways to whitelist a request in ModSec, either by IP or by URI (URIs are specific pages on the website).

Getting Started

  1. Find your IP or ask your developer for theirs. (You can find this by going to ip.liquidweb.com.) If you or your developer have a static IP (one that will not change), one way you can whitelist the ModSec rules is by IP.
  2. Find the ModSec error in the Apache error logs with the following command (be sure to modify the command with your IP in place of "IP here"):
    grep ModSec /usr/local/apache/logs/error_log | grep "IP here"
  3. The output of this command will give you a list of hits for ModSecurity from you or your developer’s IP, which you can see below. While this looks intimidating, you only need to pay attention to the three highlighted bits of information. Please note, the output will not show these colors when you are viewing the files.
Note
Blue = client, the IP which tripped the rule
Red = ID number of the tripped rule within ModSec
Green = URI, the location where the error originated

[Fri May 25 23:07:04.178701 2018] [:error] [pid 78007:tid 139708457686784] [client 61.14.210.4:30095] [client 61.14.210.4] ModSecurity: Access denied with code 406 (phase 2). Pattern match "Mozilla/(4|5)\\\\.0$" at REQUEST_HEADERS:User-Agent. [file "/etc/apache2/conf.d/modsec2.liquidweb.conf"] [line "109"] [id "20000221"] [hostname "67.227.209.163"] [uri "/db/index.php"] [unique_id "WwjPWChxvG1CO4kz-D55eQAAACU"]

 

Whitelist By IP:

1. Once you have the correct ModSec error, you will need to edit the ModSec configuration. If you are using Easy Apache 4, you will find the whitelist configuration file at this path:
/etc/apache2/conf.d/whitelist.conf

2. Open the file with your favorite text editor, such as vim or nano, or with a file manager, like so:

vim /etc/apache2/conf.d/whitelist.conf

3. The blue text above is the IP address that you are whitelisting from the original error. You must keep the backslashes (\) and caret (^) in order for the IP to be read correctly. Thus it will always look something like:

“^192\.168\.896\.321”

For the id, noted in red, you will change the number after the colon to match the rule id from the Apache error log entry we saw above. This will look similar to:

id:20000221

Add the following code, with the colored sections edited to match your intended IP and rule id.

SecRule REMOTE_ADDR "^64\.14\.210\.4"
"phase:1,nolog,allow,ctl:ruleEngine=off,id:20000221"

 

Whitelist By URI:

If your IP is dynamic (changing) and you keep getting blocked in the firewall, it is best to whitelist by URI, the green item in the ModSec error.

1. Begin by opening the Easy Apache 4 configuration file:

vim /etc/apache2/conf.d/whitelist.conf

2. Add the following text to the configuration. Remember to pay attention to the highlighted parts: change the green “/db/index.php” to match your URI and the red id to match the id of your error (do not use the colon in this one).

<LocationMatch "/db/index.php">
SecRuleRemoveById 20000221
</LocationMatch>

3. Before finalizing the process, ensure you have correctly set up the whitelist by testing the Apache configuration. For Easy Apache 4 you will run the command:
apachectl -t

As long as the command returns “Syntax OK”, you are safe to make the whitelist active by restarting Apache. Otherwise, review the whitelist entries to make sure the syntax matches the directions above.

4. Lastly, restart Apache with the following command.

/scripts/restartsrv_httpd

You have successfully whitelisted yourself in ModSec!

 

Using ModSec

Cyber Security is a hydra; once one threat is cut off, two more grow back. While this is not a new analogy, it’s important to understand as we battle threats to our network, computers, and servers. With all the complexities that come with security, I want to talk about adequately configuring ModSec to deter threats while still allowing you to work on your websites. Often, when it comes to server security, too much protection can hinder effectiveness.

For example, say you have the following set up on your server:

  • You do not allow root SSH login to the server
  • You utilize dual-factor authentication for any SSH logins
  • You use an SSH key for the sudo user and require other security safeguards

While this type of configuration is secure, it takes longer to log into your system to make a quick edit to your settings, a double-edged sword; how can you keep the server safe while not tying your own hands?  A great example of how this plays out is using ModSec.

ModSec can block your IP if it falsely flags your work. While this module improves system security, you’ll need to be aware of how to properly implement and “scope” the technology. Scoping in this sense means managing risk: focus on what is important for security while still allowing work on the server with minimal interference. To keep legitimate requests flowing to your server, such as when you are editing your website’s code via a plugin, ModSec has options to whitelist rules or IPs and keep your work on track.

Whitelisting an IP from the rules that ModSec follows is a great option so long as the IP never changes (i.e. a static IP; see https://support.google.com/fiber/answer/3547208?hl=en to learn more) and is limited to only people you trust. This method prevents ModSec from viewing your requests as malicious and blocking your IP. The drawback is that if someone (say, an unhappy employee) has access to your network, they now have a way around ModSec to attack your server.

With non-static (dynamic) IPs, the problems of whitelisting an IP are readily apparent. Because a dynamic IP continually changes, whitelisting one creates the potential for exploiting your server, as someone else could later hold an old whitelisted IP and use it to access the server. Whitelisting specific rules saves the day here: when you whitelist by rule, you can edit with granularity and limit the exemption to particular domains and URIs, protecting the rest of the server from attacks related to that same rule!

Example of ModSecurity

ModSec reads a series of rules and applies them to incoming requests being made to the web server. An example of what a block looks like is:

[Sat Jun 30 02:21:56.013837 2018] [:error] [pid 79577:tid 139862413879040] [client 120.27.217.223:24397] [client 120.27.217.223] ModSecurity: Access denied with code 406 (phase 2). Pattern match "Mozilla/(4|5)\\\\.0$" at REQUEST_HEADERS:User-Agent. [file "/etc/apache2/conf.d/modsec2.liquidweb.conf"] [line "109"] [id "2000064"] [hostname "67.227.192.139"] [uri "/mysql/index.php"] [unique_id "WzchhAjuZ6wPAzo9AwW1WwAAAE8"]

This error shows Apache stopped a potential attack on a file at /mysql/index.php. It is similar to the errors that appear when code is being written or edited within programs like Drupal or WordPress.

Evaluating ModSecurity

If you are persistently being blocked in your firewall while working on your code, ModSec is the likely culprit. ModSec errors can be found in the Apache error log (in cPanel the path is /usr/local/apache/logs/error_log). The phrase “ModSec” can be quickly isolated from the log (via the command grep "ModSec" /usr/local/apache/logs/error_log). By comparing your or your developers’ IPs to the log, you’ll be able to identify stopped requests that are legitimate. Verify these are valid requests by double-checking that someone in your organization made them. Once you have done so, you can move forward in setting up a whitelist for the error, per the steps above.

Again, we want to scope to allow the least amount of wiggle room for an attack while ensuring we can keep working. If you are unable to use a trusted static IP, you’ll need to use the whitelist-by-URI method, providing the specific page as an exemption. Once your work is completed, remove the whitelisted items from the configuration file to guard against a genuine attack.

On a parting note, I encourage you to explore ModSec and learn more of the ins and outs of the software. Exploring different methods of whitelisting can be a lot of fun to learn and, most importantly, helps to tighten server security. As always, our Fully Supported Customers can contact our Helpful Human Support team for assistance. Check out articles on security in our Knowledge Base, like this one on Maldet! It’s another excellent way to learn about your server and develop an understanding of server security.

How to Add a User and Grant Root Privileges on Ubuntu 16.04

Ubuntu 16.04 LTS provides the ability to add a user for anyone who plans on accessing your server. Creating a user is a basic task, but an important and critical one for your server’s security. In this tutorial, we will create a user and grant administrative access, known as root, to a trusted user.

 

Pre-Flight Check

  1. Open a terminal and log in as root.  
  2. Work on an Ubuntu 16.04 LTS server

Step 1:  Add The User

Create a username for your new user, in my example my new user is Tom:

adduser tom

You’ll then be prompted to enter a password for this user. We recommend using a strong password, because malicious bots are programmed to guess simple ones. If you need a secure password, a third-party password generator can assist with creating one.

Output:

~# adduser tom
Adding user `tom' ...
Adding new group `tom' (1002) ...
Adding new user `tom' (1002) with group `tom' ...
Creating home directory `/home/tom' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

Note
Usernames should be lowercase and avoid special characters. If you receive the error below, alter the username.

~# adduser Tom
adduser: Please enter a username matching the regular expression configured via the NAME_REGEX[_SYSTEM] configuration variable.  Use the `--force-badname' option to relax this check or reconfigure NAME_REGEX.

 

Prompts will appear to enter information about your new user. Entering this information is not required and can be skipped by pressing Enter in each field.

Enter the new value or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:

 

Lastly, the system will ask you to review the information for accuracy. Enter Y to continue to the next step.

Is the information correct? [Y/n]

 

Step 2: Grant Root Privileges

Assigning root access grants a user the highest level of power. Our user, Tom, will then be able to make changes to the system as a whole, so it’s critical to allow this access only to users who need it. Afterward, Tom will be able to use sudo before commands that are usually designed to be used by the root user.

usermod -aG sudo tom

 

Step 3: Verify New User

As root, you can switch to your new user with the su - command and then test whether your new user has root privileges.

su - tom

If the user has properly been granted root access, the command below will show tom in the list.

grep '^sudo' /etc/group

Output:

sudo:x:27:tom
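
As a quick functional test while still logged in as the new user, you can also run the command below; it will prompt for tom’s password and should print root:

sudo whoami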

 

MySQL Performance: MyISAM vs InnoDB

 

A major factor in database performance is the storage engine used by the database, and more specifically, its tables. Different storage engines provide better performance in one situation over another. For general use, there are two contenders to be considered: MyISAM, the default MySQL storage engine, and InnoDB, an alternative engine built into MySQL and intended for high-performance databases. Before we can understand the difference between the two storage engines, we need to understand the term “locking.”

To protect the integrity of the data stored within databases, MySQL employs locking. Locking, simply put, means protecting data from being accessed while it is being changed. When a lock is applied, the data cannot be modified except by the query that initiated the lock. Locking is a necessary component of ensuring the accuracy of the stored information. Each storage engine uses a different method of locking, and depending on your data and query practices, one engine can outperform another. In this series, we will look at the two most common types of locking employed by our two storage engines.

 

Table locking:  The technique of locking an entire table when one or more cells within the table need to be updated or deleted. Table locking is the default method employed by the default storage engine, MyISAM.

Example: MyISAM Table Locking

                             Column A   Column B   Column C
Query 1 UPDATE        Row 1  Writing    data       data
Query 2 SELECT (Wait) Row 2  data       data       data
Query 3 UPDATE (Wait) Row 3  data       data       data
Query 4 SELECT (Wait) Row 4  data       data       data
Query 5 SELECT (Wait) Row 5  data       data       data

The example illustrates how a single write operation locks the entire table, causing other queries to wait for the UPDATE query to finish.

 

Row-level locking: The act of locking an effective range of rows in a table while one or more cells within the range are modified or deleted. Row-level locking is the method used by the InnoDB storage engine and is intended for high-performance databases.

Example: InnoDB Row-Level Locking

                      Column A   Column B   Column C
Query 1 UPDATE Row 1  Writing    data       data
Query 2 SELECT Row 2  Reading    data       data
Query 3 UPDATE Row 3  data       Writing    data
Query 4 SELECT Row 4  Reading    Reading    Reading
Query 5 SELECT Row 5  Reading    data       Reading

The example shows how row-level locking allows multiple queries to run on individual rows by locking only the rows being updated instead of the entire table.

 

By comparing the two storage engines, we get to the crux of the argument for using InnoDB over MyISAM: an application or website that has a frequently used table works exceptionally well with the InnoDB storage engine because it resolves table-locking bottlenecks. However, the question of using one over the other is subjective, as neither is perfect in all situations. There are strengths and limitations to both storage engines. Intimate knowledge of the database structure and query practices is critical for selecting the best storage engine for your tables.

MyISAM will out-perform InnoDB on large tables that require vastly more read activity than write activity. MyISAM’s read performance outshines InnoDB’s because locking the entire table is quicker than figuring out which rows are locked in the table; the more information in the table, the more time it takes InnoDB to figure out which rows are not accessible. If your application relies on huge tables whose data does not change frequently, then MyISAM will out-perform InnoDB. Conversely, InnoDB outperforms MyISAM when data within the table changes frequently, since frequently changing tables perform far more writes per second than reads. In these situations, InnoDB can keep up with large volumes of requests more easily than MyISAM, which must lock the entire table for each one.

 

Should I use InnoDB with WordPress, Magento or Joomla Sites?

The short answer here is yes, in most cases. Liquid Web’s Most Helpful Humans in Hosting Support Teams have encountered several table-locking bottlenecks when clients are using some of the standard web applications of today. Most users of popular third-party applications like WordPress, Magento, and Joomla have limited knowledge of the underlying database components or code involved to make an informed decision on storage engines. Most table-locking bottlenecks from these content management systems (CMS) are generally resolved by changing all the tables for the site over to InnoDB instead of the default MyISAM. If you are hosting many of these types of CMS on your server, it would be beneficial to change the default storage engine in MySQL to InnoDB so that any newly created tables start off with InnoDB.
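
To check which engines a site’s tables currently use before converting, you can query information_schema (a sketch; substitute database_name with the site’s actual database):

mysql -e 'SELECT table_name, engine FROM information_schema.tables WHERE table_schema = "database_name";'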

 

Set your default storage engine to InnoDB by adding default_storage_engine=InnoDB to the [mysqld] section of the system config file located at /etc/my.cnf. Restarting the MySQL service is necessary for the server to detect changes to the file.

~ $ cat /etc/my.cnf
[mysqld]
log-error=/var/lib/mysql/mysql.err
innodb_file_per_table=1
default-storage-engine=innodb
innodb_buffer_pool_size=128M
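
After restarting MySQL, you can confirm that the new default took effect:

mysql -e 'SELECT @@default_storage_engine;'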

 

Unfortunately, MySQL does not inherently have an option to convert tables, leaving each table to be changed individually. Liquid Web’s support team has put together an easy-to-follow maintenance plan for this process. The script, which you can run on the necessary server via shell access (SSH), will convert all tables between storage engines.

Note
Plan accordingly when performing batch operations of this nature just in case downtime occurs. Best practice is to backup all your MySQL Databases before implementing a change of this magnitude, doing so provides an easy recovery point to prevent any data loss.

Step 1: Prep

Plan to start at a time of day when downtime would have minimal consequences. This process itself does not require any downtime; however, downtime may be necessary to recover from unforeseen circumstances.

 

Step 2: Backup All Databases To A File

The command below creates a single-file backup of all databases named all-databases-backup.sql, which can be deleted once the conversion has succeeded and there are no apparent problems.
mysqldump --all-databases > all-databases-backup.sql

 

Step 3: Record Existing Table Engines To A File

Run the following script to record the existing table engines to a file named table-engine-backup.sql. You can then “import” or “run” this file later to convert tables back to their original engines if necessary.

mysql -Bse 'SELECT CONCAT("ALTER TABLE ",table_schema,".",table_name," ENGINE=",Engine,";") FROM information_schema.tables WHERE table_schema NOT IN("mysql","information_schema","performance_schema");' | tee table-engine-backup.sql

If you need to revert the table engines back for any reason, run:
mysql < table-engine-backup.sql

 

Step 4a: Convert MyISAM Tables To InnoDB

The below command will proceed even if a table fails and lets you know which tables failed to convert. The output is saved to the file named convert-to-innodb.log for later review.
mysql -Bse 'SELECT CONCAT("ALTER TABLE ",table_schema,".",table_name," ENGINE=InnoDB;") FROM information_schema.tables WHERE table_schema NOT IN ("mysql","information_schema","performance_schema") AND Engine = "MyISAM";' | while read -r i; do echo $i; mysql -e "$i"; done | tee convert-to-innodb.log

 

Step 4b: Convert All InnoDB Tables To MyISAM

This command will proceed even if a table fails and lets you know which tables failed to convert. The output is also saved to the file named convert-to-myisam.log for later review.

mysql -Bse 'SELECT CONCAT("ALTER TABLE ",table_schema,".",table_name," ENGINE=MyISAM;") FROM information_schema.tables WHERE table_schema NOT IN ("mysql","information_schema","performance_schema") AND Engine = "InnoDB";' | while read -r i; do echo $i; mysql -e "$i"; done | tee convert-to-myisam.log

 

The following commands illustrate how converting a single table is accomplished.

Note
Replace database_name with the proper database name and table_name with the correct table name. Make sure you have a valid backup of the table in question before proceeding. 

Backup A Single Table To A File
mysqldump database_name table_name > backup-table_name.sql

 

Convert A Single Table To InnoDB

mysql -Bse 'ALTER TABLE database_name.table_name ENGINE=InnoDB;'

 

Convert A Single Table To MyISAM:

mysql -Bse 'ALTER TABLE database_name.table_name ENGINE=MyISAM;'

 

Check out the other articles in this series, such as MySQL Performance: Identifying Long Queries, to pinpoint slow queries within your database. Stay tuned for our next article, where we will cover caching and optimization.

MySQL Performance: Identifying Long Queries

Every MySQL backed application can benefit from a finely tuned database server. The Liquid Web Heroic Support team has encountered numerous situations over the years where some minor adjustments have made a world of difference in website and application performance. In this series of articles, we have outlined some of the more common recommendations that have had the largest impact on performance.

Preflight Check

This article applies to most Linux based MySQL servers. This includes, but is not limited to, both Traditional Dedicated and Cloud VPS servers running a variety of common Linux distributions. The article can be used with the following Liquid Web system types:

  • Core-managed CentOS 6x/7x
  • Core-managed Ubuntu 14.04/16.04
  • Fully-managed CentOS 6/7 cPanel
  • Fully-managed CentOS 7 Plesk Onyx 17
  • Self-managed Linux servers
Note
Self-managed systems, which have opted out of direct support, can take advantage of the techniques discussed here; however, the Liquid Web Heroic Support Team cannot offer direct aid on these server types.

This series of articles assumes familiarity with basic system administration concepts.

 

What is MySQL Optimization?

There is no clearly defined meaning for the term MySQL Optimization; it can mean something different depending on the person, administrator, group, or company. For the sake of this series of articles on MySQL Optimization, we will define MySQL Optimization as: the configuration of a MySQL or MariaDB server which avoids the commonly encountered bottlenecks discussed in this series of articles.

What is a bottleneck?

Very similar to the neck on a soda bottle, a bottleneck as a technical term is a point in an application or server configuration where a small amount of traffic or data can pass through without issue. However, a larger volume of the same type of traffic or data is hindered or blocked and cannot operate successfully as-is. See the following example of a configuration bottleneck:

[Image: Visual difference between an optimized and non-optimized database]

In this example, the server is capable of handling 10 connections simultaneously, but the configuration only accepts 5 connections. This issue would not manifest so long as there were 5 or fewer connections at one time. However, when traffic ramps up to 10 connections, half of them start to fail due to unused resources in the server configuration. The above example illustrates the bottleneck shape from which the term derives its name, versus an optimized configuration which corrects the bottleneck.

When Should I Optimize My MySQL database?

Ideally, database performance tuning should occur regularly and before productivity is affected. It is best practice to conduct weekly or monthly audits of database performance to prevent issues from adversely affecting applications. The most obvious symptoms of performance problems are:

  • Queries stack up and never complete in the MySQL process table.
  • Applications or websites using the database become sluggish.
  • Connection timeout errors occur, especially during peak hours.

While it is normal for there to be several concurrent queries running at one time on a busy system, it becomes a problem when these queries are taking too long to finish on a regular basis. Although the specific threshold varies per system and per application, average query times exceeding several seconds will manifest as a slowdown within attached websites and applications. These slowdowns can sometimes start out small and go unnoticed until a large traffic surge hits a particular bottleneck.

Identifying Performance Issues

Knowing how to examine the MySQL process table is vital for diagnosing the specific bottleneck being encountered. There are a number of ways to view the process table, depending on your particular server and preference. For the sake of brevity, this series will focus on the most common methods used via Secure Shell (SSH) access:

 

Using The MySQL Process Table: Method 1

Use the ‘mysqladmin’ command line tool with the flag ‘processlist’, or ‘proc’ for short. (Adding the flag ‘statistics’, or ‘stat’ for short, shows running statistics for queries since MySQL’s last restart.)

Command:

mysqladmin proc stat

Output:

+-------+------+-----------+-----------+---------+------+-------+--------------------+----------+
| Id    | User | Host      | db        | Command | Time | State | Info               | Progress |
+-------+------+-----------+-----------+---------+------+-------+--------------------+----------+
| 77255 | root | localhost | employees | Query   | 150  |       | call While_Loop2() | 0.000    |
| 77285 | root | localhost |           | Query   | 0    | init  | show processlist   | 0.000    |
+-------+------+-----------+-----------+---------+------+-------+--------------------+----------+
Uptime: 861755  Threads: 2  Questions: 20961045  Slow queries: 0  Opens: 2976  Flush tables: 1  Open tables: 1011  Queries per second avg: 24.323

Pro: Used on the shell interface, this makes piping output to other scripts and tools very easy.
Con: The process table’s Info column is always truncated, so it does not show the full text of longer queries.
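To illustrate the piping advantage, the sketch below filters the process list with awk so that only long-running entries are shown. The field number is an assumption based on the pipe-delimited layout above and may differ between MySQL/MariaDB versions:

# Show only process-list rows whose Time column (field 7) exceeds 10 seconds
mysqladmin proc | awk -F'|' '$7+0 > 10'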

Using The MySQL Process Table: Method 2

Run the ‘show processlist;’ query from within the MySQL interactive prompt. (Adding the ‘full’ modifier disables truncation of the Info column. This is necessary when viewing long queries.)

Command:

show processlist;

Output:
MariaDB [(none)]> show full processlist;
+-------+------+-----------+-----------+---------+------+-------+-----------------------+----------+
| Id    | User | Host      | db        | Command | Time | State | Info                  | Progress |
+-------+------+-----------+-----------+---------+------+-------+-----------------------+----------+
| 77006 | root | localhost | employees | Query   | 151  | NULL  | call While_Loop2()    | 0.000    |
| 77021 | root | localhost | NULL      | Query   | 0    | init  | show full processlist | 0.000    |
+-------+------+-----------+-----------+---------+------+-------+-----------------------+----------+

Pro: Using the full modifier allows you to see the full text of longer queries.
Con: MySQL interactive mode cannot access the scripts and tools available in the shell interface.
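A middle ground between the two methods is the information_schema.processlist table, which exposes the same process data as a regular table, so it can be filtered and sorted with ordinary SQL. For example, to show only queries that have been running longer than 5 seconds:

SELECT id, user, db, time, state, info
FROM information_schema.processlist
WHERE command = 'Query' AND time > 5
ORDER BY time DESC;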

Using The Slow Query Log

Another valuable tool in MySQL is the included slow query logging feature. This feature is the preferred method for finding long-running queries on a regular basis. Several directives are available to adjust this feature; however, the most commonly needed settings are:

slow_query_log: enables/disables the slow query log
slow_query_log_file: name and path of the slow query log file
long_query_time: time in seconds/microseconds defining a slow query

These directives are set within the [mysqld] section of the MySQL configuration file located at /etc/my.cnf and require a MySQL service restart before they take effect. See the example below for formatting:

Caution
There is a large disk space concern with the slow query log file, which needs to be monitored continually until the slow query log feature is disabled. Keep in mind, the lower your long_query_time directive, the faster the slow query log fills up a disk partition.
[mysqld]
log-error=/var/lib/mysql/mysql.err
innodb_file_per_table=1
default-storage-engine=innodb
innodb_buffer_pool_size=128M
innodb_log_file_size=128M
max_connections=300
key_buffer_size = 8M
slow_query_log=1
slow_query_log_file=/var/lib/mysql/slowquery.log
long_query_time=5
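On most modern MySQL and MariaDB versions, these directives can also be changed at runtime from the MySQL prompt, which is convenient for short troubleshooting sessions. Note that values set this way are not written to /etc/my.cnf and will not survive a service restart:

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = '/var/lib/mysql/slowquery.log';
SET GLOBAL long_query_time = 5;

Also note that a global long_query_time change only applies to connections opened after the change; existing sessions keep their previous value.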

Once the slow query log is enabled, you will need to follow up with it periodically to review unruly queries that should be adjusted for better performance. To analyze the slow query log file, you can parse it directly and review its contents. The following example shows the statistics for a sample query that ran longer than the configured 5 seconds:

Caution
There is a performance hit taken by enabling the slow query log feature, due to the additional routines needed to analyze each query as well as the I/O needed to write the necessary queries to the log file. Because of this, it is considered best practice on production systems to leave the slow query log disabled, enabling it only for a specific duration while actively looking for troublesome queries that may be impacting the application or website.
# Time: 180717 0:23:28
# User@Host: root[root] @ localhost []
# Thread_id: 32  Schema: employees  QC_hit: No
# Query_time: 627.163085 Lock_time: 0.000021 Rows_sent: 0 Rows_examined: 0
# Rows_affected: 0
use employees;
SET timestamp=1531801408;
call While_Loop2();

Optionally, you can use the mysqldumpslow command line tool, which parses the slow query log file and groups like queries together. By default it abstracts the specific numeric and string values; the -a flag shown below prints the actual values instead:
~ $ mysqldumpslow -a /var/lib/mysql/slowquery.log
Reading mysql slow query log from /var/lib/mysql/slowquery.log
Count: 2 Time=316.67s (633s) Lock=0.00s (0s) Rows_sent=0.5 (1), Rows_examined=0.0 (0), Rows_affected=0.0 (0), root[root]@localhost
call While_Loop2()
(For usage information, see the MySQL documentation: mysqldumpslow – Summarize Slow Query Log Files.)
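On a large log file, it helps to sort and trim the report. For example, the following uses documented mysqldumpslow flags to sort query patterns by query time and show only the top ten:

mysqldumpslow -s t -t 10 /var/lib/mysql/slowquery.log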

This concludes the first part of our Database Optimization series and gives us a solid basis to refer back to for benchmarking purposes. Though database issues can be complicated, this series will break these concepts down to provide the means to optimize your database through database conversion, table conversion, and indexing.

Upgrade PHP 5.6 to 7


PHP is a programming language that can run with Apache or Microsoft IIS and works with your server to execute the requests that make up your website. Roughly 88% of sites online run on the soon-to-be-vulnerable PHP 5.x branch. Security support for our dear old friend PHP 5.6 is scheduled to end on December 31, 2018, meaning bugs and security issues will no longer be fixed and could lead to security vulnerabilities.
Each PHP version is actively supported for two years, and in its third year it receives only critical security updates. Luckily, the PHP gods smiled upon us and extended PHP 5.6’s life a year longer than the typical PHP version before setting the new year deadline. For all of you developers out there wanting to know exactly what is changing, here’s a helpful migration guide from PHP 5.6 to PHP 7.X.

While PHP 5 closes out with PHP 5.6, staying on it will inevitably leave websites vulnerable to attacks as well as poor performance; PHP 5.6 has, for all practical purposes, earned its infamous End of Life (EOL) title. Switching to the newer PHP 7 versions is not only good for security, but updating can ultimately save you money by avoiding software incompatibility and compliance issues. If avoiding a headache isn’t enough to persuade developers to switch, the benefits will be: benchmarks show PHP 7.x running three times faster than PHP 5.6!

Let’s see:

  • Faster performance and lower memory usage
  • Three times faster page loads*
  • Better for heavy traffic sites
*Performance increase as benchmarked in a testing environment; your site’s performance change between PHP 5 and PHP 7 may vary.

If you are in a shared environment where the provider manages the OS and framework, your hosting provider should be sending out notifications of the upcoming change, their plan of action, and cutoff dates. Our managed hosting products, such as Storm VPS, Cloud Sites, or Managed WordPress, have support teams that can help you switch from PHP 5.X to PHP 7.X easily. Our Managed WordPress product has a built-in compatibility checker and a one-click upgrade button, yet another reason to love it!


If you use WordPress to power your site, you can check some vital aspects by going to the WordPress plugin page and searching for the plugins you use. Once you find a plugin or theme you rely on, its spec page will usually say which PHP versions it supports. Also check the review tab, which is helpful for seeing whether other users have had issues with the plugin or theme on newer PHP versions. If you don’t see any responses, or it hasn’t rated well, you will want to steer away from it. If you use custom plugins, check with your developer to see how they operate under new PHP versions. The WordPress Compatibility Plugin check will give you a list of plugins and themes that may not mesh well with PHP 7.X.

If you run a mission-critical site, it’s best to run a compatibility checker, because blindly upgrading could result in parts of your site not functioning. Checking PHP compatibility is, as you would imagine, a little more in-depth, but there is a compatibility checker for VPS servers that you can use by downloading the repo from GitHub.
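One widely used checker of this kind is the PHPCompatibility standard for PHP_CodeSniffer, distributed via GitHub and Composer. A minimal sketch of installing and running it (the Composer path below is a typical default and may differ on your server; /path/to/your/site is a placeholder):

composer global require "squizlabs/php_codesniffer=*"
composer global require "phpcompatibility/php-compatibility=*"
phpcs --config-set installed_paths ~/.config/composer/vendor/phpcompatibility/php-compatibility
phpcs -p /path/to/your/site --standard=PHPCompatibility --runtime-set testVersion 7.0

The resulting report lists each file and line that uses a construct removed or changed in PHP 7.0.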

It is worth noting that some plugins may need a PHP module installed in order to work. When upgrading the PHP version, you may also need to re-install that PHP module. Fortunately, our support team can assist with installing any PHP module you may need, or advise on the best course of action if the module is not available for your PHP version.

If you are using a Linux VPS, the easiest way to check your current version is to SSH into your server and run the following command in your terminal:

php --version

Output: PHP 7.0.30 (cli) (built: Jun 26 2018 20:34:16)
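Since some plugins depend on specific PHP modules (as noted above), it is also worth listing the modules loaded under your current PHP version so you can confirm they are still present after an upgrade:

php -m                  # list all loaded PHP modules
php -m | grep -i mysql  # check for a specific module, e.g. mysqli or pdo_mysql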

cPanel:

Note
It’s important to make a backup of your site before migrating to PHP 7.X

Search for “PHP” and select MultiPHP Manager; the resulting screen shows which PHP version each of your domains is using. From this screen you can also update the PHP version: click the checkbox next to the domain, select the desired PHP version from the drop-down on the right, and click Apply.

[Image: Search for PHP and click the MultiPHP Manager icon]