Apache Performance Tuning: Swap Memory

Before we get into the nitty-gritty of Apache tuning, we need to understand what happens when a server goes unresponsive due to a poorly optimized configuration. An over-tuned server is one configured to allow more simultaneous requests (ServerLimit) than the server’s hardware can manage. Servers set up this way have a tipping point; once it is reached, the server becomes stuck in a perpetual swapping scenario, meaning the kernel is rapidly reading and writing data to and from the system swap file. Swap space has read/write speeds vastly slower than standard memory, so its latency creates a bottleneck as the kernel attempts to move data faster than is physically possible, a condition commonly known as thrashing. If not caught immediately, thrashing spirals the system out of control and leads to a system crash. Left running long enough, it can also shorten the life of the hard drive by compressing what would normally be years of read/write activity into a short period. When optimizing Apache, we must be cautious not to create a thrashing scenario. We can accomplish this by calculating the thrashing point of the server based on several factors.

 

Pre-Flight Check

This article covers all Apache-based servers, including but not limited to both traditional Dedicated servers and Cloud VPS servers running a variety of Linux distributions. We will include the primary locations where Apache configurations are stored on the following Liquid Web system types:

Liquid Web Server Types

Calculating the estimated thrashing point or ServerLimit of a server uses a simple equation:

Calculating the Thrashing Point:

Thrashing Point (ServerLimit) = Available / Avg. Apache
where Available = buff/cache - Reserved

  • buff/cache: The total memory used by the Kernel for buffers and cache.
  • Reserved: The amount of memory reserved for non-Apache processes.
  • Available: The difference between buff/cache and Reserved memory.
  • Avg.Apache: The average memory size (RSS) of all running Apache children during peak operational hours.

 

Important
Calculating the thrashing point/ServerLimit should be done during peak operational hours and periodically reassessed for optimal performance.

The thrashing point value is equal to the number of Apache children the server can run; this applies to either threaded or non-threaded children. When the number of children running in memory reaches the calculated thrashing point, the server will begin to topple. The following example walks through a standard Liquid Web Fully Managed cPanel server to illustrate gathering the necessary details to calculate a system’s estimated thrashing point.

 

Buff/Cache Memory

On modern Linux systems, the buffer/cache can be derived using the /proc/meminfo file by adding the Buffers, Cached and Slab statistics. Using the free command, we can quickly grab this information, as in the example below:

free

Output:

Use the free command to get the buff/cache info

Don’t be fooled by the column labeled “available.” We are solely looking at the memory we can reappropriate, which is the buff/cache column (708436).
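If you would like to see where that buff/cache figure comes from, the same value can be approximated directly from /proc/meminfo by summing the Buffers, Cached, and Slab statistics mentioned above. The one-liner below is an illustrative sketch; the total may differ slightly from the free output depending on kernel version and rounding.

awk '/^(Buffers|Cached|Slab):/ {sum += $2} END {print sum " kB"}' /proc/meminfo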

 

Reserved Memory

Reserved memory is a portion of memory held for services other than Apache. Some of the biggest contenders for additional memory outside of Apache are MySQL, Tomcat, Memcached, Varnish, and Nginx. It is necessary to examine these services’ configurations to determine a valid reserved memory figure. Those configurations are outside the purview of this article. However, MySQL is the most commonly encountered service running alongside Apache, and you can find tools online to help analyze and configure MySQL separate from this article.

 

Rule-of-Thumb:
Save 25% of the total buff/cache memory for any extra services run on the server. Examples:

  • A standard cPanel server runs several services along with Apache and MySQL. A server with these services runs on the heavier side, needing 25% reserved for non-Apache services.
  • A pure Apache web node in a high-capacity load-balanced configuration does not need to reserve any additional RAM for other services.

 

Average Apache Memory

Finding the average size of Apache processes is relatively simple using the ps command to list the RSS (Resident Set Size) of all running httpd processes. Note: some distributions use “apache” instead of “httpd” for the process name.

This example uses a short awk script to print out the average instead of listing the sizes.

ps o rss= -C httpd|awk '{n+=$1}END{print n/NR}'

Output:

22200.6

 

This example is easy to average by hand, but larger servers will require more calculation.

ps o rss -C httpd

Output:

RSS
5092
27940
28196
27572

 

Calculate the Thrashing Point

Once collected, divide the Available memory by the Avg. Apache value, rounding down to the nearest whole number. Available memory is the buff/cache memory minus the Reserved memory. Below is a summary of the calculation process in table form.

Thrashing point calculations
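As a worked sketch using the example figures gathered above (708436 KB of buff/cache, the 25% reserved rule of thumb, and an average Apache child of 22200.6 KB), the arithmetic can be scripted like this; the numbers are illustrative only, not a recommendation:

awk 'BEGIN {
  buffcache  = 708436            # buff/cache from the free output above (KB)
  reserved   = buffcache * 0.25  # 25% rule of thumb for a cPanel server
  available  = buffcache - reserved
  avg_apache = 22200.6           # average Apache child size from ps (KB)
  printf "Estimated thrashing point (ServerLimit): %d\n", available / avg_apache
}'

On these numbers, the result rounds down to 23 Apache children.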

Conservative estimates are provided below for various memory configurations. These estimates can be used as a starting configuration but will require additional follow-up performance assessments during peak hours to adjust the directives to the server’s actual workload.

General Thrashing Estimates
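Once you have settled on a value, it is applied through Apache's MPM directives. The snippet below is a hypothetical prefork example using the figure from the worked calculation above; the configuration file location, the MPM in use, and the directive names (MaxRequestWorkers is the Apache 2.4 name for the older MaxClients) vary by distribution and Apache version, so treat it as a sketch rather than a drop-in configuration.

<IfModule mpm_prefork_module>
    ServerLimit          23
    MaxRequestWorkers    23
</IfModule>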

Install Memcached on Ubuntu 16.04

Memcached enhances performance by keeping a copy of commonly used script elements in the server’s memory in a form that is more easily read by the server, thus reducing load times. A bonus feature of this object cacher is its ability to decrease the number of connections to your database. In this tutorial, we show how to install Memcached, but it’s important to note that an application must be specially coded or configured to store and retrieve this cached data.

Learn more about caching from our dedicated article or visit our series for database optimization.

Pre-flight

  • We are logged in as root on an Ubuntu 16.04 VPS powered by Liquid Web!
  • Apache and PHP 7 installed and running.

Installation of Memcached

Step 1:
Following best practices, we will do a quick package update by using the following command:
apt-get update
Step 2:
Install the Memcached daemon using:
apt-get install memcached -y
Step 3:
Install the Memcached module for PHP functionality:
apt-get install php-memcached -y

Verify installation of Memcached

Use the php -m flag to show compiled modules, grepping specifically for memcached.

php -m | grep memcached
memcached
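For a sanity check that goes one step beyond the module list, you can store and fetch a value through the extension from the command line. The one-liner below is a hedged example that assumes Memcached is running locally on the default port 11211 and uses a throwaway key name:

php -r '$m = new Memcached(); $m->addServer("127.0.0.1", 11211); $m->set("lw_test", "hello"); var_dump($m->get("lw_test"));'

A healthy setup prints string(5) "hello".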

Optional Configurations

At some point, you may find that you need to change the default settings of Memcached. These include adjusting the port number, memory for your cache, and the listening IP address.
vim /etc/memcached.conf

Adjust these configurations by keeping the same flags (-m, -p, -u, -l), changing the value after each flag, and saving the file by typing :wq .
# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
# Note that the daemon will grow to this size, but does not start out holding this much
# memory
-m 64
 
# Default connection port is 11211
-p 11211
 
# Run the daemon as root. The start-memcached will default to running as root if no
# -u command is present in this config file
-u memcache
 
# Specify which IP address to listen on. The default is to listen on all IP addresses
# This parameter is one of the only security measures that memcached has, so make sure
# it's listening on a firewalled interface.
-l 127.0.0.1

 

Restart your Memcached service to recognize the changes to this file:
systemctl restart memcached
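To confirm the daemon picked up your changes, check its status and verify it is listening on the IP address and port you configured. The commands below are a minimal sketch using standard Ubuntu tooling:

systemctl status memcached --no-pager
ss -lntp | grep 11211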

MySQL Performance: MyISAM vs InnoDB

 

A major factor in database performance is the storage engine used by the database, and more specifically, by its tables. Different storage engines provide better performance in one situation over another. For general use, there are two contenders to be considered: MyISAM, which is the default MySQL storage engine, and InnoDB, an alternative engine built into MySQL and intended for high-performance databases. Before we can understand the difference between the two storage engines, we need to understand the term “locking.”

To protect the integrity of the data stored within databases, MySQL employs locking. Locking, simply put, means protecting data from being accessed. When a lock is applied, the data cannot be modified except by the query that initiated the lock. Locking is a necessary component to ensure the accuracy of the stored information. Each storage engine uses a different locking method. Depending on your data and query practices, one engine can outperform the other. In this series, we will look at the two most common types of locking employed by our two storage engines.

 

Table locking:  The technique of locking an entire table when one or more cells within the table need to be updated or deleted. Table locking is the default method employed by the default storage engine, MyISAM.

Example: MyISAM Table Locking

                      |       | Column A | Column B | Column C
Query 1 UPDATE        | Row 1 | Writing  | data     | data
Query 2 SELECT (Wait) | Row 2 | data     | data     | data
Query 3 UPDATE (Wait) | Row 3 | data     | data     | data
Query 4 SELECT (Wait) | Row 4 | data     | data     | data
Query 5 SELECT (Wait) | Row 5 | data     | data     | data
The example illustrates how a single write operation locks the entire table, causing the other queries to wait for the UPDATE query to finish.

 

Row-level locking: The act of locking an effective range of rows in a table while one or more cells within the range are modified or deleted. Row-level locking is the method used by the InnoDB storage engine and is intended for high-performance databases.

Example: InnoDB Row-Level Locking

               |       | Column A | Column B | Column C
Query 1 UPDATE | Row 1 | Writing  | data     | data
Query 2 SELECT | Row 2 | Reading  | data     | data
Query 3 UPDATE | Row 3 | data     | Writing  | data
Query 4 SELECT | Row 4 | Reading  | Reading  | Reading
Query 5 SELECT | Row 5 | Reading  | data     | Reading
The example shows how using row-level locking allows for multiple queries to run on individual rows by locking only the rows being updated instead of the entire table.

 

By comparing the two storage engines, we get to the crux of the argument for using InnoDB over MyISAM. An application or website with a frequently used table works exceptionally well on the InnoDB storage engine because it resolves table-locking bottlenecks. However, the question of using one over the other is subjective, as neither is perfect in all situations. There are strengths and limitations to both storage engines. Intimate knowledge of the database structure and query practices is critical for selecting the best storage engine for your tables.

MyISAM will out-perform InnoDB on large tables that require vastly more read activity than write activity. MyISAM’s read performance outshines InnoDB’s because locking the entire table is quicker than figuring out which rows are locked within it; the more information in the table, the more time it takes InnoDB to determine which rows are not accessible. If your application relies on huge tables that do not change data frequently, then MyISAM will out-perform InnoDB. Conversely, InnoDB outperforms MyISAM when the data within the table changes frequently, since frequently changing tables perform far more writes than reads per second. In these situations, InnoDB can keep up with large volumes of requests more easily than locking the entire table for each one.
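Before deciding, it helps to see which engine each of your tables currently uses. The query below is a hedged example that lists every non-system table along with its engine; the schema filter mirrors the conversion scripts later in this article:

mysql -e 'SELECT table_schema, table_name, engine FROM information_schema.tables WHERE table_schema NOT IN ("mysql","information_schema","performance_schema");'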

 

Should I use InnoDB with WordPress, Magento or Joomla Sites?

The short answer here is yes, in most cases. Liquid Web’s Most Helpful Humans in Hosting Support Teams have encountered several table-locking bottlenecks when clients are using some of the standard web applications of today. Most users of popular third-party applications like WordPress, Magento, and Joomla have limited knowledge of the underlying database components or code involved to make an informed decision on storage engines. Most table-locking bottlenecks from these content management systems (CMS) are generally resolved by changing all of the site’s tables over to InnoDB instead of the default MyISAM. If you host many of these types of CMS on your server, it is beneficial to change the default storage engine in MySQL to InnoDB so that any newly created tables start off as InnoDB.

 

Set your default storage engine to InnoDB by adding default_storage_engine=InnoDB to the [mysqld] section of the system config file located at /etc/my.cnf. Restarting the MySQL service is necessary for the server to detect changes to the file.

~ $ cat /etc/my.cnf
[mysqld]
log-error=/var/lib/mysql/mysql.err
innodb_file_per_table=1
default-storage-engine=innodb
innodb_buffer_pool_size=128M
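After saving the change, restart the database service so the new default takes effect. The service name varies by platform (for example, mysqld on CentOS with MySQL, mariadb on CentOS 7 with MariaDB, or mysql on Ubuntu), so adjust the sketch below to match your installation:

systemctl restart mysqld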

 

Unfortunately, MySQL does not have a built-in option to convert all tables at once, leaving each table to be changed individually. Liquid Web’s support team has put together an easy-to-follow maintenance plan for this process. The script, which you can run on the server via shell access (SSH), will convert all tables between storage engines.

Note
Plan accordingly when performing batch operations of this nature in case downtime occurs. Best practice is to back up all your MySQL databases before implementing a change of this magnitude; doing so provides an easy recovery point to prevent any data loss.

Step 1: Prep

Plan to start at a time of day when downtime would have minimal consequences. The process itself does not require any downtime; however, downtime may be necessary to recover from unforeseen circumstances.

 

Step 2: Backup All Databases To A File

The command below creates a single-file backup of all databases named all-databases-backup.sql, which can be deleted once the conversion has succeeded and there are no apparent problems.
mysqldump --all-databases > all-databases-backup.sql

 

Step 3: Record Existing Table Engines To A File

Run the following script to record the existing table engines to a file named table-engine-backup.sql. You can then “import” or “run” this file later to convert the tables back to their original engines if necessary.

mysql -Bse 'SELECT CONCAT("ALTER TABLE ",table_schema,".",table_name," ENGINE=",Engine,";") FROM information_schema.tables WHERE table_schema NOT IN("mysql","information_schema","performance_schema");' | tee table-engine-backup.sql

If you need to revert the table engines back for any reason, run:
mysql < table-engine-backup.sql

 

Step 4a: Convert MyISAM Tables To InnoDB

The below command will proceed even if a table fails and lets you know which tables failed to convert. The output is saved to the file named convert-to-innodb.log for later review.
mysql -Bse 'SELECT CONCAT("ALTER TABLE ",table_schema,".",table_name," ENGINE=InnoDB;") FROM information_schema.tables WHERE table_schema NOT IN ("mysql","information_schema","performance_schema") AND Engine = "MyISAM";' | while read -r i; do echo $i; mysql -e "$i"; done | tee convert-to-innodb.log

 

Step 4b: Convert All InnoDB Tables To MyISAM

This command will proceed even if a table fails and lets you know which tables failed to convert. The output is also saved to the file named convert-to-myisam.log for later review.

mysql -Bse 'SELECT CONCAT("ALTER TABLE ",table_schema,".",table_name," ENGINE=MyISAM;") FROM information_schema.tables WHERE table_schema NOT IN ("mysql","information_schema","performance_schema") AND Engine = "InnoDB";' | while read -r i; do echo $i; mysql -e "$i"; done | tee convert-to-myisam.log

 

The following commands illustrate how converting a single table is accomplished.

Note
Replace database_name with the proper database name and table_name with the correct table name. Make sure you have a valid backup of the table in question before proceeding. 

Backup A Single Table To A File
mysqldump database_name table_name > backup-table_name.sql

 

Convert A Single Table To InnoDB

mysql -Bse 'ALTER TABLE database_name.table_name ENGINE=InnoDB;'

 

Convert A Single Table To MyISAM:

mysql -Bse 'ALTER TABLE database_name.table_name ENGINE=MyISAM;'

 

Check out the other article in this series, MySQL Performance: Identifying Long Queries, to pinpoint slow queries within your database. Stay tuned for our next article, where we will cover caching and optimization.

MySQL Performance: Identifying Long Queries

Every MySQL backed application can benefit from a finely tuned database server. The Liquid Web Heroic Support team has encountered numerous situations over the years where some minor adjustments have made a world of difference in website and application performance. In this series of articles, we have outlined some of the more common recommendations that have had the largest impact on performance.

Preflight Check

This article applies to most Linux based MySQL servers. This includes, but is not limited to, both Traditional Dedicated and Cloud VPS servers running a variety of common Linux distributions. The article can be used with the following Liquid Web system types:

  • Core-managed CentOS 6x/7x
  • Core-managed Ubuntu 14.04/16.04
  • Fully-managed CentOS 6/7 cPanel
  • Fully-managed CentOS 7 Plesk Onyx 17
  • Self-managed Linux servers
Note
Self-managed systems, which have opted out of direct support, can take advantage of the techniques discussed here; however, the Liquid Web Heroic Support Team cannot offer direct aid on these server types.

This series of articles assumes familiarity with the following basic system administration concepts:

 

What is MySQL Optimization?

There is no single, clear-cut definition for the term MySQL Optimization. It can mean something different depending on the person, administrator, group, or company. For the sake of this series of articles on MySQL Optimization, we will define MySQL Optimization as: the configuration of a MySQL or MariaDB server so as to avoid the commonly encountered bottlenecks discussed in this series of articles.

What is a bottleneck?

Very similar to the neck on a soda bottle, a bottleneck as a technical term is a point in an application or server configuration where a small amount of traffic or data can pass through without issue. However, a larger volume of the same type of traffic or data is hindered or blocked and cannot operate successfully as-is. See the following example of a configuration bottleneck:

Visual Difference Between an Optimized and Non-Optimized Database

In this example, the server is capable of handling 10 connections simultaneously. However, the configuration only accepts 5 connections. This issue would not manifest as long as there were 5 or fewer connections at one time. However, when traffic ramps up to 10 connections, half of them start to fail because the configuration leaves the server’s resources unused. The above example illustrates the bottleneck shape from which the term derives its name, versus an optimized configuration that corrects the bottleneck.
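A quick, hedged way to check for this particular bottleneck on your own server is to compare the configured connection limit against the high-water mark MySQL has actually reached since its last restart:

mysql -e "SHOW VARIABLES LIKE 'max_connections'; SHOW STATUS LIKE 'Max_used_connections';"

If Max_used_connections is routinely at or near max_connections, connections are likely being turned away.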

When Should I Optimize My MySQL database?

Ideally, database performance tuning should occur regularly and before productivity is affected. It is best practice to conduct weekly or monthly audits of database performance to prevent issues from adversely affecting applications. The most obvious symptoms of performance problems are:

  • Queries stack up and never complete in the MySQL process table.
  • Applications or websites using the database become sluggish.
  • Connection timeout errors, especially during peak hours.

While it is normal for there to be several concurrent queries running at one time on a busy system, it becomes a problem when these queries are taking too long to finish on a regular basis. Although the specific threshold varies per system and per application, average query times exceeding several seconds will manifest as a slowdown within attached websites and applications. These slowdowns can sometimes start out small and go unnoticed until a large traffic surge hits a particular bottleneck.

Identifying Performance Issues

Knowing how to examine the MySQL process table is vital for diagnosing the specific bottleneck being encountered. There are a number of ways to view the process table depending on your particular server and preference. For the sake of brevity, this series will focus on the most common methods used via Secure Shell (SSH) access:

 

Using The MySQL Process Table: Method 1

Use the ‘mysqladmin’ command line tool with the flag ‘processlist’ or ‘proc’ for short. (Adding the flag ‘statistics’ or ‘stat’ for short will show running statistics for queries since MySQL’s last restart.)

Command:

mysqladmin proc stat

Output:

+-------+------+-----------+-----------+---------+------+-------+--------------------+----------+
| Id    | User | Host      | db        | Command | Time | State | Info               | Progress |
+-------+------+-----------+-----------+---------+------+-------+--------------------+----------+
| 77255 | root | localhost | employees | Query   | 150  |       | call While_Loop2() | 0.000    |
| 77285 | root | localhost |           | Query   | 0    | init  | show processlist   | 0.000    |
+-------+------+-----------+-----------+---------+------+-------+--------------------+----------+
Uptime: 861755  Threads: 2  Questions: 20961045  Slow queries: 0  Opens: 2976  Flush tables: 1  Open tables: 1011  Queries per second avg: 24.323

Pro: Used on the shell interface, this makes piping output to other scripts and tools very easy.
Con: The process table’s Info column is always truncated, so it does not show the full text of longer queries.
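As an illustration of that piping advantage, the one-liner below filters the process list for queries that have been running longer than 60 seconds. The field positions assume the default pipe-delimited layout shown above, so treat it as a sketch to adapt rather than a guaranteed recipe:

mysqladmin proc | awk -F'|' '$6 ~ /Query/ && $7+0 > 60 {print}'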

Using The MySQL Process Table: Method 2

Run the ‘show processlist;’ query from within MySQL interactive mode prompt. (Adding the ‘full’  modifier to the command disables truncation of the Info column. This is necessary when viewing long queries.)

 

Command:

show processlist;

Output:
MariaDB [(none)]> show full processlist;
+-------+------+-----------+-----------+---------+------+-------+-----------------------+----------+
| Id    | User | Host      | db        | Command | Time | State | Info                  | Progress |
+-------+------+-----------+-----------+---------+------+-------+-----------------------+----------+
| 77006 | root | localhost | employees | Query   | 151  | NULL  | call While_Loop2()    | 0.000    |
| 77021 | root | localhost | NULL      | Query   | 0    | init  | show full processlist | 0.000    |
+-------+------+-----------+-----------+---------+------+-------+-----------------------+----------+

Pro: Using the full modifier allows for seeing the full query on longer queries.
Con: MySQL Interactive mode cannot access scripts and tools available in the shell interface.

Using The Slow Query Log

Another valuable tool in MySQL is the included slow query logging feature. This feature is the preferred method for finding long-running queries on a regular basis. There are several directives available to adjust this feature; however, the most commonly needed settings are:

 

slow_query_log: enables or disables the slow query log
slow_query_log_file: name and path of the slow query log file
long_query_time: time in seconds/microseconds defining a slow query

These directives are set within the [mysqld] section of the MySQL configuration file located at /etc/my.cnf and require a MySQL service restart before they take effect. See the example below for formatting:

Caution
The slow query log file can consume a large amount of disk space, so it needs to be attended to continually until the slow query log feature is disabled. Keep in mind, the lower your long_query_time directive, the faster the slow query log fills up a disk partition.
[mysqld]
log-error=/var/lib/mysql/mysql.err
innodb_file_per_table=1
default-storage-engine=innodb
innodb_buffer_pool_size=128M
innodb_log_file_size=128M
max_connections=300
key_buffer_size = 8M
slow_query_log=1
slow_query_log_file=/var/lib/mysql/slowquery.log
long_query_time=5

Once the slow query log is enabled, you will need to follow up with it periodically to review unruly queries that need to be adjusted for better performance. To analyze the slow query log file, you can parse it directly to review its contents. The following example shows the statistics for a sample query that ran longer than the configured 5 seconds:

Caution
There is a performance hit taken by enabling the slow query log feature. This is due to the additional routines needed to analyze each query as well as the I/O needed to write the necessary queries to the log file. Because of this, it is considered best practice on production systems to disable the slow query log. The slow query log should only remain enabled for a specific duration when actively looking for troublesome queries that may be impacting the application or website.
# Time: 180717 0:23:28
# User@Host: root[root] @ localhost [] # Thread_id: 32 Schema: employees QC_hit: No
# Query_time: 627.163085 Lock_time: 0.000021 Rows_sent: 0 Rows_examined: 0
# Rows_affected: 0
use employees;
SET timestamp=1531801408;
call While_Loop2();

Optionally, you can use the mysqldumpslow command line tool, which parses the slow query log file and groups similar queries together, abstracting away the specific number and string values:
~ $ mysqldumpslow -a /var/lib/mysql/slowquery.log
Reading mysql slow query log from /var/lib/mysql/slowquery.log
Count: 2 Time=316.67s (633s) Lock=0.00s (0s) Rows_sent=0.5 (1), Rows_examined=0.0 (0), Rows_affected=0.0 (0), root[root]@localhost
call While_Loop2()
(For usage information visit MySQL documentation here: mysqldumpslow – Summarize Slow Query Log Files)
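mysqldumpslow also accepts sorting and limiting options; for example, the documented -s and -t flags can be combined to show only the five query patterns with the greatest total time:

mysqldumpslow -s t -t 5 /var/lib/mysql/slowquery.log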

This concludes the first part of our Database Optimization series and gives us a solid basis to refer back to for benchmarking purposes. Though database issues can be complicated, our series will break these concepts down to provide the means to optimize your database through database conversion, table conversion, and indexing.

 

Optimizing Your Website in Cloud Sites


To ensure that your site performs at its best, there are a few things you can do to optimize it when using Liquid Web Cloud Sites technology. This article will take you through the top five best practices to optimize your website.

Continue reading “Optimizing Your Website in Cloud Sites”

How to Install the Memcached PHP Extension on Ubuntu 15.04

Memcached is a distributed, high-performance, in-memory caching system that is primarily used to speed up sites that make heavy use of databases. It can, however, be used to store objects of any kind. Nearly every popular CMS has a plugin or module to take advantage of Memcached, and many programming languages have a Memcached library, including PHP, Perl, Ruby, and Python. Memcached runs in memory and is thus quite speedy since it does not need to write data to disk.

Pre-Flight Check

  • These instructions are intended specifically for installing the Memcached PHP Extension on a single Ubuntu 15.04 node.
  • I’ll be working from a Liquid Web Core Managed Ubuntu 15.04 server, and I’ll be logged in as root.
  • Follow our tutorial on How to Install Memcached on Ubuntu 15.04 prior to this KB!

Continue reading “How to Install the Memcached PHP Extension on Ubuntu 15.04”

How to Install Memcached on Ubuntu 15.04

Memcached is a distributed, high-performance, in-memory caching system that is primarily used to speed up sites that make heavy use of databases. It can, however, be used to store objects of any kind. Nearly every popular CMS has a plugin or module to take advantage of memcached, and many programming languages have a memcached library, including PHP, Perl, Ruby, and Python. Memcached runs in memory and is thus quite speedy, since it does not need to write data to disk.

Pre-Flight Check

  • These instructions are intended specifically for installing Memcached on a single Ubuntu 15.04 node.
  • I’ll be working from a Liquid Web Core Managed Ubuntu 15.04 server, and I’ll be logged in as root.

Continue reading “How to Install Memcached on Ubuntu 15.04”

How to Resize a Server

Upgrading your Storm® server is a simple process, and it can be done in just a few satisfying clicks. Upgrades are a necessity. You work hard on your blog or ecommerce store, and the traffic grows! Once you’ve optimized your WordPress site or Magento store, reward yourself with an easy upgrade process and increase the resources available to your Storm® server by following the steps below. This process can be followed for Storm® VPS or Storm® Dedicated servers.

Pre-Flight Check
  • These instructions are intended specifically for resizing your Storm® server.

Continue reading “How to Resize a Server”

How to Upgrade Your VPS

Upgrading your Storm® VPS (Virtual Private Server) is a simple process, and can be done in just a few satisfying clicks. Upgrades are a necessity. Let’s face it, you work hard on your blog, or ecommerce store, and the traffic grows! So, once you’ve optimized your WordPress site or Magento store, reward yourself with an easy upgrade process and increase the available resources to your Storm® VPS by following the steps below!

Pre-Flight Check

Continue reading “How to Upgrade Your VPS”

How to Install the Memcached PHP Extension on Fedora 21

Memcached is a distributed, high-performance, in-memory caching system that is primarily used to speed up sites that make heavy use of databases. It can, however, be used to store objects of any kind. Nearly every popular CMS has a plugin or module to take advantage of Memcached, and many programming languages have a Memcached library, including PHP, Perl, Ruby, and Python. Memcached runs in memory and is thus quite speedy since it does not need to write data to disk.

Pre-Flight Check
  • These instructions are intended specifically for installing the Memcached PHP Extension on a single Fedora 21 node.
  • I’ll be working from a Liquid Web Self Managed Fedora 21 server, and I’ll be logged in as root.
  • Follow our tutorial on How to Install Memcached on Fedora 21 prior to this KB!
  • Apache and PHP must be installed.

Continue reading “How to Install the Memcached PHP Extension on Fedora 21”