MySQL Performance: InnoDB Buffers & Directives

As discussed earlier in our MySQL Performance series, the InnoDB storage engine is designed to be a high-performance database for very large datasets. The row-locking technique it uses allows for many read and write requests to occur on a single table concurrently. This is a vast improvement in speed over traditional table-locking of the MyISAM engine. This part of our MySQL Performance series will focus on configuring InnoDB tables for maximum concurrency with minimal disk input/output (I/O).

Disk I/O & Temporary Tables

A simple and effective change that can provide a noticeable performance boost immediately is the practice of forcing all temporary table writes into memory instead of writing to a hard disk. Writing temp tables to RAM reduces the impact MySQL has on overall system disk I/O, which greatly improves performance. This change is done by leveraging the /dev/shm tmpfs partition in Linux. This special device gives server processes the ability to write files directly into RAM as if it were a standard disk drive.
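For example, the change can be made by pointing MySQL's tmpdir directive at a directory on the tmpfs partition. The sketch below is a minimal example; the mysqltmp directory name is arbitrary, and because /dev/shm is cleared on reboot, the directory must be recreated (for example, via an init script or a systemd tmpfiles entry) before MySQL starts.

# Create a RAM-backed directory MySQL can write temporary tables to
mkdir -p /dev/shm/mysqltmp
chown mysql:mysql /dev/shm/mysqltmp
chmod 750 /dev/shm/mysqltmp

# /etc/my.cnf — add under the [mysqld] header, then restart MySQL
tmpdir=/dev/shm/mysqltmp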

 

InnoDB Log File Size

The InnoDB log file is not your traditional log file. It is a special binary redo log, which InnoDB uses to store redo activity for every write request handled by the server. Every write query executed gets a redo entry in the log file so that the change can be recovered in the event of a crash. For this reason, the InnoDB log file size plays a critical role in MySQL write query performance, particularly on write-heavy databases.

Rule of Thumb: innodb_log_file_size
MySQL®️ Recommends: “The larger the value, the less checkpoint flush activity is required in the buffer pool, saving disk I/O.” A log file size of 1GB is sufficient for most situations. This may need to be raised once database growth exceeds several dozen GB in size.
Notice
MySQL®️ Warns: “Larger log files also make crash recovery slower, although improvements to recovery performance make log file size less of a consideration than it was in earlier versions of MySQL.”

Due to the special nature of the InnoDB redo logs, this setting cannot be changed without first removing the existing InnoDB log files. The following procedure must be followed when adjusting the innodb_log_file_size directive; otherwise, MySQL will not start again.

Changing innodb_log_file_size Procedure:

  1. Stop the MySQL service:
     service mysql stop
  2. Update innodb_log_file_size within the [mysqld] header in the /etc/my.cnf file:
     innodb_log_file_size=1G
  3. Move the existing log files out of the mysql directory:
     mv /var/lib/mysql/ib_logfile* /backup
  4. Start the MySQL service:
     service mysql start
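After MySQL starts, you can confirm the new size took effect with a quick check (assuming the mysql client can connect locally; the value is reported in bytes):

mysql -e "SHOW VARIABLES LIKE 'innodb_log_file_size';"
# 1G is reported as 1073741824 (bytes)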

 

InnoDB Buffer Pool (IBP)

The InnoDB Buffer Pool plays a critical role in MySQL performance. The IBP is a reserved portion of system memory where InnoDB table and index data are cached. This allows frequently accessed data to be returned quickly, without the need to read from a physical disk. The more InnoDB tablespace that is cached in memory, the less often MySQL will access physical disks, which manifests as faster query response times and improved overall system performance.

There are multiple directives which control IBP behavior. Tuning these to match the server's existing tablespace data, with some room to grow, will optimize the performance of any queries using InnoDB tables. Since InnoDB is the recommended engine for very large tables, optimizing the IBP gives major performance gains.
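One way to gauge how well the current buffer pool is serving the working set is to compare logical read requests against reads that had to go to disk. The following check is a minimal sketch using MySQL's standard status counters:

# Innodb_buffer_pool_read_requests = logical read requests
# Innodb_buffer_pool_reads         = reads that could not be satisfied from the buffer pool
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"

A high ratio of read requests to reads indicates that most data is already being served from memory.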

 

InnoDB Buffer Pool Size

The larger the IBP, the more InnoDB functions as a high-performance in-memory database. However, setting the size too large can be detrimental to overall system performance. A balance must be struck between the IBP size and needed memory for other system services. In an ideal configuration, the IBP should be as large as possible, without causing excessive swapping (i.e., Thrashing) to occur. There are some simple rules to follow when making these determinations.

Rule of Thumb: innodb_buffer_pool_size (Dedicated Servers)

When MySQL is the only major service on a server.

The recommended configuration by MySQL is for innodb_buffer_pool_size to use as much as 80% of the system’s total physical memory.

Calculate innodb_buffer_pool_size Script: (Dedicated Server)

awk '
/MemTotal/{
$3="GB"
$2=sprintf("%.0f",$2/1048576)
print
$1="  Mem80%:"
$2=sprintf("%.0f",$2*.8)
print
}' /proc/meminfo
Example Output:

MemTotal: 15 GB
Mem80%: 12 GB
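Once a value is chosen, it is applied in the MySQL configuration file. The snippet below is an example only; substitute the value calculated for your own server (12G matches the example output above).

[mysqld]
innodb_buffer_pool_size=12G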

Rule of Thumb: innodb_buffer_pool_size (Shared Servers)

When MySQL runs alongside other major services like Web, Email, etc... (e.g. cPanel, Plesk)

Due to their varied resource requirements, there is no one-size-fits-all calculation for shared servers. It becomes necessary to calculate the memory requirement needs of all other critical services on the server and subtract those from total system memory to find a proper amount of available memory which can be assigned to innodb_buffer_pool_size.

A simplified method is to use a generic calculation. A conservative starting point is assigning between 30 and 80 percent of available system memory, instead of total physical memory. However, determining the exact setting may require some guesswork and testing.

The following awk script reads MemAvailable from /proc/meminfo and provides a selection of percentage-based calculations to choose from for the server.

Calculate innodb_buffer_pool_size Script: (Shared Server)

awk '
/MemAvail/{
$3="G";_=$2
$2=sprintf("% 3.0f",_/1048576)
print
for (i=80;i>=25;i-=10) {
$1="MemAvail_"i"%:"
$2=sprintf("% 3.0f",_*(i/100)/1048576)
$4=sprintf("| %.0f M",_*(i/100)/1024)
print
}
}' /proc/meminfo

Example Output:

MemAvailable:  10 G
MemAvail_80%:   8 G | 8405 M
MemAvail_70%:   7 G | 7354 M
MemAvail_60%:   6 G | 6304 M
MemAvail_50%:   5 G | 5253 M
MemAvail_40%:   4 G | 4203 M
MemAvail_30%:   3 G | 3152 M

 

InnoDB Buffer Pool Instances

The innodb_buffer_pool_instances directive controls the number of regions the buffer pool is divided into. MySQL ignores this directive unless innodb_buffer_pool_size is set to 1GB (1024MB) or larger. When the pool is that large, it is divided into the number of equal-sized instances specified by this directive.

Rule of Thumb: innodb_buffer_pool_instances
MySQL®️ Recommends: “For best efficiency, specify a combination of innodb_buffer_pool_instances and innodb_buffer_pool_size so that each buffer pool instance is at least 1GB.”

  ‣ Rule Exception: innodb_buffer_pool_instances should not exceed Total_IO_Threads

Total_IO_Threads=(innodb_read_io_threads + innodb_write_io_threads)

Prefer a 1:1 ratio of Total_Instances to Total_IO_Threads.
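As an illustration of the rule of thumb, the following example configuration keeps each buffer pool instance at 1GB and maintains a 1:1 ratio with the default 8 I/O threads (4 read + 4 write). The values are examples, not universal recommendations.

[mysqld]
innodb_buffer_pool_size=8G
innodb_buffer_pool_instances=8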

 

InnoDB I/O Threads

Input/Output (I/O) threads are background sub-processes of MySQL which directly access the IBP instances. There are two types of I/O threads: innodb_read_io_threads and innodb_write_io_threads. These threads read and write, respectively, to the individual buffer pool instances created by innodb_buffer_pool_instances.

The default value for both of these thread types is 4, making a total of 8 threads running concurrently. This pairs up with the default number of buffer pool instances (8) created when innodb_buffer_pool_size is larger than 1GB. This synergy is a key factor when improving InnoDB performance on very large database servers with large volumes of RAM and CPU cores. When increasing innodb_buffer_pool_instances past the default of 8, it becomes optimal to increase the number of threads to handle the extra instances concurrently. The goal is to keep the total number of I/O threads equal to the total number of instances:

( innodb_read_io_threads + innodb_write_io_threads ) = innodb_buffer_pool_instances

When to change InnoDB I/O Threads

Two conditions should be satisfied before adjusting I/O threads for a server.

  1. When innodb_buffer_pool_size is larger than 8 gigabytes, meaning there are more buffer pool instances than I/O threads to handle them concurrently.
  2. When the server has more than 8 CPU cores to devote to the MySQL service.

It is not enough to merely divide the innodb_buffer_pool_instances in half to get equal numbers of read and write threads. The goal is to increase the number of threads primarily used by the specific server’s query workload. For instance, if your server reads more data than it writes, increasing write threads would be counter-productive. The same holds true in reverse.
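Before adjusting anything, it helps to confirm the values the server is currently running with. This is a simple spot check using the standard variables:

mysql -e "SHOW VARIABLES WHERE Variable_name IN ('innodb_buffer_pool_size', 'innodb_buffer_pool_instances', 'innodb_read_io_threads', 'innodb_write_io_threads');"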

How to Calculate MySQL Read:Write Ratio

The global statistics kept by a MySQL server can be leveraged to determine a system's Read:Write ratio. The following MySQL queries can be used to calculate the Total_Reads and Total_Writes of a server.

Total_Reads = Com_select

SHOW GLOBAL STATUS LIKE 'Com_select';

Total_Writes = Com_delete + Com_insert + Com_replace + Com_update

SHOW GLOBAL STATUS WHERE Variable_name IN ('Com_insert', 'Com_update', 'Com_replace', 'Com_delete');

  • If the Total_Reads is greater than Total_Writes, the server is considered Read_Heavy.
  • If the Total_Writes is greater than Total_Reads, the server is considered Write_Heavy.

The following awk script will print the necessary statistics variables from MySQL and automatically calculate a simplified Read:Write Ratio to help determine whether the system in question is Read_Heavy or Write_Heavy.

Calculate Read:Write Ratio Script:

mysql -e 'SHOW GLOBAL STATUS;'|\
awk '
$1~/Com_(delete|insert|update|replace)$/{
w += $2
printf $0 "\t + Total_Writes = " w "\n"
}
$1~/Com_(select)/{
r += $2
printf $0 "\t + Total_Reads = " r "\n"
}
END {
printf "\nRead:Write Ratio:\n\t" r ":" w " "
if (r >= w) {
R=sprintf("%.0f",r/w)
print R ":1"
} else {
W=sprintf("%.0f",w/r)
print "1:" W
}
}'

Example Output:

Com_delete    14916     + Total_Writes = 14916
Com_insert    87413     + Total_Writes = 102329
Com_replace   0         + Total_Writes = 102329
Com_select    675528    + Total_Reads = 675528
Com_update    18976     + Total_Writes = 121305
Read:Write Ratio:
675528:121305 6:1

Note:
Adjust the first line’s mysql statement as needed to connect to the appropriate server. e.g.,

mysql -h localhost -u root -p -e 'SHOW GLOBAL STATUS;' |\

Hardware CPU Core Considerations

When adjusting innodb_read_io_threads or innodb_write_io_threads, keep the total of the two equal to the number of CPU cores available to MySQL. This ensures maximum concurrency, as each buffer pool instance can be serviced simultaneously by an individual CPU core.

Rule of Thumb: InnoDB I/O Threads (High-Performance)
  ‣ innodb_read_io_threads + innodb_write_io_threads
The total number of InnoDB I/O Threads should not surpass the total number of CPU cores available to MySQL.

When increasing Innodb I/O Threads, increase only the read or write threads depending on whether the server is Read_Heavy or Write_Heavy.
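For example, on a hypothetical Read_Heavy server with 16 CPU cores available to MySQL, the threads might be weighted toward reads while keeping the total equal to the core count and to the number of buffer pool instances. These numbers are illustrative only:

nproc    # report the number of CPU cores available

# /etc/my.cnf — example for a Read_Heavy, 16-core server
[mysqld]
innodb_buffer_pool_size=16G
innodb_buffer_pool_instances=16
innodb_read_io_threads=12
innodb_write_io_threads=4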

Other Directives

There are several more advanced techniques for fine tuning the InnoDB Buffer Pool and its behavior. These are beyond the scope of this article. However, you can find more details about these techniques on the MySQL website here: 15.5.1 Buffer Pool.

 

MySQL Performance: System Configuration File & Routine Maintenance

The majority of work needed when adjusting the MySQL server is editing the applicable directives within a MySQL configuration file. There are multiple, optional configuration files that MySQL looks for when starting up. They are read in the following order:

MySQL Configuration Files:

1. /etc/my.cnf
2. /etc/mysql/my.cnf
3. SYSCONFDIR/my.cnf
4. $MYSQL_HOME/my.cnf
5. ~/.my.cnf

Any of these files can contain MySQL server directives. For the sake of simplicity, this article assumes the default file /etc/my.cnf is being used for all examples, as it is the most widely used file path. However, some Linux distributions favor a different location than those listed above. Though the file location may change between distributions, the syntax and recommendations in this article series apply equally to all Linux-based systems.
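If you are unsure which configuration files your particular installation reads, the server itself can report them. The one-liner below is a common check (the exact binary name may differ, e.g. mysqld vs. mariadbd, and on some systems it must be run as a non-root user for the help output to print):

mysqld --verbose --help 2>/dev/null | grep -A1 'Default options'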

 

MySQL Configuration File Syntax

Each entry inside the configuration file applies to the most recent section header. Section headers are made up of a single line with the name of the header encapsulated within square brackets (e.g., [name]). When optimizing the MySQL service daemon, changes will need to be within the [mysqld] section header. The example below illustrates how this should look.

Example: Configuration File Syntax

[mysqld]
innodb_buffer_pool_size=2G
innodb_buffer_pool_instances=2

Note
The trailing d character after mysql is required to apply settings to the MySQL service daemon. The [mysql] header without a trailing d character is incorrect here and applies only to the MySQL client.

 

Applying MySQL Config Changes

The vast majority of directive changes merely require a restart of the MySQL service. There are a handful of other directives which may require an additional task to be undertaken while the MySQL service is offline. These tasks will be outlined in their specific section later in the article.

Restart MySQL Service:

service mysql restart

Important:
Refrain from making several optimization changes to your existing MySQL config at one time. This is especially true when adjusting production level servers. Making small incremental changes will make it easier to identify problems with individual changes and isolate settings that may not work well for your unique setup.
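After a restart, it is also worth verifying that the running server actually picked up the new values rather than assuming the edit took effect. A quick spot check, using the buffer pool directives as an example (sizes are reported in bytes):

service mysql status
mysql -e "SELECT @@innodb_buffer_pool_size, @@innodb_buffer_pool_instances;"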

Routine Maintenance

In the follow-up articles to this part of the MySQL Performance series, we will outline several directives and some suggested techniques for configuring them. It is imperative to understand that MySQL optimization is an ongoing, ever-evolving process. As sites grow, so do data sets and workload behavior. The settings you configure today will eventually become obsolete, probably sooner than you would like. Because of this eventuality, it is vitally important that routine maintenance is conducted on your configuration.

Some tasks to perform routinely:
  1. Reevaluate all buffers and directives that have been modified previously. This includes the changes discussed and recommended in the entirety of this article series.
  2. Reassess MySQL Query Statistics to determine workload behaviors.
  3. Reassess tablespace data, rate of growth or other trends in data consumption.
  4. Archive old data from large or heavily trafficked tables to ease the burden of read/write requests to those tables.
  5. Reevaluate indexes and their performance. Create new, better indexes and remove old, unused indexes.

Performance Expectations

Although the changes recommended in this article series aim to squeeze the best performance out of MySQL, the performance increase you may see is entirely subjective. Following these recommendations should help smooth the edges on any servers hitting the bottlenecks they are designed to target. However, there is no guarantee that these changes will have a positive or noticeable impact on the application level.

The biggest issues facing application performance are often at the code level and not the server level. The underlying MySQL server configuration only carries application performance so far. Problematic queries and inefficient coding practices that serialize your workload are issues that even a finely tuned MySQL server configuration just cannot correct. If you are in doubt, or your application performance seems to suffer despite all the server-level optimization in the world, then you are probably hitting code-level performance problems, and we recommend contacting a qualified Database Administrator (DBA) to evaluate your application performance properly.

MySQL Performance: How To Leverage MySQL Database Indexing

Throughout this tutorial, we will cover some of the fundamentals of indexing. As part of the MySQL series, we will introduce capabilities of MySQL indexing and the role it plays in optimizing database performance. Liquid Web recommends consulting with a DBA before making any changes to your production level application.

What is Indexing?

Indexing is a powerful structure in MySQL which can be leveraged to get the fastest response times from common queries. MySQL queries achieve efficiency by generating a smaller table, called an index, from a specified column or set of columns. These columns, called a key, can be used to enforce uniqueness. Below is a simple visualization of an example index using two columns as a key.

+------+----------+----------+
| ROW  | COLUMN_1 | COLUMN_2 |
+------+----------+----------+
| 1    | data1    | data2    |
+------+----------+----------+
| 2    | data1    | data1    |
+------+----------+----------+
| 3    | data1    | data1    |
+------+----------+----------+
| 4    | data1    | data1    |
+------+----------+----------+
| 5    | data1    | data1    |
+------+----------+----------+

Queries utilize indexes to identify and retrieve the targeted data, even when they use a combination of keys. Without an index, running that same query results in an inspection of every row for the needed data. Indexing produces a shortcut, with much faster query times on expansive tables. A textbook analogy may provide another common way to visualize how indexes function: a MySQL index works much like the index in the back of a book.

When to Enable Indexing?

Indexing is only advantageous for huge tables with regularly accessed information. For instance, to continue with our textbook analogy, it makes little sense to index a children’s storybook with only a dozen pages. It’s more efficient to simply read the book to find each occurrence of the word “turtle” than it would be to set up and maintain indexes, query for those indexes, and then review each page provided. In the computing world, those extra tasks surrounding indexing represent wasted resources which would be better purposed by not indexing.

Without indexes, when tables grow to enormous proportions, response times suffer from queries targeting those bloated tables. Inefficient queries manifest as latency within application or website performance. We commonly identify this latency by using the MySQL slow query log feature. You can find more details about using the slow query log feature in the first article in this series: MySQL Performance: Identifying Long Queries.
Once a colossal table hits its tipping point, it has the potential to cause downtime for applications and websites. Conducting routine evaluations of growing databases maintains optimal database performance and sidesteps the interruptions inherent in long queries.

MySQL Indexing Pros vs. Cons

There are benefits and downsides to using MySQL indexing, and we’ll discuss the significant pros and cons for your consideration. These aspects will guide you to decide whether indexing is an appropriate choice for your situation.

On the pro side, indexes allow quick data retrieval and are well suited to read-heavy (OLAP-style) workloads. On the con side, indexes consume additional disk space and add overhead to writes, since every INSERT, UPDATE, or DELETE must also update the affected indexes.

What Information Does One Index?

Selecting what to index is probably the most challenging part of indexing your databases: determining what is important enough to index and what is benign enough to leave unindexed. Generally speaking, indexing works best on those columns that are the subject of the WHERE clauses in your commonly executed queries. Consider the following simplified table:

ID, TITLE, LAST_NAME, FIRST_NAME, MAIDEN_NAME, DOB, GENDER, AGE, DESCRIPTION, HISTORY, ETC...

If your queries rely on testing the WHERE clause using LAST_NAME and FIRST_NAME, then indexing by these two columns would significantly increase query response times. Alternatively, if your queries rely on a simple ID lookup, indexing by ID would be the better choice.

These examples are merely rudimentary, and there are several types of indexing structures built into MySQL. The following MySQL page discusses these types of indexes in greater detail and is a recommended read for anyone considering indexing: How MySQL Uses Indexes
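When in doubt whether an index would actually be used, EXPLAIN shows how MySQL plans to execute a given query. The table, columns, and literal values below are hypothetical placeholders matching the simplified table above:

EXPLAIN SELECT * FROM tableName WHERE LAST_NAME='Smith' AND FIRST_NAME='John';

If a usable index covering (LAST_NAME, FIRST_NAME) exists, it appears in the key column of the output; a NULL key with type ALL indicates a full table scan.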

What is a Unique Index?

Another point for consideration when evaluating which columns to serve as the key in your index is whether to use the UNIQUE constraint. Setting the UNIQUE constraint will enforce uniqueness based on the configured indexing key. As with any key, this can be a single column or a concatenation of multiple columns. The function of this constraint ensures that there are no duplicate entries in the table based on the configured key.

UNIQUE constraints slightly decrease write speeds, a taxation of their implementation, since each write must be checked against the existing key values.

What is a Primary Key Index?

As commonly invoked as the UNIQUE constraint, the PRIMARY KEY is used to optimize indexes. This constraint ensures that the designated PRIMARY KEY cannot be of a null value. As a result, a performance boost occurs when the table in question runs on the InnoDB storage engine. This boost is due to how InnoDB physically stores data, placing null-valued rows in the key out of contiguous sequence with rows that have values. Enabling this constraint ensures the rows of the table are kept in contiguous order for quicker responses.

A Primary Key index is absolutely necessary for large tables.

Managing Indexes

Now we will cover some of the basics of manipulating indexes using MySQL syntax. The examples will include the creation, deletion, and listing of indexes. Keep in mind that these examples use self-explanatory placeholder keywords for easy reading: dbName, tableName, and indexName.

Instead of tableName you can use dbName.tableName.

Listing/Showing Indexes

Tables can have multiple indexes. Managing indexes will inevitably require being able to list the existing indexes on a table. The syntax for viewing an index is below.

SHOW INDEX FROM tableName;

SHOW INDEX FROM tableName; shows all indexes on the table, one row per indexed column. In the example output referenced here, indexes are present on 3 different columns.

Creating Indexes

Index creation has a simple syntax. The difficulty is in determining what columns need indexing and whether enforcing uniqueness is necessary. Below we will illustrate how to create indexes with and without a PRIMARY KEY and UNIQUE constraints.

As previously mentioned, tables can have multiple indexes. Multiple indexing is useful for creating indexes attuned to the queries required by your application or website. The default settings allow for up to 16 indexes per table; this number can be increased, but 16 is generally more than necessary. Indexes can be created during a table's creation or added to the table later on. We will go over both methods below.

Creating too many indexes can add latency; if you must maintain a large number of indexes, increase the related buffers in the MySQL configuration.

Example: Create a Table with a Standard Index

CREATE TABLE tableName (
ID int,
LName varchar(255),
FName varchar(255),
DOB varchar(255),
LOC varchar(255),
INDEX ( ID )
);
You can create an index over several columns; here, ID is used as the index.

Example: Create a Table with Unique Index & Primary Key

CREATE TABLE tableName (
ID int,
LName varchar(255),
FName varchar(255),
DOB varchar(255),
LOC varchar(255),
PRIMARY KEY (ID),
UNIQUE INDEX ( ID )
);
You can create a PRIMARY KEY and a UNIQUE constraint over several columns.

Example: Add an Index to Existing Table

CREATE INDEX indexName ON tableName (ID, LName, FName, LOC);

The CREATE INDEX statement creates an index and names it.

Example: Add a Unique Index to Existing Table

CREATE UNIQUE INDEX indexName ON tableName (ID, LName, FName, LOC);

The CREATE UNIQUE INDEX statement adds an index to a table while ensuring no duplicate data in the key columns.

Deleting Indexes

While managing indexes, you may find it necessary to remove some. Deleting indexes is also a very simple process; see the example below:

DROP INDEX indexName ON tableName;

The DROP INDEX command lets us drop a named index from a particular table.

There are many ways to optimize your database for true efficiency. If you would like to learn more about the storage engine types available in MySQL, or convert between them, read through our MyISAM vs. InnoDB tutorial. Or, if you are in need of high-functioning databases, check out our MySQL product page to view different options.

Apache Performance Tuning: Configuring MPM Directives

 

Our previous article in this series focused on defining and fitting an MPM to match your environment. Building off of our last tutorial, we will be discussing specific details on how to adjust the previously mentioned Apache configuration directives on the various types of Liquid Web servers.

Continue reading “Apache Performance Tuning: Configuring MPM Directives”

Apache Performance Tuning: MPM Modules

The keystone for understanding Apache server performance is by far the MultiProcessing Modules (MPMs). These modules determine the basis for how Apache addresses multiprocessing. Multiprocessing means running multiple operations simultaneously in a system with multiple central processing units (CPU Cores).

There are many MPMs to choose from; however, this article focuses on the most commonly used modules found in Liquid Web Linux-based servers. These modules are MPM Prefork, MPM Worker, and MPM Event.

MPM Prefork

The self-regulating MPM Prefork derives its namesake from how it forks, or copies itself into new identical processes, preemptively to wait for incoming requests. A non-threaded, process-based approach to multiprocessing, MPM Prefork runs Apache in a single master parent server process. This parent is responsible for managing any additional child servers that make up its serverpool. While using MPM Prefork, each child server handles only a single request. This focus provides complete isolation from other requests dealt with on the server. MPM Prefork is typically used for compatibility when non-threaded libraries/software, like mod_php (DSO), are required. From an optimization standpoint, MPM Prefork can be sorely lacking when compared to multi-threaded solutions, requiring vastly more resources to reach traffic levels similar to a threaded MPM. It is resource intensive due to its need to spawn full copies of Apache for every request.

Rule-of-Thumb:
Avoid using MPM Prefork whenever possible. Its inability to scale well with increased traffic will quickly outpace the available hardware on most system configurations.

 

MPM Worker

MPM Worker is a hybrid pre-forking, multithreaded, multiprocessing web server. In the same fashion as MPM Prefork, MPM Worker uses a single master parent process governing all children within its serverpool. However, unlike MPM Prefork, these children are multi-threaded processes that can handle dozens of threads (requests) simultaneously. MPM Worker set the foundation for multithreaded multiprocessing in Apache servers, which became stable in Apache 2.2. The threaded configuration allows Apache to service hundreds of requests with ease while retaining only a dozen or so child processes in memory. MPM Worker makes for both a high-capacity and low-resource solution for web service.

Note
The KeepAliveTimeOut directive defines the amount of time Apache will wait for subsequent requests on a connection. When utilizing KeepAlive with MPM Worker, use the smallest KeepAliveTimeout possible (preferably 1 second).

MPM Event

Based on the MPM Worker source code, MPM Event shares configuration directives with MPM Worker. It works nearly identically to MPM Worker except when it comes to handling KeepAlive requests. MPM Event uses a dedicated listener thread in each child process. This listener thread is responsible for directing incoming requests to an available worker thread, and it solves the issue encountered by MPM Worker, which locks entire threads into waiting for the KeepAliveTimeout. The listener approach of MPM Event ensures worker threads are not “stuck” waiting for KeepAliveTimeout to expire. This method keeps the maximum number of worker threads handling as many requests as possible.

Tip:
MPM Event is stable in Apache 2.4; older versions can use MPM Worker as an alternative.

There is an assortment of additional MPMs available. These are typically part of Apache's integration into operating systems other than Unix-based systems. Those systems have specific MPMs which are required for utilizing Apache on their respective platforms. These types of MPMs are beyond the purview of this article. You can find more information on specific MPMs in the MPM Defaults section of the official Apache documentation.

Tip:
We recommend staying away from experimental and unstable MPMs. The unreliable nature of these types of software renders them unsupportable.

 

When considering optimization, it is essential to understand there is no such thing as a one-size-fits-all Apache configuration. Correctly choosing an MPM requires analysis of many moving variables like traffic, site code, server type, PHP Handler and available hardware. Every server is unique making the best MPM an entirely subjective choice.

If your application code does not support multi-threading, then your choice will inevitably be MPM Prefork purely on a compatibility basis. This is the case with non-thread-safe software modules like mod_php (DSO). MPM Worker without KeepAlive performs very well if your application is a high-performance, load-balanced API system. The scalability and flexibility of MPM Event make it a solid choice for hosting multiple small to medium sites in a shared hosting configuration.

Most simple server setups operate well under the self-governing default configuration of MPM Event, making it an ideal starting point for optimization tuning. Once you have chosen an MPM, move on to Configuration Directives to review which settings pertain to server performance and optimization. Or check out our previous article in this series, Apache Performance Tuning: Swap Memory.
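To see which MPM a server is currently running before making any changes, Apache can report it directly. Depending on the distribution, the control binary may be apachectl, apache2ctl, or httpd:

apachectl -V | grep -i 'mpm'               # e.g. "Server MPM:     event"
apachectl -M 2>/dev/null | grep -i 'mpm'   # lists the loaded mpm_* module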

Install Nginx on Ubuntu 16.04

Nginx is an open-source Linux web server that accelerates content delivery while utilizing low resources. Known for its performance and stability, Nginx has many other uses such as load balancing, reverse proxying, mail proxying, and HTTP caching. With all these qualities, it is a definite competitor to Apache. To install Nginx, follow our straightforward tutorial.

Pre-Flight Check

  • Be logged in as root and working on an Ubuntu 16.04 LTS server powered by Liquid Web! If using a different user with admin privileges, use sudo before each command.

Step 1: Update Apt-Get

As always, we first update our package lists and upgrade installed packages.

apt-get update && apt-get upgrade

Step 2: Install Nginx

One simple command to install Nginx is all that is needed:

apt-get -y install nginx

Step 3: Verify Nginx Installation

When correctly installed, Nginx's default file will appear in /var/www/html as index.nginx-debian.html. If you see the Apache default page instead, rename the index.html file. Much like Apache, by default Nginx listens on port 80, which means that if you already have an A record set for your server's hostname, you can visit the IP to verify the installation of Nginx. Run the following command to get the IP of your server if you don't have it at hand.

ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'

Take the IP given by the previous command and visit it via HTTP (http://xxx.xxx.xxx.xxx). You will be greeted with the Nginx default page, verifying the installation of Nginx!

Nginx Default Page

Note
Nginx, by default, does not execute PHP scripts and must be configured to do so.

If you already have Apache established on port 80, you may find the Apache default page when visiting your host IP, but you can change this port to make way for Nginx to take over port 80. Change Apache's port by editing the Apache port configuration file:

vim /etc/apache2/ports.conf

Change “Listen 80” to any other open port number; for our example, we will use port 8090.

Listen 8090

Restart Apache for the changes to be recognized:

service apache2 restart

All things Apache can now be seen using your IP in place of the x's, for example: http://xxx.xxx.xxx.xxx:8090
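As a quick sanity check that each service is answering on its expected port, you can also inspect the response headers from the server itself (8090 here is simply the example port chosen above):

curl -sI http://localhost | grep -i '^Server'        # should report nginx
curl -sI http://localhost:8090 | grep -i '^Server'   # should report Apache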

WordPress Tutorial 4: Recommended WordPress Plugins

This is part 4 in an ongoing series on WordPress. Please see Part 1: WordPress Tutorial 1: Installation Setup and Part 2: WordPress Tutorial 2: Terminology and Part 3: WordPress Tutorial 3: How to Install a New Plugin, Theme, or Widget.

__________

Now that you have WordPress installed, understand the interface, and know how to install new parts, let’s take a look at our recommended plugins.
Continue reading “WordPress Tutorial 4: Recommended WordPress Plugins”

How and Why: Enabling Apache’s Piped Logging

Apache by default logs data directly to log files. While this isn’t a bad thing, it is not your only option. Both Apache 1.x and Apache 2.x bring with them the option of enabling something called “Piped Logging”, though cPanel will only allow you to enable it for version 2.x.

Continue reading “How and Why: Enabling Apache’s Piped Logging”