MySQL Performance: InnoDB Buffers & Directives

As discussed earlier in our MySQL Performance series, the InnoDB storage engine is designed to be a high-performance database for very large datasets. The row-locking technique it uses allows for many read and write requests to occur on a single table concurrently. This is a vast improvement in speed over traditional table-locking of the MyISAM engine. This part of our MySQL Performance series will focus on configuring InnoDB tables for maximum concurrency with minimal disk input/output (I/O).

Disk I/O & Temporary Tables

Disk I/O & Temporary Tables.

A simple and effective change that can provide a noticeable performance boost immediately is the practice of forcing all temporary table writes into memory instead of writing to a hard disk. Writing temp tables to RAM reduces the impact MySQL has on overall system disk I/O, which greatly improves performance. This change is done by leveraging the /dev/shm tmpfs partition in Linux. This special device gives server processes the ability to write files directly into RAM as if it were a standard disk drive.
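For example, a minimal sketch of the change in /etc/my.cnf, assuming /dev/shm is mounted as tmpfs (the default on most modern Linux distributions):

[mysqld]
tmpdir=/dev/shm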

 

InnoDB Log File Size


The InnoDB log file is not your traditional log file. It’s a special binary redo log, which InnoDB uses to store redo activity for every write request handled by the server. Every write query executed gets a redo entry in the log file so that the change can be recovered in the event of a crash. For this reason, the InnoDB log file size plays a critical role in MySQL write query performance, particularly on write-heavy databases.

Rule of Thumb: innodb_log_file_size
MySQL®️ Recommends: “The larger the value, the less checkpoint flush activity is required in the buffer pool, saving disk I/O.” A log file size of 1GB is sufficient for most situations. This may need to be raised when database growth exceeds several dozen GB in size.
Notice
MySQL®️ Warns: “Larger log files also make crash recovery slower, although improvements to recovery performance make log file size less of a consideration than it was in earlier versions of MySQL.”
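Before changing the value, you can check the current setting from the MySQL shell:

SHOW VARIABLES LIKE 'innodb_log_file_size';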

Due to the special nature of the InnoDB redo logs, this setting cannot be changed without first removing the existing InnoDB log files. The following procedure for adjusting the innodb_log_file_size directive must be completed in full, or MySQL will not start again.

Changing innodb_log_file_size Procedure:

  1. Stop the MySQL service: service mysql stop
  2. Update innodb_log_file_size within the [mysqld] header in the /etc/my.cnf file: innodb_log_file_size=1G
  3. Move the existing log files out of the mysql directory: mv /var/lib/mysql/ib_logfile* /backup
  4. Start the MySQL service: service mysql start

 

InnoDB Buffer Pool (IBP)

The InnoDB Buffer Pool plays a critical role in MySQL performance. The IBP is a reserved portion of system memory where InnoDB table and index data are cached. This allows frequently accessed data to be returned quickly, without the need to read from a physical disk. The more InnoDB tablespace that is cached in memory, the less often MySQL will access physical disks, which manifests as faster query response times and improved overall system performance.

There are multiple directives which control IBP behavior. Tuning these to match the server’s existing tablespace data, with some room to grow, will optimize the performance of any queries using InnoDB tables. Since InnoDB is the recommended engine for very large tables, optimizing the IBP gives major performance gains.

 

InnoDB Buffer Pool Size


The larger the IBP, the more InnoDB functions as a high-performance in-memory database. However, setting the size too large can be detrimental to overall system performance. A balance must be struck between the IBP size and the memory needed by other system services. In an ideal configuration, the IBP is as large as possible without causing excessive swapping (i.e., thrashing). There are some simple rules to follow when making these determinations.

Rule of Thumb: innodb_buffer_pool_size (Dedicated Servers)

When MySQL is the only major service on a server.

MySQL recommends configuring innodb_buffer_pool_size to use as much as 80% of the system’s total physical memory.

Calculate innodb_buffer_pool_size Script: (Dedicated Server)

awk '
/MemTotal/{
$3="GB"
$2=sprintf("%.0f",$2/1048576)
print
$1="  Mem80%:"
$2=sprintf("%.0f",$2*.8)
print
}' /proc/meminfo
Example Output:

MemTotal: 15 GB
Mem80%: 12 GB
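Based on that example output, the corresponding /etc/my.cnf entry for this hypothetical 15GB server would be:

[mysqld]
innodb_buffer_pool_size=12G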

Rule of Thumb: innodb_buffer_pool_size (Shared Servers)

When MySQL runs alongside other major services like Web, Email, etc... (e.g. cPanel, Plesk)

Due to their varied resource requirements, there is no one-size-fits-all calculation for shared servers. It becomes necessary to calculate the memory requirements of all other critical services on the server and subtract those from total system memory to find a proper amount of available memory that can be assigned to innodb_buffer_pool_size.

A simplified method is to use a generic calculation. A conservative starting point is assigning between 30 and 80 percent of available system memory, instead of total physical memory. However, determining the exact setting may require some guesswork and testing.
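One way to take some of the guesswork out of that testing is to watch the buffer pool’s hit rate. The Innodb_buffer_pool_read_requests status counter tracks logical reads, while Innodb_buffer_pool_reads tracks reads that had to go to disk; if the latter is a sizable fraction of the former, the pool is likely too small:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';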

The following awk script reads the MemAvailable value from /proc/meminfo and provides a selection of percentage-based calculations to choose from for the server.

Calculate innodb_buffer_pool_size Script: (Shared Server)

awk '
/MemAvail/{
$3="G";_=$2
$2=sprintf("% 3.0f",_/1048576)
print
for (i=80;i>=25;i-=10) {
$1="MemAvail_"i"%:"
$2=sprintf("% 3.0f",_*(i/100)/1048576)
$4=sprintf("| %.0f M",_*(i/100)/1024)
print
}
}' /proc/meminfo

Example Output:

MemAvailable:  10 G
MemAvail_80%:   8 G | 8405 M
MemAvail_70%:   7 G | 7354 M
MemAvail_60%:   6 G | 6304 M
MemAvail_50%:   5 G | 5253 M
MemAvail_40%:   4 G | 4203 M
MemAvail_30%:   3 G | 3152 M

 

InnoDB Buffer Pool Instances


The innodb_buffer_pool_instances directive controls the number of memory pages InnoDB creates. MySQL ignores this directive unless innodb_buffer_pool_size is greater than 1GB (1024M). When larger than 1GB, the buffer pool is divided into the number of equal-sized memory pages specified by this directive.

Rule of Thumb: innodb_buffer_pool_instances
MySQL®️ Recommends: “For best efficiency, specify a combination of innodb_buffer_pool_instances and innodb_buffer_pool_size so that each buffer pool instance is at least 1GB.”

  ‣ Rule Exception: innodb_buffer_pool_instances should not exceed Total_IO_Threads

Total_IO_Threads=(innodb_read_io_threads + innodb_write_io_threads)

Prefer a 1:1 ratio of Total_Instances to Total_IO_Threads.
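Putting both rules together, a hypothetical 8GB buffer pool might be configured in /etc/my.cnf like this, giving eight 1GB instances matched 1:1 by eight total I/O threads:

[mysqld]
innodb_buffer_pool_size=8G
innodb_buffer_pool_instances=8
innodb_read_io_threads=4
innodb_write_io_threads=4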

 

InnoDB I/O Threads


Input/Output threads are sub-processes of MySQL which directly access the IBP memory pages. There are two types of I/O threads: innodb_read_io_threads and innodb_write_io_threads. These threads read and write, respectively, to the individual memory pages created by innodb_buffer_pool_instances.

The default value for both of these thread types is 4, making a total of 8 threads running concurrently. This pairs up with the default number of memory pages (8) created when innodb_buffer_pool_size is larger than 1GB. This synergy is a key factor when improving InnoDB performance on very large database servers with large volumes of RAM and CPU cores. When increasing innodb_buffer_pool_instances past the default of 8, it becomes optimal to increase the number of threads to handle the extra memory pages concurrently. The goal is to keep total I/O threads equal to total memory pages:

( innodb_read_io_threads + innodb_write_io_threads ) = innodb_buffer_pool_instances

When to change InnoDB I/O Threads

Two conditions should be satisfied before adjusting I/O threads for a server.

  1. When innodb_buffer_pool_size is larger than 8 gigabytes, there are more memory pages than I/O threads to handle them concurrently.
  2. When the server has more than 8 CPU cores to devote to the MySQL Service.

It is not enough to merely divide the innodb_buffer_pool_instances in half to get equal numbers of read and write threads. The goal is to increase the number of threads primarily used by the specific server’s query workload. For instance, if your server reads more data than it writes, increasing write threads would be counter-productive. The same holds true in reverse.

How to Calculate MySQL Read:Write Ratio

The global statistics kept by a MySQL server can be leveraged to determine a system’s Read:Write ratio. The following MySQL queries can be used to calculate the Total_Reads and Total_Writes of a server.

Total_Reads = Com_select

SHOW GLOBAL STATUS LIKE 'Com_select';

Total_Writes = Com_delete + Com_insert + Com_replace + Com_update

SHOW GLOBAL STATUS WHERE Variable_name IN ('Com_insert', 'Com_update', 'Com_replace', 'Com_delete');

  • If the Total_Reads is greater than Total_Writes, the server is considered Read_Heavy.
  • If the Total_Writes is greater than Total_Reads, the server is considered Write_Heavy.

The following awk script will print the necessary statistics variables from MySQL and automatically calculate a simplified Read:Write Ratio to help determine whether the system in question is Read_Heavy or Write_Heavy.

Calculate Read:Write Ratio Script:

mysql -e 'SHOW GLOBAL STATUS;'|\
awk '
$1~/Com_(delete|insert|update|replace)$/{
w += $2
printf $0 "\t + Total_Writes = " w "\n"
}
$1~/Com_(select)/{
r += $2
printf $0 "\t + Total_Reads = " r "\n"
}
END {
printf "\nRead:Write Ratio:\n\t" r ":" w " "
if (r >= w) {
R=sprintf("%.0f",r/w)
print R ":1"
} else {
W=sprintf("%.0f",w/r)
print "1:" W
}
}'

Example Output:

Com_delete    14916     + Total_Writes = 14916
Com_insert    87413     + Total_Writes = 102329
Com_replace   0         + Total_Writes = 102329
Com_select    675528    + Total_Reads = 675528
Com_update    18976     + Total_Writes = 121305
Read:Write Ratio:
675528:121305 6:1

Note:
Adjust the first line’s mysql statement as needed to connect to the appropriate server. e.g.,

mysql -h localhost -u root -p -e 'SHOW GLOBAL STATUS;' |\

Hardware CPU Core Considerations

When adjusting innodb_read_io_threads or innodb_write_io_threads, keep the total threads between the two equal to the number of CPU cores available to MySQL. This ensures maximum concurrency as each memory page can be accessed simultaneously by each individual CPU core.

Rule of Thumb: InnoDB I/O Threads (High-Performance)
  ‣ innodb_read_io_threads + innodb_write_io_threads
The total number of InnoDB I/O Threads should not surpass the total number of CPU cores available to MySQL.

When increasing Innodb I/O Threads, increase only the read or write threads depending on whether the server is Read_Heavy or Write_Heavy.
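For example, a hypothetical read-heavy server with 16 CPU cores and 16 buffer pool instances might weight its threads toward reads while keeping the total at 16:

[mysqld]
innodb_read_io_threads=12
innodb_write_io_threads=4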

Other Directives

There are several more advanced techniques for fine-tuning the InnoDB Buffer Pool and its behavior. These are beyond the scope of this article. However, you can find more details about these techniques in the MySQL documentation: 15.5.1 Buffer Pool.

 

Creating a Virtual Environment on Ubuntu 16.04

Virtualenv is a tool that creates an isolated environment separate from other projects. In this instance, we will be installing different Python versions, including their dependencies. Creating a virtual environment allows us to work on a Python project without affecting other projects that also use Python. It will utilize Python’s core files in the global environment to run, thus saving disk space while providing the freedom to use different Python versions for separate apps.

Pre-flight

  • A pip installation is required; installing pip will install Python at the same time.
  • Logged in as root or a user with admin privileges on an Ubuntu 16.04 LTS server. If logged in with a regular user with admin privileges be sure to use sudo before the commands discussed within this tutorial.

Step 1: Install Virtualenv

First, we will update apt-get, then we will install the virtualenv module.

apt-get update

apt-get install python-virtualenv

Step 2: Create a Virtual Environment & Install Python 3

Virtualenv works by creating a folder that houses the necessary Python executables in its bin directory. In this instance, we are installing Python 3.5 while also creating two directories: virtualenvironment and, inside it, project_1.

virtualenv -p /usr/bin/python3 virtualenvironment/project_1

Virtualenv will create the necessary directories into the project_1 directory. In this directory you’ll find bin, include, lib, local and share.

Step 3: Activate Your Virtual Environment

Navigate to the project_1/bin directory and activate your new environment from within that folder by using the source command below. Anytime you need to work on your project, you will need to activate it with:
cd virtualenvironment/project_1/bin

source activate

Or if you are outside of the bin directory, you can use

$ source virtualenvironment/project_1/bin/activate

You’ll see that you are now in this newly made environment by the change of the shell prompt reflecting the name created in Step 2.
(project_1) root@host2:~#

When Python packages are installed, they will live in the lib directory: project_1/lib/python3.5/site-packages.
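As a quick check (a hypothetical session), you can confirm the environment’s interpreter and install a package into it; the package lands in the site-packages path above rather than in the global environment:

(project_1) root@host2:~# python --version
(project_1) root@host2:~# pip install requests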

Exit your virtual environment by typing:
deactivate

 

MySQL Performance: System Configuration File & Routine Maintenance

The majority of work needed when adjusting the MySQL server is editing the applicable directives within a MySQL configuration file. There are multiple, optional configuration files that MySQL looks for when starting up. They are read in the following order:

MySQL Configuration Files:


1. /etc/my.cnf
2. /etc/mysql/my.cnf
3. SYSCONFDIR/my.cnf
4. $MYSQL_HOME/my.cnf
5. ~/.my.cnf

Any of these files can contain MySQL server directives. For the sake of simplicity, this article assumes the default file /etc/my.cnf is being used for all examples, as it is the most widely known file path. However, some Linux distributions favor a location other than the ones listed above. Though the file location changes on those distributions, the syntax and recommendations in this article series apply equally to all Linux-based systems.

 

MySQL Configuration File Syntax

Each entry inside the configuration file applies to the most recent section header. Section headers are made up of a single line with the name of the header encapsulated within square brackets (e.g., [name]). When optimizing the MySQL service daemon, changes need to be made within the [mysqld] section header. The example below illustrates how this should look.

Example: Configuration File Syntax

[mysqld]
innodb_buffer_pool_size=2G
innodb_buffer_pool_instances=2

Note
The trailing d character after mysql is required to apply settings to the MySQL service daemon. The [mysql] header without a trailing d is incorrect and applies only to the MySQL client.

 

Applying MySQL Config Changes

The vast majority of directive changes merely require a restart of the MySQL service. There are a handful of other directives which may require an additional task to be undertaken while the MySQL service is offline. These tasks will be outlined in their specific section later in the article.

Restart MySQL Service:

service mysql restart
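On distributions that use systemd, the equivalent is typically the command below; depending on the distribution, the unit may be named mysql, mysqld, or mariadb:

systemctl restart mysql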

Important:
Refrain from making several optimization changes to your existing MySQL config at one time. This is especially true when adjusting production level servers. Making small incremental changes will make it easier to identify problems with individual changes and isolate settings that may not work well for your unique setup.

Routine Maintenance

In the follow-up articles to this part of the MySQL Performance series, we will be outlining several directives and some suggested techniques for configuring them. It is imperative to understand that MySQL optimization is an ongoing, ever-evolving configuration. As sites grow, so do data sets and workload behavior. The settings you configure today will eventually become obsolete, probably sooner than you would like. Because of this eventuality, it is vitally important to conduct routine maintenance on your configuration.

Some tasks to perform routinely:
  1. Reevaluate all buffers and directives that have been modified previously. This includes the changes discussed and recommended in the entirety of this article series.
  2. Reassess MySQL Query Statistics to determine workload behaviors.
  3. Reassess tablespace data, rate of growth or other trends in data consumption.
  4. Archive old data from large or heavy trafficked tables to ease the burden of read/write requests to those tables.
  5. Reevaluate indexes and their performance. Create new, better indexes and remove old, unused indexes.

Performance Expectations

Although the changes recommended in this article series aim to squeeze the best performance out of MySQL, the performance increase you see will vary. Following these recommendations should help smooth the edges on any servers hitting the bottlenecks they are designed to target. However, there is no guarantee that these changes will have a positive or noticeable impact on the application level.

The biggest issues facing application performance are often on the coding level and not the server level. The underlying MySQL server configuration only carries application performance so far. Identifying problematic queries and inefficient coding practices that serialize your workload are problems that even a finely tuned MySQL server configuration just cannot correct. If you are in doubt, or your application performance seems to suffer despite all the server level optimization in the world, then you’re probably hitting code level performance problems, and we recommend contacting a qualified Database Administrator (DBA) to evaluate your application performance properly.

Using a Cron Wrapper Script

This tutorial is intended to do two things: to expand on the Cron Troubleshooting article, and to give an overview of a simple scripting concept that uses the creation of a file as a flag to signify something is running. This is primarily useful when you need to run something continuously, but not more than one copy at a time. You can create a file as a flag to signify a job is already running, and in turn, check for that flag before taking further action.

The direct application of this is when you have a cron job that runs every minute, or every few minutes. With a rapidly repeating cron, if the previous job takes any longer than the scheduled time, these tasks can pile up causing load on the server, or exacerbating other issues. To avoid this, a simple script can be set up in the crontab (in place of the intended cron task). When the cron is run, it only runs the actual task if there is not a competing process already running.

Why Use a Cron Wrapper?

A cron wrapper is used when you have a cron job that needs to run back to back but must not step on itself. This is good for tasks that you want to set up to run continuously. Jobs that run anywhere between every minute and every five minutes should be utilizing a wrapper like this.

If you do not use a wrapper on a cron job that runs too frequently, you can get multiple jobs running at the same time trying to do the same thing. These competing tasks slow everything down. These “stacking cron jobs” can even get so out of hand that they overload a server and cause it to stop responding normally.

What is a Cron Wrapper?

The reason this is called a cron wrapper is that it is a script that wraps around the cron job and checks whether another instance of the cron is already running. If there is another copy running, the wrapper makes the cron skip this run and wait until the next run to check again. There are a few ways a cron wrapper can ensure no overlap.

 

Process Check Method

One way is to check all the running processes for the user and double-check that there isn’t already another process with the same name or attributes as the one you want to run. This is how Magento’s cron.sh file works: it checks for another instance of cron.php being run as the user, and if there is one running, it exits. This can be complicated to do reliably, so it is not something we would recommend when just starting out.

 

Lockfile Method

A straightforward method is to use what is called a lockfile. The cron wrapper checks if the lockfile (any file with a specific name/location) exists at the start of the run. If the lockfile is missing, the script creates that file and continues. The creation of the lockfile signals the start of the cron job. When the cron job completes the wrapper script then removes the lock file.

So long as the lockfile exists, a new wrapper will not run the rest of the cron job while another one is running. Once the first run completes and the lock is removed another wrapper will be able to create a new lock file again and process normally.

 

A Wrapper Script Example

To start, we want to create a simple bash script. Within a file we state the script to be interpreted by the program /bin/bash

#!/bin/bash

Then we want to define the name and location of the lockfile we’ll be using as our flag.

# Set lockfile name and location
lockfile="~/tmp/cronwrapper.lock"

 

Next, the script needs to check if that lockfile exists. If it does exist, then another copy of the script is already running, and we should exit the script.

# Check if the lockfile exists
if [[ -f $lockfile ]]; then
# If the lockfile exists quit
exit;

Else, if the lockfile does not exist, then we should create a new lock file to signify that we are continuing with the rest of the script. Creating the lockfile also tells any future copies that might be run to hold off until the lockfile is removed. We also want to include the actual job to be run, whether that’s a call to a URL through the web, running a PHP file on the command line, or anything else.

# If the lockfile is missing continue
else
# Create the lockfile
touch $lockfile
# Insert cron task here

Once the intended job is run and completes, we want to clean up our lockfile, so that the next run of the cron job knows that the last run completed and everything is ready to go again.

# Cleanup the lockfile
rm -f $lockfile
fi

In the example above, it is convenient to define the lock file as a variable ($lockfile) so that it can be referenced easily later on. Also, if you want to change the location, you only have to change it in one place in the script.

This example also uses $HOME in the path to the lock file rather than a “~” shortcut; the shell does not expand a tilde stored inside quotes, but it will expand the $HOME variable to the user’s home directory. As such, the full path would look something more like this: /home/username/tmp/cronwrapper.lock.

By using $HOME, you can use copies of the same script for many users on the same server without having to modify the full path for each user. The shell will automatically use the home directory of each user when the wrapper script is run.

Putting It All Together (cronwrapper.sh)

You can copy and paste the following into your text editor of choice on your server. You can name it whatever you want, but here are all the parts put together.

#!/bin/bash
lockfile="$HOME/tmp/cronwrapper.lock"
# Make sure the directory for the lockfile exists
mkdir -p "$(dirname "$lockfile")"
if [[ -f $lockfile ]]; then
exit
else
touch "$lockfile"
# Insert cron task here
rm -f "$lockfile"
fi

This is a very simple example and could be expanded much further. Ideally, you might add a check to ignore a lock file older than an hour and run a new instance of the cron job anyway; this would account for an interrupted job that failed to clean up after itself. Another extension might be to confirm that the previous job completed cleanly. Yet another would check for errors from the cron job being run and make decisions or send alerts based on those errors. The world is your oyster when it comes to cron wrappers! Take a look at Liquid Web’s VPS servers for an environment where tasks like these run smoothly.
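As a rough sketch of that first idea (assuming GNU find and the same $lockfile variable used above), a stale lockfile can be cleared just before the existence check:

# Remove a lockfile older than 60 minutes left behind by an interrupted run
find "$lockfile" -mmin +60 -delete 2>/dev/null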

Fixing WordPress Errors

Let’s face it: at some point while running your WordPress site, you will run into issues and errors.

A backup & restore may not resolve your issue, and a plugin may not display itself as the source of your problem, at least, not immediately. It’s hard to tell exactly what is causing your site issues just by looking at it. This can get pretty serious in some cases and can range from a large variety of issues. In this tutorial we will cover the basics on troubleshooting problems with your WordPress installation to try and correct common issues seen with WordPress. The first place you need to look for the source of your problem is within the error log.

 

The error log most often consulted in WordPress investigations does not actually come from WordPress itself but rather from the PHP installation on the server. The php.ini file that controls PHP settings for your site determines whether and where errors are logged. If logging is enabled, you can usually find the error log in the directory (or folder) of your WordPress installation. In most cases, this file is titled error_log, though the name depends on the setting in the php.ini file. You can also find the WordPress PHP error log (if enabled) within the wp-content folder in a file called php.error_log. If you see neither of these and your site is not loading properly, you need to enable debugging mode or enable PHP logging in your php.ini.

 

You can enable debugging for WordPress within the wp-config.php file. This is essential when trying to determine why a site is no longer loading or is throwing errors. You may never understand why a site does not load without seeing the errors generated. To better see what is occurring, simply edit the following line within your wp-config.php file:

define('WP_DEBUG', false);

And change the false to true:

define('WP_DEBUG', true);

Changing the value to true enables debug mode and will display any errors your site’s code generates directly on the page. This can be useful when trying to track down site issues or to see if upgrades have created any new issues. If you change PHP versions and the site no longer loads, this method will tell you why. wp-config.php is also the location where you can enable the WordPress PHP error log to record errors to a file rather than printing them to the screen. You can do this by adding the following code to the wp-config file:

define('WP_DEBUG_LOG', true);

This code creates the WordPress PHP error log (php.error_log) if errors are present and being generated. You can find this file within the wp-content folder of your WordPress installation. You may not see this error file if errors are not being generated, so its absence after enabling this setting may simply mean no errors are being reported. For instance, if your .htaccess file has a syntax error, the php.error_log will not show the error because it is not a PHP-related error.
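A common pattern (all three are standard WordPress constants) is to combine these settings so errors are logged to the file but kept off rendered pages:

define('WP_DEBUG', true);
define('WP_DEBUG_LOG', true);
define('WP_DEBUG_DISPLAY', false);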

If you would rather enable PHP error logging you can add values to the php.ini for the domain or via .htaccess if your configuration supports them:

Open your site’s php.ini file. If you are unsure where this is located, you can use a phpinfo page to show the location or you can also run the following in command line:

cpUser=`pwd | cut -d/ -f3`; for i in `pwd`; do touch $i/phpinfo.php; chown $cpUser. $i/phpinfo.php ; echo "<?php phpinfo(); ?>" > $i/phpinfo.php; done

Alternatively, manually create a phpinfo.php file within your site’s public_html folder with the following code.

<?php
// Show all information
phpinfo();
?>
Afterwards, access this file via a browser at the location you created it. You will find the php.ini path under Loaded Configuration File:


Once you find this location, edit the file and add the following code if it does not exist:

;;; log php errors
display_startup_errors = false
display_errors = false
html_errors = false
log_errors = true
track_errors = true
error_log = /home/USER/logs/error_log
error_reporting = E_ALL | E_STRICT

You can change the path for error_log to wherever you want it stored within your user’s home directory. The WordPress install is bound to the same access rights as the user running it, so it will not have the permissions to write outside that home directory.

On older setups, you can change the logging information via .htaccess if your configuration supports php_flags (using DSO, a.k.a. Dynamic Shared Object):

# log php errors
php_flag display_startup_errors off
php_flag display_errors off
php_flag html_errors off
php_flag log_errors on
php_value error_log /home/path/logs/error_log
Newer, up-to-date configurations are most likely not using DSO, and you will need to modify these settings via the php.ini file.

 

To understand how to read the output of these logs look at the following entry:

[09-Sep-2018 22:57:20 UTC] PHP Fatal error:  Allowed memory size of 41943040 bytes exhausted (tried to allocate 32768 bytes) in /home/USERNAME/public_html/wp-content/plugins/wordpress-seo/inc/class-wpseo-meta.php on line 477

You can see the date and timestamp followed by the general message and the path it stems from. This tells you most of the details you will need to determine where the problem lies. The timestamp shows when the error occurred and whether it relates to the current issue or a different one. The path will usually show whether the error stems from a plugin or theme, along with the location of the software generating the error. It even displays the line in the file that triggered the error, which can be further reviewed by your site’s developer.

 

The “fatal error” is the most common type of error, and its cause can vary from coding problems, like an “undefined function” message that indicates the function and the problematic line of code, to memory errors (like the one in the example above). Memory errors usually occur when the server has run out of memory or the PHP memory limit is not set high enough for the code’s requirements. To fix these errors, you may need to update software (usually themes and plugins), as it may be using deprecated code or functions. You may also need to increase the PHP memory limit or locate heavy resource usage on the server that may be consuming memory.

An “Error establishing a database connection” message generally means there is an issue with the database in use or the configuration of your WordPress setup. This could mean your database is corrupt, or the configuration settings used in your wp-config are not correct or have been modified. Check that your wp-config file has the correct credentials and syntax to ensure your database can communicate with your WordPress files. You may also see this error when the server is under heavy load or the MySQL service is down; you’ll need to investigate resource usage on the server to determine why.

A standard 404 error means your server could not locate the file being called by the software in use on the domain. This usually occurs when ownership or permissions are incorrect, the file path is called incorrectly or the file is completely missing.

 

WordPress can often run for a long while without issue, and many common errors can be solved with a little background. As always, our helpful support experts are here to assist with any WordPress-related errors. Should you need assistance troubleshooting your WordPress installation, we even offer a Managed WordPress hosting platform with WordPress error experts to investigate many issues.

 

Kubernetes RBAC Authorization

What is RBAC?

Kubernetes Role-Based Access Control (RBAC) describes how we define different permission levels for unique, validated users or groups in a cluster. It uses granular permission sets defined within .yaml files to allow access to specific resources and operations.

Starting with Kubernetes 1.6, RBAC is enabled by default, and users start with no permissions; as such, permissions must be explicitly granted by an admin for a specific service or resource. These policies are crucial for effectively securing your cluster. They permit us to specify what types of actions are allowed, depending on the user’s role and their function within the organization.

Prerequisites for using Role-Based Access Control

To take advantage of RBAC, you must allow a user the ability to create roles by running the following command:

root@test:~# kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user

Afterwards, to start a cluster with RBAC enabled, we would use the flag:

--authorization-mode=RBAC

 

The RBAC Model

Basically, the RBAC model is based on three components: Roles, ClusterRoles, and Subjects. All k8s clusters create a default set of ClusterRoles, representing common divisions that users can be placed within:

  • The “edit” role lets users perform basic actions like deploying pods.
  • The “view” role lets users review specific resources that are non-sensitive.
  • The “admin” role lets a user manage a namespace.
  • The “cluster-admin” role allows access to administer a cluster.

Roles

A Role consists of rules that define a set of permissions for a resource type. Because there are no default deny rules, a Role can only be used to add access to resources within a single namespace (virtual cluster). An example would look something like this:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: testdev
  name: dev1
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

In this case, the Role named dev1 allows a user to use the “get”, “watch”, or “list” verbs on pods in the “testdev” namespace.

ClusterRoles

A ClusterRole can be used to grant the same permissions as a Role but, because they are cluster-scoped, they can also be used to grant wider access to:

  • cluster-scoped resources (like nodes)
  • non-resource endpoints (like a folder named “/test”)
  • namespaced resources (like pods) in and across all namespaces (e.g., kubectl get pods --all-namespaces)

It contains rules that define a set of permissions across resources like nodes or pods.
An example would look something like this:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
# "namespace" omitted since ClusterRoles are not namespaced
name: secret-reader
rules:
- apiGroups: [""] resources: ["secrets"] verbs: ["get", "watch", "list"]

The basic command to create a ClusterRole looks like this:

root@test:~# kubectl create clusterrole [Options]

A command-line example to create a ClusterRole named "pod", allowing a user to perform "get", "watch", and "list" on a pod, would be:

root@test:~# kubectl create clusterrole pod --verb=get,list,watch --resource=pod

 

RoleBinding

A RoleBinding is a set of configuration rules that designate a permission set. It binds a role to subjects (subjects are simply users, a group of users, or service accounts). Those permissions can be set for a single namespace (virtual cluster) with a RoleBinding or cluster-wide with a ClusterRoleBinding.

Let’s allow the group “devops1” the ability to modify the resources in the “testdev” namespace:

root@test:~# kubectl create rolebinding devops1 --clusterrole=edit --group=devops1 --namespace=testdev
rolebinding "devops1" created

Because we used a RoleBinding, these functions only apply within the RoleBinding’s namespace. In this case, a user within the “devops1” group can view resources in the “testdev” namespace but not in a different namespace.
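Expressed as a manifest instead of a one-liner, the same binding would look something like this sketch:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: devops1
  namespace: testdev
subjects:
- kind: Group
  name: devops1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io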

ClusterRoleBindings

A ClusterRoleBinding defines which users have permission to use which ClusterRoles. Because a role contains rules that represent a set of permissions, a ClusterRoleBinding extends those permissions to:

  • cluster-scoped resources (like nodes)
  • namespaced resources (like pods) across all namespaces in a cluster
  • non-resource endpoints (like ”/foldernames”)

A ClusterRoleBinding that references one of the default ClusterRoles (admin, edit, view) can be created using the command:

root@test:~# kubectl create clusterrolebinding [options]

An example of creating a ClusterRoleBinding for user1, user2, and group1 using the cluster-admin ClusterRole:

root@test:~# kubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1

 

ConfigMaps

What is a configMap?

A ConfigMap is a resource that easily attaches configuration data to a pod. The information stored in a ConfigMap separates config data from other content to keep images/pods portable.

How Do I Create A ConfigMap?

To create a configmap, we simply use the command:
kubectl create configmap <map-name> <data-source>

Alternatively, we can define the ConfigMap in a manifest file (e.g., default.yaml) and create it from that file:
kubectl create -f default.yaml
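A minimal default.yaml might look like the sketch below (the name and data keys are hypothetical examples):

apiVersion: v1
kind: ConfigMap
metadata:
  name: basic-config
data:
  app.color: "blue"
  app.mode: "dev"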

Basic RBAC Commands For Kubectl

Note
These commands will give errors if RBAC isn’t configured correctly.

kubectl get roles
kubectl get rolebindings
kubectl get clusterroles
kubectl get clusterrolebindings
kubectl get clusterrole system:node -o yaml

 

Namespaces

Namespaces are used to define, separate, and identify a cluster of resources among a large number of users or spaces. You should only use namespaces when you have a very diverse set of clusters, locations, or users. They are used in settings where companies have multiple users or teams spread across various projects, locations, or departments. Namespaces also provide a way to prevent naming conflicts across a wide array of clusters.
Inside a namespace, an object can be identified by a name as short as ‘cluster1’ or as complex as ‘US.MI.LAN.DC2.S1.R13.K5.ProdCluster1’, but there can only be a single ‘cluster1’ or ‘US.MI.LAN.DC2.S1.R13.K5.ProdCluster1’ inside that namespace. In other words, resource names must be unique within a namespace but not across namespaces: several different namespaces can each contain their own ‘cluster1’ object.
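Creating a new namespace is a single command; for example (the name here is illustrative):

root@test:~# kubectl create namespace cluster2dev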

You can get a list of namespaces in a cluster by using this command:

root@test:~# kubectl get namespaces
NAME           STATUS    AGE
cluster2dev    Active    1d
cluster2prod   Active    4d
cluster3dev    Active    2w
cluster3prod   Active    4d

Kubernetes always starts with three basic namespaces:

  • default: This is the default namespace for objects that have no specifically identified namespace (e.g., the big barrel o’ fun).
  • kube-system: This is the default namespace for objects generated by the Kubernetes system itself.
  • kube-public: This namespace is created automatically and is world readable by everyone. This namespace is primarily reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.

Finally, the essential concept of role-based access control (RBAC) is to ensure that users who require specific access to a resource can be assigned the Roles, ClusterRoles, and ClusterRoleBindings they need. The granularity of these permission sets allows for increased security, easier security policy modification, simplified security auditing, and increased productivity (RBAC cuts down on onboarding time for new employees). RBAC can also reduce costs by exposing unneeded applications and trimming licensing for rarely used applications. All in all, RBAC is a needed addition to secure your Kubernetes infrastructure.

Ready to Learn More?

Reach out to one of our Solutions team via chat to decide if a Private Cloud service from Liquid Web will meet your Kubernetes needs!

Configure Nginx to Read PHP on Ubuntu 16.04

Nginx is an open-source Linux web server that accelerates content while utilizing low resources. Known for its performance and stability, Nginx has many other uses such as load balancing, reverse proxying, mail proxying, and HTTP caching. Nginx, by default, does not execute PHP scripts and must be configured to do so. In this tutorial, we will show you how to enable and test PHP capabilities with your server.

Pre-Flight Check

  • This article assumes you are logged in as root and using Ubuntu 16.04 LTS server. If not logged in as root, please add sudo before each command or login as root user.
  • This article also assumes you have installed Nginx. If you have not yet installed Nginx or are unsure how to do so, please reference this easy to follow article
  • This article is tested using NGINX 1.10.3+; older versions should work, but it’s a good idea to update to the latest available version before you try to configure PHP.
  • This article will be running php7.0-fpm or later (as of Dec 31, 2018, PHP 5.6 reaches end of life and is no longer supported).

 

Step 1a: Give Me the Packages!

First, we want to make sure our package manager is up-to-date. Run the following command:

sudo apt-get update && sudo apt-get upgrade

You also want to make sure that your Nginx install is up-to-date. You can check the current version by running this command:

nginx -v
The result should return something like this:

nginx version: nginx/1.10.3 (Ubuntu)

Note
This article is using Nginx 1.10.3. If you need to update your Nginx version, make sure you back up your configuration BEFORE you make any changes.

sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.1.x.x.backup

This article will NOT cover updating your Nginx, as its focus is PHP configuration.

You can test your NGINX configuration file for syntax errors with the following command

nginx -t

If successful, you should get something similar to the output below. Using nginx -t should also give you a starting point to look for errors; if for some reason the test is unsuccessful, it will return an error describing the problem.
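On a stock configuration, the success message typically reads:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful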

Note
Nginx should have started on its own after install. However, here are a couple of commands to keep on hand.

sudo systemctl stop nginx.service

sudo systemctl start nginx.service

sudo systemctl enable nginx.service

sudo systemctl restart nginx.service

Step 1b: PHP Version Check

Now it is time to check that your PHP version is running and using the correct version. You can do so with the following:

sudo systemctl status php7.0-fpm
You will see something similar to this if it is working properly.

Note
If you need to install PHP, you can run the following:

sudo apt-get -y install php7.0 php7.0-fpm

Replace 7.0 with whatever version of PHP is the most recent. You can check for updates here.

Alternatively, if you need to update PHP to the latest version, please be sure you make a backup BEFORE any changes then run the following:

sudo apt-get upgrade

Step 2: Time for Some NGINX Configuration

Once you have verified that both PHP and Nginx are running on your system, it’s time to configure your PHP settings!

From home cd into your NGINX folder

cd ~

cd /etc/nginx
To configure your NGINX PHP settings. Cd into the etc/php folder.
cd etc/php/
Depending on which PHP version you are using the following folders may differ. This article is using PHP 7.0. You can replace the 7.0 with the version of PHP you are using. We are looking for a file called the php.ini file.

On PHP 7.0 and 7.1 that is located at :
vim 7.0/fpm/php.ini

Or
vim 7.1/fpm/php.ini
The php.ini is a giant file, but it’s where you can customize your environment. If this is your first time, it might be a good idea to make a copy of this file BEFORE you make any changes.
cp php.ini php.ini_copy
However, if you are pro and just like to read articles for the warm fuzzies you feel inside, then go ahead and edit away!

Note
First time using vim? Use the i command to insert, Esc to exit insert mode, and :wq to save the file. If you need to leave the file without saving, that command is :q. Feel free to use whatever file editor you are most comfortable with.
Here are a couple of the recommended values for the php.ini file.

max_execution_time = 300

max_input_time = 60

memory_limit = 256M

upload_max_filesize = 100M

Simply find these variables in the file and update the values.


Step 2b: Default Sites Configuration

We are almost done!  Now it is time to set your default sites environment. Open your sites configuration file. It should be stored by default at the following path.

/etc/nginx/sites-available/default

You can cd there or open it right up with vim.

You want to uncomment the following lines (removing the leading # characters where shown) so NGINX passes PHP requests to the FastCGI backend. Here is an example for both PHP 7.0 and PHP 7.1.

PHP 7.0
#
server {
listen 80 default_server;
listen [::]:80 default_server;
# SSL configuration
#
# listen 443 ssl default_server;
# listen [::]:443 ssl default_server;
#
# Note: You should disable gzip for SSL traffic.
# See: https://bugs.debian.org/773332
#
# Read up on ssl_ciphers to ensure a secure configuration.
# See: https://bugs.debian.org/765782
#
# Self signed certs generated by the ssl-cert package
# Don't use them in a production server!
#
# include snippets/snakeoil.conf;
root /var/www/html;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
server_name _;
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ =404;
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
     location ~ \.php$ {
include snippets/fastcgi-php.conf;

#
#     # With php7.0-cgi alone:
#     fastcgi_pass 127.0.0.1:9000;
#     # With php7.0-fpm:
            fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}
}

 

PHP 7.1
server {
listen 80;
listen [::]:80;
root /var/www/html;
index  index.php index.html index.htm;
server_name  example.com www.example.com;
location / {
try_files $uri $uri/ =404;
}
# pass PHP scripts to FastCGI server
#
     location ~ \.php$ {
             include snippets/fastcgi-php.conf;
  #
#     # With php-fpm (or other unix sockets):
            fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
#     # With php-cgi (or other tcp sockets):
#     fastcgi_pass 127.0.0.1:9000;
}
}

Step 3: Testing PHP on NGINX

Once you have made the necessary edits go ahead and restart NGINX and PHP with the following lines.
sudo systemctl restart nginx.service

sudo systemctl restart php7.0-fpm.service

Test your PHP configuration by creating a phpinfo.php file in the /var/www/html path:

sudo vim /var/www/html/phpinfo.php

Add the following inside the file and save:
<?php phpinfo( ); ?>
To test your setup go to your server IP followed by /phpinfo.php in your web browser.

http://localhost/phpinfo.php

If you see the PHP info page, then kick back and relax because you are all done! Congrats, you deserve it!

How to Install Python on Windows

Python is a popular programming language for developing applications. The Python design philosophy emphasizes code readability and focuses on clear programming for both small and large-scale projects. Python allows you to run modules and full applications from a large library of resources (or even applications you write yourself) on your server. Python works on a number of popular operating systems, including Windows Server OS.

Installation of Python on the Windows Server operating system is a simple matter of downloading the installer from Python.org and running it on your server. Additional configuration adjustments can make using Python easier.

Installing Python

  1. Log in to your server via Remote Desktop Protocol (RDP). For more information on using RDP, see Using Remote Desktop Protocol (RDP) to Log into Your Windows Server.
  2. Download and execute the latest Python installation package from Python.org. For Liquid Web servers, you’ll most likely want the 64-bit version of the installer, but you may want to discuss software requirements with your developer.
  3. Choose the recommended installation options for the simplest installation experience (You can also choose Customize Installation if you need to adjust locations or features, but this may require additional configuration. See Python.org for further instructions on custom installation options).
  4. Check the box for “Add Python 3.7 to PATH”. This will adjust your System Environment Variables automatically so that Python can be launched from any command prompt.
  5. Verify a successful installation by opening a command prompt window in your Python installation directory (probably C:\Users\*yourusername*\AppData\Local\Programs\Python\Python37 if you’ve installed the latest available version) and running python. The interpreter should start and report the version you just installed.
    If you selected “Add Python 3.7 to PATH”, you can verify the installation from any command prompt window.


Installing PIP

If you didn’t install PIP using the default settings in the installer, you’ll want to install this program to make application and module management easier. You can verify the PIP installation by opening a command prompt and entering:

pip -V

You’ll see output similar to the following:

Running the pip -V command shows which version of pip your Windows server is running and the path it runs from.

If PIP is not installed or for more information on installing PIP, see our article How to Install PIP on Windows.

Any core-managed server at Liquid Web, both traditional-dedicated and Cloud servers, can run Python once installed. If you need assistance, our Helpful Humans can help install and verify the installation. You can also install Python on any of our self-managed servers. Plesk fully-managed servers support Python through ActiveState Python 2.6.5. Plesk also supports full Python installations, but does not include the software in its default components. For more information about using Python with Plesk, see Plesk’s support documentation. Please contact our Solutions team if you have further questions about ordering a server that can support Python.

Note
NOTE: Python software installation is considered Beyond Scope Support. This means it is not covered under our managed support, but we will do what we reasonably can to assist. It may take longer for us to assist as the SLA for Beyond Scope Support is different than our managed services. Find out more in our article What Is Beyond Scope Support?

Install vsftpd on Ubuntu 16.04

Installing vsftpd allows you to upload files to a server; the concept is comparable to Google Drive. When you invite specified users to your Google Drive, they can create, delete, upload, and download files, all behind a secure login. Vsftpd is excellent for companies looking for an alternative to Google Drive or for anyone who wants to create a robust file server. This “Very Secure File Transfer Protocol Daemon” is favored for its security and speed, and we’ll be showing you how to install vsftpd on an Ubuntu 16.04 LTS server.

 

Pre-Flight Check

  • These instructions are intended specifically for installing vsftpd on Ubuntu 16.04.
  • You must be logged in via SSH as the root user to follow these directions.
Warning:
FTP data is insecure; traffic is not encrypted, and all transmissions are clear text, including usernames, passwords, commands, and data. Consider securing your FTP connection (FTPS).

Step 1: Updating Apt-Get

As a matter of best practice, we update apt-get with the following command:

apt-get update

Step 2: Installing Vsftpd

One command allows us to install vsftpd very easily.

apt-get -y install vsftpd

Step 3: Configuring Vsftpd

We’ve installed vsftpd, and now we will edit some options that will help us to protect the FTP environment and enable the environment for utilization. Enter the configuration file using the text editor of your choice.

vim /etc/vsftpd.conf

Change the values in the config file to match the values below, and lastly, save and exit by typing:

:wq

 

anonymous_enable=NO
local_enable=YES
write_enable=YES
chroot_local_user=YES
ascii_upload_enable=YES
ascii_download_enable=YES

 

Further Explanation on Each Directive
anonymous_enable: Prohibits anonymous internet users from accessing files on your FTP server. Set anonymous_enable to NO.

local_enable: If you have created users, you can allow them to log in by changing the local_enable setting to YES.

write_enable: Gives users the ability to write to their directory, allowing them to upload files. Uncomment it by removing the # in front of write_enable.

chroot_local_user: You can “chroot jail” local users, restricting them to their home directories (/home/username) and prohibiting access to any other part of the server. Choosing this is optional, but if you set YES, follow Step 4 to remove write privileges and Step 5 to make a dedicated directory for uploads. If you select NO, the user will have access to other directories.

Step 4: Editing Permissions for a User

If you have an existing or new user that is not able to connect, try removing write privileges to their directory:

chmod a-w /home/username

Step 5: Creating the User a Directory

Create a directory just for FTP; in this case, we are naming it files. Afterward, this user will be able to upload and create files within the files folder:

mkdir /home/username/files

Step 6: Accepting FTP Traffic to Ports

There are a few ways to open ports within a server; below is one way of opening ports 20 and 21 for FTP users to connect.

Note
Directly passing iptables commands, like below, can break some firewall setups. Whichever method you choose to edit your iptables, ensure that ports 20 and 21 are open.

iptables -I INPUT 1 -p tcp --dport=20 -j ACCEPT

iptables -I INPUT 1 -p tcp --dport=21 -j ACCEPT

Step 7: Restarting the Vsftpd Service

Restarting vsftpd enables changes to the file (step 3) to be recognized.

service vsftpd restart

Step 8: Verifying Vsftpd

Now for a little fun, let’s connect to our FTP to verify it is working.

ftp 79.212.205.191

Example Output:

ftp 79.212.205.191
Connected to 79.212.205.191.
220 Welcome to FTP!
Name (79.212.205.191:terminalusername):<enter your FTP user>

You’ll also be able to connect via an FTP client, like FileZilla, using your server’s IP address or hostname and leaving the port number blank. Take it for a spin and try to upload or write a file. If you enabled the chroot jail option, the user should not be able to go to any parent directory.