How to Check for Installed Packages on CentOS

Reading Time: 3 minutes

While managing your server, you’ll sometimes need to check which software (or packages) you have installed on your system. You’ll need to know package names, version numbers, dates of installation, etc. In this Liquid Web tutorial, we’re going to discuss how to inspect packages installed on your CentOS system. There are several ways to accomplish this, and we’ll discuss a few of them. Let’s dig in! To use these commands, you’ll need to log in to your server via SSH. For more information, see Logging into Your Server via Secure Shell (SSH).

Using RPM Package Manager

This first command uses the rpm package manager to poll for installed packages. This command allows you to see every installed package on your system, along with the version that is currently installed:

rpm -qa
Note the -q means “query” and -a means “all”. We’re asking rpm to query all installed packages.
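The full list can be quite long, so it pairs well with standard shell tools. As a quick sketch (the package name here is just an example), you can filter the output with grep, or sort it by installation date with rpm’s --last flag:

rpm -qa | grep -i kernel
rpm -qa --last | head -5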

Let’s examine a small portion of the results in detail. Note that you might not have these specific packages installed on your CentOS server. The important thing here is to understand how to read the output. Take a look at a small excerpt of entries from the list:

kpartx-0.4.9-123.el7.x86_64
dracut-033-554.el7.x86_64
elfutils-libs-0.172-2.el7.x86_64

Each entry can be broken up into three parts. From left to right, these are:
Package name: (kpartx)
Version: (0.4.9-123.el7)
Architecture: (x86_64)

Instead of displaying all installed packages, rpm can also be used to search for a single package. Let’s use rpm to query kpartx:
rpm -q kpartx
You’ll see the output displays the same package name and version we saw from rpm -qa.
kpartx-0.4.9-123.el7.x86_64
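If you also want the dates of installation we mentioned earlier, rpm can print a full information block for a single package, where -i means “info”:

rpm -qi kpartx

The output includes the version, the install date, the vendor, and a short description of the package.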

Using Yum to Check Installed Packages

Using rpm is not the only way to check for installed packages on your system. Now we will discuss how to use “yum” to accomplish the same task. Try the following command:
yum list installed
You will see that the list yum provides is formatted slightly differently. Let’s look at an entry in depth.
whois.x86_64 5.1.1-2.el7 @base
The first column shows the package name and architecture: (whois.x86_64).
The second column shows the version installed: (5.1.1-2.el7).
Finally the third column shows the repository the software was installed from: (@base).
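Just like rpm, yum can also report on a single package rather than the entire list. A minimal example:

yum list installed whois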

Using Yum to View Historical Installation Data

We can also use yum to view historical installation data on your system. Run the following command to see a list of each time yum was used to install, remove, or upgrade a package:
yum history
Here is an example of the output you might see. Your system will show different results here, and that is OK. We’re just interested in learning how to read the output.
ID | Command line | Date and time | Action(s) | Altered
-------------------------------------------------------------------------------
10 | upgrade | 2019-06-01 04:13 | I, U | 12 EE
9 | install whois | 2019-05-04 17:40 | Install | 1
8 | install python36 | 2019-05-03 21:23 | Install | 2
7 | install epel-release | 2019-05-03 21:02 | Install | 1
6 | install bind-utils | 2019-05-03 19:33 | Install | 2
5 | install docker-ce docker | 2019-05-03 17:37 | Install | 4
4 | install yum-utils yum-co | 2019-05-03 17:26 | Install | 6
3 | install git | 2019-05-03 17:19 | Install | 4
2 | install vim | 2019-05-03 17:18 | Install | 31
1 | update | 2019-05-03 17:09 | I, U | 57
history list

Notice the column headings: ID, Command line, Date and time, Action(s), and Altered. This is a good summary of when yum was used, but it lacks detailed information. Let’s examine one of these history entries in detail. Try the following command, replacing “ID_NUMBER” with the actual ID you want to inspect.
yum history info ID_NUMBER

Here is some example output:
Loaded plugins: fastestmirror
Transaction ID : 9
Begin time : Sat May 4 17:40:24 2019
Begin rpmdb : 356:8ab21eca9f4a219812e33c41a73fbd4eb7de1ed8
End time : (0 seconds)
End rpmdb : 357:cf2bf4588ba4d3263d1c9af051c3bcc525596a68
User : Cloud User <centos>
Return-Code : Success
Command Line : install whois
Transaction performed with:
Installed rpm-4.11.3-35.el7.x86_64 installed
Installed yum-3.4.3-161.el7.centos.noarch installed
Installed yum-plugin-fastestmirror-1.1.31-50.el7.noarch installed
Packages Altered:
Install whois-5.1.1-2.el7.x86_64 @base
history info
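As a side note, running yum history info without an ID number displays details for the most recent transaction:

yum history info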

In this tutorial, we discussed how to use rpm and yum to search your CentOS server for installed packages. These utilities are both critical tools for Linux sysadmins on CentOS systems. Of course, if you have any questions about how to use these utilities on your own Liquid Web server, let us know! The Most Helpful Humans in Hosting are standing by 24×7, and we’ll be happy to answer your questions.

An Intro to React JS

Reading Time: 6 minutes

In our modern world of smartphones and apps, it is more important than ever to have a fast, responsive website that impresses your visitors. Created by the development team at Facebook, ReactJS is a JavaScript ‘framework’ or method of building web pages and apps that can ‘react’ to user interaction and external changes. ReactJS does this by way of components that can refresh themselves and their contents without a page reload. Better still, these components are modular. This concept means they can be coded quickly (called ‘hacking’ in the ReactJS community) and reused easily between projects.

While programming React JavaScript code is outside the scope of this tutorial, we are happy to help you and your development team hit the ground running by creating a development environment. Our focus is on getting the official ReactJS tutorial example running on a Liquid Web WHM/cPanel Virtual Private Server or VPS. Servers built on our VPS platform work well as development/testing servers and can be easily created. Feel free to spin one up to follow along with me today!

A dedicated server may provide more stability for your application over the long term once development has been completed. If your application is HIPAA related, you will need a dedicated server with HIPAA compliance. The differences between the VPS and Dedicated Server platforms can be found in our Knowledge Base article. Additionally, ReactJS can also be installed on our Plesk, Windows, and Ubuntu platforms. Let’s get started!

Install NodeJS

First, we will need to install the latest version of NodeJS. This application allows our JavaScript files to import (or reference) each other, share global variables, access advanced command-line arguments, install additional modules, and more.

At the time of writing, the latest version of NodeJS is 11.x. Always use the most up-to-date software to ensure the stability and safety of your applications. Today we will be downloading and installing the latest version.

First, we log into our terminal and set up the NodeSource repository:

curl -sL https://rpm.nodesource.com/setup_11.x | bash -

Next, we install NodeJS and its dependencies:

yum install -y nodejs gcc-c++ make

To verify the installation and check the version, we run:

node --version
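The NodeSource nodejs package also includes npm, the Node package manager we’ll use below; you can verify it the same way:

npm --version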

Install Serve

ReactJS apps can be served by Apache or by the lightweight “Serve” application. Serve can also run alongside Apache/Nginx by listening on an alternate port.

To install Serve globally, we run the following command:

npm install -g serve

In this tutorial, we will be running this application on a newly created cPanel account, but it can also run on any existing account. Our cPanel username in this example is ‘react’, and our development folder, meant to house our source code, will be named ‘dev’. This dev folder will sit in our cPanel account’s home directory (/home/react/dev). A folder can be created via File Manager or FTP, but in this example, we are doing this via the command line:

mkdir /home/react/dev/

Install ReactJS

Our next step is to use the ‘create-react-app’ command to build the framework and install the necessary prerequisites. There is no need to worry about updating ‘create-react-app’; this application updates itself automatically each time it is run.

This command requires a path to build properly. In our example, we are running the following to build in our dev folder:

npx create-react-app /home/react/dev

The create-react-app process can take a few minutes, but it will keep you updated on its progress as it runs.

When completed, you’ll see an overview of some important development commands and a warm welcome into the ReactJS Community.

The create-react-app script generates these starter files for us with default content. With that, you and your developers can begin the development process.

Note
Three main components of a ReactJS application are:

  • .html files: These provide the structure of your app.
  • .css files: These contain the styling for your app’s content.
  • .js (JavaScript) files: These provide your app’s functionality.

Run ReactJS Default Application

While Liquid Web Support is not able to assist with code-related issues, in this tutorial, we will be going a step further by launching the default application. Finally, we’ll be reviewing and launching the ReactJS ‘tic-tac-toe game’ tutorial application.

First, let’s confirm that ReactJS is running by building the default application. We need to be on the command line in our working folder:

cd /home/react/dev

Run the following code to have React build the default package contents.

npm run build

There are a few options here. By default, ‘Serve’ uses port 5000 and is run by calling the following:

serve -s build

Note
You may notice an error regarding ‘xsel’. That application is used to copy from the command line to a Linux desktop environment or GUI (Gnome, KDE, and Mate, among others). In our case, our Linux server is not using a desktop environment, so we do not need (nor could we use) the xsel library, and we can safely ignore the error.

Viewing ReactJS Default App

To have our app available to the outside world, we will need to ensure that port 5000 is open in the firewall. This port is not open by default, but we can adjust that easily.

We are looking for the section called ‘Opening and Closing Ports in the Firewall’. Once implemented, you’ll see the default ReactJS application running in the browser. We do not own the domain ‘react.com’, so we will need to adjust the /etc/hosts file locally on our workstation. More information on editing your /etc/hosts file can be found here.

If you own your domain and your site’s DNS ‘A’ record points to the Liquid Web server IP, the app should also be visible to you. The default ReactJS screen looks like this if everything is configured and running properly.

Run ReactJS

It is possible to run ReactJS applications over the typical Apache or Nginx ports. Please note that running ReactJS with Serve over port 80 means Apache or Nginx would need to be disabled; if the primary focus of your server is to handle this application, Apache can be disabled server-wide. Serve is a very lightweight application and works very well. If you do not have another service running on port 80, the following command will start ‘Serve’ on port 80.

PORT=80 serve -s build

The above can be adjusted to a different port as well. In this way, multiple ReactJS applications can be run on the same domain. In addition to this, Apache can serve your ReactJS application as static content. To do so, we would need to copy the contents of our application’s /build folder to the desired location within the directory configured for web delivery.

In our case, we will be copying the contents of /home/react/dev/build into /home/react/public_html. This can be done on the command line with:

cp -r /home/react/dev/build/* /home/react/public_html/

Depending on how you built the application package, you may need to adjust the permissions of your files. In our case, the ownership of these files was set to ‘root’. In response to this, we are changing the ownership to ‘react’.

In this example, we are going to be adjusting all files in the public_html folder to be owned by the user ‘react’ with the following command :

chown -R react. /home/react/public_html/*

To have Apache serve the above, we are going to be adjusting the .htaccess configuration in our project.

vim /home/react/public_html/.htaccess

With our text editor, we are going to add the following:

Options -MultiViews
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.html [QSA,L]
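As a quick sanity check (domain.com being our example domain), you can request a route that doesn’t exist as a file on disk; with the rewrite rules in place, Apache should answer with the app’s index.html rather than a 404:

curl -I http://domain.com/some-client-side-route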

If everything is configured correctly, you should now see the contents of your application as delivered by the Apache service when visiting your site. You now know the basics of how to get the default ReactJS environment running on a Liquid Web Linux WHM/cPanel VPS Server!

Example ReactJS Application

We are going one step further here by going over a tutorial. This tutorial covers every aspect of getting started with ReactJS, with great detail on programming the components needed for a running tic-tac-toe game. We advise going line by line through the code as well as the documentation. With this knowledge, you can build your own application. The best source for ReactJS syntax and concepts is their documentation. Feel free to take your time going through these examples and documentation; there is a lot to go through, but you can start small and build your way up. Today we are going to copy the contents of the Final Code to see how it runs on our Liquid Web server.

There are multiple ways to adjust the contents of the index.html, index.css, and index.js files. While outside the scope of this tutorial, you can use other command-line text editing tools, FTP, cPanel’s File Manager tool, Git, and many more to edit files. In our case, we are using the cPanel File Manager’s text editor. A walkthrough of this interface and how to edit files can be found here.

First, we open the File Manager and navigate to the location of our file, /home/react/dev/public/index.html. Then we select the index.html file and click “Edit”.

We are going to be copy-pasting the code from CodePen into our server-side files.

Second, we do the same with /src/index.css.

Finally, we copy-paste the contents of /src/index.js.

At the top of the index.js file, we add some needed code; this is necessary for referencing the React installation and our CSS files.

import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';

With all three files copied over, we are going to rebuild our project. To rebuild, we will go back to our development folder and rerun our build script. With that, we can test by serving our new build with our serve script. By visiting react.com:5000 we see our tic-tac-toe game live on our Liquid Web server!

In conclusion, we went over what ReactJS offers developers and installed ReactJS along with some customization. With the official documentation, you now have all you need to build out your dream application. We cannot wait to see what you have in mind!

 

How to Set Up Multiple SSLs on One IP With Nginx

Reading Time: 6 minutes

With the shortage of available address space in IPv4, IPs are becoming increasingly difficult to come by, and in some cases, increasingly expensive. However, in most instances, this is not a drawback. Servers are perfectly capable of hosting multiple websites on one IP address, as they have for years.

But, there was a time when using an SSL certificate to secure traffic to your site required having a separate IPv4 address for each secured domain. This is not because SSLs were bound to IPs, or even to servers, but because the request for SSL certificate information did not specify what domain was being loaded, and thus the server was forced to respond with only one certificate. A name mismatch caused an insecure certificate warning, and therefore, a server owner was required to have unique IPs for all SSL hosts.

Luckily, IPv4 limitations have brought new technologies and usability to the forefront, most notably, Server Name Indication (SNI).

 

Why Do I Need an SSL?

Secure Socket Layer (SSL) certificates allow two-way encrypted communication between a client and a server. This protects any data, including sensitive information like credit card numbers or passwords, from prying eyes. SSLs are optionally signed by a well-known, third-party signing authority, such as GlobalSign. The most common use of such certificates is to secure web traffic over HTTPS.

Rather than displaying a positive indicator when browsing an HTTPS site, modern browsers instead show a negative indicator for a site that is not using an SSL. So, websites that don’t have an SSL raise a red flag right off the bat for any new visitors, and sites that want to maintain their reputation are effectively forced to get an SSL.

Luckily, it is so easy to get and install an SSL, even for free, that this is reduced to a basic formality. We’ll cover the specifics of this below.

 

What is SNI?

Server Name Indication is a browser and web server capability in which an HTTPS request includes an extra field in the TLS handshake, server_name, so that the server can respond with the appropriate SSL certificate. This allows a single IP address to host hundreds or thousands of domains, each with its own SSL!
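You can watch SNI in action from the command line with OpenSSL, where the -servername flag supplies the server_name value (domain.com here is an example):

echo | openssl s_client -connect domain.com:443 -servername domain.com 2>/dev/null | grep subject

On a server hosting multiple SSLs, changing the -servername value should return a different certificate subject.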

SNI technology is available on all modern browsers and web server software, so some 98+% of web users, according to W3, will be able to support it.

 

Pre-Flight Check

We’ll be working on a CentOS 7 server that uses Nginx and PHP-FPM to host websites without any control panel (cPanel, Plesk, etc.). This is commonly referred to as a “LEMP” stack, which substitutes Nginx for Apache in the “LAMP” stack. These instructions will be similar to most other flavors of Linux, though the installation of Let’s Encrypt for Ubuntu 18.04 will be different. I’ll include side-by-side instructions for both CentOS 7 and Ubuntu 18.04.

For the remainder of the instructions, we’ll assume you have Nginx installed and set up to host multiple websites, including firewall configuration to open necessary ports (80 and 443). We are connected over SSH to a shell on our server as root.

Note
If you have SSLs for each domain, but they are just not yet installed, you should use Step 3a to add them manually. If you do not have SSLs and would like to use the free Let’s Encrypt service to order and automatically configure them, you should use Step 3b.

 

Step 1: Enabling SNI in Nginx

Our first step is already complete! Modern repository versions of Nginx are compiled with OpenSSL support to serve SNI information by default. We can confirm this on the command line with:

nginx -V

This will output a bunch of text, but we are interested in just this line:

...
TLS SNI support enabled
...

If you do not have a line like this one, then Nginx will have to be re-compiled manually to include this support. This would be a very rare instance, such as an outdated version of Nginx or one already manually compiled from source with a different OpenSSL library. The Nginx version installed by the CentOS 7 EPEL repository (1.12.2) and the one included with Ubuntu 18.04 (1.14.0) both support SNI.

Step 2: Configuring Nginx Virtual Hosts

Since you have already set up more than one domain in Nginx, you likely have server configuration blocks set up for each site in a separate file. Just in case you don’t, let’s first ensure that our domains are set up for non-SSL traffic. If they are, you can skip this step. We’ll be working on domain.com and example.com.

vim /etc/nginx/sites-available/domain.com

Note
If you don’t happen to have sites-enabled or sites-available folders, and you want to use them, you can create /etc/nginx/sites-available and /etc/nginx/sites-enabled with the mkdir command. Afterward, inside /etc/nginx/nginx.conf, add this line anywhere inside the main http{} block (we recommend putting it right after the include line that talks about conf.d):

include /etc/nginx/sites-enabled/*;

Otherwise, you can make your configurations in /etc/nginx/conf.d/*.conf.

At the very least, insert the following options, replacing the document root with the real path to your site files, and adding any other variables you require for your sites:

server {
listen 80;
server_name domain.com;
root /var/www/domain.com;
...
}

A similar file should be set up for example.com, and any other domains you wish to host. Once these files are created, we can enable them with a symbolic link:

ln -s /etc/nginx/sites-available/domain.com /etc/nginx/sites-enabled/

ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/

Now, we reload Nginx:

systemctl reload nginx

This reloads the configuration files without restarting the application. We can confirm that the two we just made are loaded using:

nginx -T

You should see your server_name line for both domain.com and example.com.
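If the full nginx -T dump is too long to scan comfortably, you can filter it down to just the lines we care about:

nginx -T 2>/dev/null | grep server_name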

Note
The listen line included in the server block above will allow the site to listen on any IP that is on the server. If you would like to specify an IP instead, you can use the IP:port format instead, like this:

server {
listen 123.45.67.89:80;
...
}

Step 3a: Add Existing SSLs to Nginx Virtual Hosts

Now that we have valid running configurations, we can add the SSLs we have for these domains as new server blocks in Nginx. First, save your SSL certificate and the (private) key to a global folder on the server, with names that indicate the relevant domain. Let’s say that you chose the global folder of /etc/ssl/. Our names, in this case, will be /etc/ssl/domain.com.crt (which contains the certificate itself and any chain certificates from the signing authority), and /etc/ssl/domain.com.key, which contains the private key. Edit the configuration files we created:

vim /etc/nginx/sites-available/domain.com

Add a brand new server block underneath the end of the existing one (outside of the last curly brace) with the following information:

server {
listen 443;
server_name domain.com;
root /var/www/domain.com;
ssl_certificate /etc/ssl/domain.com.crt;
ssl_certificate_key /etc/ssl/domain.com.key;
...
}

Note the change of the listening port to 443 (for HTTPS) and the addition of the ssl_certificate and ssl_certificate_key lines. Instead of rewriting the whole block, you could copy the original server block and then add these extra lines, while changing the listen port. Save this file and reload the Nginx configuration.

systemctl reload nginx

We again confirm the change is in place using:

nginx -T

For some setups you’ll see two server_name lines each for domain.com and example.com, one using port 80 and one using port 443. If you do, you can skip to Step 4, otherwise continue to the next step.

Step 3b: Install and Configure Let’s Encrypt

Let’s next set up the free SSL provider Let’s Encrypt to automatically sign certificates for all of the domains we just set up in Nginx. On Ubuntu 18.04, add the PPA and install the certificate scripts with apt:

add-apt-repository ppa:certbot/certbot

apt-get update

apt-get install certbot python-certbot-nginx

In CentOS 7, we install the EPEL repository and install the certificate helper from there.

yum install epel-release

yum install certbot python2-certbot-nginx

On both systems, we can now read the Nginx configuration and ask the Certbot to assign us some certificates.

certbot --nginx

This will ask you some questions about which domains you would like to use (you can leave the option blank to select all domains) and whether you would like Nginx to redirect traffic to your new SSL (we would!). After it finishes its signing process, Nginx should automatically reload its configuration, but in case it doesn’t, reload it manually:

systemctl reload nginx

You can now check the running configuration with:

nginx -T

You should now instead see two server_name lines each for domain.com and example.com, one using port 80 and one using port 443.

Let’s Encrypt certificates are only valid for 90 days from issuance, so we want to ensure that they are automatically renewed. Edit the cron file for the root user by running:

crontab -e

The cron should look like this:

45 2 * * 3,6 certbot renew && systemctl reload nginx

Once you save this file, every Wednesday and Saturday at 2:45 AM, the certbot command will check for any needed renewals, automatically download and install the certs, and then reload the Nginx configuration.
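Certbot can also simulate the entire renewal process without saving any certificates, which is a safe way to confirm the cron job will succeed:

certbot renew --dry-run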

Step 4: Verify Installation and Validity

We should now check the validity of our SSLs and ensure that browsers see the certificates properly. Visit https://sslcheck.liquidweb.com/ and type in your domain names to check the site’s SSL on your server. You should see four green checkmarks, indicating SSL protection.

We hope you’ve enjoyed our tutorial on how to install SSLs on multiple sites within one server. Liquid Web customers have access to our support team 24/7. We can help with installing signed SSLs or with ordering a new server for an easy transfer over to Liquid Web.

The Best Settings for Configuring FastCGI (mod_fcgid)

Reading Time: 5 minutes

In our last tutorial, we showed you how to install Apache’s mod_fcgid and provided Linux scripts to assist in transitioning from mod_php. In this next section, we’ll be discussing how to configure a baseline setting for PHP optimization.

How to Install mod_fcgid on cPanel’s EasyApache 4 with CloudLinux

Reading Time: 6 minutes

When it comes to PHP execution, mod_fcgid (also called FCGI) is one of the heavyweight contenders. There are a few rival handlers, like PHP-FPM or mod_lsapi, which come close to matching its execution speed, but they generally leave something to be desired when it comes to fine-tuning and resource consumption. FCGI is built for speed and includes a myriad of Apache directives that can be leveraged for resource regulation.

This article will cover installing mod_fcgid, followed by basic configuration in a separate article. The article applies to cPanel servers running CentOS or CloudLinux.

The article will not cover EasyApache 3 (EA3). Due to the End-of-Life (EOL) status of EA3, it is imperative that any systems running EA3 upgrade to EA4 as soon as possible. To avoid conflicts, upgrading to EA4 should be handled as an entirely separate procedure from installing mod_fcgid. If you need assistance with upgrading from EA3 to EA4, please feel free to contact our support team. If you’re running a Liquid Web Fully Managed cPanel server, our team will perform the entire upgrade procedure for you.

Expectations: Downtime & Performance

Downtime – Please plan ahead of time, as this operation may cause downtime. While installing an Apache module and enabling a baseline configuration should only require an Apache restart, there may be unforeseen circumstances that require troubleshooting. This can lead to sites becoming unresponsive and/or slow.

Note
Always plan for more downtime than expected and always have a reversion plan. Allot extra time for troubleshooting, testing, and reverting all changes if necessary.

Performance – While FCGI provides superior PHP execution time, it is not a blanket fix for performance. For server optimization, there will be an adjustment period for configuration tweaking. This period can take hours to weeks, as it must account for the unique caveats of the specific server hardware, software, traffic habits, and many other unpredictable variables.

Note
Optimization is an ongoing, perpetual process. There is no one-size-fits-all optimized configuration. Traffic and resource usage continually change over time on all servers. Periodic evaluation and configuration adjustment are necessary to stay ahead of the curve.

 

Installation of mod_fcgid

The following steps should be followed as closely to the examples as possible. Things will vary slightly depending on CentOS/CloudLinux versions and a few other factors. The article will denote the differences where they are expected.

Step 1) [Liquid Web Servers Only] Disable Mod_Zeus & Other EA3 Modules

Older Liquid Web cPanel servers with EasyApache 3 that upgraded to EA4 may find residual configs on the system that can cause conflicts in the Apache configuration. This step will help make sure these older configs are disabled. The following sed one-liner takes care of disabling the inclusion line for these modules. These modules are stored in the /usr/local/lp/configs/httpd/conf.d/ directory, which is typically mentioned in the /etc/apache2/conf.d/includes/post_virtualhost_global.conf config file. The sed code looks for and comments out the specific include statement for this file.

sed -i -e 's/[^#]*\(Include [/]usr[/]local[/]lp[/]configs[/]httpd[/]\)/#\1/g' /etc/apache2/conf.d/includes/post_virtualhost_global.conf

To confirm the change, print the contents of the post_virtualhost_global.conf file using cat:

cat /etc/apache2/conf.d/includes/post_virtualhost_global.conf

The output should be blank or have a commented out inclusion line like below:

#Include /usr/local/lp/configs/httpd/conf.d/*.conf

Step 2) Disable Litespeed

FCGI is not compatible with Litespeed, which uses its own mod_lsapi module to process PHP using lsphp. Disabling Litespeed in this way does not remove it from the server; it merely enables Apache as the default web server.

/usr/local/lsws/admin/misc/cp_switch_ws.sh apache

Step 3) Install mod_fcgid

The following yum command will install the necessary module:

yum install ea-apache24-mod_fcgid -y

Once completed, confirm Apache has the fcgid_module loaded:

httpd -M | grep 'expires\|version\|fcgid'

Example output:

fcgid_module (shared)

Step 4) [CloudLinux Only] Configure CageFS Map for FCGI

The following snippet will create the directories needed by mod_fcgid to execute correctly. It will then add those directory entries into the /etc/cagefs/cagefs.mp file, allowing user-level access to said directories from within their caged environment. It finally forces cagefs to remount all user directories for access to the new directory on all sites.

mkdir -p /var/run/mod_fcgid /usr/share/cagefs-skeleton/var/run/mod_fcgid /run/mod_fcgid
cp -p /etc/cagefs/cagefs.mp{,.lwbak.$(date +%F_%H%M%S)}
cat <<EOF>>/etc/cagefs/cagefs.mp
/var/run/mod_fcgid
/run/mod_fcgid
/usr/local/cpanel/cgi-sys/
EOF
cagefsctl -M
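To spot-check the result, confirm that the new entries landed in the CageFS map file:

grep mod_fcgid /etc/cagefs/cagefs.mp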

Step 5) [OPTIONAL] Remove Unnecessary Writable Permission

Due to security restrictions, any website files or directories with group-writable or other-writable permissions will be denied, and a 500 Internal Server Error will be displayed. The following awk one-liner uses the find command to search all DocumentRoot directories configured on the server. It is advised to run this process in a screen session, as it may take an hour or more depending on the size of the file system in question. The code takes care to use the nice and ionice commands to run the process at low priority, so there will be minimal impact on server load or disk I/O. All changed files and their previous permissions are recorded in the /var/log/fixperms.log file.

Step 5a) Create & Attach to a Screen Session

screen -dmS fixperms; screen -x fixperms

Step 5b) Run the One-Liner

nice -n 15 ionice -c2 -n7 awk '/DocumentRoot/{DR[$NF]=$NF}END{for (e in DR) {x="find \""e"\" \\( -type f -or -type d \\) -and -perm /g+w,o+w -printf \"%M %y %m %p\\n\" -exec chmod g-w,o-w {} +"; while(x|getline) {print $0;print strftime("%F %T %Z"),$0 >> "/var/log/fixperms.log"} close(x)}}' /etc/apache2/conf/httpd.conf

Exit the screen session by holding CTRL/CMD, then pressing A, then D.

 

Step 6) [OPTIONAL] Disable mod_php Directives in .htaccess Files

Another common caveat when switching to FCGI is that any existing mod_php related directives inside any .htaccess file are not compatible with mod_fcgid and will cause the site to throw a 500 Internal Server Error, so these entries need to be located and disabled or removed. The following awk one-liner checks all configured DocumentRoot directories for .htaccess files, and if they contain a php_value or php_admin_value entry, it disables them by commenting the lines out. First, an in-place backup is created of the original file, named .htaccess.bak.YYYY-MM-DD_HHMMSS. All changed files are logged in the /var/log/fixhtaccess.log file.

Step 6a) Create & Attach to a Screen Session

screen -dmS fixhtaccess; screen -x fixhtaccess

Step 6b) Run the One-Liner

nice -n 15 ionice -c2 -n7 awk '/DocumentRoot/{DR[$NF]=$NF}END{for (e in DR) { x="find "e" -name .htaccess -exec grep -iEl \"^([^#]*php_(admin_)?value)\" {} +"; s="sed -i.bak.$(date +%F_%H%M%S) \047s/^\\([^#]*php_\\(admin_\\)\\?value\\)/#\\1/gi\047 2>&1";
while(x|getline) {print $0; print s,$0; print strftime("%F %T %Z"),s,$0 >> "/var/log/fixhtaccess.log"; while(s" "$0|getline y) { print y; print strftime("%F %T %Z"),y >> "/var/log/fixhtaccess.log" } close(s" "$0)} close(x)}}' /etc/apache2/conf/httpd.conf

Step 7) Rebuild Apache Config (Troubleshoot Any Errors)

The following command checks the system httpd.conf file for syntax errors and, if none are found, runs the cPanel httpd.conf rebuild script. Fix any syntax errors until a clean rebuild completes without error.

httpd -t && /scripts/rebuildhttpdconf

Step 8) [CloudLinux Only] Setup PHP Selector

The PHP Selector feature of CloudLinux is only compatible with the inherited PHP version in the cPanel MultiPHP Manager interface. All sites should be using the inherited version of PHP, or PHP Selector will not function for that site. This only applies to CloudLinux servers.

Step 8a) Force All Sites to Use Inherited Version of PHP in MultiPHP Selector

The following command uses cPanel’s whmapi1 system to force all sites onto the inherited version of PHP in MultiPHP Manager.

/usr/sbin/whmapi1 php_get_vhost_versions | awk  -F'[: ]+' '$2~/vhost/{x="/usr/sbin/whmapi1 php_set_vhost_versions version=inherit vhost-0="$3;print x;system(x);close(x)}'

Step 8b) Disable MultiPHP Manager & MultiPHP INI Editor

The following uses the cPanel whmapi1 system to add MultiPHP Manager/INI Editor to the disabled features list.

/usr/sbin/whmapi1 update_featurelist featurelist=disabled multiphp=1 multiphp_ini_editor=1

Step 9) Switch All PHP Handlers over to FCGI

The following will convert all installed PHP handlers to FCGI. These handlers are viewable through the Handlers tab of WHM’s MultiPHP Manager interface or by running the cPanel rebuild_phpconf script.

/usr/local/cpanel/bin/rebuild_phpconf --current | awk 'NR>1{x="/usr/local/cpanel/bin/rebuild_phpconf --"$1"=fcgi"; print x; system(x); close(x)}'

To confirm the changes, run:

/usr/local/cpanel/bin/rebuild_phpconf --current

Example Output:

DEFAULT PHP: ea-php71
ea-php54 SAPI: fcgi
ea-php55 SAPI: fcgi
ea-php56 SAPI: fcgi
ea-php70 SAPI: fcgi
ea-php71 SAPI: fcgi
ea-php72 SAPI: fcgi

Step 10) Perform a Full Stop & Restart of Apache

The following script will stop Apache (gracefully if possible) and kill any unresponsive Apache & PHP processes before starting the Apache service again. It will also verify the Apache configuration syntax and will only perform the restart procedure if the syntax returns ok. This technique is handy, as it is common for Apache processes to get stuck from time to time on busy servers. This snippet deals with those scenarios after first attempting the graceful stop.

httpd -t && (/scripts/restartsrv_apache stop; sleep 3; killall httpd php lsphp php-cgi; sleep 3; killall -9 httpd php lsphp php-cgi; /scripts/restartsrv_apache start) || echo Fix Apache Config and try again.

Note
Toss this snippet into an alias called apache_rescue, which you can add to your ~/.bashrc for easy access to this code. Below is a one-liner that will create this alias for you and load the modified profile in your current session. Once this alias is installed, it will always be available on that server by typing apache_rescue.

cat <<'EOF'>>~/.bashrc && source ~/.bashrc
alias apache_rescue='httpd -t && (/scripts/restartsrv_apache stop; sleep 3; killall httpd php lsphp php-cgi; sleep 3; killall -9 httpd php lsphp php-cgi; /scripts/restartsrv_apache start) || echo Fix Apache Config and try again.'
EOF

This concludes our ten-step process for installing mod_fcgid onto your cPanel system. It’s recommended to adjust FCGI settings from their defaults. Tune in to our next tutorial, where we’ll advise on how to optimize FCGI for various environments.

Troubleshooting: Can’t Resolve Hostname

Reading Time: 2 minutes

You may encounter the “can’t resolve hostname” or “temporary failure in name resolution” error when using retrieval commands like wget, cURL, ping, or nslookup. There are many reasons why these commands can produce this error, including file corruption. For the sake of brevity, we’ll look toward commonalities between these commands to solve the issue.

These commands connect to the Internet using gateways to communicate and retrieve information. If the connection from your local machine (in this case, a CentOS server) is broken, you’ll likely run into issues trying to access the world wide web. In this troubleshooting tutorial, we’ll show you some common solutions to connectivity issues.

Step 1: Among many other configuration tasks, the resolv.conf file is used to resolve DNS requests. However, manually editing resolv.conf to configure name resolution will only work temporarily, because the Network Manager controls this essential /etc/resolv.conf file. To make permanent changes, we’ll first stop and disable the Network Manager:

Note
Be sure to run these commands as the root user, or a privileged user using sudo before each command.

chkconfig NetworkManager off; service NetworkManager stop
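On CentOS 7, which uses systemd rather than SysV init scripts, the equivalent commands would be:

systemctl stop NetworkManager
systemctl disable NetworkManager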

 

Step 2: The method for permanent changes is to edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file instead of the resolv.conf file. Open the file:

vim /etc/sysconfig/network-scripts/ifcfg-eth0

Next, we’ll set our DNS IPs to use Google’s Public DNS (8.8.8.8 and 8.8.4.4):

DEVICE="em1"
BOOTPROTO="static"
DNS1="127.0.0.1"
DNS2="8.8.8.8"
DNS3="8.8.4.4"
GATEWAY="some_ip"
HWADDR="hwid"
IPADDR="some_ip"
IPV6INIT="yes"
NETMASK="255.255.255.0"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"

Save and quit the file using ESC and :wq.

 

Step 3: Enable and restart your network, using the commands associated with your server version.

CentOS 6, CloudLinux 6, RHEL 6:

chkconfig network on

service network start

 

CentOS 7, CloudLinux 7, RHEL 7:

systemctl enable network.service

systemctl start network.service

 

Step 4: Test the reachability of a host by using ping, curl, wget, or any testing tool of your choice. In our example, we’ve successfully pinged Google!

ping google.com
PING google.com (172.217.4.46) 56(84) bytes of data.
64 bytes from lga15s46-in-f14.1e100.net (172.217.4.46): icmp_seq=1 ttl=57 time=6.65 ms
64 bytes from lga15s46-in-f14.1e100.net (172.217.4.46): icmp_seq=2 ttl=57 time=6.68 ms
64 bytes from lga15s46-in-f14.1e100.net (172.217.4.46): icmp_seq=3 ttl=57 time=6.68 ms

You don’t have to rack your brain over connectivity issues! Liquid Web customers enjoy 24/7 support for our Managed products. Our knowledgeable team of support techs has experience solving errors of this nature. Reach our support team through a ticket, chat, or phone call!

How to Install and Configure Puppet on CentOS, Fedora, Ubuntu or Opensuse

Reading Time: 4 minutes

What is Puppet?

Puppet: A Closer Look At Who Holds The Strings

Puppet is an intuitive, task-controlling software package which provides a straightforward method to manage Linux and Windows server functions from a central master server. It can perform administrative work across a wide array of systems, primarily defined by a “manifest” file for the group or type of server(s) being controlled.

System Requirements

Puppet uses a master/client setup to communicate between the master and client servers. The master server will require more resources than the client servers utilize. The resources needed on the master server will mainly depend on:

  • The number of remote agents (servers) being utilized
  • How frequently those remote agents check in to the master server
  • How many resources are being managed on each remote agent
  • The complexity of the manifest files and modules in use
Note
A Puppet master server must run on UNIX variants, known as “*nix” operating systems. Currently, Puppet masters CANNOT run in a Windows environment.
Master Hardware Requirements

The minimum hardware requirements for the Puppet master servers will be based on multiple factors as stated above and noted in Puppet’s guidelines.

 

Client Software Platforms

The Puppet-agent (or client) packages are available for these platforms:

 

Dependencies

If you are installing the Puppet client using an official distribution package via a repository, then your system’s package manager usually ensures that the proper dependencies are installed. If you install the agent on a platform without a supported package, you must manually install the dependent packages, libraries, and gems:

  • Ruby 2.5.x
  • CFPropertyList 2.2 or later
  • Facter 2.0 or later
  • The msgpack gem from MessagePack, if you’re using msgpack serialization

 

Timekeeping and Name Resolution

Before installing the client, there are certain network requirements which will require you to prepare, review, and consider. The most important aspects include time syncing and implementing a plan for name resolution.

Timekeeping

You will want to make sure that a Network Time Protocol (NTP) service is in place to ensure that the time is in sync between the master server (which acts as the certificate authority) and the clients. This is recommended due to the odd certificate issues that can develop if a server’s time drifts out of sync. A service like NTP (available on most servers) assures accurate timekeeping and will reduce the risk of errors like this occurring.

Name resolution

The second part of this component is deciding on an iterable naming convention. For example, using a master name like puppet.domain.com establishes the continuity of this naming convention, allows optimal master communication, and ensures that all future agents can reach the master. You can simplify this by utilizing a CNAME record (a name-forwarding DNS entry) to ensure the master is always reachable.

 

Firewall Configuration

In a master/client setup, the master server must have port 8140 open to allow for incoming connections from the remote clients. You can use either of the following commands to check that the port is open and listening:

root@master [~]# netstat -tulpn | grep LISTEN |grep 8140
root@master [~]# lsof -i -P -n | grep LISTEN |grep 8140

If nothing is returned by the above commands, then you’ll need to open port 8140. To open the port in the UFW firewall, use the following command:

root@master:~# ufw allow 8140/tcp
Rules updated
Rules updated (v6)
root@master:~#
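If your master runs firewalld instead of UFW (the default on CentOS 7 and Fedora), a sketch of the equivalent commands looks like this:

root@master [~]# firewall-cmd --permanent --add-port=8140/tcp
root@master [~]# firewall-cmd --reload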

 

Puppet Installation

Puppet uses approximately 2 GB of RAM by default. Plan on this amount plus any additional RAM needed to run the server’s OS itself; if you were planning on creating a 2 GB server, opt for one that has 4 GB of RAM if you are going to use it as a Puppet master.

Puppet is available on multiple OS variants including:

  • Red Hat/CentOS/Fedora
  • Debian/Ubuntu
  • SUSE Linux Enterprise Server

The basic install steps across all of the above mentioned OS is as follows:

Available Puppet Repositories

For example, on Ubuntu 18.04 (bionic), the Puppet repository is added and the server installed like so:

root@master [~]# wget https://apt.puppetlabs.com/puppet-release-bionic.deb
root@master [~]# dpkg -i puppet-release-bionic.deb
root@master [~]# apt update
root@master [~]# apt install puppetserver

Note
Our Fully Managed servers (cPanel or Plesk) wouldn’t be good options for Puppet implementation since additional repositories may conflict.

Install the Puppet Master’s Software

Red Hat/CentOS/Fedora

yum install puppetserver

Debian/Ubuntu

apt-get install puppetserver

SUSE/Opensuse

zypper install puppetserver

 

Start the Puppet Master Service

Red Hat/CentOS/Fedora

systemctl start puppetserver

Debian/Ubuntu

service puppetserver start

SUSE/Opensuse

/etc/rc.d/puppetmaster start

 

Install the Puppet Client’s Software

Yum:

yum install puppet-agent

Apt:

apt-get install puppet-agent

Zypper:

zypper install puppet-agent

 

Puppet Configuration

Puppet contains around 200 different configuration settings located within the puppet.conf file. For most servers, you will only need to adjust about 20 settings or fewer, depending on your server’s setup. You can use the command below to set the needed values.

puppet config

We’ve listed the five most requested settings to suit your specific needs; a short example of setting one of them follows the list:

  • dns_alt_names – This is a list of allowed hostnames acting as the Puppet master.
  • environment_timeout – This setting defaults to 0 and should be left untouched unless you have a particular cause to alter it. You can adjust this setting to unlimited to make master refreshes a part of your standard code deployment process.
  • environmentpath – The environment path defines the locations where Puppet can find the specific directories for any unique environments.
  • basemodulepath – This is a list of directories that contains the Puppet modules used in various environments.
  • reports – Directs which report handlers, listed below, to use.
    • HTTPS – Sends reports via HTTP/HTTPS as a POST request to the address defined in the reporturl setting.
    • Log – Sends reports to the local default log destination (usually syslog)
    • Store – Hosts will send a YAML dump of data to a local directory (defined by the reportdir setting in the puppet.conf)
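As a sketch of how these values are set, puppet config set writes a setting into puppet.conf for you; the dns_alt_names values and the master section below are examples only:

puppet config set dns_alt_names puppet,puppet.domain.com --section master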

The config reference provides a more comprehensive array of available options for modifying your server to suit your specific needs.

 

More Information

Overall, Puppet is an attractive addition to your everyday toolset for managing and automating tedious tasks. Once it is installed and configured, it will handle your day-to-day server tasks with ease. You may want to consult the Puppet documentation for more in-depth information on this topic.

 

How Can We Help?

If you would like more information on how this software can benefit your current setup, simply reach out to us via a phone call, chat or ticket, and one of our Most Helpful Humans in Hosting will follow up with you to advise on how best you can integrate this process into your existing infrastructure! We are looking forward to speaking with you!

 

 

Top 4 Lessons Learned Using Ubuntu

Reading Time: 5 minutes

When choosing a server operating system, there are a number of factors and choices that must be decided. An often talked about and referenced OS, Ubuntu, is a popular choice and offers great functionality with a vibrant and helpful community. However, if you’re unfamiliar with Ubuntu and have not worked with either the server or desktop versions, you may encounter differences in common tasks and functionality from previous operating systems you’ve worked with. I’ve been a system administrator running my own servers, almost all of them Ubuntu, for a number of years. Here are the top four lessons I’ve learned while running Ubuntu on my server.

  1. Understanding the “server” vs. “desktop” model
  2. Deciphering the naming scheme/release schedule
  3. Figuring out the package manager
  4. Networking considerations on Ubuntu

 

Understanding “Server” vs. “Desktop” Model

Ubuntu comes in a number of images and released versions. One of the first choices to consider is whether you want the “desktop” or “server” release. When running an application or project on a server, the choice to make is the “server” image. You may be wondering what the difference is; it comes down to two main things:

  • The “desktop” image has a graphical user interface (GUI) where the “server” image does not
  • The “desktop” image has default applications more suited for a user’s workstation, whereas the “server” image has default applications more suited for a web server.

It is really that simple. Currently, the majority of the differences relate to the default applications offered when installing Ubuntu and whether or not you need a GUI. When I first started using Ubuntu, there were a number of differences between the two (such as a slightly tweaked Linux kernel on the “server” image designed to perform better on server-grade hardware), but today it is a much more simplified decision process.

The choice comes down to saving time for the individual by either adding or removing applications (also called “packages”) that they do or don’t need. At Liquid Web, we offer “server” images since they are best designed for operating on a web server. You do have the choice of the Ubuntu version, which brings me to my next lesson learned:

 

Deciphering the Naming Scheme/Release Schedule

Both Ubuntu and CentOS are based on what is called an “upstream” provider, another operating system that Ubuntu and CentOS each tweak and re-publish. For Ubuntu, this upstream provider is the OS Debian, whereas CentOS is based on Red Hat (which is further based on Fedora). It was confusing at first for me to try to understand what the names for Ubuntu releases meant: “Vivid Vervet”, “Xenial Xerus”, “Zesty Zapus”; but that’s just a cute “name” the developers give a particular version until it is released.

The company that oversees Ubuntu, called Canonical, has a regular, set release schedule for Ubuntu. This is a distinct difference from CentOS, which relies on Red Hat, which further relies on Fedora. Every six months a release of Ubuntu is published, and every fourth release (or once every two years) a “long-term support” or LTS release is published. The numbering used for releases is fairly straightforward: YY.MM, where YY is the two-digit year and MM is the month of the release. The latest release is 19.04, meaning it was released in April of 2019.

This was tricky for me to navigate initially and created a headache, since the different versions have different levels of support and updates provided to them. For some people, this is not an issue. As a developer, you may want to build your application on the latest and greatest OS that takes advantage of new technologies; receiving support may not matter because you won’t be using the server for very long. You may also be in a position where you need to know that your OS won’t have any server-crashing bugs.

Which version do you choose? How do you know whether your OS will be stable and receive patches/updates? I ran into these issues when I accidentally created a server that lost support just a few months later; however, I learned an easy-to-use trick to ensure that in the future I chose an LTS release:

  • LTS releases start with an even number and end in 04

Liquid Web offers a variety of Ubuntu releases within our Core-Managed and Self-Managed support tiers:

Notice that all the versions of Ubuntu that we offer are .04 releases. These are all LTS releases, ensuring that you receive the longest possible lifespan and vital security updates for your OS.

 

Ubuntu’s Package Manager

Prior to working with Ubuntu, I had the most experience with CentOS. Moving to Ubuntu meant having to learn how to install packages since the CentOS package manager, yum, was no longer available. I discovered that the general process is the same with a few minor differences.

The first difference I encountered is that Ubuntu uses the apt-get command. This is part of the Apt package manager and follows a very similar syntax to the yum command for the most basic operation: installing packages. To install a package I had to use:

apt-get install $package

Where $package is the name of the package. I didn’t always know the name of the package I wanted to find, though, so I thought I could search much like I could with yum search $package; however, Apt uses another utility for searching: apt-cache search $package

If you forget, you’ll get an error that you’ve run an illegal operation:

One similarity to yum that is super useful: the “-y” flag to bypass any prompts about installing the package. This was my favorite “trick” on CentOS to speed up installing packages and improve my efficiency. Knowing that it works the same on Ubuntu was a major relief:

Of course, installing packages meant I had to add the repositories I wanted. This was a significant change, since I was used to the CentOS location of /etc/yum.repos.d/ for adding .repo files. With the Apt system on Ubuntu, I learned that I needed to add a .list file, pointing to the repository in question, in the /etc/apt/sources.list.d/ folder. Take Nginx for example; it was quite easy to add once I realized what had to go where:
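As a hedged sketch of that process, using the repository layout that nginx.org documents for Ubuntu 18.04 (bionic), we fetch the signing key, create the .list file, and refresh Apt:

wget http://nginx.org/keys/nginx_signing.key
apt-key add nginx_signing.key
echo "deb http://nginx.org/packages/ubuntu/ bionic nginx" > /etc/apt/sources.list.d/nginx.list
apt-get update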

 

From here, it was as straightforward as installing Nginx with the apt-get install nginx command:

 

There was a time when adding a repository did not immediately work for me; however, it was quickly resolved by telling Apt to refresh/update its list of repos with the apt-get update command.

Note
Don’t worry, the apt-get update command does not update any packages needlessly; it simply runs through the configured repositories and ensures that your system is aware of them, as well as the packages they offer.

 

Networking Consideration on Ubuntu

The last major lesson I want to talk about was one learned painfully. Networking on Ubuntu, by default, is handled from the /etc/network/ directory:

There is a flat file called interfaces, and this will contain all the networking information for your host (configured IPs/gateways, DNS servers, etc…):

Notice in the above configuration that both the primary eth0 interface and the sub-interface, eth0:1, are configured in the same file (/etc/network/interfaces). This caught me in a bad spot once when I made a configuration syntax mistake, restarted networking, and suddenly lost all access to my host. That interfaces file contained all the networking components for my host, and when I restarted networking while it contained a mistake, it prevented ANY networking from coming back up. The only way to correct the issue was to console the server, correct the mistake, and restart networking again.

Though this is the default configuration for Ubuntu, there is now a way to split the configuration into individual interface files so a simple mistake does not break all of the networking: /etc/network/interfaces.d/

To properly use this folder and have individual configuration files for each interface, you need to modify the main /etc/network/interfaces file to tell it to look in the interfaces.d/ folder. This is a simple change of adding one line:

source /etc/network/interfaces.d/*

This will tell the main interfaces configuration file to also review /etc/network/interfaces.d/ for any additional configuration files and use those as well. This would have saved me a lot of hassle had I known about it earlier.
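As a hypothetical example, a per-interface file such as /etc/network/interfaces.d/eth0 could hold just that interface’s settings (the addresses below are placeholders):

auto eth0
iface eth0 inet static
address 123.45.67.89
netmask 255.255.255.0
gateway 123.45.67.1

With this layout, a typo in one interface’s file is contained to that interface instead of taking down every interface on the host.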

Hopefully, you can use my mistakes and struggles to your advantage. Ubuntu is a great operating system, is supported by a fantastic community, and is covered by “the most helpful humans in hosting” of Liquid Web. If you have any questions about Ubuntu, our support or how you might best use it in your environment feel free to contact us at support@liquidweb.com or via phone at 1-800-580-4985.

 

 

How to Redirect URLs Using Nginx

Reading Time: 3 minutes

What is a Redirect?

A redirect is a web server function that sends traffic from one URL to another. Redirects are an important feature when the need arises. There are several different types of redirects, but the more common forms are temporary and permanent. In this article, we will provide some examples of redirecting through the vhost file: forcing a secure HTTPS connection, redirecting to www and non-www, as well as the difference between temporary and permanent redirects.

Note
As this is an Nginx server, any .htaccess rules will not apply. If you’re using the other popular web server, Apache, you’ll find this article useful.

Common Methods for Redirects

Temporary redirects (response code: 302 Found) are helpful if a URL is temporarily being served from a different location. For example, these are helpful when performing maintenance and can redirect users to a maintenance page.

However, permanent redirects (response code: 301 Moved Permanently) inform the browser there was an old URL that it should forget and not attempt to access anymore. These are helpful when content has moved from one place to another.

 

How to Redirect

When it comes to Nginx, redirects are handled within a .conf file, typically located in /etc/nginx/sites-available/ and named after your site’s document root directory, e.g. /etc/nginx/sites-available/directory_name.conf. The document root directory is where your site’s files live: it may be /html if you have one site on the server, or /domain.com if your server hosts multiple sites. Either way, that directory name will be your .conf file name. In the /etc/nginx/sites-available/ directory you’ll find the default file, which you can copy or append your redirects to, or you can create a new file named html.conf or domain.com.conf.

Note
If you choose to create a new file, be sure to update your symbolic links in /etc/nginx/sites-enabled/ with the command:

ln -s /etc/nginx/sites-available/domain.com.conf /etc/nginx/sites-enabled/domain.com.conf

The first example we’ll cover is redirecting a specific page/directory to a new page/directory.

Temporary Page to Page Redirect

server {
    # Temporary redirect to an individual page
    rewrite ^/oldpage$ http://www.domain.com/newpage redirect;
}

Permanent Page to Page Redirect

server {
    # Permanent redirect to an individual page
    rewrite ^/oldpage$ http://www.domain.com/newpage permanent;
}

Permanent www to non-www Redirect

server {
    # Permanent redirect to non-www
    server_name www.domain.com;
    rewrite ^/(.*)$ http://domain.com/$1 permanent;
}

Permanent Redirect to www

server {
    # Permanent redirect to www
    server_name domain.com;
    rewrite ^/(.*)$ http://www.domain.com/$1 permanent;
}

Sometimes the need will arise to change the domain name for a website. In this case, a redirect from the old site’s URL to the new site’s URL is very helpful in letting users know the site has moved to a new domain.

The next example we’ll cover is redirecting an old URL to a new URL.

Permanent Redirect to New URL

server {
    # Permanent redirect to new URL
    server_name olddomain.com;
    rewrite ^/(.*)$ http://newdomain.com/$1 permanent;
}

We’ve added the redirect using the rewrite directive we discussed earlier. The ^/(.*)$ regular expression captures everything after the / in the URL, and $1 re-inserts it into the new URL. For example, http://olddomain.com/index.html will redirect to http://newdomain.com/index.html. To make the redirect permanent, we add permanent after the rewrite directive, as you can see in the example code.

When it comes to HTTPS and being fully secure, it is ideal to force everyone to use https:// instead of http://.

Redirect to HTTPS

server {
    # Redirect to HTTPS
    listen      80;
    server_name domain.com www.domain.com;
    return      301 https://domain.com$request_uri;
}

After these rewrite rules are in place, testing the configuration prior to running a restart is recommended. Nginx syntax can be checked with the -t flag to ensure there is not a typo present in the file.

Nginx Syntax Check

nginx -t

If the syntax is correct, nginx -t reports that the configuration file syntax is ok and the test is successful. Nginx then has to be reloaded for the redirects to take effect.
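A successful check looks like the following (the configuration path varies by distribution):

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful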

Reloading Nginx

service nginx reload

For CentOS 7, which unlike CentOS 6 uses systemd:

systemctl reload nginx

Redirects on Managed WordPress/WooCommerce

If you are on our Managed WordPress/WooCommerce products, redirects can be added through the /home/s#/nginx/redirects.conf file. Each site has its own s#, which is the FTP/SSH user for that site. For a simple page-to-page redirect, the plugin called ‘Redirection’ can be installed; otherwise, the redirects.conf file can be used to add more specific redirects using the examples explained above.
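For instance, reusing the page-to-page example from above, a hypothetical entry in redirects.conf could look like this (assuming the platform includes the file inside your site’s server block):

# Hypothetical permanent page-to-page redirect
rewrite ^/oldpage$ /newpage permanent;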

Due to the nature of a managed platform, after you have the rules in place within the redirects.conf file, please reach out to support and ask for Nginx to be reloaded. If you are uncomfortable performing the steps outlined above, contact our support team via chat, ticket, or phone call. With Managed WordPress/WooCommerce, you get 24/7 support available and ready to help you!

How To Set Up Multiple PHP Versions in Webmin

Reading Time: 4 minutes

What is Webmin?

Webmin is a browser-based graphical interface to help you administer your Linux server. Much like cPanel or Plesk, Webmin allows you to set up and manage accounts, Apache, DNS zones, users, and configurations. As these configurations can get somewhat complicated, Webmin works to simplify the process. The result is fewer issues during server and domain setup, which in turn means a stable server and a pleasant administration experience. Unlike Plesk or cPanel, Webmin is completely free and open to the public. Unfortunately, here at Liquid Web, we do not offer managed support for Webmin, but we are always willing to assist as much as possible when issues arise. You can download Webmin from their site, where you can also find some excellent documentation on this interface.

 

Installing Webmin

Before beginning, if you have not already, you will need to install Webmin on your server. For this article, we will mainly be working with Webmin installed on an Ubuntu server. However, the process is very similar on CentOS, so we have included instructions for both operating systems below.

  • First, you will need to access your server via SSH. If you are not sure how to SSH into your server, please visit our link on the subject.
  • Once you are logged into your server via SSH, please run the following commands in order, or copy and paste the entire syntax.
Debian/Ubuntu

sudo sh -c 'echo "deb http://download.webmin.com/download/repository sarge contrib" > /etc/apt/sources.list.d/webmin.list'
wget -qO - http://www.webmin.com/jcameron-key.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install webmin

CentOS/RedHat/Fedora

(echo "[Webmin] name=Webmin Distribution Neutral
baseurl=http://download.webmin.com/download/yum
enabled=1
gpgcheck=1
gpgkey=http://www.webmin.com/jcameron-key.asc" >/etc/yum.repos.d/webmin.repo;
yum -y install webmin)

 

Accessing Webmin

Webmin is a web-based application, so once it is installed, you can access it using a browser of your choice. Be sure that port 10000 is open on your server, as Webmin utilizes this port to function. We have included steps below to ensure the correct port is open for both iptables and firewalld.

IPTABLES

iptables-save > /tmp/tabsav
vi /tmp/tabsav
iptables-restore < /tmp/tabsav
You can use the commands above to edit your iptables rules so that the file looks something like the example below; note the added rule opening port 10000:
# Generated by iptables-save v1.4.7 on Thu Jan 3 00:02:49 2019
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [3044:1198306]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10000 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
# Completed on Thu Jan 3 00:02:49 2019

FirewallD

firewall-cmd --zone=public --add-port=10000/tcp --permanent
firewall-cmd --reload

Once you have made sure port 10000 is open, you should be able to access the Webmin interface by entering your server’s IP address followed by the port number 10000.

Example: https://192.168.1.100:10000 (replace 192.168.1.100 with your server’s IP address)

Installing PHP Versions in Webmin

There are many situations where we may need to use multiple PHP versions. For example, you may have domains or applications on your server that require an older version of PHP, while at the same time newer domains are configured for newer versions of PHP. For this article, we will be installing PHP 7.0 and PHP 5.6 on Debian.

Step 1: First, you will want to SSH into your server and run the following command.
apt-get install php7.0-cli php7.0-fpm

You can check the installation after it has completed by running php -v in your terminal.

Step 2: Now here is where things tend to get tricky. By default, Debian only offers a single PHP version in the official repository, so we will have to add an additional repository. While adding this repository, it is good practice to enable HTTPS for APT and register the APT key. You can accomplish this by executing the commands included below.

apt-get install apt-transport-https
curl https://packages.sury.org/php/apt.gpg | apt-key add -
echo 'deb https://packages.sury.org/php/ stretch main' > /etc/apt/sources.list.d/deb.sury.org.list
apt-get update
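If you would like to confirm the new repository is visible before installing anything from it, you can check a package’s available versions (the package name below assumes the repository added above):

apt-cache policy php5.6-fpm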

Once the repository is added, we can go ahead and add our second PHP version to the server.

apt-get install php5.6-cli php5.6-fpm

We can now check both PHP versions on the server by running these commands; each prints the version information for its binary.

php7.0 -v

php5.6 -v

Now that we have confirmed both PHP versions are installed, you can access their configuration files at the following locations.

  • /etc/php/5.6/cli/php.ini
  • /etc/php/7.0/cli/php.ini
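You can also confirm exactly which configuration files each binary loads by using PHP’s --ini flag:

php5.6 --ini
php7.0 --ini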

Step 3: To make things easier later on, we will want to add the locations of these configuration files to Webmin. This can be done from within the Webmin interface.

  1. Log into Webmin
  2. Navigate to Others >> PHP Configuration
  3. Add the PHP configuration file location
  4. Click Save

You can use this tool to add and edit directives for different PHP versions. For example, you’ll be able to edit PHP’s memory limit, timeout length, extensions, and more. This simply helps consolidate configurations within one interface. From here, we can use a .htaccess file to specify which version of PHP a site should use.

Step 4: If you do not already have a .htaccess file within your document root, you can add one by navigating to /var/www/exampledomain/ and running one of the following commands to indicate which PHP version the site should use. The first selects PHP 5.6 and the second PHP 7.0; replace exampleuser with the site’s user.

echo "AddHandler application/x-httpd-php56 .php" >  .htaccess  | chown exampleuser. .htaccess

echo "AddHandler application/x-httpd-php70.php" >  .htaccess  | chown exampleuser. .htaccess

Step 5: Once you have completed this, you can test whether your site is running on the desired PHP version. You can accomplish this by creating a PHP information page: make a file (for example, phpinfo.php) in your document root, usually located at /var/www/html/.

You will want to insert the code below and save the file.

<?php phpinfo(); ?>

After you have created this file, you can view the page by visiting your domain followed by the name of the file you created. For example, www.example.com/phpinfo.php.

Congratulations, you can now use Webmin to accomplish your daily admin tasks! Take a look at our Cloud VPS servers for 24/7 support and lightning-fast servers!