How To Secure Your WordPress Site

Reading Time: 4 minutes

WordPress is one of the most popular Content Management Systems on the Internet. Due to its popularity, it is also the target of many hackers. We're here to share our top five recommendations for securing your WordPress site, based on issues we've come across.

1. Keep WordPress Up to Date!

Our number one recommendation is to keep WordPress up to date! WordPress is a very active platform, and updates for it come out regularly. These updates add new features and change the backend, but they also patch the many bugs and exploits that the WordPress team finds. Just take a look at the releases and patch notes on https://wordpress.org/news/ sometime to get an idea of how much work goes into finding and fixing these problems!

Being even just one or two versions behind can leave your site open to hackers who analyze the updates, create exploits, and go looking for outdated sites across the Internet. The longer a site goes without updates, the more exploits and vulnerabilities exist for it, and the more likely it is that your site could be compromised.
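If you manage your site over SSH and have WP-CLI installed (WP-CLI is a separate tool, not part of WordPress core, so treat this as an optional sketch), updating core — along with the plugins and themes we'll discuss next — can all be done from the command line:

wp core update
wp plugin update --all
wp theme update --all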

The same rule applies to your plugins and themes: make sure those are all properly updated for the same reasons! Which brings us to item two on our list!

2. Review your Plugins and Themes!

WordPress is great because plugins can quickly and easily add new features and customize your site, and themes can give your site a professional appearance in a matter of seconds. But if they are not properly maintained, they can lead to problems down the road.

First, remove any plugins and themes you don't need. You could leave them disabled, but outright removing them is the safer option, as the files won't be sitting on your server. Even a disabled plugin could potentially be reached if an exploit gains access to its files.

A side effect of removing plugins is that it could actually speed up your site as well!

After removing any plugins and themes you don't need, make sure to keep the ones that remain updated. WordPress can generally check for updates right in the admin area, but if you bought a plugin from a third-party source, check with that vendor for updates. It's also a good idea to visit the website of each plugin and theme, or check reviews on other sites, to make sure development is still active and there are no known vulnerabilities.

3. Protect your logins!

Having your site publicly accessible on the internet means customers and potential clients can reach it, but so can bots and people with malicious intentions! By default, visiting yourdomain.com/wp-login.php or yourdomain.com/wp-admin brings up a login page. If you try that on any of your sites and get the login page, it's highly recommended you use a plugin to hide your login URL.

WordPress does have some built-in protection that blocks logins after a few failed attempts, but if thousands of bots are all trying to guess passwords, why even give them the chance? Make sure you use strong passwords, don't reuse the same username and password in multiple locations, and go through your WordPress users to make sure they are all still valid. For example, if you gave a developer access years ago, you probably don't need that user sitting around.
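Again assuming WP-CLI is available on the server (an optional tool, and the field list here is just one useful combination), you can audit a site's users from the shell:

wp user list --fields=ID,user_login,user_email,roles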

4. Install protection plugins

I know I said to reduce your plugins earlier, but I would recommend keeping some plugins that block malicious connections, monitor suspicious activity, and scan site files for malware. We recommend iThemes Security, as it offers a lot of different features in a single plugin, but you can look up what's popular and read reviews to decide what would be the best fit for your site. For example, if you have a site where users can upload files, it would be a good idea to scan those files as they are uploaded and block, or at least report, any that trigger warnings in a malware scanner.

Depending on how much protection you need, paid options are generally recommended over free ones, as they tend to offer more features, newer virus definitions, and a better chance that the latest exploits are blocked.

5. Make sure you have good backups

Having good backups isn't exactly a proactive step for protecting your site, but it is a reactive step that can greatly help if the need arises, and it's a good idea for any business that relies on the data on its site. Think about all the orders, profiles, records, logs, and other important information stored on your site, then imagine something wiping out the whole thing: maybe a hard drive crash, or malicious code injected somewhere that causes the data to be lost. If you have no backups at all to restore from, then depending on the nature of your business, this could mean hundreds of work hours to rebuild the site, lost revenue, lost customers, and a major hit to your site's reputation.

If you did have backups, depending on the frequency of the backups and how quickly the problem was noticed and rolled back, there may be little to no data loss, customers may not notice, and the site can quickly bounce back.

We highly recommend keeping multiple backups taken over a period of time. The more backups you have, the more restore points you have available. Having only a single daily backup could cause problems if an issue isn't noticed until three days after it happens. An active site may need continuous backups, while a static site that hasn't changed in months may not.

Storing your backups in different locations also helps: if a dedicated backup hard drive fails, you could still have remote backups saved on a different service that wouldn't be affected. Think of it as not putting all your eggs in one basket! For more information on good backup practices, see Best Practices: Developing a BackUp Plan.
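As a minimal sketch of what a manual backup might look like from the shell (the database name, user, and paths below are placeholders you would replace with your own):

mkdir -p ~/backups
# Dump the WordPress database (hypothetical credentials and database name)
mysqldump -u wp_user -p wp_database > ~/backups/wp_database-$(date +%F).sql
# Archive the site files (hypothetical document root)
tar -czf ~/backups/site-files-$(date +%F).tar.gz /home/user/public_html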

Hopefully, you gained something useful from this article! If you or a friend are in the market for a web host, feel free to talk to a Liquid Web tech by phone or in a chat 24 hours a day! Thanks for reading!

How to Check for Installed Packages on CentOS

Reading Time: 3 minutes

While managing your server, you’ll sometimes need to check on which software (or packages) you have installed on your system. You’ll need to know package names, version numbers, dates of installation, etc. In this Liquid Web tutorial, we’re going to be discussing how to inspect packages installed on your CentOS system. There are several ways to accomplish this, and we’ll discuss a few of them. Let’s dig in! To use these commands, you’ll need to log in to your server via SSH. For more information, see Logging into Your Server via Secure Shell (SSH).

Using RPM Package Manager

This first command uses the rpm package manager to poll for installed packages. It allows you to see every installed package on your system, along with the version currently installed:

rpm -qa
Note the -q means “query” and -a means “all”. We’re asking rpm to query all installed packages.

Let's examine a small portion of the results in detail. Note that you might not have these specific packages installed on your CentOS server. The important thing here is to understand how to read the output. Take a look at a small excerpt of entries from the list:

kpartx-0.4.9-123.el7.x86_64
dracut-033-554.el7.x86_64
elfutils-libs-0.172-2.el7.x86_64

Each entry can be broken up into three parts. From left to right, these are:

  • Package name: (kpartx)
  • Version: (0.4.9-123.el7)
  • Architecture: (x86_64)

Instead of displaying all installed packages, rpm can also be used to search for a single package. Let’s use rpm to query kpartx:
rpm -q kpartx
You’ll see the output displays the same package name and version we saw from rpm -qa.
kpartx-0.4.9-123.el7.x86_64
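Since the introduction mentioned installation dates, it's worth noting that rpm can report those too. The --last option sorts packages by install time and prints the date next to each entry; piping through head limits the output to the most recent installs:

rpm -qa --last | head -5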

Using Yum to Check Installed Packages

Using rpm is not the only way to check for installed packages on your system. Now we will discuss how to use “yum” to accomplish the same task. Try the following command:
yum list installed
You will see that the list yum provides is formatted slightly differently. Let’s look at an entry in depth.
whois.x86_64 5.1.1-2.el7 @base
  • The first column shows the package name and architecture: (whois.x86_64).
  • The second column shows the version installed: (5.1.1-2.el7).
  • The third column shows the repository the software was installed from: (@base).
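Because yum prints one package per line, you can pipe the list through grep to narrow it down to a single package, for example:

yum list installed | grep whois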

Using Yum to View Historical Installation Data

We can also use yum to view historical installation data on your system. Run the following command to see a list of every time yum was used to install, remove, or upgrade a package:
yum history
Here is an example of the output you might see. Your system will show different results here, and that is OK. We’re just interested in learning how to read the output.
ID | Command line | Date and time | Action(s) | Altered
-------------------------------------------------------------------------------
10 | upgrade | 2019-06-01 04:13 | I, U | 12 EE
9 | install whois | 2019-05-04 17:40 | Install | 1
8 | install python36 | 2019-05-03 21:23 | Install | 2
7 | install epel-release | 2019-05-03 21:02 | Install | 1
6 | install bind-utils | 2019-05-03 19:33 | Install | 2
5 | install docker-ce docker | 2019-05-03 17:37 | Install | 4
4 | install yum-utils yum-co | 2019-05-03 17:26 | Install | 6
3 | install git | 2019-05-03 17:19 | Install | 4
2 | install vim | 2019-05-03 17:18 | Install | 31
1 | update | 2019-05-03 17:09 | I, U | 57

Notice the column headings: ID, Command line, Date and time, Action(s), and Altered. This is a good summary of when yum was used, but it lacks detailed information. Let's examine one of these history entries in detail. Try the following command, replacing ID_NUMBER with the actual ID of the transaction you want to inspect.
yum history info ID_NUMBER

Here is some example output:
Loaded plugins: fastestmirror
Transaction ID : 9
Begin time : Sat May 4 17:40:24 2019
Begin rpmdb : 356:8ab21eca9f4a219812e33c41a73fbd4eb7de1ed8
End time : (0 seconds)
End rpmdb : 357:cf2bf4588ba4d3263d1c9af051c3bcc525596a68
User : Cloud User <centos>
Return-Code : Success
Command Line : install whois
Transaction performed with:
Installed rpm-4.11.3-35.el7.x86_64 installed
Installed yum-3.4.3-161.el7.centos.noarch installed
Installed yum-plugin-fastestmirror-1.1.31-50.el7.noarch installed
Packages Altered:
Install whois-5.1.1-2.el7.x86_64 @base

In this tutorial, we discussed how to use rpm and yum to search your CentOS server for installed packages. These utilities are both critical tools for Linux sysadmins on CentOS systems. Of course if you have any questions about how to use these utilities on your own Liquid Web server, let us know! The Most Helpful Humans in Hosting are standing by 24×7 and we’ll be happy to answer your questions.

How to Convert .htaccess Rules to NGINX Directives

Reading Time: 6 minutes

NGINX is a web server that is becoming an increasingly popular option for web hosting; roughly sixteen percent of all sites on the internet now run on NGINX, and that share keeps growing as clients look for a web server that can serve content faster. Depending on which modules you load, NGINX can also be used for proxies, reverse proxies, load balancing, and more. One of the significant differences between Apache (another popular web server) and NGINX is the way each handles access rules. If you are familiar with using .htaccess rules in Apache, then NGINX's method of including directives in the server's vhost block will be a substantial change.

We will be showing how to convert .htaccess rewrite rules to NGINX rewrite directives. The NGINX rewrite directives need to be placed within the server block. Many server configurations include this server block in the vhosts file, while some use a separate NGINX configuration file (for more information about the NGINX configuration file, see Redirecting URLs Using NGINX). To complete this task, you will need to understand the basic NGINX directives discussed in the next section.

Introduction to NGINX Rewrite and Return Directives

The most commonly used directives in NGINX are the return and rewrite directives. Using these directives, a client visiting a page can be sent to a different directory or a different landing page, and requests can also be redirected to an application, depending on the directives you specify. For example, clients visiting the page from a smartphone can be forwarded to a script coded specifically for phone browsers. Another example would be forwarding a client based on IP or geographical location, making your site region-specific and tailored to the visitor.

NGINX Return Directive

The return directive is a bit less complicated than the rewrite directive. Best practice is to use this directive over the rewrite directive whenever possible. You will typically include the return in a server context that specifies the domains to be rewritten. I have included a common example below. Clients visiting the site will be redirected to the domain specified after the 301 status code. Using this directive will forward the client that visits www.liquidwebtest.com to www.liquidweb.com.

server {
    listen 80;
    server_name www.liquidwebtest.com;
    return 301 $scheme://www.liquidweb.com$request_uri;
}

  • server { } defines the server block.
  • listen sets the port the server listens on.
  • server_name matches the incoming requested hostname.
  • return tells the server what to send back for the request.

NGINX Rewrite Directive

The rewrite directive is somewhat different from the rewrite rules in .htaccess. It needs to be placed in a specific location or server block in order to rewrite the URL. The rewrite directive is usually used to perform smaller, tedious tasks; for example, it is used in some cases to capture elements in the original URL or change elements in the path. The NGINX rewrite directive can get very complicated, but once you understand the basic syntax it is a lot less intimidating. The basic syntax for an NGINX rewrite directive is below.

rewrite regex replacement [flag];

It is important to know that a rewrite directive that triggers a redirect will return an HTTP 301 or 302 status code. If you need your web server to return a different status code, the return directive is needed after the rewrite. I have included an example below from NGINX's rewrite module documentation.

server {
    ...
    rewrite ^(/download/.*)/media/(.*)\..*$ $1/mp3/$2.mp3 last;
    rewrite ^(/download/.*)/audio/(.*)\..*$ $1/mp3/$2.ra last;
    return 403;
    ...
}

In this example, URLs that start with /download and contain a /media/ or /audio/ element are matched. A URL with /media/ is rewritten to the corresponding /mp3/ path with an .mp3 extension, while a URL with /audio/ is rewritten to the /mp3/ path with an .ra extension. If the URL does not match either rewrite rule, the return directive sends a 403 to the client.

Converting .htaccess rules to NGINX directives

Hopefully by this point we have a basic understanding of the two most commonly used NGINX directives. Learning these rules will take some time, as they can be very complex, and learning regular expressions is very helpful in the process. We'll now work through examples of commonly used .htaccess rules and convert each one to NGINX directives.

Example: Redirecting from example.com to www.example.com

Adding www to a URL when a client requests content from your server can help certain sites (like those hosted on WordPress) function more efficiently. A common .htaccess rule to accomplish this rewrite is:

RewriteCond %{HTTP_HOST} example.com
RewriteRule (.*) https://www.example.com$1

As I mentioned earlier it is best practice to use the return directive whenever possible. Below we will be creating a server block within the nginx.conf to accomplish the same task as the .htaccess rewrite rule above.

server {
    listen 80;
    server_name example.com;
    return 301 http://www.example.com$request_uri;
}
server {
    listen 80;
    server_name www.example.com;
    #...
}

In this example, two server blocks are defined with brackets {}. We are telling NGINX to listen on port 80 for requests to example.com, then to return a 301 "redirection" to www.example.com. We usually split these rules into two server blocks to make the directives as efficient as possible. The second block is not always needed, but it will serve content from the working directory if www.example.com is requested. If no exact match is found, NGINX then checks whether there is a server_name with a starting wildcard that fits; the longest match beginning with a wildcard will be selected to fulfill the request.

You can test your syntax by running the following command:

nginx -t

This allows you to test the syntax for errors before loading the changes in the configuration file and possibly causing issues on your live site. Once you have edited the NGINX configuration file be sure to restart NGINX using a daemon or simply running the command below.

nginx -s reload
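Once NGINX has reloaded, you can confirm the redirect from the command line with curl. The Location header should point at the www hostname (the output below is illustrative):

curl -I http://example.com/somepage
HTTP/1.1 301 Moved Permanently
Location: http://www.example.com/somepage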

Example: WordPress Permalinks

In this example, I have included one of the most common sets of .htaccess rules used today. The rules below allow WordPress to utilize permalinks, and they are installed by default with WordPress on an Apache server. A permalink is a URL that is intended to remain unchanged. For example, domainexample.com/blogexample.php can be loaded as domainexample.com/blog in your browser's address bar.

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>

Below is the NGINX equivalent. No return or rewrite directive is needed here, as we are only allowing the content management system to hide the paths using permalinks. For more information on this task, please see NGINX's documentation on permalinks.

location / {
    try_files $uri $uri/ /index.php?$args;
}

You can test your syntax by running the following command:

nginx -t

This allows you to test the syntax for errors before loading the changes in the configuration file and possibly causing issues on your live site. Once you have edited the NGINX configuration file be sure to restart NGINX using a daemon or simply running the command below.

nginx -s reload

Example: Forcing http to https

Another popular use for the .htaccess file is forcing the browser to load the site over https instead of http. This allows the browser to verify the site is not a security risk by confirming the site exists on the server it claims to be on (see What Is an SSL Certificate?). Certain certificates can also verify business details such as an organization's identity and location. This helps prevent visits to malicious sites that may cause harm to your personal computer or private information.

The following .htaccess rule will force https, so that requests using port 80 for example.com will be redirected to https://www.example.com.

RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com [NC]
RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://www.example.com/$1 [R,L]

To accomplish this with NGINX, we will use the NGINX return directive.

server {
    listen 80;
    server_name example.com;
    return 301 https://www.example.com$request_uri;
}

You can test your syntax by running the following command:

nginx -t

This allows you to test the syntax for errors before loading the changes in the configuration file and possibly causing issues on your live site. Once you have edited the NGINX configuration file be sure to restart NGINX using a daemon or simply running the command below.

nginx -s reload

Conclusion

We could go on and on with examples, but hopefully at this point we have a basic understanding of converting .htaccess rules for Apache to NGINX directives. If you require further information on accomplishing these tasks, you can always reach out to our support team for assistance. However, we do not fully support NGINX; it is considered Beyond Scope support, which means we will assist you as much as possible, but we may not be able to resolve your issue and may instead refer you to a developer for additional assistance. If you want to use the NGINX web server to host your content, we have multiple options, including our VPS lineup, to meet your business requirements. NGINX is also being developed for use on cPanel servers, although it is not currently supported or recommended in production situations. See cPanel's blog for more information.

What is an EPP Code?

Reading Time: 4 minutes

Having control over a domain name grants you several extremely powerful options, such as renewing it, cancelling it, and pointing inbound traffic to a location of your choosing. Because of the long-term impact all of these changes could have, Registrars (such as eNom and GoDaddy) take security very seriously. What does this mean for you? If you want to transfer a domain to a new Registrar, you are going to need the authorization code from the current Registrar, and that takes the form of the EPP Code.

The EPP Code (or EPP Key) is a complex, system-generated password that a Registrar will create upon request and provide to the domain owner via the contact email address on the domain's WHOIS record. The domain owner then supplies the code to the new Registrar, who uses it to confirm ownership. These codes customarily expire within a day or two, so if the owner does not submit the transfer request quickly, they may have to get a replacement EPP Code and try again. It should also be noted that a domain cannot be transferred within the first 60 days after registration, nor within 60 days of a previous transfer.

How Can I Get an EPP Code?

Requesting an EPP Code is a fairly simple process, usually just requiring you to log in to the Registrar that currently controls the domain name you want to transfer and press a "Request EPP Code" button. If you used a web host to register the domain for you, you may have to open a support ticket with your host to get the code generated for you.
For reference, here are the processes with several different Registrars and web hosts. If you are not sure who the current Registrar is for your domain name, you can do a WHOIS lookup.
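If you have shell access, one quick way to check is the whois command line tool (the domain below is a placeholder, and the exact field names vary a bit between registries):

whois example.com | grep -i registrar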

At Liquid Web?

To transfer a domain name away from Liquid Web, first log into your manage portal and click on the Domains tab on the left-hand side.
Then click on the domain you want to transfer, which will bring up the details page featuring a "Transfer this Domain" button on the right.

Clicking that will show you a confirmation prompt.

Confirm the domain and registrant email address are correct, and then press "Start Ongoing Domain Transfer" to begin. If this has been successful, you'll see a confirmation message.

Once you receive the email with the code, you’re all set and ready to proceed. If you do not get an email, you’ll want to contact Liquid Web support.

At 1&1 IONOS?

To transfer a domain away from 1&1 IONOS, you'll want to check the Domains section in your My IONOS panel. Clicking on "Renewal & Transfer" will allow you to select your domain. Make sure the "Domain Transfer Lock" option is disabled, and once it is, click "Show Activation Code." If this does not work, you will need to contact IONOS support for more information.

At Dreamhost?

To transfer a domain from DreamHost, you'll need to access your Panel and unlock your domain. Check the Registrations section, and if the status for "Locked?" shows "Yes," simply click the word to unlock it; if it shows "No," you're all set. Clicking "Or Transfer Away from Dreamhost" will display the EPP Code generation prompt. Click the "Reveal Auth Code" button to display the code, though depending on the TLD (Top Level Domain, such as .com or .gov), the code may need to be emailed instead. If you do not receive a code or an email, you'll want to contact support.

At GoDaddy?

If your domain name is currently with GoDaddy, you'll need to log into the Domain Control Center specifically. Click the button to "Manage" the domain in question and scroll down to "Additional Settings," where you'll see the option to "Get authorization code." Once that link is pressed, the code will automatically be emailed to you. If the option shows as disabled, your domain may be within 60 days of being registered, or it may have been transferred previously and still be under a 60-day lock. Contact GoDaddy support for more information.

At HostGator?

If HostGator is your current Registrar, you’ll want to access the Domains section of your Dashboard and click on the domain in question. If you click on the ‘lock’ icon, the “Domain Locking” options are presented. Make sure the “Domain Locking” slider is in the OFF position, and press the “Request Your EPP Key” button to make the code visible. If you do not have a billing account with HostGator or your domain does not show up, you’ll want to contact HostGator’s support for assistance.

At NameCheap?

To transfer a domain name from NameCheap, first log into your manage portal and click on the "Domain List" section. Select the domain you would like to transfer using the "Manage" button, and under the "Sharing & Transfer" tab, set "Domain Lock" to OFF and press "Auth Code." You will be prompted for your reason for transferring, and the code will then be emailed to the registered email address. If the prompt does not complete or you do not get an email, contact NameCheap support.

At Network Solutions?

If your domain is currently registered with Network Solutions, you will need to call their support line that is specifically for domain names at 1-888-642-0209 and request the EPP Code on the phone. This will be emailed to the registered contact.

What Do I Do With My EPP Code?

If you have not already started the domain transfer process with your new Registrar or web host, you'll want to do that now. This is routinely handled in a support ticket, and when the request is started, support will ask for the authorization code. Simply give them the EPP Code, and the Registrars will communicate and confirm the transfer. You may be required to renew the domain name in question, which will push back the expiration date, but the task is otherwise complete, and your domain name should now be in the hands of your new Registrar!

How to Use the Mail Queue Manager in WHM

Reading Time: 3 minutes

The Mail Queue Manager feature in WHM allows you to view, delete, and attempt to deliver queued emails that have not yet left the server. It can be a handy tool for diagnosing a variety of issues with mail deliverability, such as spotting signs of a compromised account sending spam from the server.

Accessing Mail Queue Manager in WHM

If you are unfamiliar with how to access WebHost Manager (WHM), you can take a look at our article Getting Started with WHM.

Once logged into WHM, you can navigate to the Mail Queue Manager page by typing "mail queue" into the search box above the left menu, then clicking the Mail Queue Manager option.

Searching for Queued Emails

From the Mail Queue Manager main page, you will see a section for searching through queued emails. You can input a Sender, Recipient, or Message ID (a unique identifier the mail server gives each email sent and received) to filter the queued messages.

Once you input a search for one of these options, select the corresponding option from the Select Query dropdown menu next to the text box: Search Sender, Search Recipient, or Search Message ID.

You can also select No Filter if you do not want to restrict the search to one of these specific options.

The search filter also includes a section to select a particular time frame by entering a Start Date and End Date. This will filter the search results down to emails that fall within this time frame. Please note: WHM only retains this data for 10 days, so email outside of that time frame will not be included in the search results.

Once you’ve input the text to search, and selected the filter options, click the Run Report button.

Below is an example of a search for all messages in which the sender of the email matches "user@domain.com".

Viewing Queued Emails

To view an email currently in the queue, click the magnifying glass icon under the Actions column.

This will display the email's simple headers and text content, and provide you with options to delete the email, attempt delivery, download the email in .eml format (which you can open in mail client applications such as Microsoft Outlook), or view the email's extended headers and control data.

Delivering Queued Emails

As shown above, you can view a specific email and click Deliver Message Now to attempt delivery of the message. You can also select messages from the main page of the Mail Queue Manager and click Deliver Selected.

The option Deliver All will attempt to send out all emails currently in the queue.

Deleting Queued Emails

To delete an email currently in the queue, you can view a specific email using the instructions above and then click Delete Message.
Multiple emails can be deleted from the queue using the main page of the Mail Queue Manager. You can either select each email you’d like to remove and then click Delete Selected, or you can remove all queued emails by clicking Delete All.
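Under the hood, WHM's Mail Queue Manager is a front end to the Exim mail server's queue, so on a cPanel server you can perform the same actions over SSH. A few equivalent Exim commands, offered as a supplemental sketch rather than part of the WHM interface:

exim -bpc (count the messages currently in the queue)
exim -bp (list queued messages with their IDs, senders, and recipients)
exim -qff (force a delivery attempt for all messages, including frozen ones)
exim -Mrm MESSAGE_ID (remove a specific message from the queue)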

Unfreezing Frozen Queued Emails

You may see emails listed as Frozen under the Status column. These are emails that failed to deliver after multiple attempts, so to help the queue continue to run efficiently, the system 'freezes' them. To unfreeze an email, click the second icon under the Actions column.

Once unfrozen, the email will attempt to send during the next queue run. Forcing a delivery attempt of a frozen email will also unfreeze the selected email.

Multiple frozen emails in the queue may indicate an issue that requires further investigation, such as a remote mail server blocking the mail transaction.

For more information on diagnosing email deliverability issues, you can take a look at our article entitled Troubleshooting: RBLs and Email Delivery Problems (Rejected Email Messages).

An Intro to React JS

Reading Time: 6 minutes

In our modern world of smartphones and apps, it is more important than ever to have a fast, responsive website that impresses your visitors. Created by the development team at Facebook, ReactJS is a JavaScript 'framework,' or method of building web pages and apps, that can 'react' to user interaction and external changes. ReactJS does this by way of components that can refresh themselves and their contents without a page reload. Better still, these components are modular: they can be coded quickly (called 'hacking' in the ReactJS community) and reused easily between projects.

While programming React JavaScript code is outside the scope of this tutorial, we are happy to help you and your development team hit the ground running by creating a development environment. Our focus is on getting the official ReactJS tutorial example running on a Liquid Web WHM/cPanel Virtual Private Server or VPS. Servers built on our VPS platform work well as development/testing servers and can be easily created. Feel free to spin one up to follow along with me today!

A dedicated server may provide more stability for your application over the long term once development has been completed. If your application is HIPAA-related, you will need a dedicated server with HIPAA compliance. The differences between the VPS and Dedicated Server platforms can be found in our Knowledge Base article. Additionally, ReactJS can also be installed on our Plesk, Windows, and Ubuntu platforms. Let's get started!

Install NodeJS

First, we will need to install the latest version of NodeJS. This application allows our JavaScript files to import (or reference) each other, share global variables, access advanced command-line arguments, install additional modules, and more.

At the time of writing, the latest version of NodeJS is 11.x. Always use the most up-to-date software to ensure the stability and safety of your applications. Today we will be downloading and installing the latest version.

First, we log into our terminal and set up the NodeSource repository:

curl -sL https://rpm.nodesource.com/setup_11.x | bash -

Next, we install NodeJS and its dependencies:

yum install -y nodejs gcc-c++ make

To verify the installation and check the version, we run:

node --version

Install Serve

ReactJS apps can be served by Apache or by the lightweight 'Serve' application. Serve can run alongside Apache or NGINX by listening on an alternate port.

To install Serve globally, we run the following command:

npm install -g serve

In this tutorial, we will be running this application on a newly created cPanel account, but it can also run on any existing account. Our cPanel username in this example is 'react', and our development folder, meant to house our source code, will be named 'dev'. This dev folder will sit in our cPanel account's root folder (/home/react/dev). A folder can be created via File Manager or FTP, but in this example, we are doing this via the command line:

mkdir /home/react/dev/

Install ReactJS

Our next step is to use the 'create-react-app' command to build the framework and install the necessary prerequisites. No need to worry about updating 'create-react-app'; this application updates automatically when run.

This command requires a path to build properly. In our example, we are running the following to build in our dev folder:

npx create-react-app /home/react/dev

The create-react-app process can take a few minutes, but it will keep you updated on its progress as it runs.

When completed, you’ll see an overview of some important development commands and a warm welcome into the ReactJS Community.

The create-react-app script generates the project's starting files with default content. With that, you and your developers can begin the development process.

Note
Three main components of a ReactJS application are:

  • .html files: these provide the structure of your app.
  • .css files: these contain the styling for your app content.
  • .js (JavaScript) files: these provide the app's functionality.

Run ReactJS Default Application

While Liquid Web Support is not able to assist with code-related issues, in this tutorial we will go a step further and launch the default application. Finally, we'll review and launch the ReactJS 'tic-tac-toe game' tutorial application.

First, let's confirm that ReactJS is running by building the default application. We want to be at the command line, in our working folder:

cd /home/react/dev/

Run the following code to have React build the default package contents:

npm run build

There are a few options here. By default, 'Serve' uses port 5000, and it is run by calling the following:

serve -s build

Note
You may notice an error regarding 'xsel'. That application is used to copy from the command line to a Linux desktop environment, or GUI (Gnome, KDE, and Mate, among others). In our case, our Linux server is not running a desktop environment, so we do not need (nor could we use) the xsel library, and we can safely ignore the error.

Viewing ReactJS Default App

To have our app available to the outside world, we will need to ensure that port 5000 is open in the firewall. This port is not open by default, but we can adjust that easily.

We are looking for the section of our firewall documentation called 'Opening and Closing Ports in the Firewall'. Once the port is open, you'll see the default ReactJS application running in the browser. We do not own the domain 'react.com', so we will need to adjust the /etc/hosts file locally on our workstation. More information on editing your /etc/hosts file can be found here.

If you own your domain and your site's DNS 'A' record points to the Liquid Web server IP, this should also be visible to you. If everything is configured and running properly, you will see the default ReactJS welcome screen.
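You can also check from the command line that Serve is answering on port 5000. A quick curl request (the hostname here assumes the /etc/hosts entry described above) should return an HTTP 200 along with the default index page:

curl -I http://react.com:5000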

Run ReactJS

It is possible to run ReactJS applications over the typical Apache or NGINX ports. Please note that running ReactJS with Serve over port 80 means Apache or NGINX would need to be disabled; if the primary focus of your server is to handle this application, Apache can be disabled server-wide. Serve is a very lightweight application and works very well. If you do not have another service running on port 80, the following command will start Serve on port 80:

PORT=80 serve -s build

The above can be adjusted to a different port as well; in this way, multiple ReactJS applications can be run on the same domain. In addition, Apache can serve your ReactJS application as static content. To do so, we need to copy the contents of our application's /build folder to the desired location within the directory configured for web delivery.

In our case, we will be copying /home/react/dev/build to /home/react/public_html. This can be done on the command line with:

cp -r /home/react/dev/build /home/react/public_html

Depending on how you built the application package, you may need to adjust the permissions of your files. In our case, the ownership of these files was set to 'root', so we are changing the ownership to 'react'.

In this example, we are adjusting all files in the public_html folder to be owned by the user 'react' with the following command:

chown -R react:react /home/react/public_html/*

To have Apache serve the above, we are going to be adjusting the .htaccess configuration in our project.

vim /home/react/public_html/.htaccess

With our text editor, we are going to be adding the following:

Options -MultiViews
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.html [QSA,L]

If everything is configured correctly, you should now see the contents of your application as delivered by the Apache service when visiting your site. You now know the basics of how to get the default ReactJS environment running on a Liquid Web Linux WHM/cPanel VPS Server!

Example ReactJS Application

We are going one step further here by working through a tutorial. The official ReactJS tutorial covers every aspect of getting started, with great detail on programming the components needed to get a tic-tac-toe game running. We advise going line by line through the code as well as the documentation; with this knowledge, you can build your own application. The best source for ReactJS syntax and concepts is the official documentation. Feel free to take your time going through these examples and the documentation. There is a lot to cover, but you can start small and build your way up. Today we are going to copy the contents of the tutorial's Final Code to see how it runs on our Liquid Web server.

There are multiple ways to adjust the contents of the index.html, index.css, and index.js files. While outside the scope of this tutorial, you can use command line text editors, FTP, cPanel's File Manager tool, Git, and many more to edit files. In our case, we are using the cPanel File Manager's text editor. A walkthrough of this interface and how to edit files can be found here.

First, we open the File Manager, navigate to the location of our file (/home/react/dev/public/index.html), select the index.html file (1), and click "Edit" (2).

We are going to be copy-pasting the code from CodePen into our server-side files.

Second, we do the same with /src/index.css.

Finally, we copy-paste the contents of /src/index.js.

At the top of the index.js file, we add some needed code; this is necessary for referencing the React installation and our CSS files.

import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';

With all three files copied over, we are going to rebuild our project. To rebuild, we go back to our development folder and rerun our build script, then test by serving the new build with our serve script. By visiting react.com:5000, we see our tic-tac-toe game live on our Liquid Web server!

In conclusion, we went over what ReactJS offers developers and installed and customized ReactJS. With the official documentation, you now have all you need to build out your dream application. We cannot wait to see what you have in mind!


How to Install and Configure Puppet on CentOS, Fedora, Ubuntu or Opensuse

Reading Time: 4 minutes

What is Puppet?

Puppet: A Closer Look At Who Holds The Strings

Puppet is intuitive, task-controlling software that provides a straightforward method for managing Linux and Windows server functions from a central master server. It can perform administrative work across a wide array of systems, with the work primarily defined by a "manifest" file for the group or type of server(s) being controlled.

System Requirements

Puppet uses a master/client setup for communication between the master and client servers. The master server requires more resources than the client servers utilize. The resources needed on the master server will mainly depend on:

  • The number of remote agents (servers) being utilized
  • How frequently those remote agents check in to the master server
  • How many resources are being managed on each remote agent
  • The complexity of the manifest files and modules in use
Note
A Puppet master server must run on a UNIX variant, also known as a "*nix" operating system. Currently, Puppet masters CANNOT run in a Windows environment.
Master Hardware Requirements

The minimum hardware requirements for the Puppet master servers will be based on multiple factors as stated above and noted in Puppet’s guidelines.


Client Software Platforms

The Puppet-agent (or client) packages are available for a wide range of platforms; see Puppet's documentation for the current list.


Dependencies

If you are installing the Puppet client using an official distribution package via a repository, then your system's package manager will usually ensure that the proper dependencies are installed. If you install the agent on a platform without a supported package, you must manually install the dependent packages, libraries, and gems:

  • Ruby 2.5.x
  • CFPropertyList 2.2 or later
  • Facter 2.0 or later
  • The msgpack gem from MessagePack, if you’re using msgpack serialization


Timekeeping and Name Resolution

Before installing the client, there are certain network requirements you will need to prepare, review, and consider. The most important aspects include time syncing and settling on a plan for name resolution.

Timekeeping

You will want to make sure that a Network Time Protocol (NTP) service is in place to keep the time in sync between the master server (which acts as the certificate authority) and its clients. This is recommended because odd certificate issues can develop if a server's time drifts out of sync. A service like NTP (available on most servers) assures accurate timekeeping and reduces the risk of errors like this occurring.
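As a sketch of how this might look on a CentOS master or agent (package and service names can vary by distribution and version):

yum install -y ntp
systemctl enable ntpd
systemctl start ntpd
timedatectl status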

Name resolution

The second component is deciding on an iterable naming convention. For example, using a master name like puppet.domain.com establishes continuity in the naming convention, allows optimal master communication, and ensures that all future agents can reach the master. You can simplify this by utilizing a CNAME record (a name-forwarding DNS entry) to ensure the master is always reachable.
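For example, assuming a hypothetical zone where the master's real hostname is master01.domain.com, the CNAME record and a quick verification might look like this (203.0.113.10 is a placeholder address, and the dig output shown is illustrative):

puppet.domain.com. IN CNAME master01.domain.com.

dig +short puppet.domain.com
master01.domain.com.
203.0.113.10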


Firewall Configuration

In a master/client setup, the master server must have port 8140 open to allow for incoming connections from the remote clients. You can use either of the following commands to check that the port is open and listening:

root@master [~]# netstat -tulpn | grep LISTEN |grep 8140
root@master [~]# lsof -i -P -n | grep LISTEN |grep 8140

If nothing is returned by the above commands, then you'll need to open port 8140. To open the port in the UFW firewall, use the following command:

root@master:~# ufw allow 8140/tcp
Rules updated
Rules updated (v6)
root@master:~#


Puppet Installation

Usually, Puppet uses approximately 2 GB of RAM by default. Plan on this amount plus any additional RAM needed to run the server's OS itself. If you were planning on a 2 GB server, opt for one with 4 GB of RAM if you are going to use it as a Puppet master.

Puppet is available on multiple OS variants including:

  • Red Hat/CentOS/Fedora
  • Debian/Ubuntu
  • SUSE Linux Enterprise Server

The basic install steps across all of the above-mentioned OSes are as follows:

Available Puppet Repositories

root@master [~]# wget https://apt.puppetlabs.com/puppet-release-bionic.deb
root@master [~]# dpkg -i puppet-release-bionic.deb
root@master [~]# apt update
root@master [~]# apt install puppetserver
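The commands above add the repository on Ubuntu 18.04 (bionic). On a Red Hat-based system, the equivalent first step would be installing Puppet's release RPM (the package below targets EL7; adjust it for your release):

root@master [~]# rpm -Uvh https://yum.puppet.com/puppet-release-el-7.noarch.rpm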

Note
Our Fully Managed servers (cPanel or Plesk) wouldn’t be good options for Puppet implementation since additional repositories may conflict.

Install the Puppet Master’s Software

Red Hat/CentOS/Fedora

yum install puppetserver

Debian/Ubuntu

apt-get install puppetserver

SUSE/Opensuse

zypper install puppetserver


Start the Puppet Master Service

Red Hat/CentOS/Fedora

systemctl start puppetserver

Debian/Ubuntu

service puppetserver start

SUSE/Opensuse

/etc/rc.d/puppetmaster start


Install the Puppet Client’s Software

Yum:

yum install puppet-agent

Apt:

apt-get install puppet-agent

Zypper:

zypper install puppet-agent


Puppet Configuration

Puppet contains around 200 different configuration settings, located within the puppet.conf file. For most servers, you will only need to adjust about 20 settings or fewer, depending on your server's setup. You can use the command below to set the needed values.

puppet config set

We’ve listed the 5 most requested settings to suit your specific needs:

  • dns_alt_names – This is a list of allowed hostnames acting as the Puppet master.
  • environment_timeout – This setting defaults to 0 and should be left untouched unless you have a particular reason to alter it. You can set it to unlimited to make master refreshes a part of your standard code deployment process.
  • environmentpath – The environment path defines the locations where Puppet can find the specific directories for any unique environments.
  • basemodulepath – This is a list of directories that contains the Puppet modules used in various environments.
  • reports – Directs which report handlers, listed below, to use.
    • HTTPS – Sends reports via HTTP/HTTPS as a POST request to the address defined in the reporturl setting.
    • Log – Sends reports to the local default log destination (usually syslog)
    • Store – Hosts will send a YAML dump of data to a local directory (defined by the reportdir setting in the puppet.conf)

The config reference provides a more comprehensive array of available options for modifying your server to suit your specific needs.
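As an illustrative example of the syntax (the hostnames are placeholders), setting dns_alt_names on the master with the puppet config tool might look like this:

puppet config set dns_alt_names 'puppet,puppet.domain.com' --section master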


More Information

Overall, Puppet is an attractive addition to your everyday toolset for managing and automating tedious tasks. Once it is installed and configured, it will help maintain your day-to-day server tasks with ease. You may want to consult the Puppet documentation for more in-depth information on this topic.


How Can We Help?

If you would like more information on how this software can benefit your current setup, simply reach out to us via a phone call, chat or ticket, and one of our Most Helpful Humans in Hosting will follow up with you to advise on how best you can integrate this process into your existing infrastructure! We are looking forward to speaking with you!


How to Install MySQL on Windows

Reading Time: 2 minutes

If you're using a Windows-based server to host your content, you may be using Microsoft's database server product, MSSQL. However, licensing restrictions can make using MSSQL difficult, especially for small businesses. Microsoft offers a free version of MSSQL called MSSQL Express that will be suitable for many users, but this version does have limitations on database size and memory usage. If you need a more robust database solution but want to try something with a lower cost (like a free, open-source database server), you could try the MySQL database server.

MySQL is a standard part of the typical Linux server build (or LAMP stack) but is also available for use on Windows operating systems. Depending on your needs, you could fully develop your database in MySQL. Many popular Content Management Systems (CMS) also use MySQL by default, so using MySQL to manage those applications may be beneficial. MySQL and MSSQL can be run on the same server at the same time, so you’re free to use both or to experiment as needed.

Installing MySQL on your Windows server is as simple as downloading an MSI Installer package and clicking through a few options.

  1. Download the MySQL Installer from dev.mysql.com. The two download options are a web-community version and a full version. The web-community version will only download the server by default, but you can select other applications (like Workbench) as desired. The full installer will download the server and all the recommended additional applications. (You'll also be asked to create a user account, but you can skip this part by scrolling down to the bottom and clicking "No thanks, just start my download".)

  2. Run the installer that you downloaded from its location on your server, generally by double-clicking.
    Note
    You can use this same MSI Installer to upgrade currently installed versions of MySQL as well! As is typical, the first step is accepting the license agreement, then click Next.

  3. Determine which setup type you would like to use for the installation:
    1. Developer Default: this is the full installation of MySQL Server and the other tools needed for development. If you are building your database from the ground up or will be managing the data directly in the database, you’ll want to use this setup type.
    2. Server Only: if you only need MySQL Server installed for use with a CMS or other application and will not be managing the database directly, you can install just the server (you can always install additional tools later).
    3. Custom: this setup type will allow you to customize every part of the installation from the server version to whichever additional tools you select.

  4. Install the server instance and whichever additional products you selected. Then begin the configuration process by selecting the availability level (most users will use the default, standalone version).
  5. Complete the configuration process by following the onscreen instructions. You’ll want to make sure to install MySQL as a Service so that Windows can automatically start the service after a reboot or can restart the service if it fails. For additional, step-by-step instructions, see MySQL Server Configuration with MySQL Installer.
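Once the service is running, you can confirm the installation from a command prompt. Assuming the MySQL bin directory is on your PATH (the installer can set this up for you), something like the following should work; the exact version string will vary with your install:

mysql --version
mysql -u root -p -e "SELECT VERSION();"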

How to Install Nextcloud 15 on Ubuntu 18.04

Reading Time: 2 minutes

Similar to Dropbox and Google Drive, Nextcloud is self-hosted software that allows you to share files, contacts, and calendars. But, unlike Dropbox and Google Drive, your files will be private and stored on your own server instead of a third-party server. Nextcloud is HIPAA and GDPR compliant, with encryption and auditing capabilities available for your files. For this tutorial, we'll be installing our Nextcloud instance on an Ubuntu 18.04 LTS server. Continue reading "How to Install Nextcloud 15 on Ubuntu 18.04"

How to Install phpMyAdmin on Ubuntu 18.04

Reading Time: 1 minute

Working with a database can be intimidating at times, but phpMyAdmin can simplify tasks by providing a control panel to view or edit your MySQL or MariaDB database. In this quick tutorial, we'll show you how to install phpMyAdmin on an Ubuntu 18.04 server. Continue reading "How to Install phpMyAdmin on Ubuntu 18.04"