How to Sync Two CentOS 8 Servers Using File Replication

Reading Time: 8 minutes


All online businesses need to account for growth. As a business receives more visitors to its site, the underlying infrastructure needs to scale to provide the same level of performance that visitors are accustomed to. Horizontal scaling, the addition of more servers rather than increasing the power of the existing servers, is an easy way to increase our web servers’ capacity to handle traffic and to protect against hardware failure. Ensuring that the additional web servers have the same files and data is a potentially time-consuming and challenging task. Automating that task using free, open-source software, such as lsyncd, gives us a safe, secure, and repeatable method of copying files from one server to another.


This article assumes that we are utilizing two or more core-managed CentOS 8 servers running Apache in a default configuration. Installation of software may differ depending on the OS and default software configuration used. We also assume that there is a basic understanding of the functionality of Vim. Any text editor will work in place of Vim.

Software Used

This article explores a method of synchronizing data between two web servers running Apache utilizing open-source software called Lsyncd. To quote from the description on the GitHub page:

Lsyncd watches the local directory trees event monitor interface (inotify or fsevents). It aggregates and combines events for a few seconds and then spawns one (or more) process(es) to synchronize the changes. By default, this process uses rsync.

This means that lsyncd runs in the background and tracks changes we make to files in a specified folder. It collects those changes over a short period and then processes them, using rsync and SSH to “push” the file changes to the remote servers. The main benefits for us are:

  • Lsyncd is free – the software can be downloaded, configured, and used at no charge. 
  • Setup is simple – we only need to install a single package, and the configuration file uses Lua (though the syntax is rather straightforward).
  • Composed of reliable technology – rsync and SSH are old, well-used utilities that are readily available on every Linux-based machine.

Because of the free usage rights and ease of setup, Lsyncd makes for a perfect utility to synchronize data across two or more hosts. Several examples of when we might want to synchronize data across hosts include:

  • Load-balancing incoming requests[1] – this works best when the traffic levels are relatively low (or intermittent), or new and modified content is not frequently accessed. 
  • High availability – keeping in mind that there are multiple aspects of high availability. Using lsyncd to push data to another host that can take over in the event of a hardware failure is an excellent use-case.
  • Live / Running backups – a great way to keep a running record of the files and folders that have changed will ensure we push the changes to a second host for backup purposes.

[1] – If we have a high traffic site, we are better off using a shared file system that our web nodes can access simultaneously.

Unfortunately, there are some drawbacks to using Lsyncd that we need to consider when determining if this is the best way to synchronize data across multiple servers. 

  • Lsyncd is not a real-time synchronization mechanism. The default timeframe for pushing changes is every 15 seconds (although this can be modified in the configuration settings if necessary).
  • Lsyncd is a one-way push-based utility. This means we have a master server where we can create or edit files, and then the master server “pushes” the changes to the attached slave nodes. 
  • Any changes made on a slave node are not picked up or shared with the master or other nodes. Additionally, not all changes are pushed out of the master. Files that are created, deleted, or have their content modified are pushed out; however, ownership and permission changes are not transmitted to the slave nodes. 

Basic Configuration

Add EPEL Repo

To begin setting up Lsyncd, we need to add the software repository that contains the Lsyncd package. This is easily performed with the following command: 

[root@alt ~]# yum -y install epel-release

The installation will take a moment, and we will see a decent amount of output. Once it says “Complete!” we can move on.

[root@alt ~]# yum -y install epel-release
Loaded plugins: fastestmirror, langpacks, priorities
Loading mirror speeds from cached hostfile
1 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package epel-release.noarch 0:7-11 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
 Package Arch Version Repository Size
 epel-release noarch 7-11 system-extras 15 k
Transaction Summary
Install 1 Package
Total download size: 15 k
Installed size: 24 k
Downloading packages:
epel-release-7-11.noarch.rpm | 15 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : epel-release-7-11.noarch 1/1
  Verifying : epel-release-7-11.noarch 1/1
  epel-release.noarch 0:7-11

Now, we need to make sure that the repository we just set up is enabled. To accomplish this, we want to review the repo file itself. 

[root@alt ~]# vim /etc/yum.repos.d/epel.repo

We then need to ensure that the main [epel] section of the repo file is set to enabled=1.

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
enabled=1
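As a quick sanity check, we can pull the enabled flag out of the [epel] section from the command line. The sketch below writes a small sample file so it is self-contained; on the server, point the awk command at /etc/yum.repos.d/epel.repo instead.

```shell
# Sample repo file standing in for /etc/yum.repos.d/epel.repo.
cat > /tmp/epel.repo.sample <<'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
enabled=1
gpgcheck=1
EOF

# Print the first "enabled=" line that appears after the [epel] header.
awk '/^\[epel\]/{f=1} f && /^enabled=/{print; exit}' /tmp/epel.repo.sample
# prints: enabled=1
```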

Install the Lsyncd Software

Now we can install the Lsyncd software using the following command.

[root@alt ~]# yum -y install lsyncd

This process will take a few moments to complete, and then we can continue the setup once we see “Complete!”.

[root@alt ~]# yum -y install lsyncd
Loaded plugins: fastestmirror, langpacks, priorities
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 16 kB 00:00:00
 * epel:
epel | 5.4 kB 00:00:00
(1/3): epel/x86_64/group_gz | 90 kB 00:00:00
(2/3): epel/x86_64/updateinfo | 1.0 MB 00:00:00
(3/3): epel/x86_64/primary_db | 6.9 MB 00:00:00
166 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package lsyncd.x86_64 0:2.2.2-1.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
 Package Arch Version Repository Size
 lsyncd x86_64 2.2.2-1.el7 epel 83 k
Transaction Summary
Install 1 Package
Total download size: 83 k
Installed size: 227 k
Downloading packages:
warning: /var/cache/yum/x86_64/7/epel/packages/lsyncd-2.2.2-1.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY] 0.0 B/s | 0 B --:--:-- ETA
Public key for lsyncd-2.2.2-1.el7.x86_64.rpm is not installed
lsyncd-2.2.2-1.el7.x86_64.rpm | 83 kB 00:00:00
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
 Userid : "Fedora EPEL (7) <>"
 Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
 Package : epel-release-7-11.noarch (@system-extras)
 From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : lsyncd-2.2.2-1.el7.x86_64 1/1
  Verifying : lsyncd-2.2.2-1.el7.x86_64 1/1
  lsyncd.x86_64 0:2.2.2-1.el7

Configure SSH on Master

Now that the Lsyncd package is installed, we need to ensure the master host can push files to the slave hosts without requiring user intervention. We will accomplish this using SSH keys. For the purposes of this tutorial, we will assume that no other SSH keys are currently installed. To begin this process, we can create the SSH key with the following command.

[root@host ~]# ssh-keygen -t rsa

Alternatively, we can generate a stronger key by specifying a longer bit length and a descriptive comment:

[root@host ~]# ssh-keygen -t rsa -b 4096 -C "$(whoami)@$(hostname)-$(date -u +%Y-%m-%d-%H:%M:%S%z)"

This SSH key creation process will ask several questions. For this tutorial, we will use the defaults and no added password.

Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/username/.ssh/id_rsa.
Your public key has been saved in /home/username/.ssh/
The key fingerprint is:
The key's randomart image is:
+---[RSA 4096]----+
|. . . . |
| = o . . |
|o.= . . . . |
|oDo . . . . . |
|B. .. Y . |
|O+.. o . . |
|O++.o o . . . |
|=*. . ... . . o. |
|.o.=+.++. . . |
root@alt [~]#

After the SSH keys are generated, we will transfer the public key over to our slave host. This process will allow us to authenticate and access that host without needing to enter a password. We can transfer the key over with the following command.

root@alt [~]# ssh-copy-id

Next, we will need to provide the password one time (since the SSH key is not yet in place), and then we will be ready to use our new SSH key to access the slave.

[root@alt ~]# ssh-copy-id
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/"
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is SHA256:R+KfXlPf2mvWLCYs89sobGJZ/1IUsHvO9fne4/4EvJ0.
ECDSA key fingerprint is MD5:9d:bd:7d:d2:66:6d:cd:8b:d2:ba:dc:d5:bc:6a:02:71.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh ''"
and check to make sure that only the key(s) you wanted were added.
[root@alt ~]#

To ensure that lsyncd can utilize the SSH keys we created, we need to modify the SSH config file on the master to add a few snippets of information.

root@alt [~]# vim ~/.ssh/config

Edit the config file using vim, and add the following info.

Host dest_host
 HostName <slave_server_ip>
 User root
 IdentityFile ~/.ssh/id_rsa

As you can see, we created an entry that specifies the destination host, giving it a name (in this case, dest_host), the IP of the slave server, the user we’ll authenticate as (we are using root), and the location of the SSH private key on the master host (~/.ssh/id_rsa).
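We can confirm that the alias resolves the way we expect without actually connecting, using ssh -G, which prints the effective configuration for a host. The sketch below writes a throwaway config file so it is self-contained; on the master, drop the -F flag so ssh reads ~/.ssh/config instead.

```shell
# Throwaway config equivalent to the entry above (dest_host is the alias).
cat > /tmp/ssh_config.test <<'EOF'
Host dest_host
    User root
    IdentityFile ~/.ssh/id_rsa
EOF

# -G prints the resolved options for the host without opening a connection.
ssh -G -F /tmp/ssh_config.test dest_host | grep -E '^(user|identityfile) '
```

The output should include "user root", confirming which account lsyncd will authenticate as when it pushes files.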

Configure Lsyncd on Master

Next, we need to modify the Lsyncd configuration file. We will need to specify the following settings:

  • General log file location
  • Status log file location
  • Frequency to write status file

We also need to define the following specific settings to sync the data:

  • Method of synchronization
  • Source folder for the files we want to sync (we are using /var/www/html)
  • Destination host (dest_host)
  • Target folder on the destination host (we are using /var/www/html)

To begin, edit the config file using the vim command.

root@alt [~]# vim /etc/lsyncd.conf

From there, we modify the configs to match the parameters noted above.
Things to keep in mind:

  • We are using the defaults for most of these settings, including the statusInterval option, which controls how often the status file is written (every 10 seconds by default).
  • The “host= option” should specify the name we gave the host above when editing our SSH config file. In this case, it is dest_host.
  • The configuration file is written in Lua. Be mindful of spacing and the curly brace { } characters; that formatting is required by the syntax.

settings {
   logfile = "/var/log/lsyncd/lsyncd.log",
   statusFile = "/var/log/lsyncd/lsyncd-status.log",
   statusInterval = 10
}

-- Slave server configuration

sync {
   default.rsyncssh,
   source = "/var/www/html/",
   host = "dest_host",
   targetdir = "/var/www/html/",
   rsync = {
      compress = true,
      acls = true,
      verbose = true,
      owner = true,
      group = true,
      perms = true,
      rsh = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"
   }
}

Now that we have Lsyncd installed and configured, and our SSH keys in place to allow for password-free authentication, run the following commands to start and enable the service.

[root@alt lsyncd]# systemctl start lsyncd
[root@alt lsyncd]# systemctl enable lsyncd
Created symlink from /etc/systemd/system/ to /usr/lib/systemd/system/lsyncd.service.
[root@alt lsyncd]#

Now that Lsyncd is running, we can verify that it is monitoring the folder for any changes in the status log.

root@alt [~]# cd /var/log/lsyncd
[root@alt lsyncd]# tail lsyncd-status.log
Lsyncd status report at Thu Feb 6 10:07:37 2020

Sync1 source=/var/www/html/
There are 0 delays

Inotify watching 1 directories
  1: /var/www/html/

We will note in the log that the files are being synced from the master to the slave server. Prior to initializing the lsyncd service, these were the contents of the /var/www/html folder on each server:

[root@alt ~]# cd /var/www/html
[root@alt html]# ll
total 20
drwxr-xr-x 2 root root 4096 Feb 6 09:45 .
drwxr-xr-x 4 root root 4096 Feb 1 03:20 ..
-rw-r--r-- 1 root root 420 Feb 6 09:45 index.html
-rw-r--r-- 1 root root 7528 Feb 6 09:45 styles.css
[root@alt html]#

[root@opt ~]# cd /var/www/html
[root@opt html]# ll
total 0
[root@opt html]#

After a few moments, the Lsyncd service will pick up the changes in the folder on alt and compare them to the destination folder on opt. If differences are found, it will push those file modifications over to the slave node. We can review the primary Lsyncd log to verify that the transfer occurred and see which files were transferred.

[root@alt ~]# cd /var/log/lsyncd
[root@alt lsyncd]# cat lsyncd.log

Tue Feb 11 08:07:28 2020 Normal: Rsyncing list
Tue Feb 11 08:07:28 2020 Normal: Finished (list): 0
[root@alt lsyncd]#

In reviewing the folder on our slave node, we can see the transferred files are now in the destination directory.

[root@opt ~]# hostname
[root@opt ~]# ll /var/www/html
total 12
-rw-r--r-- 1 root root 420 Feb 6 09:45 index.html
-rw-r--r-- 1 root root 7528 Feb 6 09:45 styles.css
[root@opt ~]#


Now that the service is enabled, Lsyncd will start when the system is rebooted, continuously monitor for changes, and push those changes across to the slave node. This is a simple system that works well for specific use cases and is easily configured. Ultimately, the problem we are solving is one that does not depend on instant replication. For anyone running a load-balanced setup or needing a secondary copy of files on another machine, the benefits of Lsyncd far outweigh its disadvantages.

How to Install Istio

Reading Time: 5 minutes

What Is Istio?

Istio is an open-source service mesh that makes it easier for a team to create a network of deployed services. Istio consistently provides several vital services across a mesh network, such as:

  • Traffic Management: Istio simplifies the configuration of service-level properties like circuit breakers, timeouts, and retries.
  • Security: Istio provides an underlying secure communication channel between various endpoints.
  • Policies: Istio enforces specific policies to dynamically rate-limit the traffic to a service. It also applies whitelists, blacklists, and denials to restrict access to services, and supports header rewrites and redirects.
  • Observability: This includes comprehensive tracking, monitoring, and logging features.
  • Platform Support: This encompasses support for Kubernetes, Consul, and other services running on individual virtual machines.
  • Integration and Customization: This includes solutions for ACLs, logging, monitoring, quotas, etc.

To summarize, Istio helps with the rollout of multi-cloud deployments by letting you deliver, secure, control, and monitor services on your multi-cloud implementation. Because Istio is platform-independent, this installation does not reference a specific operating system.


Before you can start the installation of Istio, you need to set up a Kubernetes cluster with a version of K8s that is compatible with Istio. Istio 1.4 works with Kubernetes versions 1.13 – 1.15. Liquid Web offers several types of multi-node clusters that can be adjusted for your needs. 

If you are unsure precisely what Kubernetes is, you can read more about it in this article. In short, it is a fast, secure, and lightweight containerization platform for running microservices. More benefits of containerization can be found in this article as well.

After we have Kubernetes installed and a cluster set up, we can begin the installation of Istio.

Download Istio & Prepare the Installation 

First, we will download the latest version of Istio.

[root@host /]# curl -L | sh -

The output should look like this:

root@host:~# curl -L | sh -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   107  100   107    0     0    312      0 --:--:-- --:--:-- --:--:--   313
100  2804  100  2804    0     0   6258      0 --:--:-- --:--:-- --:--:-- 35493
Downloading istio-1.4.3 from ...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   614    0   614    0     0   2951      0 --:--:-- --:--:-- --:--:--  2951
100 32.7M  100 32.7M    0     0  25.2M      0  0:00:01  0:00:01 --:--:-- 39.8M
Istio 1.4.3 Download Complete!

Istio has been successfully downloaded into the istio-1.4.3 folder on your system.

Next Steps:
See to add Istio to your Kubernetes cluster.

To configure the istioctl client tool for your workstation,
add the /root/istio-1.4.3/bin directory to your environment path variable with:
	 export PATH="$PATH:/root/istio-1.4.3/bin"

Begin the Istio pre-installation verification check by running:
	 istioctl verify-install 

Need more information? Visit 

Then, cd into the Istio package directory with the command below.

root@host:~# cd istio-1.4.3/

Your installation directory should contain the following directories and files.

root@host:~/istio-1.4.3# ll
total 48
drwxr-x---  6 root root  4096 Jan  6 14:45 ./
drwx------ 10 root root  4096 Feb 11 12:56 ../
-rw-r--r--  1 root root 11348 Jan  6 14:45 LICENSE
-rw-r--r--  1 root root  6080 Jan  6 14:45
drwxr-x---  2 root root  4096 Jan  6 14:45 bin/
drwxr-xr-x  6 root root  4096 Jan  6 14:45 install/
-rw-r-----  1 root root   657 Jan  6 14:45 manifest.yaml
drwxr-xr-x 19 root root  4096 Jan  6 14:45 samples/
drwxr-x---  3 root root  4096 Jan  6 14:45 tools/

Next, add the istioctl client to your path.

[root@host istio-1.4.3]# export PATH=$PWD/bin:$PATH

Install Istio

You are now ready to begin the installation itself. We will demonstrate the installation process using the demo profile. First, we will apply the demo manifest.

[root@host istio-1.4.3]# istioctl manifest apply --set profile=demo

After this process has completed, verify the installation, and make sure that Kubernetes services have an appropriate IP address assigned to the cluster (except for the jaeger-agent service) with the following command.

[root@host istio-1.4.3]# kubectl get svc -n istio-system

The output should look similar to the following (reformatted for easier viewing):

[root@host istio-1.4.3]# kubectl get svc -n istio-system

NAME                     TYPE           PORT(S)                                  AGE
grafana                  ClusterIP      3000/TCP                                 2m
istio-citadel            ClusterIP      8060/TCP,15014/TCP                       2m
istio-egressgateway      ClusterIP      80/TCP,443/TCP,15443/TCP                 2m
istio-galley             ClusterIP      443/TCP,15014/TCP,9901/TCP               2m
istio-ingressgateway     LoadBalancer   15020:31831/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:30318/TCP,15030:32645/TCP,15031:31933/TCP,15032:31188/TCP,15443:30838/TCP   2m
istio-pilot              ClusterIP      15010/TCP,15011/TCP,8080/TCP,15014/TCP   2m
istio-policy             ClusterIP      9091/TCP,15004/TCP,15014/TCP             2m
istio-sidecar-injector   ClusterIP      443/TCP,15014/TCP                        2m
istio-telemetry          ClusterIP      9091/TCP,15004/TCP,15014/TCP,42422/TCP   2m
jaeger-agent             ClusterIP      5775/UDP,6831/UDP,6832/UDP               2m
jaeger-collector         ClusterIP      14267/TCP,14268/TCP                      2m
jaeger-query             ClusterIP      16686/TCP                                2m
kiali                    ClusterIP      20001/TCP                                2m
prometheus               ClusterIP      9090/TCP                                 2m
tracing                  ClusterIP      80/TCP                                   2m
zipkin                   ClusterIP      9411/TCP                                 2m

In addition to this, the following Kubernetes pods should be deployed and have a status of running:

[root@host istio-1.4.3]# kubectl get pods -n istio-system

NAME                                      READY   STATUS    RESTARTS   AGE
grafana-f8467cc6-rbjlg                    1/1     Running   0          1m
istio-citadel-78df5b548f-g5cpw            1/1     Running   0          1m
istio-egressgateway-78569df5c4-zwtb5      1/1     Running   0          1m
istio-galley-74d5f764fc-q7nrk             1/1     Running   0          1m
istio-ingressgateway-7ddcfd665c-dmtqz     1/1     Running   0          1m
istio-pilot-f479bbf5c-qwr28               1/1     Running   0          1m
istio-policy-6fccc5c868-xhblv             1/1     Running   2          1m
istio-sidecar-injector-78499d85b8-x44m6   1/1     Running   0          1m
istio-telemetry-78b96c6cb6-ldm9q          1/1     Running   2          1m
istio-tracing-69b5f778b7-s2zvw            1/1     Running   0          1m
kiali-99f7467dc-6rvwp                     1/1     Running   0          1m
prometheus-67cdb66cbb-9w2hm               1/1     Running   0          1m

This completes the installation phase when using the demo profile. Please keep in mind that the demo profile should not be used for performance evaluation; its purpose is simply to showcase the functionality of Istio. You can now deploy your application, but keep in mind that applications must use HTTP/1.1 or HTTP/2.0, since HTTP/1.0 is no longer supported by Istio. 


These commands will help you with the deployment of your applications:

[root@host istio-1.4.3]# kubectl label namespace <namespace> istio-injection=enabled

And then:

[root@host istio-1.4.3]# kubectl create -n <namespace> -f <your-app-spec>.yaml

Uninstall Istio

To uninstall Istio, use the following command.

[root@host istio-1.4.3]# istioctl manifest generate --set profile=demo | kubectl delete -f -

It is usually safe to ignore any non-existent resource errors during the uninstallation process because the resources are removed hierarchically. The uninstallation process removes all the RBAC permissions, the Istio-related namespaces, and all other resources in the hierarchy under them. 


We hope that you enjoy using Istio because it is a truly powerful and useful service. For more information, articles, and guides about Istio, visit the official Istio page.

Talk To An Expert…

Do you have questions about running Istio, Kubernetes or other modern platforms on a server cluster? Need a small staging cluster to review how you are going to deploy your application?

Give us a call at 800.580.4985, or open a chat or ticket with us to speak with a member of our knowledgeable Solutions Team or one of our experienced Hosting Advisors today! Liquid Web has the technological savvy to offer assistance or advice on how best to set up this type of platform to deploy your apps on!

How To Setup A Domain In Cloudflare

Reading Time: 5 minutes

Full Cloudflare Website Integration

In this article, we will discuss how to set up our domain in a full Cloudflare configuration. This will allow us to take full advantage of their many DNS features, increased speed, Railgun options, and other amazing features that full domain integration allows. 

It should be noted that these instructions do not require you to interact with your Cloudflare account directly. Liquid Web Support works with Cloudflare via your Manage account interface to implement many of these changes. Unfortunately, support cannot directly access external providers, so any changes that need to be made outside the Manage interface will need to be addressed by you or your team. This being said, we find the benefits of a full Cloudflare domain setup to be well worth the effort of some minor DNS modifications.

Liquid Web Manage Dashboard

Let’s get started. Log in to your Liquid Web Manage account and go to the Domains navigation menu on the left. Once there, click on the Cloudflare tab. 

In the Cloudflare tab, click on the “Add Website” button.

Next, we will enter our domain name in the box at the top and then select the radio button in the bottom section “Make Cloudflare my DNS provider (Full).”

Next, select the plan you would like to use at the bottom and then click “Activate Service.” Shortly, a new domain will appear in your account. You will see this screen once it has completed.

Now, click “Go to Dashboard” and then we can see the domain, and note its pending status.

This screen will also show the nameservers assigned to our domain. These nameservers vary for different users, so the ones you see here may not match the ones assigned to you. 


Cloudflare Dashboard

Next, we click the three dots on the right side of our new domain name screen, and then select “Cloudflare Dashboard.”

This will take us to the Cloudflare site, where we will need to log in to access our account. Now, we need to locate our domain name. It will still show as pending, but we can click on it anyway. 


Next, we will be taken to a screen where we can complete our nameserver setup, but before we do, we need to click on the DNS Icon as we need to add our DNS records. 


Adding DNS Records

Now, we will need to add all of our DNS records for the domain. This information will vary by domain, but at a minimum, you will need to add an A record that points to your Liquid Web server IP and a www.domain.tld CNAME record. 
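For illustration, that minimum pair of records might look like the following zone file fragment. These values are placeholders: substitute your actual domain for domain.tld and your Liquid Web server's IP for 203.0.113.10.

```
;; Minimal example records -- domain.tld and 203.0.113.10 are placeholders
domain.tld.        3600   IN   A       203.0.113.10
www.domain.tld.    3600   IN   CNAME   domain.tld.
```

The A record maps the bare domain to the server, and the CNAME makes the www hostname follow it automatically.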

If this is a new domain, or you are currently using Liquid Web nameservers, we can create a new DNS zone file entry in your Liquid Web account as described here, or we can use one we have already set up and then use the export feature as described in the link. Once you have the zone file downloaded, go back to your Cloudflare account DNS Icon for your domain. Then click Advanced.

From here, select the zone file to upload and leave the defaults in place.


Next, click Upload. You may get an error stating that some records failed to upload, so we need to double-check to make sure the records are correct, but most of the information should be imported. Make sure to remove any NS records that were imported by clicking on the X to the right of them and confirming that choice.


Once you have confirmed that all your DNS records look correct, we can disable the proxying option for now, as we do not want interruptions in our service (as explained in the SSL section below). 

Proxy Status

To remove Proxying for your records, click on the Proxied Cloud.


Next, it will turn to DNS Only.


Once these settings are all adjusted to “DNS Only”, we can move on.
Note: While at the “DNS Only” setting, no traffic passes through Cloudflare’s CDN network yet.


Change Nameservers

Now we are ready to change our nameserver information to the ones provided to us by Cloudflare on the overview page. Cloudflare provides clear directions to accomplish this.


We will have to wait for propagation to take place after changing the nameservers for our domain. Tip: A good way to check on the progress of propagation is to review your domain name with a DNS propagation checking tool.

Once the propagation of your new nameservers has taken place, we should verify that our site loads correctly. At this point, our DNS is running through the Cloudflare service, but the other DNS records, like MX and CNAME records, should still be pointing at your server.

SSL Setup

Next, let’s take a look at our SSL setup. In the Cloudflare main menu, click on the SSL/TLS icon.


We can now select any of the four options available here, with “Flexible” being the most lenient and “Full (Strict)” the most stringent. Generally, “Full” is a good option here, or if needed, “Flexible” will work also. If there are issues with “Flexible”, we can switch back over to “Full”. 


Cloudflare will issue a universal SSL to cover our domains, but sometimes, this can take time after we change our nameserver information. We want to try to avoid proxying our traffic until we confirm that the universal certificate has been issued as we may see insecure warnings if it has not been completed. To confirm this change, click on the “Edge Certificates” tab under the SSL/TLS button.


The first box is Edge Certificates, and in this area, we want to see a certificate present for our domain. If we do not see it, we will have to wait until the certificate appears before enabling proxying.


When there is a certificate in place, it will look like this.

An alternative could be to use Let’s Encrypt for our full certificate, as explained in this article.

Enable DNS

Once we see that certificate in place (which might take a little while), we are ready to proxy traffic through Cloudflare so we get all the benefits of the service. Let’s finish these final few settings. In your Cloudflare account, go back to the DNS tab and click on the “DNS only” clouds to toggle them back to “Proxied.”



Now we have set up our site to run on Cloudflare and have our traffic proxied through them.


Cloudflare has some other great features as well. Two stand out that we should consider employing, the first of which is Railgun.  


The Railgun service helps cache content and improves the overall speed of our site. We also provide this service as part of the Liquid Web Cloudflare plan; without it, a similar plan purchased directly from Cloudflare would cost close to $200/month. Enabling this service is very easy. Simply log in to your Cloudflare account in Manage, click on the Speed button, then the Optimization tab, and scroll down to the Railgun heading. Here we will see the Liquid Web Central and Liquid Web Staging areas. For this selection, we are only concerned with the Liquid Web Central option.


To enable it, toggle the Off button to the On position, then click the Test button. We will want to ensure we see a Success message after the test runs.


Now, let’s check our site to ensure it is working as expected. In rare cases, there could be issues with Railgun, but generally, enabling this service will make your website faster. That’s it! Railgun is enabled.

BOT Fight Mode

This feature helps to stop malicious bots from weighing your server down and is an excellent option to stop most of them. To enable this setting in your Cloudflare account, go to the Firewall button and then click the Settings tab on the right.

You will see the Bot Fight Mode box, which needs to be toggled to the on position. Now we are fighting back against the malicious bots.

Try It Now!

Utilizing Cloudflare is an excellent way to optimize any site in multiple ways. Would you like to give it a try?

Give us a call at 800.580.4985, or open a chat or ticket with us to speak with one of our knowledgeable system administrators or experienced Linux technicians to learn how you can take advantage of this system today!

Installing Microsoft Powershell on Ubuntu 18.04

Reading Time: 5 minutes

If you are a Windows administrator who has recently been tasked with administering a Linux-based Ubuntu server, you may find that utilizing Microsoft Powershell may help ease the transition into Linux, and allow you to be more productive. If you are a Linux administrator who is interested in exploring the options that Powershell provides, then this tutorial is for you as well.

Continue reading “Installing Microsoft Powershell on Ubuntu 18.04”

How to Install Python on CentOS 8

Reading Time: 3 minutes
python logo

In this tutorial, we will consider how to enable both Python 2 and Python 3 for use on CentOS 8. In earlier distributions of CentOS, an unversioned Python command was available by default. 

When the CentOS installation was complete, it was possible to drop into a Python shell by simply running the “python” command in a terminal.

Paradoxically, CentOS 8 does not include an unversioned Python command by default.  This begs the question, why? RedHat states that this choice is by design “to avoid locking users into a specific version of Python.” Currently, RedHat 8 utilizes Python 3.6 by default, although Python 2.7 is also provided to maintain existing software.
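In practical terms, this means the versioned command works while the bare "python" command does not. A minimal sketch (the alternatives line is the standard RedHat opt-in and is shown commented out, since it changes system-wide behavior):

```shell
# The versioned interpreter responds as expected (once python3 is installed):
python3 --version

# An unversioned "python" can be opted into system-wide via alternatives:
# alternatives --set python /usr/bin/python3
```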

Continue reading “How to Install Python on CentOS 8”

How To Use Kill Commands In Linux

Reading Time: 9 minutes

Although Linux is considered to be a robust operating system, applications sometimes become unresponsive or fail to respond. When this happens, they can consume a great deal of system resources and ultimately take down the entire system. Such applications usually cannot be restarted automatically, as the process in question may still be running without being shut down completely. It then becomes necessary to either restart the whole system or kill the specific application process. Since restarting the entire system takes time and can create significant inconvenience for the clients, it is much easier to simply kill the process itself.

Continue reading “How To Use Kill Commands In Linux”

How to Install Tomcat 9 on CentOS 8

Reading Time: 8 minutes

What is Tomcat?

In this article, we will be demonstrating how to install Apache Tomcat on CentOS 8. Before we begin, let’s define exactly what Apache Tomcat is. Apache defines Tomcat as “an open-source servlet container, JavaServer Pages, Java Expression Language, and WebSocket technology that also acts as a web server. It affords a ‘pure Java’ based HTTP server environment in which Java can be executed.” Tomcat works with the Java programming language and is associated with web applications written in Java.

Continue reading “How to Install Tomcat 9 on CentOS 8”

How To Use The YUM History Command

Reading Time: 6 minutes

Have you ever wanted to review past updates or roll back an update that broke your sites or negatively affected some aspect of your server’s operations? Well, you can accomplish this easily by using the yum history command.

Continue reading “How To Use The YUM History Command”

Introduction to Machine Learning

Reading Time: 6 minutes

Two of the biggest catchphrases being thrown around today in computing are Artificial Intelligence (AI) and Machine Learning. Many times people use them interchangeably. The truth is that AI encompasses a lot more than just Machine Learning, but Machine Learning is one of the most promising aspects of AI.

Continue reading “Introduction to Machine Learning”