Using a Cron Wrapper Script

Reading Time: 4 minutes

This tutorial is intended to do two things: to expand on the Cron Troubleshooting article, and to give an overview of a simple scripting concept that uses the creation of a file as a flag to signify that something is running. This is primarily useful when you need to run something continuously, but not more than one copy at a time. You can create a file as a flag to indicate that a job is already running, and then check for that flag before taking further action.

The most direct application of this is a cron job that runs every minute, or every few minutes. With a rapidly repeating cron, if the previous job takes longer than the scheduled interval, tasks can pile up, adding load to the server or exacerbating other issues. To avoid this, a simple wrapper script can be set up in the crontab in place of the intended cron task. When the cron fires, the wrapper only runs the actual task if a competing copy is not already running.
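
For example, instead of calling the task directly, the crontab entry would call the wrapper script; the path below is only a placeholder for wherever you save the script.

# Run the wrapper every minute; it only runs the real task if no other copy is running
* * * * * /home/username/cronwrapper.sh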

Why Use a Cron Wrapper?

A cron wrapper is used when you have a cron job that needs to run back to back without stepping on itself. This is useful for tasks that you want to run more or less continuously. Jobs scheduled anywhere from every minute to every five minutes should be using a wrapper like this.

If you do not use a wrapper on a cron job that runs too frequently, you can end up with multiple copies of the job running at the same time, all trying to do the same thing. These competing tasks slow everything down. Such “stacking cron jobs” can even get so far out of hand that they overload a server and cause it to stop responding normally.

What is a Cron Wrapper?

The reason this is called a cron wrapper is that it is a script that wraps around the cron job and checks whether another instance of the job is already running. If another copy is running, the wrapper makes the cron skip this run and wait until the next scheduled run to check again. There are a few ways that a cron wrapper can ensure there is no overlap.

Process Check Method

One way is to check all of the running processes for the user and verify that there isn’t already another process with the same name or attributes as the one you want to run. This is how Magento’s cron.sh file works: it checks for another instance of cron.php being run as the user, and if one is found, it exits. This can be complicated to do reliably, so it is not something we would recommend when you are just starting out.
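
As a rough illustration only (this is not Magento’s actual cron.sh, and the path to cron.php is a placeholder), a process-check wrapper might look something like this:

#!/bin/bash
# Skip this run if another process matching the task is already running as this user
if pgrep -u "$(whoami)" -f "cron.php" > /dev/null; then
    exit 0
fi
# Otherwise run the actual task (placeholder path)
php /path/to/cron.php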

Lockfile Method

A straightforward method is to use what is called a lockfile. The cron wrapper checks whether the lockfile (any file with a specific name and location) exists at the start of the run. If the lockfile is missing, the script creates it and continues; the creation of the lockfile signals the start of the cron job. When the cron job completes, the wrapper script removes the lockfile.

So long as the lockfile exists, a new wrapper will not run the rest of the cron job while another one is running. Once the first run completes and the lock is removed, the next wrapper will be able to create a new lockfile and proceed normally.

A Wrapper Script Example

To start, we want to create a simple bash script. At the top of the file, we state that the script should be interpreted by the program /bin/bash:

#!/bin/bash

Then we want to define the name and location of the lockfile we’ll be using as our flag.

# Set lockfile name and location
lockfile=~/tmp/cronwrapper.lock

Next, the script needs to check if that lockfile exists. If it does exist, then another copy of the script is already running, and we should exit the script.

# Check if the lockfile exists
if [[ -f "$lockfile" ]]; then
    # If the lockfile exists, quit
    exit

Otherwise, if the lockfile does not exist, we create a new lockfile to signify that we are continuing with the rest of the script. Creating the lockfile also tells any future copies of the script to hold off until the lockfile is removed. This is also where we include the actual job to be run, whether that’s a call to a URL through the web, running a PHP file on the command line, or anything else.

# If the lockfile is missing continue
else
    # Create the lockfile
    touch "$lockfile"
    # Insert cron task here

Once the intended job is run and completes, we want to clean up our lockfile, so that the next run of the cron job knows that the last run completed and everything is ready to go again.

    # Cleanup the lockfile
    rm -f "$lockfile"
fi

In the example above, it is convenient to define the lockfile as a variable ($lockfile) so that it can be referenced easily later on. Also, if you ever want to change the location, you only have to change it in one place in the script.

This example also uses a “~” in the path to the lockfile as a shortcut. This tells the shell to use the user’s home directory. As such, the full path would look something more like this: /home/username/tmp/cronwrapper.lock.

However, by using the “~” you can use copies of the same script for many users on the same server without having to modify the full path for each one. The shell will automatically use the home directory of whichever user runs the wrapper script.
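
One caveat worth noting: bash only expands the “~” when it appears outside of quotes, which is why the assignment above leaves it unquoted. Using the $HOME variable is an equivalent alternative that is safe to quote:

# Tilde expansion works here because the ~ is outside the quotes
lockfile=~/tmp/cronwrapper.lock
# Equivalent, and safe to use inside quotes
lockfile="$HOME/tmp/cronwrapper.lock"

Also make sure the ~/tmp directory actually exists (for example, by running mkdir -p ~/tmp once), or the touch command in the script will fail.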

Putting It All Together (cronwrapper.sh)

You can copy and paste the following into your text editor of choice on your server. You can name it whatever you want, but here are all the parts put together.

#!/bin/bash
lockfile=~/tmp/cronwrapper.lock
if [[ -f "$lockfile" ]]; then
    exit
else
    touch "$lockfile"
    # Insert cron task here
    rm -f "$lockfile"
fi

This is a very simple example and could be expanded much further. Ideally, you might add a check to ignore a lockfile older than an hour and run a new instance of the cron job anyway; this would account for an interrupted job that failed to clean up after itself (a minimal sketch of this idea follows below). Another extension might be to confirm that the previous job completed cleanly. Yet another would be to check for errors from the cron job itself and make decisions or send alerts based on those errors. The world is your oyster when it comes to cron wrappers! Take a look at Liquid Web’s VPS servers for a platform where tasks like these run smoothly.
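
As a minimal sketch of the stale-lock idea (assuming the same $lockfile variable used above and treating anything older than 60 minutes as stale), the existence check could be preceded by something like:

# Clear out a stale lockfile left behind by an interrupted run
if [[ -f "$lockfile" ]] && [[ -n $(find "$lockfile" -mmin +60) ]]; then
    rm -f "$lockfile"
fi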

How To Install Docker on Ubuntu 16.04

Reading Time: 7 minutes

Adding Docker to an Ubuntu server.

Docker is an open-source software tool designed to automate and ease the process of creating, packaging, and deploying applications using an environment called a container. The use of Linux containers to deploy applications is called containerization. A container allows us to package an application with all of the parts needed to run it (code, system tools, logs, libraries, configuration settings, and other dependencies) and ship it as a single standalone package, deployable in this case on Ubuntu 16.04 LTS. Docker can be installed on other platforms as well. Currently, the Docker software is maintained by the Docker community and Docker Inc. Check out the official documentation to find more specifics on Docker.

Docker Terms and Concepts

Docker is made up of several components:

  • Docker for Linux: Software which runs Docker containers on the Ubuntu Linux OS.
  • Docker Engine: Used for building Docker images and creating Docker containers.
  • Docker Registry: Used to store various Docker images.
  • Docker Compose: Used to define applications using multiple Docker containers.

Some of the other essential terms and concepts you will come into contact with are:

  • Containerization: Containerization is a lightweight alternative to full machine virtualization (like VMware) that involves encapsulating an application within a container with its own operating environment.

Docker also uses images and containers. The two ideas are closely related, but very distinct.

  • Docker Image: A Docker Image is the basic unit for deploying a Docker container. A Docker image is essentially a static snapshot of a container, incorporating all of the objects needed to run a container.  
  • Docker Container: A Docker Container encapsulates a Docker image and when live and running, is considered a container. Each container runs isolated in the host machine.
  • Docker Registry: The Docker Registry is a stateless, highly scalable server-side application that stores and distributes Docker images. The registry holds Docker images along with their versions, and it can provide both public and private storage locations. There is a public registry called Docker Hub which provides a free-to-use, hosted registry, plus additional features like organization accounts, automated builds, and more. Users interact with a registry by using docker push and pull commands. Example:

docker pull registry-1.docker.io/distribution/registry:2.1.

  • Docker Engine: The Docker Engine is a layer which exists between containers and the Linux kernel and runs the containers. It is also known as the Docker daemon. Any Docker container can run on any server that has the Docker-daemon enabled, regardless of the underlying operating system.
  • Docker Compose: Docker Compose is a tool that defines, manages and controls multi-container Docker applications. With Compose, a single configuration file is used to set up all of your application’s services. Then, using a single command, you can create and start all the services from that file.
  • Dockerfiles: Dockerfiles are plain text documents that contain all of the configuration information and commands needed to assemble a container image. With a Dockerfile, the Docker daemon can automatically build the container image.

    Example: The following basic Dockerfile sets up an SSHd service in a container that you can use to connect to and inspect other containers’ volumes, or to get quick access to a test container.

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
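
As a usage sketch (the image name, container name, and port mapping here are arbitrary choices, not part of any official example), you could build and run a container from that Dockerfile, then connect over SSH using the root password it sets (“screencast”):

# Build the image from the Dockerfile in the current directory
docker build -t eg_sshd .
# Run it in the background, mapping container port 22 to host port 2222
docker run -d -p 2222:22 --name test_sshd eg_sshd
# Connect to the container's SSH service
ssh root@localhost -p 2222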

Docker Versions

There are three versions of Docker available, each with its own unique use:

  • Docker CE is the simple, classic Docker Engine.
  • Docker EE is Docker CE with certification on some systems and support by Docker Inc.
  • Docker CS (Commercially Supported) is the older bundled version of Docker EE, covering versions 1.13 and earlier.

We will be installing Docker CE.


Step 1 — Checking Prerequisites

To begin, start with the following server environment: 

  1. 64-bit Ubuntu 16.04 server
  2. Logged in as the root user
Important:
Docker on Ubuntu requires a 64-bit architecture, and the Linux kernel version must be 3.10 or above. You can verify both with the commands shown below.
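
To confirm both requirements before continuing, check the architecture and kernel version; uname -m should report x86_64 on a 64-bit system, and uname -r shows the running kernel version:

root@test:~# uname -m
root@test:~# uname -r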

Before installing Docker, we need to set up the repository which contains the latest version of the software (Docker is unavailable in the standard Ubuntu 16.04 repository). Adding the repository allows us to easily update the software later as well.

Step 2 — Installing Docker

The next step is to remove any default Docker packages from the existing system before installing Docker on a Linux VPS. Execute the following commands to start this process:

root@test:~# apt-get remove docker docker-engine docker.io lxc-docker
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'docker-engine' is not installed, so not removed
Package 'docker' is not installed, so not removed
Package 'docker.io' is not installed, so not removed
E: Unable to locate package lxc-docker

Note:
In certain instances, a specific variant of the Linux kernel is slimmed down by removing less common modules (or drivers). If this is the case, the “linux-image-extra” package contains all of the “extra” kernel modules that were left out. Use this command to re-add them: root@test:~# sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual

Step 3 — Add required packages

Now, we need to install some required packages on your system. Run the commands below to accomplish this:

root@test:~# apt-get install curl apt-transport-https ca-certificates software-properties-common

Note:
If you get the error “E: Unable to locate package curl”, use the command “curl -V” to see if curl is already installed; if so, move on to Step 4.

Step 4 — Verify, Add and Update Repositories

Add the Docker GPG key to your system:

root@test:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK

Next, add the Docker repository to your APT sources:

root@test:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"

Run the update again so the Docker packages are recognized:

root@test:~# apt-get update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [107 kB]
Hit:2 http://us.archive.ubuntu.com/ubuntu xenial InRelease                              
Get:3 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]             
Get:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB]                 
Fetched 323 kB in 0s (827 kB/s)                             
Reading package lists... Done
E: The method driver /usr/lib/apt/methods/https could not be found.
N: Is the package apt-transport-https installed?
E: Failed to fetch https://download.docker.com/linux/ubuntu/dists/xenial/InRelease  
E: Some index files failed to download. They have been ignored, or old ones used instead.

Note:
If you get the error seen above (“N: Is the package apt-transport-https installed?”), use the following command to correct it: root@test:~# sudo apt-get install apt-transport-https

Let’s rerun the update:

root@test:~# apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [107 kB]
Get:3 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]        
Get:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB]                 
Hit:5 https://download.docker.com/linux/ubuntu xenial InRelease
Fetched 323 kB in 0s (656 kB/s)
Reading package lists... Done

Success! Now, verify we are installing Docker from the correct repo instead of the default Ubuntu 16.04 repo:

root@test:~# apt-cache policy docker-ce
docker-ce:
 Installed: (none)
 Candidate: 18.06.0~ce~3-0~ubuntu
 Version table:
    18.06.0~ce~3-0~ubuntu 500
       500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages

Step 5 — Install Docker

Finally, let’s start the Docker install:

root@test:~# apt-get install -y docker-ce
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
 aufs-tools cgroupfs-mount libltdl7 pigz
Suggested packages:
 mountall
The following NEW packages will be installed:
 aufs-tools cgroupfs-mount docker-ce libltdl7 pigz
0 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.
Need to get 40.3 MB of archives.
After this operation, 198 MB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu xenial/universe amd64 pigz amd64 2.3.1-2 [61.1 kB]
Get:2 http://us.archive.ubuntu.com/ubuntu xenial/universe amd64 aufs-tools amd64 1:3.2+20130722-1.1ubuntu1 [92.9 kB]
Get:3 http://us.archive.ubuntu.com/ubuntu xenial/universe amd64 cgroupfs-mount all 1.2 [4,970 B]
Get:4 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 libltdl7 amd64 2.4.6-0.1 [38.3 kB]
Get:5 https://download.docker.com/linux/ubuntu xenial/stable amd64 docker-ce amd64 18.06.0~ce~3-0~ubuntu [40.1 MB]
Fetched 40.3 MB in 1s (38.4 MB/s)    
...
...

Docker should now be installed, the daemon started, and the process enabled to start on boot. Let’s check to see if it’s running:

root@test:~# systemctl status docker
* docker.service - Docker Application Container Engine
  Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
  Active: active (running) since Wed 2018-08-08 13:51:22 EDT; 2min 13s ago
    Docs: https://docs.docker.com
Main PID: 6519 (dockerd)
  CGroup: /system.slice/docker.service
          |-6519 /usr/bin/dockerd -H fd://
          `-6529 docker-containerd --config /var/run/docker/containerd/containerd.toml

Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.192600502-04:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.192630873-04:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc42020f6a0, CONNECTING" module=grpc
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.192854891-04:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc42020f6a0, READY" module=grpc
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.192867421-04:00" level=info msg="Loading containers: start."
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.340349000-04:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.397715134-04:00" level=info msg="Loading containers: done."
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.424005987-04:00" level=info msg="Docker daemon" commit=0ffa825 graphdriver(s)=overlay2 version=18.06.0-ce
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.424168214-04:00" level=info msg="Daemon has completed initialization"
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.448805942-04:00" level=info msg="API listen on /var/run/docker.sock"
Aug 08 13:51:22 test.docker.com systemd[1]: Started Docker Application Container Engine.
(press q to quit)

Excellent! Good to go!

If Docker is not started automatically after the installation, run the following commands:

root@test:~# systemctl start docker.service
root@test:~# systemctl enable docker.service

Step 6 — Test Docker

Let’s check the new Docker build by downloading the hello-world test image.
To start testing, issue the following command:

root@test:~# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9db2ca6ccae0: Pull complete
Digest: sha256:4b8ff392a12ed9ea17784bd3c9a8b1fa3299cac44aca35a85c90c5e3c7afacdc
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
   (amd64)
3. The Docker daemon created a new container from that image which runs the
   executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
   to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/

Step 7 — The ‘Docker’ Command

With Docker installed and working, now is the time to become familiar with the command line utility. Using Docker consists of passing it a chain of options and commands followed by arguments. The syntax takes this form:

root@test:~# docker
Usage: docker [OPTIONS] COMMAND
A self-sufficient runtime for containers
Run 'docker COMMAND --help' for more information on a command.


To view all of the available Options and Management Commands, simply type:

docker

To view the switches available for a specific command, type:

docker docker-subcommand --help

Lastly, to view system-wide information about Docker, use:

docker info

Docker is a dynamic, robust, and responsive tool that makes it very simple to run applications within a containerized environment. It is portable, less resource-intensive than a full virtual machine, and relies on the host operating system rather than shipping its own, which allows it to serve a wide range of uses. Overall, Docker is a ‘must have’ tool and should be included in your toolkit for automating, deploying, and scaling your applications!

Our Support Teams are filled with talented admins with an intimate knowledge of multiple web hosting technologies, especially those discussed in this article. If you are uncomfortable walking through the steps outlined here, we are a phone call, chat or ticket away from assisting you with this process. If you’re running one of our fully Managed Cloud VPS Servers, we can provide more information on directly implementing the software described in this article.

How to Upgrade / Update Docker on Fedora 22

Reading Time: 1 minute

Introduction

Docker is a container-based software framework commonly used for automating deployment of applications. “Containers” are encapsulated, lightweight, and portable application modules.

Pre-Flight Check

  • These instructions are intended for upgrading / updating Docker.
  • I’ll be working from a Liquid Web Self Managed Fedora 22 server, and I’ll be logged in as root.

Continue reading “How to Upgrade / Update Docker on Fedora 22”

How to Install Docker on Ubuntu 15.04

Reading Time: 1 minute

Introduction

Docker is a container-based software framework for automating deployment of applications. “Containers” are encapsulated, lightweight, and portable application modules.

Pre-Flight Check

  • These instructions are intended for installing Docker.
  • I’ll be working from a Liquid Web Self Managed Ubuntu 15.04 server, and I’ll be logged in as root.

Continue reading “How to Install Docker on Ubuntu 15.04”

How to Install Ansible on Fedora 21 via Yum

Reading Time: 1 minute

Ansible is an automation engine, similar to Chef or Puppet, that can be used to ensure deployment and configuration consistency across many servers, and keep servers and applications up-to-date. Though, unlike some other tools, Ansible does not require a client component/agent.

Pre-Flight Check
  • These instructions are intended specifically for installing Ansible, an automation tool, on Fedora 21.
  • I’ll be working from a Liquid Web Self Managed Fedora 21 server, and I’ll be logged in as a non-root user. If you need more information, visit our tutorial on How to Add a User and Grant Root Privileges on Fedora 21.

Continue reading “How to Install Ansible on Fedora 21 via Yum”

How to Install Ansible on Fedora 20 via Yum

Reading Time: 1 minute

Ansible is an automation engine, similar to Chef or Puppet, that can be used to ensure deployment and configuration consistency across many servers, and keep servers and applications up-to-date. Though, unlike some other tools, Ansible does not require a client component/agent.

Pre-Flight Check
  • These instructions are intended specifically for installing Ansible, an automation tool, on Fedora 20.
  • I’ll be working from a Liquid Web Self Managed Fedora 20 server, and I’ll be logged in as a non-root user. If you need more information, visit our tutorial on How to Add a User and Grant Root Privileges on Fedora 20.

Continue reading “How to Install Ansible on Fedora 20 via Yum”

How to Install Ansible on CentOS 7 via Yum

Reading Time: 1 minute

Ansible is an automation engine, similar to Chef or Puppet, that can be used to ensure deployment and configuration consistency across many servers, and keep servers and applications up-to-date. Though, unlike some other tools, Ansible does not require a client component/agent.

Pre-Flight Check
  • These instructions are intended specifically for installing Ansible, an automation tool.
  • I’ll be working from a Liquid Web Core Managed CentOS 7 server, and I’ll be logged in as a non-root user. If you need more information, visit our tutorial on How to Add a User and Grant Root Privileges on CentOS 7.

Continue reading “How to Install Ansible on CentOS 7 via Yum”

How to Install WordPress in cPanel / WHM with Softaculous

Reading Time: 2 minutes

WordPress is a very popular option for running a website or blog and can be used to get your content online quickly. This guide will walk you through installing the WordPress server software via cPanel / WHM and Softaculous. Liquid Web makes WordPress hosting easy and painless, for every level of customer, especially since all of our managed plans are backed by our 24/7/365 Heroic Support®.

Pre-Flight Check
  • These instructions are intended for installing WordPress in cPanel / WHM with Softaculous.
  • I’ll be working from a Liquid Web cPanel / WHM CentOS 6.6 server, and I’ll be logged into my cPanel account.

Continue reading “How to Install WordPress in cPanel / WHM with Softaculous”

How to Install Docker on Fedora 21

Reading Time: 1 minute

Introduction

Docker is a container-based software framework commonly used for automating deployment of applications. “Containers” are encapsulated, lightweight, and portable application modules.

Pre-Flight Check

  • These instructions are intended for installing Docker.
  • I’ll be working from a Liquid Web Self Managed Fedora 21 server, and I’ll be logged in as root.

Continue reading “How to Install Docker on Fedora 21”

How To Install Docker on Fedora 20

Reading Time: 1 minute

Introduction

Docker is a container-based software framework for automating deployment of applications. “Containers” are encapsulated, lightweight, and portable application modules.

Pre-Flight Check
  • These instructions are intended for installing Docker.
  • I’ll be working from a Liquid Web Self Managed Fedora 20 server, and I’ll be logged in as root.

Continue reading “How To Install Docker on Fedora 20”