Windows Firewall Basics

A firewall is a program installed on your computer or a piece of hardware that uses a rule set to block or allow access to a computer, server or network. It separates your internal network from the external network (the Internet).

Firewalls can permit traffic to be routed through a specific port to a program or destination while blocking other, malicious traffic. A firewall can be hardware, software, or a blend of both.

Continue reading “Windows Firewall Basics”

Install the LAMP Stack Using Tasksel on Ubuntu 16.04

There are multiple ways of installing software on Debian-based systems like Ubuntu and Mint. Tools like apt, apt-get, aptitude, and synaptic are usually used to install single applications on the desktop editions of those OSes. Alternatively, Tasksel is a command-line app for installing a “group” of related packages onto a server. Tasksel is not installed by default on the desktop editions that ship those package managers, but it is installed on later versions of the Debian and Ubuntu server editions.

How Does Tasksel Work?

Continue reading “Install the LAMP Stack Using Tasksel on Ubuntu 16.04”

Kubernetes RBAC Authorization

What is RBAC?

Kubernetes Role-Based Access Control (RBAC) describes how we define different permission levels for unique, validated users or groups in a cluster. It uses granular permission sets defined within a .yaml file to allow access to specific resources and operations.

Starting with Kubernetes 1.6, RBAC is enabled by default, and users start with no permissions; permissions must be explicitly granted by an admin for a specific service or resource. These policies are crucial for effectively securing your cluster. They let us specify which actions are allowed, depending on the user’s role and their function within the organization.

Prerequisites for using Role-Based Access Control

To take advantage of RBAC, you must first grant your user the ability to create roles by running the following command (substitute your own user name):

root@test:~# kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user <user-name>

Afterwards, to start a cluster with RBAC enabled, we would use the flag:

--authorization-mode=RBAC

 

The RBAC Model

The RBAC model is based on three components: Roles, ClusterRoles, and Subjects. All k8s clusters create a default set of ClusterRoles, representing common divisions that users can be placed within.

  • The “edit” role lets users perform basic actions like deploying pods.
  • The “view” role lets users review specific resources that are non-sensitive.
  • The “admin” role lets a user manage a namespace.
  • The “cluster-admin” role allows full access to administer a cluster.

Roles

A Role consists of rules that define a set of permissions for a resource type. Because there are no default deny rules, a Role can only be used to add access to resources within a single namespace (virtual cluster). An example would look something like this:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: testdev
  name: dev1
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

In this case, the “dev1” role grants a user the “get”, “watch”, and “list” verbs on pods in the “testdev” namespace.

ClusterRoles

A ClusterRole can be used to grant the same permissions as a Role but, because they are cluster-scoped, they can also be used to grant wider access to:

  • cluster-scoped resources (like nodes)
  • non-resource endpoints (like the URL path “/test”)
  • namespaced resources (like pods) across all namespaces, as listed by kubectl get pods --all-namespaces

It contains rules that define a set of permissions across resources like nodes or pods.
An example would look something like this:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]

The basic syntax for creating a ClusterRole is:

root@test:~# kubectl create clusterrole [Options]

A command-line example to create a ClusterRole named “pod” that allows a user to perform “get”, “watch”, and “list” on a pod would be:

root@test:~# kubectl create clusterrole pod --verb=get,list,watch --resource=pod

 

RoleBinding

A RoleBinding is a set of configuration rules that designate a permission set. It binds a role to subjects (subjects are simply users, a group of users, or service accounts). Those permissions can be set for a single namespace (virtual cluster) with a RoleBinding or cluster-wide with a ClusterRoleBinding.

Let’s allow the group “devops1” the ability to modify the resources in the “testdev” namespace:

root@test:~# kubectl create rolebinding devops1 --clusterrole=edit --group=devops1 --namespace=testdev
rolebinding "devops1" created

Because we used a RoleBinding, these functions only apply within the RoleBinding’s namespace. In this case, a user within the “devops1” group can view resources in the “testdev” namespace but not in a different namespace.
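A RoleBinding like this can also be defined declaratively. A minimal sketch of an equivalent manifest, matching the “devops1” group and “testdev” namespace used in the text, might look like:

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: devops1
  namespace: testdev       # the binding only applies inside this namespace
subjects:
- kind: Group
  name: devops1            # the group receiving the permissions
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit               # the built-in "edit" ClusterRole
  apiGroup: rbac.authorization.k8s.io
```

Applying this file with kubectl create -f would produce the same binding as the imperative command above.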

ClusterRoleBindings

A ClusterRoleBinding defines which users have permission to use which ClusterRoles. Because a ClusterRole contains rules that represent a set of permissions, a ClusterRoleBinding extends those permissions cluster-wide, covering:

  • cluster-scoped resources (like nodes)
  • namespaced resources in all namespaces in a cluster
  • non-resource endpoints (like “/foldernames”)

A ClusterRoleBinding can be created using the command:

root@test:~# kubectl create clusterrolebinding [options]

An example of creating a ClusterRoleBinding for user1, user2, and group1 using the cluster-admin ClusterRole:

root@test:~# kubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1
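The same grant can be written declaratively; a minimal sketch of an equivalent ClusterRoleBinding manifest would be:

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-admin        # note: no namespace, since the binding is cluster-wide
subjects:
- kind: User
  name: user1
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: user2
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: group1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin        # the built-in full-access ClusterRole
  apiGroup: rbac.authorization.k8s.io
```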

 

ConfigMaps

What is a ConfigMap?

A ConfigMap is a resource that easily attaches configuration data to a pod. The information stored in a ConfigMap is used to separate config data from other content, keeping images and pods portable.

How Do I Create A ConfigMap?

To create a ConfigMap, we simply use the command:
kubectl create configmap <map-name> <data-source>

Alternatively, we can define the ConfigMap in a default.yaml manifest file and create it from that file:
kubectl create -f default.yaml
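As a sketch, such a default.yaml manifest might look like the following (the ConfigMap name and key/value pairs here are hypothetical examples):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: app-config          # hypothetical ConfigMap name
  namespace: default
data:
  log.level: "info"         # example configuration key/value pairs
  cache.enabled: "true"
```

A pod can then consume these values as environment variables or mounted files, keeping the configuration out of the container image itself.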

Basic RBAC Commands For Kubectl

Note
These commands will give errors if RBAC isn’t configured correctly.

kubectl get roles
kubectl get rolebindings
kubectl get clusterroles
kubectl get clusterrolebindings
kubectl get clusterrole system:node -o yaml

 

Namespaces

Namespaces are used to define, separate, and identify a cluster of resources among a large number of users or spaces. You should only use namespaces when you have a very diverse set of clusters, locations, or users. They are useful where companies have multiple users or teams spread across various projects, locations, or departments. A namespace also provides a way to prevent naming conflicts across a wide array of clusters.
Inside a namespace, an object can have a name as short as ‘cluster1’ or as complex as ‘US.MI.LAN.DC2.S1.R13.K5.ProdCluster1’, but there can only be a single ‘cluster1’ or ‘US.MI.LAN.DC2.S1.R13.K5.ProdCluster1’ inside that namespace. In other words, resource names must be unique within a namespace, but not across namespaces: several different namespaces can each contain their own ‘cluster1’ object.

You can get a list of namespaces in a cluster by using this command:

root@test:~# kubectl get namespaces
NAME STATUS AGE
cluster2dev Active 1d
cluster2prod Active 4d
cluster3dev Active 2w
cluster3prod Active 4d

Kubernetes always starts with three basic namespaces:

  • default: This is the default namespace for objects that have no specifically identified namespace (e.g., the big barrel o’ fun).
  • kube-system: This is the default namespace for objects generated by the Kubernetes system itself.
  • kube-public: This namespace is created automatically and is readable by everyone. It is primarily reserved for cluster usage, in case some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.
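Namespaces can also be created declaratively; a minimal sketch of a namespace manifest (the name “cluster4dev” is a hypothetical example) looks like:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: cluster4dev    # hypothetical namespace name
```

Apply it with kubectl create -f namespace.yaml, or create the namespace directly with kubectl create namespace cluster4dev.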

Finally, the essential concept of role-based access control (RBAC) is to ensure that users who require specific access to a resource can be assigned the appropriate Roles, ClusterRoles, and ClusterRoleBindings as needed or desired. The granularity of these permission sets allows for increased security, easier security policy modification, simplified security auditing, and increased productivity (RBAC cuts down on onboarding time for new employees). RBAC can also reduce costs by surfacing unneeded applications and trimming licensing for rarely used software. All in all, RBAC is a necessary addition to secure your Kubernetes infrastructure.

Ready to Learn More?

Reach out to one of our Solutions team via chat to decide if a Private Cloud service from Liquid Web will meet your Kubernetes needs!

How to Use Ansible

Ansible is easy-to-use automation software that can update a server, configure tasks, manage daily server functions, and deploy jobs as needed on a schedule of your choosing. It is usually administered from a single location or control server and uses SSH to connect to the remote servers. Because it employs SSH to connect, it is very secure, and there is no software to install on the servers being managed. It can be run from your desktop, laptop, or other platforms to assist with automating the tedious tasks which every server owner faces.

Continue reading “How to Use Ansible”

Kubernetes Tutorial

What is Kubernetes?

The name Kubernetes has its origins in the Greek word for helmsman or pilot. Kubernetes, or ‘k8s’ (pronounced “Kate’s”) as it’s sometimes referred to, is an open-source software tool originally created by Google and now maintained by the Cloud Native Computing Foundation. Kubernetes is used for arranging and coordinating the containers an application needs to run into easy-to-handle groups.

In order to manage your Kubernetes cluster effectively, we recommend using kubectl as the command-line tool of choice. Basically, kubectl communicates with the master node (or server) which in turn submits those commands to the worker nodes to manage the cluster.

The Kubernetes cluster consists of two basic types of resources:

  • Master server – a master server organizes the cluster
  • Node server – Nodes are the workers that contain and run the applications

Each node contains a Kubelet, which is the agent for managing the node and communicating with the master. You can use kubectl to deploy, explore, review and remove Kubernetes objects (like nodes, images or containers).

Let’s next look at setting up kubectl.

The Master communicates with containers through the worker node.

Note:
This tutorial assumes you have a Kubernetes cluster already set up and running.

In order to setup kubectl, we will need the following:

Prerequisites

  1. A working internet connection
  2. The cURL or wget utilities installed
  3. Basic knowledge of the Linux command line

Installing kubectl

On an Ubuntu 16.04 LTS server, here are the commands to use if logged in as root to install kubectl:

apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubectl

Kubectl Commands for Basic Kubernetes Functions

Now that we have kubectl up and running, let’s review a few of the basic commands available. Here are the five most basic commands we will be reviewing along with their fundamental definition:

  • kubectl create – the create command constructs a resource from a configuration file or stdin (standard input). A resource is defined as something that can be requested by, allocated to, or consumed by a pod or container.
  • kubectl get – the get command displays a table of the most relevant information about one or multiple relevant resources.
  • kubectl run – the run command will kick off one or more instances of a container in the cluster.
  • kubectl expose – the expose command will start to load balance inbound traffic across your running instances. This command can also create a High Availability proxy for the client to access the running containers from outside the cluster.
  • kubectl delete – the delete command removes defined resources by
    • filenames
    • stdin
    • resources and names
    • resources and label selector
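To make the create/get/delete cycle concrete, here is a minimal sketch of a pod manifest (the pod name and image are hypothetical examples) that kubectl create could consume:

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod         # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx:1.15    # hypothetical container image
```

You could then run kubectl create -f pod.yaml to create it, inspect it with kubectl get pods, and remove it with kubectl delete -f pod.yaml.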

Kubectl App Management

  • kubectl edit – Alters the characteristics of a resource on a server using the default editor.
  • kubectl apply – Applies a change to a resource from a file or stdin.
  • kubectl label – Adds or updates a specific attribute to specifically identify an object

Working with Apps Using Kubectl

  • kubectl exec – Runs a command on a container in a pod
  • kubectl logs – Prints a container log
  • kubectl describe – Displays the status or state of the resources.

Kubectl Cluster Management

  • kubectl cluster-info – Displays information about the master and services in the cluster.
  • kubectl drain – Removes pods from a node in preparation for maintenance
  • kubectl certificate – Approves a CSR or certificate signing request

Kubectl Settings and Usage

  • kubectl api-resources – Lists all of the supported resources (e.g., Pods and Services) along with their shortnames, API group, whether they are namespaced, and their Kind
  • kubectl config –  Changes or alters kubeconfig files
  • kubectl version – Displays the Kubernetes version

 

These are just some of the many basic command examples that are available for use in setting up and maintaining your Kubernetes environment.

 

How To Install Docker on Ubuntu 16.04

Adding Docker to an Ubuntu server.

Docker is an open-source software tool designed to automate and ease the process of creating, packaging, and deploying applications using an environment called a container. The use of Linux containers to deploy applications is called containerization. A container allows us to package an application with all of the parts it needs to run (code, system tools, logs, libraries, configuration settings, and other dependencies) and ship it as a single standalone package deployable on Ubuntu (in this case, 16.04 LTS). Docker can be installed on other platforms as well. Currently, the Docker software is maintained by the Docker community and Docker Inc. Check out the official documentation to find more specifics on Docker.

Docker Terms and Concepts

Docker is made up of several components:

  • Docker for Linux: Software which runs Docker containers on the Ubuntu Linux OS.
  • Docker Engine: Used for building Docker images and creating Docker containers.
  • Docker Registry: Used to store various Docker images.
  • Docker Compose: Used to define applications using multiple Docker containers.

 

Some of the other essential terms and concepts you will come into contact with are:

  • Containerization: Containerization is a lightweight alternative to full machine virtualization (like VMWare) that involves encapsulating an application within a container with its own operating environment.

Docker also uses images and containers. The two ideas are closely related, but very distinct.

  • Docker Image: A Docker Image is the basic unit for deploying a Docker container. A Docker image is essentially a static snapshot of a container, incorporating all of the objects needed to run a container.  
  • Docker Container: A Docker Container encapsulates a Docker image and when live and running, is considered a container. Each container runs isolated in the host machine.
  • Docker Registry: The Docker Registry is a stateless, highly scalable server-side application that stores and distributes Docker images. This registry holds Docker images along with their versions, and it can provide both public and private storage locations. There is a public Docker registry called Docker Hub which provides a free-to-use, hosted registry, plus additional features like organization accounts, automated builds, and more. Users interact with a registry by using the Docker push or pull commands. Example:

docker pull registry-1.docker.io/distribution/registry:2.1

  • Docker Engine: The Docker Engine is a layer which exists between containers and the Linux kernel and runs the containers. It is also known as the Docker daemon. Any Docker container can run on any server that has the Docker-daemon enabled, regardless of the underlying operating system.
  • Docker Compose: Docker Compose is a tool that defines, manages and controls multi-container Docker applications. With Compose, a single configuration file is used to set up all of your application’s services. Then, using a single command, you can create and start all the services from that file.
  • Dockerfiles: Dockerfiles are simply plain text documents that contain all of the configuration information and commands needed to assemble a container image. With a Dockerfile, the Docker daemon can automatically build the container image.

    Example: The following basic Dockerfile sets up an SSHd service in a container that you can use to connect to and inspect other containers’ volumes, or to get quick access to a test container.

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
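To connect this back to Docker Compose, a minimal docker-compose.yml sketch (the service name and host port here are hypothetical) could build and run the same SSH container:

```yaml
version: "3"
services:
  ssh-test:             # hypothetical service name
    build: .            # builds the Dockerfile in the current directory
    ports:
      - "2222:22"       # map host port 2222 to the container's SSH port
```

Running docker-compose up from the same directory would then build the image and start the container in one step.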

Docker Versions

There are three versions of Docker available, each with its own unique use:

  • Docker CE is the simple, classic Docker Engine.
  • Docker EE is Docker CE with certification on some systems and support by Docker Inc.
  • Docker CS (Commercially Supported) is the older, commercially supported bundle of Docker for versions 1.13 and earlier.

We will be installing Docker CE.

 


Step 1 — Checking Prerequisites

To begin, start with the following server environment: 

  1. 64-bit Ubuntu 16.04 server
  2. Logged in as the root user

Important:
Docker on Ubuntu requires a 64-bit architecture, and the Linux kernel version must be 3.10 or above.

Before installing Docker, we need to set up the repository which contains the latest version of the software (Docker is unavailable in the standard Ubuntu 16.04 repository). Adding the repository allows us to easily update the software later as well.

Step 2 — Installing Docker

The next step is to remove any default Docker packages from the existing system before installing Docker on a Linux VPS. Execute the following commands to start this process:

root@test:~# apt-get remove docker docker-engine docker.io lxc-docker
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'docker-engine' is not installed, so not removed
Package 'docker' is not installed, so not removed
Package 'docker.io' is not installed, so not removed
E: Unable to locate package lxc-docker

Note:
In certain instances, a specific variant of the linux kernel is slimmed down by removing less common modules (or drivers). If this is the case, the “linux-image-extra” package contains all of the “extra” kernel modules which were left out. Use this command to re-add them: root@test:~# sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual

Step 3 — Add required packages

Now, we need to install some required packages on your system. Run the commands below to accomplish this:

root@test:~# apt-get install curl apt-transport-https ca-certificates software-properties-common

Note:
If you get the error “E: Unable to locate package curl”, use the command curl -V to see if curl is already installed; if so, move on to Step 4.

Step 4 — Verify, Add and Update Repositories

Add the Docker GPG key to your system:

root@test:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK

Next, update the APT sources to add the source:

root@test:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"

Run the update again so the Docker packages are recognized:

root@test:~# apt-get update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [107 kB]
Hit:2 http://us.archive.ubuntu.com/ubuntu xenial InRelease                              
Get:3 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]             
Get:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB]                 
Fetched 323 kB in 0s (827 kB/s)                             
Reading package lists... Done
E: The method driver /usr/lib/apt/methods/https could not be found.
N: Is the package apt-transport-https installed?
E: Failed to fetch https://download.docker.com/linux/ubuntu/dists/xenial/InRelease  
E: Some index files failed to download. They have been ignored, or old ones used instead.

Note:
If you get the error seen above: “N: Is the package apt-transport-https installed?”, Use the following command to correct this. root@test:~# sudo apt-get install apt-transport-https

Let’s rerun the update:

root@test:~# apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [107 kB]
Get:3 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]        
Get:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB]                 
Hit:5 https://download.docker.com/linux/ubuntu xenial InRelease
Fetched 323 kB in 0s (656 kB/s)
Reading package lists... Done

Success! Now, verify we are installing Docker from the correct repo instead of the default Ubuntu 16.04 repo:

root@test:~# apt-cache policy docker-ce
docker-ce:
 Installed: (none)
 Candidate: 18.06.0~ce~3-0~ubuntu
 Version table:
    18.06.0~ce~3-0~ubuntu 500
       500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages

Step 5 — Install Docker

Finally, let’s start the Docker install:

root@test:~# apt-get install -y docker-ce
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
 aufs-tools cgroupfs-mount libltdl7 pigz
Suggested packages:
 mountall
The following NEW packages will be installed:
 aufs-tools cgroupfs-mount docker-ce libltdl7 pigz
0 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.
Need to get 40.3 MB of archives.
After this operation, 198 MB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu xenial/universe amd64 pigz amd64 2.3.1-2 [61.1 kB]
Get:2 http://us.archive.ubuntu.com/ubuntu xenial/universe amd64 aufs-tools amd64 1:3.2+20130722-1.1ubuntu1 [92.9 kB]
Get:3 http://us.archive.ubuntu.com/ubuntu xenial/universe amd64 cgroupfs-mount all 1.2 [4,970 B]
Get:4 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 libltdl7 amd64 2.4.6-0.1 [38.3 kB]
Get:5 https://download.docker.com/linux/ubuntu xenial/stable amd64 docker-ce amd64 18.06.0~ce~3-0~ubuntu [40.1 MB]
Fetched 40.3 MB in 1s (38.4 MB/s)    
...
...

Docker should now be installed, the daemon started, and the process enabled to start on boot. Let’s check to see if it’s running:

root@test:~# systemctl status docker
* docker.service - Docker Application Container Engine
  Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
  Active: active (running) since Wed 2018-08-08 13:51:22 EDT; 2min 13s ago
    Docs: https://docs.docker.com
Main PID: 6519 (dockerd)
  CGroup: /system.slice/docker.service
          |-6519 /usr/bin/dockerd -H fd://
          `-6529 docker-containerd --config /var/run/docker/containerd/containerd.toml

Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.192600502-04:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.192630873-04:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc42020f6a0, CONNECTING" module=grpc
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.192854891-04:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc42020f6a0, READY" module=grpc
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.192867421-04:00" level=info msg="Loading containers: start."
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.340349000-04:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.397715134-04:00" level=info msg="Loading containers: done."
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.424005987-04:00" level=info msg="Docker daemon" commit=0ffa825 graphdriver(s)=overlay2 version=18.06.0-ce
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.424168214-04:00" level=info msg="Daemon has completed initialization"
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.448805942-04:00" level=info msg="API listen on /var/run/docker.sock"
Aug 08 13:51:22 test.docker.com systemd[1]: Started Docker Application Container Engine.

Excellent! Good to go!

If Docker is not started automatically after the installation, run the following commands:

root@test:~# systemctl start docker.service
root@test:~# systemctl enable docker.service

Step 6 — Test Docker

Let’s check the new Docker build by downloading the hello-world test image.
To start testing, issue the following command:

root@test:~# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9db2ca6ccae0: Pull complete
Digest: sha256:4b8ff392a12ed9ea17784bd3c9a8b1fa3299cac44aca35a85c90c5e3c7afacdc
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
   (amd64)
3. The Docker daemon created a new container from that image which runs the
   executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
   to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/

Step 7 — The ‘Docker’ Command

With Docker installed and working, now is the time to become familiar with the command line utility. The ‘Docker’ command consists of using Docker with a chain of options followed by arguments. The syntax takes this form:

root@test:~# docker
Usage: docker [OPTIONS] COMMAND
A self-sufficient runtime for containers
Run 'docker COMMAND --help' for more information on a command.


To view all of the available Options and Management Commands, simply type:

docker

To view the switches available for a specific command, type:

docker <subcommand> --help

Lastly, To view system-wide information about Docker, use:

docker info

Docker is a dynamic, robust, and responsive tool that makes it very simple to run applications within a containerized environment. Containers are portable and less resource-intensive than full virtual machines because they share the host operating system’s kernel. Overall, Docker is a ‘must have’ tool and should be included in your toolkit for automation, deployment, and scaling of your applications!

Our Support Teams are filled with talented admins with an intimate knowledge of multiple web hosting technologies, especially those discussed in this article. If you are uncomfortable walking through the steps outlined here, we are a phone call, chat or ticket away from assisting you with this process. If you’re running one of our fully Managed Cloud VPS Servers, we can provide more information on directly implementing the software described in this article.