How to Install and Configure Puppet on CentOS, Fedora, Ubuntu or Opensuse

Reading Time: 4 minutes

What is Puppet?

Puppet: A Closer Look At Who Holds The Strings

Puppet is an intuitive configuration-management tool which provides a straightforward method to manage Linux and Windows server functions from a central master server. It can perform administrative work across a wide array of systems, driven primarily by a “manifest” file written for the group or type of server(s) being controlled.

System Requirements

Puppet uses a master/client setup to communicate between the master and the client servers, and the master will require more resources than the clients. The resources needed on the master server will mainly depend on:

  • The number of remote agents (servers) being utilized
  • How frequently those remote agents check in to the master server
  • How many resources are being managed on each remote agent
  • The complexity of the manifest files and modules in use
Note
A Puppet master server must run on a UNIX variant (a “*nix” operating system). Currently, Puppet masters CANNOT run in a Windows environment.
Master Hardware Requirements

The minimum hardware requirements for the Puppet master servers will be based on multiple factors as stated above and noted in Puppet’s guidelines.

 

Client Software Platforms

The Puppet-agent (or client) packages are available for these platforms:

 

Dependencies

If you are installing the Puppet client using an official distribution package via a repository, then your system’s package manager usually ensures that the proper dependencies are installed. If you install the agent on a platform without a supported package, you must manually install the dependent packages, libraries, and gems:

  • Ruby 2.5.x
  • CFPropertyList 2.2 or later
  • Facter 2.0 or later
  • The msgpack gem from MessagePack, if you’re using msgpack serialization

 

Timekeeping and Name Resolution

Before installing the client, there are certain network requirements you will need to prepare, review, and consider. The most important aspects include time synchronization and a plan for name resolution.

Timekeeping

You will want to make sure that a Network Time Protocol (NTP) service is in place so that the time stays in sync between the master server (which acts as the certificate authority) and the clients. This is recommended because odd certificate issues can develop if a server’s time drifts out of sync. A service like NTP (available on most servers) ensures accurate timekeeping and reduces the risk of errors like this occurring.
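As a quick sanity check, this hedged sketch installs an NTP service on the Debian/Ubuntu family and confirms the clock is synchronized (package names vary by distribution, and chrony is a common alternative):

root@master [~]# apt install ntp
root@master [~]# timedatectl status | grep -i ntp
NTP synchronized: yes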

Name resolution

The second part of this component is deciding on a repeatable naming convention. For example, using a master name like puppet.domain.com establishes continuity for this naming convention, keeps master communication predictable, and ensures that all future agents can reach the master. You can simplify this by utilizing a CNAME record (a name-forwarding DNS entry) to ensure the master is always reachable.
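To confirm that agents can resolve the master’s name, a quick hedged check with dig (puppet.domain.com and the output below are purely illustrative) might look like:

root@agent [~]# dig +short puppet.domain.com
master01.domain.com.
203.0.113.10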

 

Firewall Configuration

In a master/client setup, the master server must have port 8140 open to allow for incoming connections from the remote clients. You can use either of the following commands to check that the port is open and listening:

root@master [~]# netstat -tulpn | grep LISTEN |grep 8140
root@master [~]# lsof -i -P -n | grep LISTEN |grep 8140

If nothing is returned by the above commands, then you’ll need to open port 8140. To open the port in the UFW firewall, use the following command:

root@master:~# ufw allow 8140/tcp
Rules updated
Rules updated (v6)
root@master:~#
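On CentOS or Fedora systems running firewalld instead of UFW, a comparable (hedged) approach would be:

root@master [~]# firewall-cmd --permanent --add-port=8140/tcp
success
root@master [~]# firewall-cmd --reload
success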

 

Puppet Installation

A Puppet master typically uses approximately 2 GB of RAM by default. Plan on this amount plus any additional RAM needed to run the server’s OS itself; if you were planning on a 2 GB server, opt for one with 4 GB of RAM if it is going to serve as a Puppet master.

Puppet is available on multiple OS variants including:

  • Red Hat/CentOS/Fedora
  • Debian/Ubuntu
  • SUSE Linux Enterprise Server

The basic install steps across all of the above-mentioned operating systems are as follows:

Available Puppet Repositories

root@master [~]# wget https://apt.puppetlabs.com/puppet-release-bionic.deb
root@master [~]# dpkg -i puppet-release-bionic.deb
root@master [~]# apt update
root@master [~]# apt install puppetserver
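The commands above configure the repository on Ubuntu 18.04 (“bionic”). On RPM-based systems, a hedged equivalent would be the following; the exact release package name varies by Puppet version and OS release, so check yum.puppet.com for the right one:

root@master [~]# rpm -Uvh https://yum.puppet.com/puppet6-release-el-7.noarch.rpm
root@master [~]# yum install puppetserver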

Note
Our Fully Managed servers (cPanel or Plesk) wouldn’t be good options for Puppet implementation since additional repositories may conflict.

Install the Puppet Master’s Software

Red Hat/CentOS/Fedora

yum install puppetserver

Debian/Ubuntu

apt-get install puppetserver

SUSE/Opensuse

zypper install puppetserver

 

Start the Puppet Master Service

Red Hat/CentOS/Fedora

systemctl start puppetserver

Debian/Ubuntu

service puppetserver start

SUSE/Opensuse

systemctl start puppetserver

 

Install the Puppet Client’s Software

Yum:

yum install puppet-agent

Apt:

apt-get install puppet-agent

Zypper:

zypper install puppet-agent
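With the agent package installed, the client needs to know where the master is, and the master must sign the client’s certificate before the two can communicate. A hedged sketch of that exchange, assuming Puppet 6 (older releases used puppet cert list and puppet cert sign on the master) and a hypothetical client named agent01.domain.com:

# On the client: point the agent at the master and start it
puppet config set server puppet.domain.com --section main
systemctl enable --now puppet

# On the master: review and sign the pending certificate request
puppetserver ca list
puppetserver ca sign --certname agent01.domain.com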

 

Puppet Configuration

Puppet contains around 200 different configuration settings located within the puppet.conf file. For most servers, you will only need to adjust 20 or fewer of them, depending on your server’s setup. You can use the command below to set the needed values:

puppet config set <setting> <value> --section <section>

We’ve listed the five most requested settings to suit your specific needs:

  • dns_alt_names – This is a list of allowed hostnames acting as the Puppet master.
  • environment_timeout – This setting defaults to 0 and should be left untouched unless you have a particular reason to alter it. You can set it to unlimited to make master refreshes a part of your standard code-deployment process.
  • environmentpath – The environment path defines the locations where Puppet can find the directories for any unique environments.
  • basemodulepath – This is a list of directories that contains the Puppet modules used in various environments.
  • reports – Directs which report handlers, listed below, to use.
    • HTTP – Sends reports via HTTP or HTTPS as a POST request to the address defined in the reporturl setting.
    • Log – Sends reports to the local default log destination (usually syslog)
    • Store – Hosts will send a YAML dump of data to a local directory (defined by the reportdir setting in the puppet.conf)

The configuration reference provides a more comprehensive array of available options for modifying your server to suit your specific needs.
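For example, a hedged sketch of setting a few of the values above with puppet config set (the dns_alt_names value shown is purely illustrative):

puppet config set dns_alt_names 'puppet,puppet.domain.com' --section master
puppet config set environmentpath '/etc/puppetlabs/code/environments' --section main
puppet config set reports 'log,store' --section master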

 

More Information

Overall, Puppet is an attractive addition to your everyday toolset for managing and automating tedious tasks. Once it is installed and configured, it will maintain your day-to-day server tasks with ease. You may want to consult the Puppet documentation for more in-depth information on this topic, or consult the following resources for additional info.

 

How Can We Help?

If you would like more information on how this software can benefit your current setup, simply reach out to us via a phone call, chat or ticket, and one of our Most Helpful Humans in Hosting will follow up with you to advise on how best you can integrate this process into your existing infrastructure! We are looking forward to speaking with you!

 

 

How to Install Squid Proxy Server on Ubuntu 16.04

Reading Time: 6 minutes

A Squid Proxy Server is a feature-rich web server application that provides both reverse proxy services and caching options for websites. This produces a noticeable speed-up of sites and reduces load times for the content it serves.

Squid’s reverse proxy is a service that sits between the Internet and the web server (usually within a private network) and redirects inbound client requests to a server where data is stored for easier retrieval. If the caching (proxy) server does not have the cached data, it forwards the request on to the web server where the data is actually stored. This type of caching collects data and reproduces the original data values in a different location to provide easier access. A reverse proxy typically provides an additional layer of control to smooth the flow of inbound network traffic between your clients and the web server.

Squid can be used as a caching service for SSL requests as well as DNS lookups. It also supports multiple other caching protocols, such as ICP, HTCP, CARP, and WCCP. Squid is an excellent choice for many types of setups, as it provides very granular controls via numerous system tools, as well as a monitoring framework using SNMP, to provide a solid base for your caching needs.

When selecting a computer system for use as a dedicated Squid caching proxy server, many users ensure it is configured with a large amount of physical memory (RAM) as Squid maintains an in-memory cache for increased performance.

Installing Squid

Let’s start by ensuring our server is up to date:
[root@test ~]# apt-get update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB]
Hit:2 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Hit:3 http://ppa.launchpad.net/libreoffice/ppa/ubuntu xenial InRelease
Get:4 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]
Get:5 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB]
Fetched 325 kB in 0s (567 kB/s)
Reading package lists... Done

Next, at the terminal prompt, enter the following command to install the Squid server:
[root@test ~]# apt install squid
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
linux-headers-4.4.0-141 linux-headers-4.4.0-141-generic linux-image-4.4.0-141-generic
Use 'apt autoremove' to remove them.
The following additional packages will be installed:
libecap3 squid-common squid-langpack ssl-cert
Suggested packages:
squidclient squid-cgi squid-purge smbclient ufw winbindd openssl-blacklist
The following NEW packages will be installed:
libecap3 squid squid-common squid-langpack ssl-cert
0 upgraded, 5 newly installed, 0 to remove and 64 not upgraded.
Need to get 2,672 kB of archives.
After this operation, 10.9 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Fetched 2,672 kB in 0s (6,004 kB/s)
Preconfiguring packages ...
Selecting previously unselected package libecap3:amd64.
(Reading database ... 160684 files and directories currently installed.)
Preparing to unpack .../libecap3_1.0.1-3ubuntu3_amd64.deb ...
Unpacking libecap3:amd64 (1.0.1-3ubuntu3) ...
Selecting previously unselected package squid-langpack.
Preparing to unpack .../squid-langpack_20150704-1_all.deb ...
Unpacking squid-langpack (20150704-1) ...
Selecting previously unselected package squid-common.
Preparing to unpack .../squid-common_3.5.12-1ubuntu7.6_all.deb ...
Unpacking squid-common (3.5.12-1ubuntu7.6) ...
Selecting previously unselected package ssl-cert.
Preparing to unpack .../ssl-cert_1.0.37_all.deb ...
Unpacking ssl-cert (1.0.37) ...
Selecting previously unselected package squid.
Preparing to unpack .../squid_3.5.12-1ubuntu7.6_amd64.deb ...
Unpacking squid (3.5.12-1ubuntu7.6) ...
Processing triggers for libc-bin (2.23-0ubuntu10) ...
Processing triggers for systemd (229-4ubuntu21.16) ...
Processing triggers for ureadahead (0.100.0-19) ...
Setting up libecap3:amd64 (1.0.1-3ubuntu3) ...
Setting up squid-langpack (20150704-1) ...
Setting up squid-common (3.5.12-1ubuntu7.6) ...
Setting up ssl-cert (1.0.37) ...
Setting up squid (3.5.12-1ubuntu7.6) ...
Skipping profile in /etc/apparmor.d/disable: usr.sbin.squid
Processing triggers for libc-bin (2.23-0ubuntu10) ...
Processing triggers for systemd (229-4ubuntu21.16) ...
Processing triggers for ureadahead (0.100.0-19) ...

That’s it! The install is complete!

 

Configuring Squid

The default Squid configuration files are located in the /etc/squid/ directory, and the main configuration file is called “squid.conf”. This file contains the bulk of the configuration directives that can be modified to change the behavior of Squid. Lines that begin with a “#” are commented out and are not read by Squid; these comments explain what the related configuration settings mean.

Before editing the configuration file, let’s take a backup of the original, in case we need to revert any changes if something goes wrong, or to compare against the new configuration.

[root@test ~]# cp /etc/squid/squid.conf /etc/squid/squid.conf.bak

 

Change Squid’s Default Listening Port

Next, note that the Squid proxy server’s default port is 3128. You can modify this setting to suit your needs should you wish to change the port for a specific reason or necessity. To change the default Squid port, we will need to edit the Squid configuration file and change the “http_port” value (on line 1599) to a new port number.

[root@test ~]# vim /etc/squid/squid.conf
http_port 2946

(Keep the file open for now…)

 

Change Squid’s Default HTTP Access Port

Next, to allow external access to the HTTP proxy server from all IP addresses, we need to edit the “http_access” directives. By default, the HTTP proxy server will not allow access to anyone at all unless we explicitly allow it!

Caution!:
Multiple settings mention http_access. We want to modify the last entry.

1164 # Deny requests to certain unsafe ports
1165 http_access deny !Safe_ports
...
1167 # Deny CONNECT to other than secure SSL ports
1168 http_access deny CONNECT !SSL_ports
...
1170 # Only allow cachemgr access from localhost
1171 http_access allow localhost manager
1172 http_access deny manager
...
1186 #http_access allow localnet
1187 http_access allow localhost
...
1189 # And finally deny all other access to this proxy
1190 http_access deny all
# > change to “allow all” <

Now, let’s save and close the configuration file using vim’s :wq command.

 

Define the Default NIC Card Squid Listens On

If you would like Squid to listen on a specific NIC (in a server with multiple NIC cards), you can update the configuration file with the NIC’s IP address that Squid will listen on.

For example, we can change it to an internal IP of 10.1.1.5:3128
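In squid.conf, that hedged one-line change would look like:

http_port 10.1.1.5:3128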

 

Define Who Can Access the Proxy Server

Next, we’ll set up who is allowed access to our Squid proxy. Locate the http_access section (which should begin around line 1860) and uncomment the following two lines:

#acl our_networks src 10.1.1.0/16 10.1.2.0/16
#http_access allow our_networks
-- VVV change to VVV --
acl our_networks src 10.1.1.0/16 10.1.2.0/16
http_access allow our_networks

You will need to modify the IP ranges (10.1.1.0/16 10.1.2.0/16) to match the internal IP ranges your network actually uses, including any additional subnets you have. (Netmasks are further explained here.)

 

Define the Hours that are Available to Access the Proxy

You can literally control the hours of access to the proxy server! The ACL section starts about line 673:
671 # none
672
673 #  TAG: acl
674 #  Defining an Access List
675 #
To set this up, let’s add this info to the bottom of the ACL section of the /etc/squid/squid.conf file:

acl liquidweb src 10.1.10.0/24
acl liquidweb_hours time MTWHF 9:00-17:00

Granted, this is an example using liquidweb as the business name, but you can use any name. Note that each acl name can only be tied to one ACL type, so the time restriction gets its own name here, and Squid’s single-letter day codes run M, T, W, H, F for Monday through Friday (H is Thursday).
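ACL definitions alone don’t change behavior; they only take effect once referenced by an http_access rule. A hedged example pairing the two ACLs above (place it above the final deny rule so it is evaluated first):

http_access allow liquidweb liquidweb_hours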

 

Other ACL options include:

***** ACL TYPES AVAILABLE *****
711 #
712 #       acl aclname src ip-address/mask ...     # clients IP address [fast]
713 #       acl aclname src addr1-addr2/mask ...    # range of addresses [fast]
714 #       acl aclname dst [-n] ip-address/mask ...        # URL host's IP address [slow]
715 #       acl aclname localip ip-address/mask ... # IP address the client connected to [fast]
717 #       acl aclname arp      mac-address ... (xx:xx:xx:xx:xx:xx notation)
...
730 #       acl aclname srcdomain   .foo.com ...
731 #         # reverse lookup, from client IP [slow]
732 #       acl aclname dstdomain [-n] .foo.com ...
733 #         # Destination server from URL [fast]
734 #       acl aclname srcdom_regex [-i] \.foo\.com ...
735 #         # regex matching client name [slow]
736 #       acl aclname dstdom_regex [-n] [-i] \.foo\.com ...

… (all the way down to line 989)

968 # Example rule allowing access from your local networks.
969 # Adapt to list your (internal) IP networks from where browsing
970 # should be allowed
971 #acl localnet src 10.0.0.0/8    # RFC1918 possible internal network
972 #acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
973 #acl localnet src 192.168.0.0/16        # RFC1918 possible internal network
974 #acl localnet src fc00::/7       # RFC 4193 local private network range
975 #acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
976
977 acl SSL_ports port 443
978 acl Safe_ports port 80          # http
979 acl Safe_ports port 21          # ftp
980 acl Safe_ports port 443         # https
981 acl Safe_ports port 70          # gopher
982 acl Safe_ports port 210         # wais
983 acl Safe_ports port 1025-65535  # unregistered ports
984 acl Safe_ports port 280         # http-mgmt
985 acl Safe_ports port 488         # gss-http
986 acl Safe_ports port 591         # filemaker
987 acl Safe_ports port 777         # multiling http
988 acl CONNECT method CONNECT
989

All Squid Configuration Options

A full accounting of Squid’s available configurations can be found here:

(Make sure you plan to set aside some time, because there is a lot of info there.)

Restart Squid

After making those changes, let’s restart the Squid service to reload the configuration file.

[root@test ~]# systemctl restart squid.service

 

Other Important File Locations for Squid
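On a default Ubuntu install, the key Squid file locations typically include (a hedged summary, as paths can vary by distribution):

  • /etc/squid/squid.conf – the main configuration file
  • /var/log/squid/access.log – the client request log
  • /var/log/squid/cache.log – Squid’s service and diagnostic log
  • /var/spool/squid/ – the on-disk cache directory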

 

Even More Information about Squid

How Can We Help?

Our Most Helpful Humans in Hosting can provide clarity and further information about Squid and how it can be utilized in your specific environment. Our Support team contains many talented individuals with intimate knowledge of web hosting technologies, especially those discussed in this article. If you are uncomfortable walking through the steps outlined here, we are just a phone call, chat or ticket away from walking you through the process. Let us assist you today!

 

 

How Do I Secure My Linux Server?

Reading Time: 6 minutes

Our last article on Ubuntu security suggestions touched on the importance of passwords, user roles, console security, and firewalls. We continue that series here; while the recommendations below are not unique to Ubuntu specifically (nearly all are considered best practice for any Linux server), they should be an important consideration in securing your server.

Best Practices for Security on Your New Ubuntu Server: AppArmor, Certs, eCryptfs, and Encrypted LVM

Reading Time: 5 minutes

When security is paramount to your business, a few security implementations can go a long way. In the first article of our Ubuntu security series, you’ll find effective tactics that can be easily enforced.

We continue with our second article in our Ubuntu Security series, where we will be suggesting further options to consider.

 

What is AppArmor?

AppArmor is a Linux kernel security module which allows an administrator to implement program based restrictions (as opposed to user based restrictions) to limit resources and control access. It is installed and loaded by default and uses a profile of a program to determine what access and/or permissions it requires. To install the apparmor-profiles package from a terminal prompt:

apt install apparmor-profiles

AppArmor profiles have two modes: enforcement and complain.

  • Enforcement mode: This profile enforces the policy defined in the profile and will report any policy violation attempts (either in the syslog or in auditd).
  • Complain mode: Profiles in this mode will not enforce policy, but instead simply report policy violation attempts. This mode is mainly used for testing and development.
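To see which profiles are loaded and in which mode, and to flip a profile between the two modes, the apparmor-utils package provides helper commands. A hedged sketch (the tcpdump profile is just an example shipped with apparmor-profiles):

apt install apparmor-utils
aa-status                        # list loaded profiles and their current modes
aa-complain /usr/sbin/tcpdump    # put the profile for a program into complain mode
aa-enforce /usr/sbin/tcpdump     # return it to enforcement mode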

 

What are SSL Certificates?

When a web browser connects to a website, the HTTP protocol is used to communicate with the web server where the site is hosted. Typically, this transmission of data is unguarded, meaning the data can be viewed by any interested third party. As you can imagine, if you’re sending any important personal or credit card information, having it out in the open is not ideal or secure at all. Because of this, SSL/TLS certificates are used on the server to ensure the communication between the browser and the server is secure.

According to Wikipedia:

Transport Layer Security (TLS), and its now-deprecated predecessor, Secure Sockets Layer (SSL), are cryptographic protocols designed to provide communications security over a computer network.

To clarify, SSL certificates provide a method to protect communications between a user and a server, allowing private info to pass safely between the client and the website. The TLS protocol versions in use include:

  • TLS v1.0
  • TLS v1.1
  • TLS v1.2
  • TLS v1.3

What is TLS?

TLS, or Transport Layer Security, is the newer protocol, commonly provided via OpenSSL, which offers improved security and stability over the older SSL protocols. TLS is usually implemented on or alongside SSL via Nginx, Apache, Exim, or other services on the server.

Ubuntu provides an easy way to implement these types of SSL/TLS security measures. If more info is needed, O’Reilly’s Network Security with OpenSSL is a very good in-depth reference.

 

What is eCryptfs?

eCryptfs is software which encrypts a file, folder, or partition to secure its contents. It sounds a lot more complicated than it is; although the functionality can get deep, the usual focus is on creating a protected space which cannot be read unless specifically allowed by an admin.

The basic steps to use eCryptfs are:

  1. Mount an encrypted directory
  2. Add info to that directory
  3. Unmount the directory
  4. Profit! Data is secure…!

eCryptfs can be found in the standard Ubuntu repositories and can be installed with apt-get:

apt-get -y install ecryptfs-utils

Next, we will create a directory called brobdingnagian (…as in gigantic, pulled from the name of a country in Jonathan Swift’s Gulliver’s Travels called Brobdingnag):

root@server:/home/david# mkdir /home/brobdingnagian

Afterward, let’s encrypt the directory /home/brobdingnagian/ by mounting it with the file system type eCryptfs:

mount -t ecryptfs /home/brobdingnagian /home/brobdingnagian

Passphrase:
Select cipher:
1) aes: blocksize = 16; min keysize = 16; max keysize = 32
2) blowfish: blocksize = 8; min keysize = 16; max keysize = 56
3) des3_ede: blocksize = 8; min keysize = 24; max keysize = 24 <<<<<
4) twofish: blocksize = 16; min keysize = 16; max keysize = 32
5) cast6: blocksize = 16; min keysize = 16; max keysize = 32
6) cast5: blocksize = 8; min keysize = 5; max keysize = 16
Selection [aes]: 3
Select key bytes:
1) 24
Selection [24]: 1
Enable plaintext passthrough (y/n) [n]: <-- Press ENTER
Enable filename encryption (y/n) [n]: <-- Press ENTER
Attempting to mount with the following options:
ecryptfs_unlink_sigs
ecryptfs_key_bytes=24
ecryptfs_cipher=des3_ede
ecryptfs_sig=daab07b0664284a2
WARNING: Based on the contents of [/root/.ecryptfs/sig-cache.txt],
it looks like you have never mounted with this key
before. This could mean that you have typed your
passphrase wrong.
Would you like to proceed with the mount (yes/no)? : yes
Would you like to append sig [daab07b0664284a2] to [/root/.ecryptfs/sig-cache.txt] in order to avoid this warning in the future (yes/no)? : yes
Successfully appended new sig to user sig cache file
Mounted eCryptfs

Verify the secure folder is mounted.

mount |grep brobdingnagian
/home/brobdingnagian on /home/brobdingnagian type ecryptfs (rw,relatime,ecryptfs_sig=daab07b0664284a2,ecryptfs_cipher=des3_ede,ecryptfs_key_bytes=24,ecryptfs_unlink_sigs)
Add a file to the folder:

cp /home/david/Desktop/rsync.content.txt /home/brobdingnagian/rsync.content.txt

Is the file actually in there? Yes!

ls /home/brobdingnagian/rsync.content.txt
/home/brobdingnagian/rsync.content.txt

Did it retain the content of the file?
cat /home/brobdingnagian/rsync.content.txt
Lorem ipsum dolor sit amet, nostro everti pri ad, eam saepe nemore in, id maiorum interesset vim. Ex voluptatum necessitatibus sit, augue aeterno vituperatoribus no mel. Ut possim percipitur definitionem qui, graeco corpora efficiantur id sit. Ex has nisl virtute eloquentiam. Pri cu veritus recusabo indoctum. Ut invenire referrentur pri, voluptaria sententiae vix at. In mel oblique imperdiet definiebas, salutandi constituam sadipscing at pri, ut eleifend cotidieque eam.
Yes!

Now, let’s unmount the folder…

umount /home/brobdingnagian/
umount: /home/brobdingnagian/: not mounted.
Again, let’s check to see if we can read the file:

cat /home/brobdingnagian/rsync.content.txt
'1'C5'N''%"3DUfw`A_''&'?`'='''/'g'!>'_CONSOLEګ'fB'''''v'X''\''1''>AL]C'3L'''G''9x''~''3''''''e?'~'''H'''''~''')'x'Oj'\'_<''''''l╹'`c'q''''Z'''8+!o''vb'''&'mw','''<^<'GF&''''''I?'''!'>{%;'T'--'''''''i'Z''"J_B['''Xa''9'"'''''ă'm,l4=';}'''iUsG'&R;'*'4Fi''>'7"1j4'X''~'''a''_ۘ8,'o&A'_F'''_Cư<#v''s'b'}''5''1''d'c7'''ayZ6

(…and so on for another hundred lines)

So, the file is encrypted and unreadable while the folder is unmounted. The only way to review the encrypted file again is to remount the folder, which is accomplished in the same way as above. You will have to re-enter the passphrase you created when you first encrypted the folder, or the re-mount will fail with an error like:

Could not unlink the key(s) from your keyring. Please use `keyctl unlink` if you wish to remove the key(s). Proceeding with umount.

Once you re-mount the folder, your data will be available again.

Since we’re talking about encryption, let’s discuss LVM for a minute.

 

What is LVM?

LVM stands for Logical Volume Management. The Ubuntu documentation describes LVM as

a system of managing logical volumes, or filesystems, that is much more advanced and flexible than the traditional method of partitioning a disk into one or more segments and formatting that partition with a filesystem.

All of the versions of Ubuntu from 12.10 forward include the ability to install Ubuntu onto an encrypted LVM, which allows all partitions in the logical volume, including swap, to be encrypted.

Some of the perks of LVM are:

  • You can create as many LVM’s as you want
  • Operations on the partition can be done live
  • You can move, expand and shrink partitions as needed
  • You also have the ability to freeze an existing Logical Volume in time, at any moment, even while the system is running. You can then continue to work from the original volume as you would normally, but the snapshot will remain as a static image of the original, frozen in time at the moment it was created. (How cool is that?!)

See the Ubuntu wiki for more info on how to leverage these features.
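As a hedged sketch of that snapshot feature (vg0 and mydata are hypothetical volume names; adjust them to your own volume group and logical volume):

# Create a 1 GB snapshot of an existing logical volume
lvcreate --size 1G --snapshot --name mysnap /dev/vg0/mydata
# List logical volumes, including the new snapshot
lvs
# Remove the snapshot once it is no longer needed
lvremove /dev/vg0/mysnap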

This concludes our second article in our Ubuntu security series; in our next and last article we will introduce other security options related to SSH keys, SELinux, 2-Factor Auth, and IPv6. If configuring these options is outside your wheelhouse, consider adding our Server Protection Package to your Linux environment. It comes at an affordable price with routine vulnerability scans, hardened server configurations, anti-virus, and malware remediation. Keep yourself ahead of the game so you can get back to focusing on growing your business.

Best Practices for Security on Your New Ubuntu Server: Users, Console and Firewall

Reading Time: 4 minutes

Thank you for taking the time to review this important information. You will find this guide broken down into six major sections that coincide with Ubuntu’s security policy guide. The major topics we talk on throughout these articles are as follows:

User Management

User management is one of the most important aspects of any security plan. Balancing your users’ access requirements and everyday needs against the overall security of the server demands a clear view of those goals, to ensure users have the tools they need to get the job done while protecting the other users’ privacy and confidentiality. We have three types or levels of user access:

  1. Root: This is the main administrator of the server. The root account has full access to everything on the server. The root user can lock down or loosen user roles, set file permissions and ownership, limit folder access, install and remove services or applications, repartition drives, and essentially modify any area of the server’s infrastructure. The phrase “with great power comes great responsibility” comes to mind in reference to the root user.
  2. A sudoer (user): This is a user who has been granted special access to a Linux application called sudo. The “sudoer” user has elevated rights to run a function or program as another user, and is included in a specific user group called the sudo group. The rules this user has access to are defined within the sudoers file (edited with the visudo command), which defines and limits their access and can initially be modified only by the root user.
  3. A user: This is a regular user who has been set up using the adduser command, given access to and, who owns the files and folders within the user /home/user/ directory as defined by the basic settings in the /etc/skel/.profile file.

Linux can provide an extreme level of granularity when defining user security levels. This allows the server’s administrator (the root user) to outline and delineate as many roles and user types as needed to meet the requirements set forth by the server owner and its assigned tasks.

 

Enforce Strong Passwords

Because passwords are one of the mainstays in the user’s security arsenal, enforcing strong passwords is a must. To enforce password complexity, we can modify the file responsible for this setting, located here: /etc/pam.d/common-password, by adding a pam_cracklib rule such as the following (a hedged example; the module path and required package differ between distributions):

password required pam_cracklib.so retry=3 minlen=8 lcredit=-1 ucredit=-2 dcredit=-2 ocredit=-1

To enforce regular password changes, we can use the ‘chage’ command:

chage -M 90 username

This command states that the user’s password must be changed every 90 days.

 

Restrict Use of Old Passwords

Open the ‘/etc/pam.d/common-password‘ file under Ubuntu/Debian/Linux Mint:

vi /etc/pam.d/common-password

Add the following line to the ‘auth‘ section:

auth        sufficient  pam_unix.so likeauth nullok

Add the following line to the ‘password‘ section to disallow a user from re-using the last five of his or her passwords:

password sufficient pam_unix.so nullok use_authtok md5 shadow remember=5

Only the last five passwords are remembered by the server. If a user tries to re-use any of the five old passwords, they will get an error like:

Password has been already used. Choose another.

 

Checking Accounts for Empty Passwords

Any account with an empty password is open to unauthorized access by anyone on the web, and checking for such accounts is part of securing a Linux server. You must make sure all accounts have strong passwords and that no one has unauthorized access. Empty-password accounts are security risks that can be easily exploited. To check whether there are any accounts with an empty password, use the following command:

cat /etc/shadow | awk -F: '($2==""){print $1}'
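If the command prints a username, you can lock that account until a proper password is set. For example, with a hypothetical user name:

passwd -l username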

 

What is Console Security?

Console security simply means that limiting access to the physical server itself is key to ensuring that only those with the proper access can reach the server. Anyone with physical access to the server can gain entry, reboot it, remove hard drives, disconnect cables, or even power it down! To curtail malicious actors with harmful intent, make sure that servers are kept in a secure location. Another step we can take is to disable the Ctrl+Alt+Delete reboot function. To accomplish this, run the following commands:

systemctl mask ctrl-alt-del.target
systemctl daemon-reload
This forces attackers to take more drastic measures to access the server and also limits accidental reboots.

What is UFW?

Ubuntu provides a default firewall frontend called UFW (Uncomplicated Firewall). UFW is simply a front end for a program called iptables, which is the actual firewall itself; UFW provides an easy means to set up and design the needed protection. This is another line of defense to keep unwanted or malicious traffic from actually breaching the internal processes of the server.
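A minimal, hedged UFW setup might look like this (allow SSH before enabling the firewall so you don’t lock yourself out; the OpenSSH application profile assumes the openssh-server package is installed, otherwise ufw allow 22/tcp works as well):

ufw allow OpenSSH     # keep your SSH session reachable
ufw enable            # turn the firewall on
ufw status verbose    # confirm the active rules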

 

Firewall Logs

The firewall log is a file that records connection attempts and other traffic to the server. Monitoring these logs for unusual activity and/or malicious attempts to access the server will aid in securing it.

When using UFW, you can enable logging by entering the following command in a terminal:

ufw logging on

To disable logging, simply run the following command:

ufw logging off

To learn more about firewalls, visit our Knowledge Base articles.

We’ve covered the importance of passwords, user roles, console security and firewalls all of which are imperative to protecting your Linux server. Let’s continue onto the next article where we’ll cover AppArmor, certificates, eCryptfs and Encrypted LVM.

 

Installing Redis on Ubuntu 16.04/18.04

Reading Time: 4 minutes

What is Redis? 

Redis, or “REmote DIctionary Server”, is an open source “key-value” database storage medium, additionally known as a data structure server. At its heart, Redis works with key-value pairs and stores data in a location that’s easily referenceable by two specific values. These key-value associations are a set of two linked data entries made up of a key, which is a unique identifier for a piece of data, and the value, which is either the data itself or a pointer to the location of that data.

Redis has five main data types it can utilize:

  • Strings – Strings are the most basic value in Redis. They can contain any kind of data up to 512 MB in size, including JPEGs or other blob-like objects.
  • Lists – Lists are exactly as the name implies; simply lists of strings, sorted by the order in which they are applied
  • Sets – Sets are simply a group of unordered strings
  • Sorted Sets – Sorted Sets are akin to regular sets. The main difference is that sorted-set items are associated with, and sorted by, a weighted score field. This allows priority items to be set when entering data into the sorted set
  • Hashes – Hashes map the string fields and values themselves. They are capable of defining multiple elements and can store more than 4 billion field-value pairs

Redis holds the database entries entirely in memory, and will only use the hard disk for persistent storage. These key-value pair values are often used in hash tables, lookup tables and configuration files. Redis can accept key-values for a wide variety of formats so operations can be run on the server with a reduced server workload. Redis can also replicate data to any number of slave servers which makes it a prime candidate for large database replication setups.

 

What Are the Advantages of Redis?

  1. Redis is extremely fast − Redis can perform hundreds of thousands of (set, get) commands per second.
  2. It supports well-known data types − As noted above, Redis supports most of the data types normally used by developers, such as strings, lists, sets, sorted sets, and hashes.
  3. Operations are protected (or atomic) which means:
    1. All operations in a transaction are chronological and executed in sequence
    2. All operations in a transaction are performed as a single unit of work which limits interference from other operations
  4. Multifunction database − Redis is a multifunction, NoSQL database that can be used in a wide variety of use cases, including caching, large datasets, full-text searches, Spark data processing, or any other short-lived data manipulation.

All of these options place Redis firmly in the middle of the NoSQL ecosystem.

 

What is NoSQL?

NoSQL is a type of database design that takes into consideration a wide group of data models, including key-value, document, columnar and graph formats.

NoSQL stands for “not only SQL” and is an alternative to the more traditional relational databases like MySQL, in which data is laid out in tables and the data schema is carefully constructed before the actual database is created. NoSQL databases are especially useful for working with very large distributed datasets.

A quick breakdown of how NoSQL stacks up against other database schemes:

 

Install Redis on Ubuntu

To install Redis on Ubuntu, SSH into your server and, once at the command prompt, type the following commands. This will install Redis on your server.

apt-get update

apt-get install redis-server

 

Start Redis

redis-server

Next, let’s ensure Redis starts at boot:

systemctl enable redis-server.service

Also, let’s set one of the main memory variables in the Redis config (this value will depend on your server’s available memory):

vim /etc/redis/redis.conf

maxmemory 256mb

maxmemory-policy allkeys-lru

Finally, let’s restart Redis to ensure the values are retained:

systemctl restart redis-server.service

 

Check If Redis is Active

Run the following command at the server’s command prompt:

redis-cli

This will open a Redis prompt.

redis 10.0.0.1:6379

After running the above command, your server’s IP address (10.0.0.1) and the port Redis is running on (6379) will be shown.

Now type in the following command at the Redis prompt:

redis 10.0.0.1:6379> ping
PONG
“PONG” shows that Redis is successfully installed on your machine.
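As a quick, hedged smoke test of the key-value basics described earlier (the key name greeting is arbitrary):

redis 10.0.0.1:6379> SET greeting "Hello from Redis"
OK
redis 10.0.0.1:6379> GET greeting
"Hello from Redis"
redis 10.0.0.1:6379> DEL greeting
(integer) 1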

 

Install Redis via Source

To install Redis manually via source, simply SSH into your server and run the following command:

wget http://download.redis.io/redis-stable.tar.gz && tar xvzf redis-stable.tar.gz && cd redis-stable && make && make install

The Redis configuration file will be in the current install directory. Let’s copy it to a better location:

mkdir /etc/redis
cp redis.conf /etc/redis/

Now, let’s start Redis:

redis-server /etc/redis/redis.conf &
redis-cli ping
PONG

Overall, if you need a fast, robust, and highly scalable NoSQL solution for use with your application or as a project adjunct, Redis can meet your needs! Try it out on one of our Private Cloud product offerings or one of our stable, reliable Dedicated servers!

 

Meetups and Contacts for Redis

We’d like to send a shout out to the people over at https://redislabs.com/ who have provided some of the best and most excellent support over the years, awesome job!

For enterprise support, contact:
Blake Lipps - midwest Redis account rep/consultant
Drake Albee - west coast Redis consultant

For individual support, see the Redis community pages. The areas in which you can find active support or interact with the Redis community are noted here:

  • The HQ of the Redis community is the Redis subreddit on Reddit. You can use that community to ask for help, post ideas for new features, link to articles of interest for the Redis community, and/or have other questions answered
  • Join the mailing list by subscribing via email
  • Meet up in the #redis channel on Freenode (web access link)
  • Check the Redis tag on Stack Overflow
  • Follow Redis news feed on Twitter

If you happen to live in one of the larger cities listed below, there are local Redis meetup groups as well:

 

Windows Firewall Basics

Reading Time: 6 minutes

A firewall is a program installed on your computer or a piece of hardware that uses a rule set to block or allow access to a computer, server or network. It separates your internal network from the external network (the Internet).

Firewalls can permit traffic to be routed through a specific port to a program or destination while blocking other malicious traffic. A firewall can be hardware, software, or a blend of both.


Install the LAMP Stack Using Tasksel on Ubuntu 16.04

Reading Time: 4 minutes

There are multiple ways of installing software on Debian-based systems like Ubuntu and Mint. Tools like apt, apt-get, aptitude, and/or synaptic are usually used to install single applications on the desktop editions of those operating systems. Alternatively, Tasksel is a command line app for installing a “group” of related packages onto a server. Tasksel is not installed by default on the desktop editions that ship the above-mentioned package managers, but it is installed on later versions of the Debian and Ubuntu server editions.

How Does Tasksel Work?


Kubernetes RBAC Authorization

Reading Time: 4 minutes

What is RBAC?

Kubernetes Role-Based Access Control (RBAC) describes how we define different permission levels for unique, validated users or groups in a cluster. It uses granular permission sets defined within .yaml files to allow access to specific resources and operations.

Starting with Kubernetes 1.6, RBAC is enabled by default, and users start with no permissions; permissions must be explicitly granted by an admin for a specific service or resource. These policies are crucial for effectively securing your cluster. They permit us to specify what types of actions are allowed, depending on the user’s role and their function within the organization.

Prerequisites for using Role-Based Access Control

To take advantage of RBAC, you must first grant your user the ability to create roles by running the following command (substitute your actual account for the illustrative <user-account> placeholder):

root@test:~# kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin --user <user-account>

Afterwards, to start a cluster with RBAC enabled, we would launch the API server with the flag:

--authorization-mode=RBAC

 

The RBAC Model

Basically, the RBAC model is based on three components: Roles, ClusterRoles, and Subjects. All k8s clusters create a default set of ClusterRoles, representing common divisions that users can be placed within.

The “edit” role lets users perform basic actions like deploying pods.
The “view” role lets users review specific resources that are non-sensitive.
The “admin” role lets a user manage a namespace.
The “cluster-admin” role allows a user to administer a cluster.

Roles

A Role consists of rules that define a set of permissions for a resource type. Because there are no deny rules, a Role can only be used to add access to resources within a single namespace (virtual cluster). An example would look something like this:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: testdev
  name: dev1
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

In this case, the dev1 role grants whoever is bound to it the “get”, “watch”, and “list” verbs for pods in the “testdev” namespace.

ClusterRoles

A ClusterRole can be used to grant the same permissions as a Role but, because they are cluster-scoped, they can also be used to grant wider access to:

  • cluster-scoped resources (like nodes)
  • non-resource endpoints (like a folder named “/test”)
  • namespaced resources (like pods) in and across all namespaces (to list those, we would run kubectl get pods --all-namespaces)

It contains rules that define a set of permissions across resources like nodes or pods.
An example would look something like this:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]

The generic command to create a ClusterRole looks like this:

root@test:~# kubectl create clusterrole [Options]

A command line example to create a ClusterRole named “pod” that allows a user to perform “get”, “watch”, and “list” on a pod would be:

root@test:~# kubectl create clusterrole pod --verb=get,list,watch --resource=pod

 

RoleBinding

A RoleBinding is a set of configuration rules that binds a role to subjects (subjects are simply users, a group of users, or service accounts). Those permissions can be set for a single namespace (virtual cluster) with a RoleBinding, or cluster-wide with a ClusterRoleBinding.

Let’s allow the group “devops1” the ability to modify the resources in the “testdev” namespace:

root@test:~# kubectl create rolebinding devops1 --clusterrole=edit --group=devops1 --namespace=testdev
rolebinding "devops1" created

Because we used a RoleBinding, these functions only apply within the RoleBinding’s namespace. In this case, a user within the “devops1” group can view resources in the “testdev” namespace but not in a different namespace.

ClusterRoleBindings

A ClusterRoleBinding grants the permissions defined in a ClusterRole to a set of subjects cluster-wide. Because a “role” contains rules that represent a set of permissions, a ClusterRoleBinding extends those permissions to:

  • cluster-scoped resources, like nodes
  • namespaced resources in all namespaces in a cluster
  • non-resource endpoints, like ”/foldernames”

A ClusterRoleBinding (for example, one that references the default admin, edit, or view ClusterRoles) can be created using the command:

root@test:~# kubectl create clusterrolebinding [options]

An example of creating a ClusterRoleBinding for user1, user2, and group1 using the cluster-admin ClusterRole:

root@test:~# kubectl create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1

 

ConfigMaps

What is a configMap?

A configMap is a resource that easily attaches configuration data to a pod. The information that is stored in a ConfigMap is used to separate config data from other content to keep images/pods portable.

How Do I Create A ConfigMap?

To create a configmap, we simply use the command:

kubectl create configmap <map-name> <data-source>

Alternatively, we can define the ConfigMap in a manifest file (default.yaml here) and create it from that:

kubectl create -f default.yaml
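As a hedged illustration (the map name app-config and the key LOG_LEVEL are arbitrary), a ConfigMap can be created from a literal value and then inspected:

kubectl create configmap app-config --from-literal=LOG_LEVEL=debug
kubectl get configmap app-config -o yaml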

Basic RBAC Commands For Kubectl

Note
These commands will give errors if RBAC isn’t configured correctly.

kubectl get roles
kubectl get rolebindings
kubectl get clusterroles
kubectl get clusterrolebindings
kubectl get clusterrole system:node -o yaml

 

Namespaces

Namespaces are used to define, separate, and identify a cluster of resources among a large number of users or spaces. You should only use namespaces when you have a very diverse set of clusters, locations, or users; they suit settings where companies have multiple users or teams spread across various projects, locations, or departments. A namespace also provides a way to prevent naming conflicts across a wide array of clusters.
Inside a namespace, an object can be identified by a name as short as ‘cluster1’ or as complex as ‘US.MI.LAN.DC2.S1.R13.K5.ProdCluster1’, but there can only be a single ‘cluster1’ or ‘US.MI.LAN.DC2.S1.R13.K5.ProdCluster1’ inside that namespace. So, the names of resources must be unique within a namespace, but not across namespaces: several different namespaces can each contain their own ‘cluster1’ object.

You can get a list of namespaces in a cluster by using this command:

root@test:~# kubectl get namespaces
NAME STATUS AGE
cluster2dev Active 1d
cluster2prod Active 4d
cluster3dev Active 2w
cluster3prod Active 4d

Kubernetes always starts with three basic namespaces:

  • default: This is the default namespace for objects that have no specifically identified namespace (eg. the big barrel o’ fun).
  • kube-system: This is the default namespace for objects generated by the Kubernetes system itself.
  • kube-public: This namespace is created automatically and is world readable by everyone. It is primarily reserved for cluster usage, in case some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.

Finally, the essential concept of role-based access control (RBAC) is to ensure that users who require specific access to a resource can be assigned the Roles, ClusterRoles, and ClusterRoleBindings they need. The granularity of these permission sets allows for increased security, easier security-policy modification, simplified security auditing, and increased productivity (RBAC cuts down on onboarding time for new employees). Lastly, RBAC allows for cost reduction by removing unneeded applications and trimming licensing costs for rarely used applications. All in all, RBAC is a needed addition to secure your Kubernetes infrastructure.

Ready to Learn More?

Reach out to our Solutions team via chat to decide if a Private Cloud service from Liquid Web will meet your Kubernetes needs!

How to Use Ansible

Reading Time: 8 minutes

Ansible is an easy-to-use automation tool that can update a server, configure tasks, manage daily server functions, and deploy jobs as needed on a schedule of your choosing. It is usually administered from a single location or control server and uses SSH to connect to the remote servers. Because it employs SSH to connect, it is very secure, and there is no software to install on the servers being managed. It can be run from your desktop, laptop, or other platforms to assist with automating the tedious tasks which every server owner faces.
