Top 4 Lessons Learned Using Ubuntu

Reading Time: 5 minutes

When choosing a server operating system, there are a number of factors to weigh. Ubuntu, an often discussed and referenced OS, is a popular choice and offers great functionality backed by a vibrant and helpful community. However, if you’re unfamiliar with Ubuntu and have not worked with either the server or desktop versions, you may find that common tasks and functionality differ from the operating systems you’ve worked with before. I’ve been a system administrator running my own servers for a number of years, almost all of them Ubuntu. Here are the top four lessons I’ve learned while running Ubuntu on my server.

  1. Understanding the “server” vs. “desktop” model
  2. Deciphering the naming scheme/release schedule
  3. Figuring out the package manager
  4. Networking considerations on Ubuntu

 

Understanding “Server” vs. “Desktop” Model

Ubuntu comes in a number of images and released versions. One of the first choices to make is whether you want the “desktop” or “server” release. When running an application or project on a server, the “server” image is the one to pick. You may be wondering what sets them apart, and it comes down to two main differences:

  • The “desktop” image has a graphical user interface (GUI), whereas the “server” image does not.
  • The “desktop” image has default applications more suited for a user’s workstation, whereas the “server” image has default applications more suited for a web server.

It is really that simple. Currently, the majority of the differences relate to the default applications offered when installing Ubuntu and whether or not you need a GUI. When I first started using Ubuntu, there were a number of differences between the two (such as a slightly tweaked Linux kernel on the “server” image designed to perform better on server-grade hardware), but today the decision process is much simpler.

The choice comes down to saving yourself time by adding or removing the applications (also called “packages”) that you do or don’t need. At Liquid Web we offer “server” images, since they are best suited to running a web server. You do have a choice of Ubuntu version, which brings me to my next lesson learned:

 

Deciphering the Naming Scheme/Release Schedule

Both Ubuntu and CentOS are based on what is called an “upstream” provider: another operating system that CentOS/Ubuntu tweak and re-publish. For Ubuntu, this upstream provider is Debian, whereas CentOS is based on Red Hat (which is in turn based on Fedora). At first it was confusing to work out what the names of Ubuntu releases meant: “Vivid Vervet”, “Xenial Xerus”, “Zesty Zapus”. Each is just a cute codename the developers give a particular version until it is released.

The company that oversees Ubuntu, Canonical, has a regular, set release schedule. This is a distinct difference from CentOS, which relies on Red Hat, which in turn relies on Fedora. Every six months a release of Ubuntu is published, and every fourth release (or once every two years) a “long-term support” or LTS release is published. The numbering they use for releases is straightforward: YY.MM, where YY is the two-digit year and MM is the month of the release. The latest release is 19.04, meaning it was released in April of 2019.

This was tricky to navigate at first and created headaches, since different versions receive different levels of support and updates. For some people, this is not an issue. As a developer, you may want to build your application on the latest and greatest OS to take advantage of new technologies, and long-term support may not matter because you won’t be using the server for very long. You may instead be in a position where you need to know that your OS won’t have any server-crashing bugs.

Which version do you choose? How do you know whether your OS will be stable and receive patches/updates? I ran into these issues when I accidentally created a server on a release that lost support just a few months later; however, I learned an easy trick to ensure that in the future I chose an LTS release:

  • LTS releases start with an even number and end in 04
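If you’re ever unsure which release a given server is already running, the lsb_release command shows both the version number and the codename. The output below is just an illustration from a hypothetical 18.04 host; your values will differ:

lsb_release -a

Distributor ID: Ubuntu
Description:    Ubuntu 18.04.2 LTS
Release:        18.04
Codename:       bionic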

Liquid Web offers a variety of Ubuntu releases within our Core-Managed and Self-Managed support tiers.

All of the versions of Ubuntu that we offer are .04 releases. These are all LTS releases, ensuring that you receive the longest possible lifespan and vital security updates for your OS.

 

Ubuntu’s Package Manager

Prior to working with Ubuntu, I had the most experience with CentOS. Moving to Ubuntu meant having to learn how to install packages since the CentOS package manager, yum, was no longer available. I discovered that the general process is the same with a few minor differences.

The first difference I encountered is that Ubuntu uses the apt-get command. This is part of the Apt package manager and follows a very similar syntax to the yum command for the most basic operation: installing packages. To install a package I had to use:

apt-get install $package

Where $package was the name of the package. I didn’t always know the name of the package I wanted, though, so I thought I could search much like I could with yum search $package; however, Apt uses another utility for searching: apt-cache search $package

If you forget, you’ll get an error telling you that you’ve run an invalid operation:
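For example, running a search through apt-get instead of apt-cache fails:

apt-get search nginx

E: Invalid operation search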

One similarity to yum that is super useful: the -y flag to bypass any prompts about installing the package. This was my favorite “trick” on CentOS to speed up installing packages and improve my efficiency. Knowing that it works the same on Ubuntu was a major relief:
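For example, to install Nginx without stopping at any confirmation prompts:

apt-get install -y nginx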

Of course, installing packages meant I had to add the repositories I wanted. This was a significant change, since I was used to the CentOS location of /etc/yum.repos.d/ for adding .repo files. With the shift to the Apt system on Ubuntu, I learned that I needed to add a .list file, pointing to the repository in question, in the /etc/apt/sources.list.d/ folder. Take Nginx for example; it was quite easy to add once I realized what had to go where:
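As a sketch, using the official nginx.org repository on Ubuntu 16.04 (“Xenial Xerus”) as an example, the steps look something like this; the release name in the .list file should match your Ubuntu version:

echo "deb http://nginx.org/packages/ubuntu/ xenial nginx" > /etc/apt/sources.list.d/nginx.list

wget -qO - https://nginx.org/keys/nginx_signing.key | apt-key add -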

 

From here it was as straightforward as installing Nginx with the apt-get install nginx command:
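apt-get install nginx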

 

There was a time when adding a repository did not immediately work for me; however, it was quickly resolved by telling Apt that it needed to refresh/update the list of repos it uses with the apt-get update command.

Note
Don’t worry, the apt-get update command does not needlessly update any packages; it simply runs through the configured repositories and ensures that your system is aware of them and of the packages they offer.
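Running it is as simple as:

apt-get update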

 

Networking Consideration on Ubuntu

The last major lesson I want to talk about is one I learned painfully. Networking on Ubuntu, by default, is handled from the /etc/network/ directory.

There is a flat file there called interfaces, and it contains all the networking information for your host (configured IPs/gateways, DNS servers, etc.):
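Here is a representative example of that file; the interface names and addresses are placeholders, so yours will differ:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4

# A second IP on the same physical interface
auto eth0:1
iface eth0:1 inet static
    address 192.168.1.101
    netmask 255.255.255.0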

Notice in the above configuration that both the primary eth0 interface and the sub-interface, eth0:1, are configured in the same file (/etc/network/interfaces). This caught me in a bad spot once when I made a configuration syntax mistake, restarted networking, and suddenly lost all access to my host. That interfaces file contained all the networking components for my host, and when I restarted networking while it contained a mistake, it prevented ANY networking from coming back up. The only way to correct the issue was to console into the server, correct the mistake, and restart networking again.

Though this is the default configuration for Ubuntu, there is now a way to split the configuration into individual per-interface files, so that a simple mistake does not break all of the networking: /etc/network/interfaces.d/

To properly use this folder and have individual configuration files for each interface, you need to modify the /etc/network/interfaces file to tell it to look in the interfaces.d/ folder. This is a simple change of adding one line:

source /etc/network/interfaces.d/*

This tells the main interfaces configuration file to also read any additional configuration files in /etc/network/interfaces.d/ and use those as well. This would have saved me a lot of hassle had I known about it earlier.
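As a concrete example, the eth0 stanza from the earlier sample could live in its own file (the file name is arbitrary; anything in that folder matching the source pattern is read):

/etc/network/interfaces.d/eth0

auto eth0
iface eth0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1

With that layout, each interface’s settings can be edited in its own file.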

Hopefully, you can use my mistakes and struggles to your advantage. Ubuntu is a great operating system, is supported by a fantastic community, and is covered by “the most helpful humans in hosting” here at Liquid Web. If you have any questions about Ubuntu, our support, or how you might best use it in your environment, feel free to contact us at support@liquidweb.com or via phone at 1-800-580-4985.

 

 

How to Redirect URLs Using Nginx

Reading Time: 3 minutes

What is a Redirect?

A redirect is a web server function that sends traffic from one URL to another. There are several different types of redirects, but the most common forms are temporary and permanent. In this article, we will provide some examples of redirecting through the vhost file: forcing a secure HTTPS connection, redirecting to www and non-www, as well as the difference between temporary and permanent redirects.

Note
As this is an Nginx server, any .htaccess rules will not apply. If you’re using the other popular web server, Apache, you’ll find this article useful.

Common Methods for Redirects

Temporary redirects (response code: 302 Found) are useful if a URL is temporarily being served from a different location. For example, when performing maintenance you can temporarily redirect users to a maintenance page.

However, permanent redirects (response code: 301 Moved Permanently) inform the browser that the old URL should be forgotten and no longer accessed. These are helpful when content has moved from one place to another.

 

How to Redirect

When it comes to Nginx, redirects are handled within a .conf file, typically found at /etc/nginx/sites-available/directory_name.conf, where directory_name is named after your site’s document root. The document root is where your site’s files live; it may be a directory such as /html if you have one site on the server, or /domain.com if your server hosts multiple sites. Either way, that directory name becomes your .conf file name. In the /etc/nginx/sites-available/ directory you’ll find the default file, which you can copy or append your redirects to, or you can create a new file named html.conf or domain.com.conf.

Note
If you choose to create a new file, be sure to update your symbolic links in the /etc/nginx/sites-enabled/ directory with the command:

ln -s /etc/nginx/sites-available/domain.com.conf /etc/nginx/sites-enabled/domain.com.conf

The first example we’ll cover is redirection of a specific page/directory to the new page/directory.

Temporary Page to Page Redirect

server {
    # Temporary redirect to an individual page
    rewrite ^/oldpage$ http://www.domain.com/newpage redirect;
}

Permanent Page to Page Redirect

server {
    # Permanent redirect to an individual page
    rewrite ^/oldpage$ http://www.domain.com/newpage permanent;
}

Permanent www to non-www Redirect

server {
    # Permanent redirect to non-www
    server_name www.domain.com;
    rewrite ^/(.*)$ http://domain.com/$1 permanent;
}

Permanent Redirect to www

server {
    # Permanent redirect to www
    server_name domain.com;
    rewrite ^/(.*)$ http://www.domain.com/$1 permanent;
}

Sometimes the need will arise to change the domain name of a website. In this case, a redirect from the old site’s URL to the new site’s URL is very helpful in letting users know the domain has moved to a new URL.

The next example we’ll cover is redirecting an old URL to a new URL.

Permanent Redirect to New URL

server {
    # Permanent redirect to new URL
    server_name olddomain.com;
    rewrite ^/(.*)$ http://newdomain.com/$1 permanent;
}

We’ve added the redirect using the rewrite directive we discussed earlier. The ^/(.*)$ regular expression captures everything after the / in the URL. For example, http://olddomain.com/index.html will redirect to http://newdomain.com/index.html. To make the redirect permanent, we add permanent after the rewrite directive, as you can see in the example code.
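You can verify the behavior from the command line with curl; trimmed output from the placeholder domains above would look something like:

curl -I http://olddomain.com/index.html

HTTP/1.1 301 Moved Permanently
Location: http://newdomain.com/index.html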

When it comes to HTTPS and being fully secure, it is ideal to force everyone to use https:// instead of http://.

Redirect to HTTPS

server {
    # Redirect to HTTPS
    listen      80;
    server_name domain.com www.domain.com;
    return      301 https://domain.com$request_uri;
}

After these rewrite rules are in place, testing the configuration prior to running a restart is recommended. Nginx syntax can be checked with the -t flag to ensure there is not a typo present in the file.

Nginx Syntax Check

nginx -t
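On a valid configuration, the check prints a confirmation similar to:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful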

Once the test reports that the syntax is ok, Nginx has to be reloaded for the redirects to take effect.

Restarting Nginx

service nginx reload

For CentOS 7, which unlike CentOS 6 uses systemd:

systemctl restart nginx

Redirects on Managed WordPress/WooCommerce

If you are on our Managed WordPress/WooCommerce products, redirects can happen through the /home/s#/nginx/redirects.conf file. Each site has its own s#, which is the FTP/SSH user for that site. The plugin called ‘Redirection’ can be installed to help with a simple page-to-page redirect; otherwise, the redirects.conf file can be used to add more specific redirects, using the examples explained above.

Due to the nature of a managed platform, after you have the rules in place within the redirects.conf file, please reach out to support and ask for Nginx to be reloaded. If you are uncomfortable performing the steps outlined above, contact our support team via chat, ticket, or a phone call. With Managed WordPress/WooCommerce you get 24/7 support, available and ready to help you!

How To Set Up Multiple PHP Versions in Webmin

Reading Time: 4 minutes

What is Webmin?

Webmin is a browser-based graphical interface to help you administer your Linux server. Much like cPanel or Plesk, Webmin allows you to set up and manage accounts, Apache, DNS zones, users, and configurations. As these configurations can get somewhat complicated, Webmin works to simplify the process, resulting in fewer issues during server and domain setup, a more stable server, and a pleasant administration experience. Unlike Plesk or cPanel, Webmin is completely free and open to the public. Unfortunately, here at Liquid Web, we do not offer managed support for Webmin, but we are always willing to assist as much as possible when issues arise. You can download Webmin from their site, where you can also find some excellent documentation on this interface.

 

Installing Webmin

Before beginning, if you have not already done so, you will need to install Webmin on your server. For this article, we will mainly be working with Webmin installed on an Ubuntu server; however, the process is very similar on CentOS, so we have included instructions for both operating systems below.

  • First, you will need to access your server via SSH. If you are not sure how to SSH into your server, please visit our article on the subject.
  • Once you are logged into your server via SSH, please run the following commands in order, or copy and paste the entire syntax.
Debian/Ubuntu

sudo sh -c 'echo "deb http://download.webmin.com/download/repository sarge contrib" > /etc/apt/sources.list.d/webmin.list'
wget -qO - http://www.webmin.com/jcameron-key.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install webmin

CentOS/RedHat/Fedora

(echo "[Webmin]
name=Webmin Distribution Neutral
baseurl=http://download.webmin.com/download/yum
enabled=1
gpgcheck=1
gpgkey=http://www.webmin.com/jcameron-key.asc" > /etc/yum.repos.d/webmin.repo;
yum -y install webmin)
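Once the install completes, a quick sanity check that Webmin started (the packages register it as the webmin service):

service webmin status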

 

Accessing Webmin

Webmin is a web-based application, so once it is installed, you can access it using the browser of your choice. Be sure that port 10000 is open on your server, as Webmin utilizes this port to function. We have included steps below to ensure the correct port is open under both iptables and firewalld.

IPTABLES

iptables-save > /tmp/tabsav
vi /tmp/tabsav
iptables-restore < /tmp/tabsav

You should be able to use the commands above to alter your iptables rules to look something like what we have included below, adding a rule that accepts new TCP connections on port 10000.

# Generated by iptables-save v1.4.7 on Thu Jan 3 00:02:49 2019
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [3044:1198306]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10000 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
# Completed on Thu Jan 3 00:02:49 2019

FirewallD

firewall-cmd --zone=public --add-port=10000/tcp --permanent
firewall-cmd --reload

Once you have made sure port 10000 is open, you should be able to access the Webmin interface by entering your server’s IP address followed by the port number 10000.

Example: https://192.168.1.100:10000 (replace 192.168.1.100 with your server’s IP address)

Installing PHP Versions in Webmin

There are a lot of situations where we may need multiple PHP versions. For example, you may have domains or applications on your server that require an older version of PHP, while at the same time newer domains are configured for newer versions. For this article, we will be installing PHP 7.0 and PHP 5.6 on Debian.

Step 1: First, you will want to SSH into your server and run the following command.
apt-get install php7.0-cli php7.0-fpm

You can check the installation after it has completed by running php -v in your terminal.

Step 2: Now, here is where things tend to get tricky. By default, Debian only offers a single PHP version in the official repository, so we will have to add an additional repository. While adding this repository, it is good practice to enable HTTPS for APT and register the APT key. You can accomplish this by executing the commands below.

apt-get install apt-transport-https
curl https://packages.sury.org/php/apt.gpg | apt-key add -
echo 'deb https://packages.sury.org/php/ stretch main' > /etc/apt/sources.list.d/deb.sury.org.list
apt-get update

Once the repository is added, we can go ahead and add our second PHP version to the server.

apt-get install php5.6-cli php5.6-fpm

We can now check both PHP versions on the server by running these commands.

php7.0 -v

php5.6 -v

Now that we have confirmed both PHP versions are installed, you can access their configuration files in the following locations:

  • /etc/php/5.6/cli/php.ini
  • /etc/php/7.0/cli/php.ini

Step 3: To make things easier later on, we will want to add the location of the configuration files to Webmin. This can be done from within the Webmin interface.

  1. Log into Webmin
  2. Navigate to Others >> PHP Configuration
  3. Add the PHP configuration file location
  4. Click Save

You can use this tool to add and edit directives for different PHP versions. For example, you’ll be able to edit PHP’s memory limit, timeout length, extensions, and more. This simply helps consolidate configurations within one interface. From here, we can just use a .htaccess file to specify which version of PHP a site should use.

Step 4: If you do not have this file already within your document root, you can add it by navigating to /var/www/exampledomain/ and running one of the following commands, depending on which PHP version the site should use.

echo "AddHandler application/x-httpd-php56 .php" > .htaccess && chown exampleuser. .htaccess

echo "AddHandler application/x-httpd-php70 .php" > .htaccess && chown exampleuser. .htaccess

Step 5: Once you have completed this, you can test whether your site is running on the desired PHP version by creating a PHP information page: make a file in your document root, usually under /var/www/html/.

You will want to insert the code below and save the file.

<?php phpinfo(); ?>

After you have created this file, you can view the page by visiting your domain followed by the name of the file you created. For example, www.example.com/phpinfo.php.

Congratulations, you can now use Webmin to accomplish your daily admin tasks! Take a look at our Cloud VPS servers for 24/7 support and lightning-fast servers!

How To Set Up FTP Isolation in CentOS or Ubuntu

Reading Time: 4 minutes

Configuring Multi-User FTP with User Isolation

This article is intended to give an overview of a chroot environment and configuring your FTP service for user isolation. This is done with a few lines within the main configuration file of the FTP service.

This article is also intended as a guide for our Core-Managed servers running CentOS or Ubuntu without a control panel. Our Fully Managed servers that utilize the cPanel software already have the FTP user isolation configured by default and also provide utilities for creating FTP users.

What is Chroot?

Chroot, or change-root, is the implementation of setting a new root directory for the environment that a user has access to. From the user’s perspective, there appears to be no higher directory to escape to; they are limited to the directory they start in and can only see the contents inside of it.

If a user were to try and list the contents of the root (/) of the system, it would return the contents of their chroot environment and not the actual root of the server.

 

Installing ProFTPd

As there are many FTP options available (ProFTPd, Pure-FTPd, and vsftpd, to name a few), this article will focus only on ProFTPd for simplicity and brevity. It is also not intended to be a guide for installing an FTP service, as that is covered in the Knowledge Base articles below.

https://www.liquidweb.com/kb/how-to-install-proftpd-on-centos-7/

https://www.liquidweb.com/kb/how-to-install-and-configure-proftpd-on-ubuntu-14-04-lts/

 

User Isolation with ProFTPd

User Setup

By default, ProFTPd will read the system /etc/passwd file. The users in this file are the normal system users, and no special steps are required beyond normal user creation. There are many ways to create additional FTP users, but this is one way to get started.

Here are some typical entries from the system passwd file. From left to right, you can see the username, the user and group IDs, the home directory, and the default shell configured for that user.

user1:x:506:521::/home/user1:/bin/bash
user2:x:505:520::/home/user2:/bin/bash

To create these users, you would use the useradd command from the command line or whatever other methods you would typically use to create users on the server.

Create the user

useradd -m -d /home/homedir newuser

Set the user password

passwd newuser

If you are setting up multiple users that all need access to the same directory, you will need to make sure the users are all in the same group. Being in the same group means each user can have group-level access to the directory, allowing everyone in the group to access the files that each user uploads. Full user management of this kind is beyond the scope of this article, but a brief sketch of the idea is shown below.
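As a minimal sketch with hypothetical user, group, and directory names, a shared-group setup might look like this:

# Create a shared group and add both users to it
groupadd ftpshared
usermod -aG ftpshared user1
usermod -aG ftpshared user2

# Give the group ownership of, and write access to, the shared directory
chgrp -R ftpshared /path/to/uploads
chmod -R g+rwX /path/to/uploads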

ProFTPd User Configuration

To jail a user to their home directory within ProFTPd, you have to set the DefaultRoot value to ~.

vim /etc/proftpd.conf

DefaultRoot ~

With this set, it tells the FTP service to only allow the user to access their home directory. The ~ is a shortcut that tells the system to read whatever the user’s home directory is from the /etc/passwd file and use that value.

Using this functionality in ProFTPd, you can also define multiple DefaultRoot directives and have those restrictions match based on some criteria. You can jail some users, and not others, or jail a set of users all to the same directory if desired. This is done by matching the group that a user belongs to.

When a new user is created, as shown above, their default group will be the same as their username. You can, however, add or modify the group(s) assigned to the user after they are created if necessary.

Jail Everyone Not in the “Special-Group”

DefaultRoot ~ !special-group

Jail Group1 and Group2 to the Same Directory

DefaultRoot /path/to/uploads group1,group2

After making these changes to the proftpd.conf file you’ll need to restart the FTP service.

CentOS 6.x (init)

/etc/init.d/proftpd restart

CentOS 7.x (systemd)

systemctl restart proftpd

 

User Isolation with SFTP (SSH)

You can also isolate SFTP users or restrict a subset of SSH users to only have SFTP access. Again, this pertains to regular system users created using the useradd command.

While you can secure FTP communications using SSL, this is an extra level of setup and configuration. SFTP, by contrast, is used for file transfers over an SSH connection. SSH is an encrypted connection to the server and is secure by default. If you are concerned about security and are unsure about adding SSL to your FTP configuration, this may be another option to look into.

 

SFTP User Setup

Create the user and their home directory just like with the FTP user, but here we make sure to set the shell to not allow normal SSH login. We are presuming that you are looking for SFTP-only users and not just regular shell users, so we add the restriction on the shell to prevent non-SFTP logins.

useradd -m -d /home/homedir/ -s /sbin/nologin username

passwd username

We need to make sure that permissions and ownership are set for the home directory to be owned by root, and the upload directory is owned by the user.

chmod 755 /home/homedir/

chown root. /home/homedir/

mkdir -p /home/homedir/upload-dir/

chown username. /home/homedir/upload-dir/

 

SFTP Configuration

Here, by setting ChrootDirectory to the %h variable, we are confining the user to their home directory as set up when the user was created. Using the ForceCommand directive also limits the commands the user is allowed to execute to only the SFTP commands used for file transfers, again eliminating the possibility that the user will be able to break out of the jail and into a normal shell environment.

/etc/ssh/sshd_config
Subsystem sftp internal-sftp
Match User user1,user2,user3
ChrootDirectory %h
ForceCommand internal-sftp

Jail Multiple FTP Users to a Location

Alternatively, if you wanted to have multiple users all jailed to the same location, you can set them all to be in the same group, have the same home directory, and then use a Match Group directive within the SSH configuration.

vim /etc/ssh/sshd_config

Subsystem sftp internal-sftp
Match Group groupname
ChrootDirectory %h
ForceCommand internal-sftp

After making these changes to the sshd_config file, restart the SSH service. One of the following commands should work for you.

CentOS 6.x (init)

/etc/init.d/sshd restart

CentOS 7.x (systemd)

systemctl restart sshd
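To confirm the isolation is working, try both protocols with a placeholder user and IP; the SFTP login should land in the chrooted home directory, while a normal SSH login should be refused thanks to the nologin shell and ForceCommand:

# Should connect and start inside the chroot jail
sftp username@192.168.1.100

# Should be refused
ssh username@192.168.1.100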


 

Install Rsync and Lsync on CentOS, Fedora or Red Hat

Reading Time: 4 minutes

Have you ever needed to copy files from your local computer over to your web server? You may have previously used File Transfer Protocol (FTP) applications for this task, but FTP is not a secure protocol and can be challenging to work with over the command line. What if there was a better way? In this tutorial, we’ll cover two popular utilities in the Linux world that securely assist with file transfers: rsync and lsyncd. We’ll show you how to install and use both in this article. Let’s dig in!

Continue reading “Install Rsync and Lsync on CentOS, Fedora or Red Hat”

Setup a Development Environment for CentOS using cPanel

Reading Time: 4 minutes

Editing a website’s code is often needed to update a site, but doing this on the live website could create downtime and other unwanted effects. Instead, it’s ideal to create an environment especially for developing new ideas. In this tutorial, we will explore creating a development site specifically on CentOS servers. Continue reading “Setup a Development Environment for CentOS using cPanel”

Protecting against CVE-2018-14634 (Mutagen Astronomy)

Reading Time: 2 minutes

There is a new exploit named Mutagen Astronomy, rated at a 7.8 severity level, that affects major Linux distributions including Red Hat Enterprise Linux, CentOS, and Debian 8. Mutagen Astronomy exploits an integer overflow vulnerability in the Linux kernel and grants root access (admin privileges) to unauthorized users on the targeted server. The exploit affects Linux kernel versions dating from July 2007 to July 2017. Living in the kernel, the vulnerability allows the memory table to be overflowed through the create_elf_tables() function. After overflowing the table, the attacker can take over the server for their own malicious ends. Continue reading “Protecting against CVE-2018-14634 (Mutagen Astronomy)”

How to Install and Configure vsftpd on CentOS 7

Reading Time: 2 minutes

FTP (File Transfer Protocol) is one of the most popular methods to upload files to a server. There is a wide array of FTP servers you can use, such as vsftpd, and FTP clients exist for every platform.

Essentially, no matter what OS you use, you can find an easy-to-use FTP client, which makes FTP a great solution for transferring files. On CentOS-based servers, you’ll have to set up an FTP server before you can connect via FTP. Here we’re going to set up vsftpd, which is a great option since it focuses on security and speed.

Continue reading “How to Install and Configure vsftpd on CentOS 7”

How to enable EPEL repository?

Reading Time: 2 minutes

The EPEL repository is an additional package repository that provides easy access to install packages for commonly used software. This repo was created because Fedora contributors wanted to use the Fedora packages they maintain on RHEL and other compatible distributions.

To put it simply, the goal of this repo is to provide greater ease of access to software on Enterprise Linux compatible distributions.

What’s an ‘EPEL repository’?

The EPEL repository is managed by the EPEL group, a Special Interest Group within the Fedora Project. ‘EPEL’ is an abbreviation that stands for Extra Packages for Enterprise Linux. The EPEL group creates, maintains, and manages a high-quality set of additional packages. These packages may be software not included in the core repository, or sometimes updates which haven’t been provided yet.
Continue reading “How to enable EPEL repository?”

Change a Password for PostgreSQL on Linux via Command Line

Reading Time: 1 minute

PostgreSQL supports many client authentication methods, but in this case we’re only going to concern ourselves with two: password and md5.

Note: The default authentication method for PostgreSQL is ident. If you’d like to change the PostgreSQL authentication method from ident to md5, then visit the linked tutorial!

Continue reading “Change a Password for PostgreSQL on Linux via Command Line”