How to Install phpMyAdmin on Ubuntu 18.04

Reading Time: 1 minute

Working with a database can be intimidating at times, but phpMyAdmin can simplify tasks by providing a control panel to view or edit your MySQL or MariaDB database.  In this quick tutorial, we’ll show you how to install phpMyAdmin on an Ubuntu 18.04 server.

 

Pre-flight

 

Step 1: Update the apt package tool to ensure we are working with the latest and greatest.

apt update && apt upgrade

 

Step 2: Install phpMyAdmin along with the PHP extensions for handling non-ASCII strings and other necessary tools.

apt install phpmyadmin php-mbstring php-gettext

During this installation you’ll be asked to choose a web server; select apache2 and press ENTER.

In this step, you have the option of an automatic setup or creating the database manually. We will go with the automatic installation by pressing ENTER for yes.

At this point, you’ll be asked to set a password for the phpMyAdmin user, phpmyadmin. Save this password in a secure spot for later retrieval.

Step 3: Enable the mbstring PHP extension.

phpenmod mbstring
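You can confirm the extension is active with a quick check; any output means mbstring is loaded:

php -m | grep mbstring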

Note
If you’re running multiple domains on one server then you’ll want to configure your /etc/apache2/apache2.conf to enable phpMyAdmin to work.

vim /etc/apache2/apache2.conf

Add:

Include /etc/phpmyadmin/apache.conf

Step 4:  Restart the Apache service to recognize the changes made to the system.

systemctl restart apache2

 

Step 5: Verify the phpMyAdmin installation by going to http://ip/phpmyadmin (replacing ip with your server’s IP address) and logging in with the username phpmyadmin and the password you set earlier.

Still having issues installing?  Our Liquid Web servers come with 24/7 technical support; contact us and a support team member will help!

How to Setup Let’s Encrypt on Ubuntu 18.04

Reading Time: 3 minutes

Sites with SSL are needed more and more every day. Pushing encryption across the entire web is an ongoing effort, one that Google has actively taken up. Certbot and Let’s Encrypt are popular solutions for big and small businesses alike because of their ease of implementation.  Certbot is a software client that can be downloaded on a server, like our Ubuntu 18.04, to install and auto-renew SSLs. It obtains these SSLs by working with the well-known SSL provider Let’s Encrypt. In this tutorial, we’ll show you a swift way of getting HTTPS enabled on your site.  Let’s get started!

Pre-flight

 

Step 1: Update apt to ensure we are working with the latest package tool.

apt update && apt upgrade

 

Step 2: We’ll install the Certbot software, as this will aid in obtaining the SSL (certificates) from Let’s Encrypt.  Type Y when prompted to continue.

sudo apt install certbot

 

Step 3: Installing Certbot’s Apache package is also required. Type Y when prompted to continue.

apt install python-certbot-apache

 

Step 4: Time to obtain the SSL from Let’s Encrypt.  Enter your email address and go through the prompts.  This step will look through your /etc/apache2/sites-available/yourdomain.com.conf file, specifically for the website name set with the ServerName directive.

Note
If your installation gives the “Failed authorization procedure” message ensure you have followed the steps in the Apache Configuration article and that the A record is set for your domain.

certbot --apache

-------------------------------------------------------------------------------
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
agree in order to register with the ACME server at
https://acme-v01.api.letsencrypt.org/directory
-------------------------------------------------------------------------------
(A)gree/(C)ancel: A

Your choice to opt in to their newsletter.
-------------------------------------------------------------------------------
Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about EFF and
our work to encrypt the web, protect its users and defend digital rights.
-------------------------------------------------------------------------------
(Y)es/(N)o:

 

Jumping off of our Apache Configuration tutorial, we want both of our domains covered, with the option of www and non-www for our visitors. We’ll leave the input blank and hit ENTER.

Which names would you like to activate HTTPS for?
-------------------------------------------------------------------------------
1: domain.com
2: www.domain.com
3: domain2.com
4: www.domain2.com
-------------------------------------------------------------------------------
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):

 

In our tutorial we will select the Redirect option; you may choose No redirect if you would still like your site reachable over HTTP.

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
-------------------------------------------------------------------------------
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel):2

 

A congratulatory message will appear, along with the locations of your SSL certificate files in case you need them later on.

- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/domain.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/domain.com/privkey.pem
Your cert will expire on 2019-07-16. To obtain a new or tweaked
version of this certificate in the future, simply run certbot again
with the "certonly" option. To non-interactively renew *all* of
your certificates, run "certbot renew"
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
Donating to EFF:                    https://eff.org/donate-le

 

Step 5: Verify your domain was issued the Let’s Encrypt SSL by visiting your site in the browser.  Be sure to clear your browser cache if you don’t readily see the SSL lock.

You now have an SSL encrypting the traffic to your site.  A few things to point out:

  • SSLs are valid for 90 days at a time
  • Certbot will automatically renew the certificate before it expires (see the dry-run check below)
  • Expiration notices from Let’s Encrypt are sent to the email address you provided during registration
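To confirm that automatic renewal will work when the time comes, you can perform a dry run; this exercises the renewal process without replacing your live certificates:

certbot renew --dry-run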

Get our fully Managed VPS servers, and you can control Let’s Encrypt through your WHM control panel.  Not only will you get a clean control panel to adjust server aspects but you also get 24/7 support at your fingertips.  See how our servers can make admin tasks easier!

How to Install MariaDB on Ubuntu 18.04

Reading Time: 1 minute

MariaDB is a drop-in replacement for MySQL, and its popularity means many other applications are built to work in conjunction with it. If you’re interested in a MariaDB server without the maintenance, then check out our high-availability platform. Otherwise, we’ll be installing MariaDB 10 onto our Liquid Web Ubuntu server, so let’s get started! Continue reading “How to Install MariaDB on Ubuntu 18.04”

How to Install Apache 2 on Ubuntu 18.04

Reading Time: 2 minutes

Apache is the most popular web server software in use today.  Its popularity is earned through its stability, fast service, and security.  Most likely, if you are building out a web page or any public-facing app, you’ll be using Apache to serve it. At the time of writing, the most current release of Apache is 2.4.38, and it is the version we will install on our Ubuntu 18.04 LTS server.  Let’s get started! Continue reading “How to Install Apache 2 on Ubuntu 18.04”

Things to Do After Installing a Ubuntu Server

Reading Time: 5 minutes

After spinning up a new Ubuntu server, you may find yourself looking for a guide on what to do next.  Many times the default settings do not provide the level of security your server should have. Throughout this article we provide security tips and pose questions to help you determine the best kind of setup for your environment.

 

1. Secure the Root User

This should be the very first thing you do when setting up a fresh install of Ubuntu server. Typically, setting a password for the root user is done during the installation process. However, if you ever find yourself assuming responsibility for an existing Ubuntu server, it’s best to reset the password, keeping in mind best practices for passwords.

  • Don’t use English words
  • Use a mixture of symbols and alphanumeric characters
  • Length – based on the odds of guessing or cracking a password, the best security comes once a password reaches a certain length. More than ten characters is good practice, but even longer passwords with complex characters are a safer way to go.

You can also lock the root user password to effectively keep anything from running as root.

Warning:
Please be sure you already have another administrative user on the system with root or “sudo” privileges before locking the root user.
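If you don’t yet have such a user, this is a minimal sketch of creating one and granting sudo privileges (the username spartacus is just an example):

adduser spartacus

usermod -aG sudo spartacus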

Depending on your version of Ubuntu, the root account may be disabled. Simply setting or changing the password for root will enable it with the following.

sudo passwd root

Now we can lock the root account by locking the password with the “-l” flag like the following. This will prevent the root user from being used.

sudo passwd -l root
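You can confirm the lock took effect by checking the password status; an L in the second field of the output indicates a locked password:

sudo passwd -S root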

To unlock the root account, again, just change the password for root to enable it.

sudo passwd root

 

2. Secure SSH Access

Many times, once a server is up and running, the default configuration for SSH remote logins allows root to log in. We can make the server more secure than this.

You only need the root user to run root- or administration-level commands on the server. This can still be accomplished by logging into the server over SSH as a regular user and then switching to the root user once you are already logged in.

ssh spartacus@myawesomeserver.com

Once logged in you can switch from the user “spartacus” to the root user.

su -

You can disable SSH login for the root user by making some adjustments in the sshd_config file. Be sure to run all of the following commands as root or with a user with sudo privileges.

vim /etc/ssh/sshd_config

Within this file find the Authentication section and look for the following line:

PermitRootLogin yes

Just change that to:

PermitRootLogin no

For the changes to take effect you will need to restart the SSH service with:

/etc/init.d/ssh restart

You can now test this by logging out of the server and then trying to log in again over SSH with the root user and password. It should deny your attempts to do so. This provides much more security, as it requires a different user (one that others won’t know and probably cannot guess) to log in to the server over SSH. An attacker would now need to know two values instead of one, since most hackers know that the root user exists on a Linux server.

Also, the following can also be changed to make SSH access more secure.

vim /etc/ssh/sshd_config

PermitEmptyPasswords no

Make sure that directive is set to “no” so that users without a password can’t log in. Otherwise, an attacker would only need to know (or guess) a valid username to get in, and they could keep attempting usernames until one without a password let them log in very easily.

A final caution is to adjust your router or firewall settings so that remote SSH access is forwarded from a higher, non-standard port to port 22, rather than exposing port 22 directly. This will eliminate a lot of bots and scripts that try to log in over SSH directly on port 22 with random usernames and passwords. You may need to refer to your router or server firewall documentation on forwarding a higher port to port 22.
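If you’d rather handle this on the server itself, sshd can also listen on a different port; this is a minimal sketch (2222 is just an example port, and you would need to allow it through your firewall before restarting SSH):

vim /etc/ssh/sshd_config

Port 2222

/etc/init.d/ssh restart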

 

3. Install a Firewall

By default, later versions of Ubuntu should come with Uncomplicated Firewall or UFW. You can check to see if UFW is installed with the following:

sudo ufw status

That will return a status of active or inactive. If it is not installed you can install it with:

sudo apt-get install ufw

It’s a good idea to think through a list of components that will require access to your server. Is SSH access needed? Is web traffic needed? You will want to enable the services through the firewall that are needed so that incoming traffic can access the server in the way you want it to.

In our example let’s allow SSH and web access.

sudo ufw allow ssh

sudo ufw allow http

Those commands will also open up the ports. You can alternatively use the port method to allow services through that specific port.

sudo ufw allow 80/tcp

That will essentially be the same as allowing the HTTP service. Once you have the services you want listed you can enable the firewall with this.

sudo ufw enable

Enabling the firewall may interrupt your current SSH connection if that is how you are logged in, so be sure your SSH rule is in place and your information is correct so you don’t get locked out.
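Once the firewall is enabled, you can review the resulting rules at any time and, if needed, remove one by its number (2 here is just an example):

sudo ufw status numbered

sudo ufw delete 2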

Also, ensure you have a good grasp on who really needs access to the server and only add users to the Linux operating system that really need access.

 

4. Understand What You Are Trying to Accomplish

It’s important to think through what you will be using your server for. Is it going to be just a file server? Or a web server? Or a web server that needs to send an email out through forms?

You will want to make a clear outline of what you will be using the server for so you can build it to suit those specific needs. It’s best to only build the server with the services that it will require. When you end up putting extra services that are not needed you run the risk of having outdated software which will only add more vulnerability to the server.

Every component and service you run will need to be secured according to its best practices. For example, if you’re strictly running a static site, you don’t want to expose vulnerabilities through an outdated email service.

 

5. Keep the File System Up-To-Date

You will want to make sure your server stays up to date with the latest security patches. While a server can run for a while without much maintenance and things will “just work,” you’ll want to be sure not to adopt a “set it and forget it” mentality.

Regular updates on an Ubuntu server keep the system secure and up to date. The first command below refreshes the package lists, and the second applies the available updates.

sudo apt-get update

sudo apt-get upgrade
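If you would like security updates applied automatically, Ubuntu’s unattended-upgrades package (available in the default repositories) can handle this; a minimal sketch of installing and enabling it:

sudo apt-get install unattended-upgrades

sudo dpkg-reconfigure --priority=low unattended-upgrades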

While installing an Ubuntu server is a great way to learn how to work with Linux, it’s a good idea to learn in an environment that is safe. Furthermore, it’s best not to expose the server to the Internet until you are ready.

A great way to get started is at home where you can access the server from your own network without allowing access to the server through the Internet or your home router.

If and when you do deploy an Ubuntu server, you’ll want to keep the above five things in mind. It’s important to know the configuration of the server once it’s deployed, so you know what the public can access and what still needs to be hardened.

Enjoy learning and don’t be afraid to break something in your safe environment, as the experience can be a great teacher when it’s time to go live.

How to Install Apache Tomcat 9 on Ubuntu 18.04

Reading Time: 3 minutes

Apache Tomcat is an accessible, open-source application server used to host many of today’s applications. It’s free, stable, and lightweight, and is used to run Java code as well as a range of other applications.

Today we will be focusing on how to install Apache Tomcat 9 on our Liquid Web Ubuntu server, specifically Ubuntu 18.04 LTS.

Pre-flight
  • Open the terminal and log in as root. If you are logged in as another user, you will need to add sudo before each command.
  • Working on a Linux Ubuntu 18.04 server

Step 1: Install or Verify Java 8 is installed

java -version

If nothing is returned, or you get the message “No such file or directory”, then you’ll need to install Java. In this case, we’ll be installing OpenJDK (the default JDK package) because it’s free, and Oracle JDK’s free public updates ended in January 2019.

apt install default-jdk -y

If Java was installed successfully, you’d see something like this when using the java -version command

java -version

Output:

openjdk version "10.0.2" 2018-07-17
OpenJDK Runtime Environment (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.4)
OpenJDK 64-Bit Server VM (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.4, mixed mode)

 

Step 2: Install Tomcat 9

wget https://archive.apache.org/dist/tomcat/tomcat-9/v9.0.8/bin/apache-tomcat-9.0.8.tar.gz

 

Step 3: Extract Tomcat 9 Tarball

However fun to think about, this tarball doesn’t belong to any cat; it belongs to Tomcat!  We’ll be moving the tarball into the /opt/tomcat directory and extracting it there.

mkdir /opt/tomcat

mv apache-tomcat-9.0.8.tar.gz /opt/tomcat

tar -xvzf /opt/tomcat/apache-tomcat-9.0.8.tar.gz -C /opt/tomcat

 

Step 4: Create a Tomcat user

We create the tomcat user as an extra layer of security, as this user and its group will own the Tomcat files.

groupadd tomcat

useradd -s /bin/false -g tomcat -d /opt/tomcat tomcat

 

Step 5: Update permissions to Tomcat

Since the tomcat group will own the Tomcat files, we have to update the group ownership and permissions on some of Tomcat’s directories.

chgrp -R tomcat /opt/tomcat

cd /opt/tomcat/apache-tomcat-9.0.8

chmod -R g+r conf

chmod g+x conf

chown -R tomcat webapps/ work/ temp/ logs/

 

Step 6: Create a Systemd Service File

Copy the path of Tomcat’s home by running this command:

update-java-alternatives -l

Output:
java-1.11.0-openjdk-amd64      1101       /usr/lib/jvm/java-1.11.0-openjdk-amd64

Your path may look different, and that’s alright. Take the path returned in the last column and use it as the JAVA_HOME variable in your /etc/systemd/system/tomcat.service file (shown below).

vim /etc/systemd/system/tomcat.service

Just as the JAVA_HOME path may differ from our example, the other variables in this file need to point to your Tomcat installation (a.k.a. Tomcat’s root directory). For example, we let the server know that the startup.sh and shutdown.sh files are located in Tomcat’s bin directory. At any rate, be sure that the paths in your file are correct.

 

[Unit]
Description=Apache Tomcat
After=syslog.target network.target

[Service]
Type=forking
Environment=JAVA_HOME=/usr/lib/jvm/java-1.11.0-openjdk-amd64
Environment=CATALINA_PID=/opt/tomcat/apache-tomcat-9.0.8/temp/tomcat.pid
Environment=CATALINA_HOME=/opt/tomcat/apache-tomcat-9.0.8
Environment=CATALINA_BASE=/opt/tomcat/apache-tomcat-9.0.8
Environment='CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC'
Environment='JAVA_OPTS=-Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom'
WorkingDirectory=/opt/tomcat/apache-tomcat-9.0.8
ExecStart=/opt/tomcat/apache-tomcat-9.0.8/bin/startup.sh
ExecStop=/opt/tomcat/apache-tomcat-9.0.8/bin/shutdown.sh
User=tomcat
Group=tomcat
UMask=0007
RestartSec=10
Restart=always

[Install]
WantedBy=multi-user.target

Be sure to save the file by using :wq

 

Step 7: Reload the Systemd File

We do this so the system can recognize the changes to the file that we just edited.

systemctl daemon-reload

Step 8: Restart Tomcat

systemctl start tomcat

Note:
If the Tomcat service fails to start use journalctl -xn as a way to know the exact errors that are occurring.
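You may also want Tomcat to start automatically at boot; enabling the service takes care of this:

systemctl enable tomcat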

 

Step 9: Verify Tomcat is Running

systemctl status tomcat

Output:

* tomcat.service - Apache Tomcat
Loaded: loaded (/etc/systemd/system/tomcat.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-04-04 14:33:04 EDT; 4min 29s ago
Process: 10912 ExecStart=/opt/tomcat/apache-tomcat-9.0.8/bin/startup.sh (code=exited, status=0/SUCCESS)
Main PID: 10930 (java)
Tasks: 47 (limit: 2157)
CGroup: /system.slice/tomcat.service
`-10930 /usr/lib/jvm/java-1.11.0-openjdk-amd64/bin/java -Djava.util.logging.config.file=/opt/tomcat/apache-tomcat-9.0.8/conf/logging.properties -Djava.util.logging.manage

If correctly installed, you’ll also be able to see the Tomcat default page by visiting http://Host_IP:8080 in your browser, replacing Host_IP with your server’s IP or hostname, followed by Tomcat’s port number, 8080.
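If you’d prefer to check from the server itself first, a quick request against the port should return an HTTP 200 response (this assumes curl is installed):

curl -I http://localhost:8080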

 

Top 4 Lessons Learned Using Ubuntu

Reading Time: 5 minutes

When choosing a server operating system, there are a number of factors and choices to weigh. An often talked about and referenced OS, Ubuntu, is a popular choice and offers great functionality with a vibrant and helpful community. However, if you’re unfamiliar with Ubuntu and have not worked with either the server or desktop versions, you may encounter differences in common tasks and functionality from operating systems you’ve worked with before. I’ve been a system administrator running my own servers for a number of years, almost all of them Ubuntu. Here are the top four lessons I’ve learned while running Ubuntu on my servers.

  1. Understanding the “server” vs. “desktop” model
  2. Deciphering the naming scheme/release schedule
  3. Figuring out the package manager
  4. Networking considerations on Ubuntu

 

Understanding “Server” vs. “Desktop” Model

Ubuntu comes in a number of images and released versions. One of the first choices to consider is whether you want the “desktop” or “server” release. When running an application or project on a server the choice to make is the “server” image. You may be wondering what the difference is, and it comes down to two main differences:

  • The “desktop” image has a graphical user interface (GUI) where the “server” image does not
  • The “desktop” image has default applications more suited for a user’s workstation, whereas the “server” image has default applications more suited for a web server.

It is really that simple. Currently, the majority of the differences relate to the default applications offered when installing Ubuntu and whether or not you need a GUI. When I first started using Ubuntu, there were a number of differences between the two (such as a slightly tweaked Linux kernel on the “server” image designed to perform better on server-grade hardware), but today it is a much simpler decision.

The choice comes down to saving time by adding or removing the applications (also called “packages”) that you do or don’t need. At Liquid Web we offer “server” images, since they are best suited for operating a web server. You do have the choice of the Ubuntu version, which brings me to my next lesson learned:

 

Deciphering the Naming Scheme/Release Schedule

Both Ubuntu and CentOS are based on what is called an “upstream” provider, another operating system that CentOS/Ubuntu then tweak and re-publish. For Ubuntu, this upstream provider is the Debian OS, whereas CentOS is based on Red Hat (which is in turn based on Fedora). It was confusing at first for me to try and understand what the names for Ubuntu releases meant: “Vivid Vervet”, “Xenial Xerus”, “Zesty Zapus”; but that’s just a cute “name” the developers give each version until it is released.

The company that oversees Ubuntu, called Canonical, has a regular, set release schedule for Ubuntu. This is a distinct difference from CentOS, which relies on Red Hat, which in turn relies on Fedora. Every six months a release of Ubuntu is published, and every fourth release (once every two years) a “long-term support” or LTS release is published. The numbering used for releases is fairly straightforward: YY.MM, where YY is the two-digit year and MM is the month of the release. The latest release is 19.04, meaning it was released in April of 2019.

This was tricky for me to initially navigate and created a headache since the different versions have different levels of support and updates provided to them. For some people, this is not an issue. As a developer, you may want to build your application on the latest and greatest OS that takes advantage of new technologies. Receiving support may not matter because you won’t be using the server for very long. You may also be in a position where you need to know that your OS won’t have any server-crashing bugs.

Which version do you choose? How do you know whether your OS will be stable and receive patches/updates? I ran into these issues when I accidentally created a server on a release that lost support just a few months later; however, I learned an easy trick to ensure that in the future I chose an LTS release:

  • LTS releases start with an even number and end in 04

Liquid Web offers a variety of Ubuntu releases within our Core-Managed and Self-Managed support tiers:

Notice that all the versions of Ubuntu that we offer are the .04 releases. These are all LTS releases, ensuring that you receive the longest possible lifespan and vital security updates for your OS.

 

Ubuntu’s Package Manager

Prior to working with Ubuntu, I had the most experience with CentOS. Moving to Ubuntu meant having to learn how to install packages since the CentOS package manager, yum, was no longer available. I discovered that the general process is the same with a few minor differences.

The first difference I encountered is that Ubuntu uses the apt-get command. This is part of the Apt package manager and follows a very similar syntax to the yum command for the most basic command: installing packages. To install a package I had to use:

apt-get install $package

Where $package was the name of the package. I didn’t always know the name of the package I wanted to find, though, so I assumed I could search much like I could with yum search $package; however, Apt uses another utility for searching: apt-cache search $package

If you forget and try apt-get search instead, you’ll get an error that you’ve run an invalid operation (E: Invalid operation search).

One similarity to yum that is super useful: the “y” flag to bypass any prompts about installing the package. This was my favorite “trick” on CentOS to speed up installing packages and improve my efficiency, and knowing that it works the same on Ubuntu was a major relief.
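For example, a package install that skips the confirmation prompt might look like this (nginx is just an example package):

apt-get install -y nginx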

Of course, installing packages meant I had to add the repositories I wanted. This was a significant change, since I was used to the CentOS location of /etc/yum.repos.d/ for adding .repo files. With the Apt system on Ubuntu, I learned that I needed to add a .list file pointing to the repository in question in the /etc/apt/sources.list.d/ folder. Take Nginx, for example; it was quite easy to add once I realized what had to go where.
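A minimal sketch of what such a .list entry might look like (the exact repository line and signing key come from the vendor’s documentation; bionic is the Ubuntu 18.04 codename):

echo "deb http://nginx.org/packages/ubuntu/ bionic nginx" > /etc/apt/sources.list.d/nginx.list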

 

From here it was as straightforward as installing Nginx with the apt-get install nginx command.

 

There was a time when adding a repository did not immediately work for me; however, it was quickly resolved by telling Apt that it needed to refresh/update the list of repos it uses with the apt-get update command.

Note
Don’t worry, the apt-get update command does not upgrade any packages; it simply runs through the configured repositories and ensures that your system is aware of them, as well as the packages they offer.

 

Networking Considerations on Ubuntu

The last major lesson I want to talk about is one I learned painfully. Networking on Ubuntu, by default, is handled from the /etc/network/ directory.

There is a flat file called interfaces, and it contains all the networking information for your host (configured IPs/gateways, DNS servers, etc.).
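A minimal sketch of what that file might look like, with a primary interface and one sub-interface (all addresses are illustrative):

auto eth0
iface eth0 inet static
    address 192.168.1.100
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4

auto eth0:1
iface eth0:1 inet static
    address 192.168.1.101
    netmask 255.255.255.0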

Notice in the above configuration that both the primary eth0 interface and the sub-interface, eth0:1, are configured in the same file (/etc/network/interfaces). This caught me in a bad spot once when I made a configuration syntax mistake, restarted networking, and suddenly lost all access to my host. That interfaces file contained all the networking components for my host, and when I restarted networking while it contained a mistake, it prevented ANY networking from coming back up. The only way to correct the issue was to console the server, correct the mistake, and restart networking again.

Though this is the default configuration for Ubuntu, there is now a way to split the configuration into individual files per interface, so a simple mistake does not break all of the networking: /etc/network/interfaces.d/

To properly use this folder and have individual configuration files for each interface, you need to modify the /etc/network/interface file to tell it to look in the interfaces.d/ folder. This is a simple change of adding one line:

source /etc/network/interfaces.d/*

This will tell the main interfaces configuration file to also review /etc/network/interfaces.d/ for any additional configuration files and use those as well. This would have saved me a lot of hassle had I known about it earlier.

Hopefully, you can use my mistakes and struggles to your advantage. Ubuntu is a great operating system, is supported by a fantastic community, and is covered by “the most helpful humans in hosting” here at Liquid Web. If you have any questions about Ubuntu, our support, or how you might best use it in your environment, feel free to contact us at support@liquidweb.com or via phone at 1-800-580-4985.

 

 

How to Redirect URLs Using Nginx

Reading Time: 3 minutes

What is a Redirect?

A redirect is a web server function that sends traffic from one URL to another. Redirects are an important feature when content moves or a site changes structure. There are several different types of redirects, but the more common forms are temporary and permanent. In this article, we will provide some examples of redirecting through the vhost file, forcing a secure HTTPS connection, redirecting to www and non-www, as well as the difference between temporary and permanent redirects.

Note
As this is an Nginx server, any .htaccess rules will not apply. If you’re using the other popular web server, Apache, you’ll find this article useful.

Common Methods for Redirects

Temporary redirects (response code: 302 Found) are helpful if a URL is temporarily being served from a different location. For example, these are helpful when performing maintenance and can redirect users to a maintenance page.

However, permanent redirects (response code: 301 Moved Permanently) inform the browser there was an old URL that it should forget and not attempt to access anymore. These are helpful when content has moved from one place to another.

 

How to Redirect

With Nginx, redirects are handled within a .conf file, typically found in /etc/nginx/sites-available/ and named after your site’s document root directory, e.g. /etc/nginx/sites-available/directory_name.conf. The document root directory is where your site’s files live; it may be /html if you have one site on the server, or /domain.com if your server hosts multiple sites. Either way, that directory name is usually your .conf file name. In the /etc/nginx/sites-available/ directory you’ll find the default file, which you can copy or append your redirects to, or you can create a new file named html.conf or domain.com.conf.

Note
If you choose to create a new file, be sure to update your symbolic links in /etc/nginx/sites-enabled with the command:

ln -s /etc/nginx/sites-available/domain.com.conf /etc/nginx/sites-enabled/domain.com.conf

The first example we’ll cover is redirection of a specific page/directory to the new page/directory.

Temporary Page to Page Redirect

server {
# Temporary redirect to an individual page
rewrite ^/oldpage$ http://www.domain.com/newpage redirect;
}

Permanent Page to Page Redirect

server {
# Permanent redirect to an individual page
rewrite ^/oldpage$ http://www.domain.com/newpage permanent;
}

Permanent www to non-www Redirect

server {
# Permanent redirect to non-www
server_name www.domain.com;
rewrite ^/(.*)$ http://domain.com/$1 permanent;
}

Permanent Redirect to www

server {
# Permanent redirect to www
server_name domain.com;
rewrite ^/(.*)$ http://www.domain.com/$1 permanent;
}

Sometimes the need will arise to change the domain name for a website. In this case, a redirect from the old site’s URL to the new site’s URL is very helpful in letting users know the domain has moved.

The next example we’ll cover is redirecting an old URL to a new URL.

Permanent Redirect to New URL

server {
# Permanent redirect to new URL
server_name olddomain.com;
rewrite ^/(.*)$ http://newdomain.com/$1 permanent;
}

We’ve added the redirect using the rewrite directive we discussed earlier. The ^/(.*)$ regular expression will use everything after the / in the URL. For example, http://olddomain.com/index.html will redirect to http://newdomain.com/index.html. To achieve the permanent redirect, we add permanent after the rewrite directive as you can see in the example code.

When it comes to HTTPS and being fully secure, it is ideal to force everyone to use https:// instead of http://.

Redirect to HTTPS

server {
# Redirect to HTTPS
listen      80;
server_name domain.com www.domain.com;
return      301 https://domain.com$request_uri;
}

After these rewrite rules are in place, testing the configuration prior to running a restart is recommended. Nginx syntax can be checked with the -t flag to ensure there is not a typo present in the file.

Nginx Syntax Check

nginx -t

If the output reports that the syntax is ok and the test is successful, the configuration is correct, and Nginx must be reloaded for the redirects to take effect.

Restarting Nginx

service nginx reload

For CentOS 7, which unlike CentOS 6 uses systemd:

systemctl restart nginx

Redirects on Managed WordPress/WooCommerce

If you are on our Managed WordPress/WooCommerce products, redirects can be added through the /home/s#/nginx/redirects.conf file. Each site has its own s#, which is the FTP/SSH user for that site. The plugin called ‘Redirection’ can be downloaded to help with a simple page-to-page redirect; otherwise, the redirects.conf file can be used to add more specific redirects using the examples explained above.

Due to the nature of a managed platform, after you have the rules in place within the redirects.conf file, please reach out to support and ask for Nginx to be reloaded. If you are uncomfortable performing the steps outlined above, contact our support team via chat, ticket, or phone call.  With Managed WordPress/WooCommerce you get 24/7 support available and ready to help you!

How To Set Up Multiple PHP Versions in Webmin

Reading Time: 4 minutes

What is Webmin?

Webmin is a browser-based graphical interface to help you administer your Linux server.  Much like cPanel or Plesk, Webmin allows you to set up and manage accounts, Apache, DNS zones, users, and configurations.  As these configurations can get somewhat complicated, Webmin works to simplify the process, resulting in fewer issues during server and domain setup, a more stable server, and a pleasant administration experience. Unlike Plesk or cPanel, Webmin is completely free and open to the public. Unfortunately, here at Liquid Web, we do not offer managed support for Webmin, but we are always willing to assist as much as possible when issues arise.  You can download Webmin from their site, where you can also find some excellent documentation on this interface.

 

Installing Webmin

Before beginning (if you have not already), you will need to install Webmin on your server.  For this article, we will mainly be working with Webmin installed on an Ubuntu server. However, the process is very similar on CentOS, so we have included instructions for both operating systems below.

  • First, you will need to access your server via SSH. If you are not sure how to SSH into your server, please visit our link on the subject.  
  • Once you are logged into your server via SSH, please run the following commands in order, or copy and paste the entire syntax.
Debian/Ubuntu

sudo sh -c 'echo "deb http://download.webmin.com/download/repository sarge contrib" > /etc/apt/sources.list.d/webmin.list'

wget -qO - http://www.webmin.com/jcameron-key.asc | sudo apt-key add -

sudo apt-get update

sudo apt-get install webmin

CentOS/RedHat/Fedora

(echo "[Webmin]
name=Webmin Distribution Neutral
baseurl=http://download.webmin.com/download/yum
enabled=1
gpgcheck=1
gpgkey=http://www.webmin.com/jcameron-key.asc" >/etc/yum.repos.d/webmin.repo;
yum -y install webmin)

 

Accessing Webmin

Webmin is a web-based application, so once it is installed, you can access it using a browser of your choice.  Make sure port 10000 is open on your server, as Webmin utilizes this port to function.  We have included steps below to ensure the correct port is open for iptables and firewalld.

IPTABLES

iptables-save > /tmp/tabsav
vi /tmp/tabsav
iptables-restore < /tmp/tabsav

You should be able to use the commands above to alter your iptables rules to look something like what we have included below, with a rule accepting traffic on port 10000.

# Generated by iptables-save v1.4.7 on Thu Jan 3 00:02:49 2019
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [3044:1198306]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10000 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
# Completed on Thu Jan 3 00:02:49 2019

FirewallD

firewall-cmd --zone=public --add-port=10000/tcp --permanent
firewall-cmd --reload

Once you have made sure port 10000 is open, you should be able to access the Webmin interface by entering your server’s IP address followed by the port number 10000.

Example: https://192.168.1.100:10000 (replace 192.168.1.100 with your server’s IP).
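To confirm from the command line that Webmin is actually listening on port 10000, you can check with ss (part of the iproute2 package on modern distributions):

ss -tlnp | grep 10000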

Installing PHP Versions in Webmin

There are a lot of situations where we may need to use multiple PHP versions.  For example, you may have multiple domains or applications on your server that require an older version of PHP, while at the same time newer domains are configured for newer versions of PHP.  For this article, we will be installing PHP 7.0 and PHP 5.6 on Debian.

Step 1: First, you will want to SSH into your server and run the following command.
apt-get install php7.0-cli php7.0-fpm

You can check the installation after it has completed by running php -v in your terminal.

Step 2: Now here is where things tend to get tricky.  By default, Debian only offers a single PHP version in the official repository. So, we will have to add an additional repository for Debian. While adding this repository, it is good practice to enable HTTPS for APT and register the APT key. You can accomplish this by executing the commands we have included below.

apt-get install apt-transport-https
curl https://packages.sury.org/php/apt.gpg | apt-key add -
echo 'deb https://packages.sury.org/php/ stretch main' > /etc/apt/sources.list.d/deb.sury.org.list
apt-get update

Once the repository is added, we can go ahead and add our second PHP version to the server.

apt-get install php5.6-cli php5.6-fpm

We can now check both PHP versions on the server by running these commands.

php7.0 -v

php5.6 -v

Now that we have confirmed both PHP versions are installed you can access their configuration files in the following directories.

  • /etc/php/5.6/cli/php.ini
  • /etc/php/7.0/cli/php.ini

Step 3: To make things easier, later on, we will want to add the location of the configuration files to Webmin.  This can be done from within the Webmin interface.

  1. Log into Webmin
  2. Navigate to Others >> PHP Configuration
  3. Add the PHP configuration file location
  4. Click Save

You can use this tool to add and edit directives for different PHP versions. For example, you’ll be able to edit PHP’s memory limit, timeout length, extensions and more.  This simply helps consolidate configurations within one interface. From here we can just use a .htaccess file to specify what version of PHP a site should use.

Step 4: If you do not have this file already within your document root you can add this file by navigating to /var/www/exampledomain/  and running the following command to indicate which PHP version you are going to use.

echo "AddHandler application/x-httpd-php56 .php" > .htaccess && chown exampleuser. .htaccess

echo "AddHandler application/x-httpd-php70 .php" > .htaccess && chown exampleuser. .htaccess

Step 5: Once you have completed this, you can test to see if your site is running on the desired PHP version.  You can accomplish this by creating a PHP information page: make a file in your document root, usually in the path of /var/www/html/.

You will want to insert the code below and save the file.

<?php phpinfo(); ?>

After you have created this file, you can view the page by visiting your domain followed by the name of the file you created.  For example, www.example.com/phpinfo.php.

Congratulations, you can now use Webmin to accomplish your daily admin tasks!  Take a look at our Cloud VPS servers for 24/7 support and lightning-fast servers!

How To Set Up FTP isolation in CentOS or Ubuntu

Reading Time: 4 minutes

Configuring Multi-User FTP with User Isolation

This article is intended to give an overview of a chroot environment and configuring your FTP service for user isolation. This is done with a few lines within the main configuration file of the FTP service.

This article is also intended as a guide for our Core-Managed servers running CentOS or Ubuntu without a control panel. Our Fully Managed servers that utilize the cPanel software already have the FTP user isolation configured by default and also provide utilities for creating FTP users.

What is Chroot?

Chroot or change-root is the implementation of setting a new root directory for the environment that a user has access to. By doing this, from the user’s perspective, there will appear to be no higher directory that the user could escape to. They would be limited to the directory they start in and only see the contents inside of that directory.

If a user were to try to list the contents of the root (/) of the system, it would return the contents of their chroot environment and not the actual root of the server.

 

Installing ProFTPd

As there are many FTP options available (ProFTPd, Pure-FTPd, and vsftpd, to name a few), this article will focus only on ProFTPd for simplicity and brevity. This is also not intended to be a guide for installing an FTP service, as that is covered in our Knowledge Base articles below.

https://www.liquidweb.com/kb/how-to-install-proftpd-on-centos-7/

https://www.liquidweb.com/kb/how-to-install-and-configure-proftpd-on-ubuntu-14-04-lts/

 

User Isolation with ProFTPd

User Setup

By default, ProFTPd will read the system /etc/passwd file. The users in this file are normal system users and do not need to be created through any means other than normal user creation. There are many ways to create additional FTP users, but this is one way to get started.

Here are some typical entries from the system passwd file. From left to right, you can see the username, the user and group IDs, the home directory, and the default shell configured for that user.

user1:x:506:521::/home/user1:/bin/bash
user2:x:505:520::/home/user2:/bin/bash

To create these users, you would use the useradd command from the command line or whatever other methods you would typically use to create users on the server.

Create the user

useradd -m -d /home/homedir newuser

Set the user password

passwd newuser

If you are setting up multiple users that all need to have access to the same directory, you will need to make sure that the users are all in the same group. Being in the same group means that each user can have group level access to the directory and allow everyone in the group to access the files that each user uploads. This level of user management is beyond the scope of this article, but be aware that things of this nature are possible.
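As a minimal sketch (the group and usernames are illustrative), you could create a shared group and add each FTP user to it:

groupadd ftpgroup

usermod -aG ftpgroup user1

usermod -aG ftpgroup user2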

ProFTPd User Configuration

To jail a user to their home directory within ProFTPd, you have to set the DefaultRoot value to ~.

vim /etc/proftpd.conf

DefaultRoot ~

With this set, it tells the FTP service to only allow the user to access their home directory. The ~ is a shortcut that tells the system to read whatever the user’s home directory is from the /etc/passwd file and use that value.

Using this functionality in ProFTPd, you can also define multiple DefaultRoot directives and have those restrictions match based on some criteria. You can jail some users, and not others, or jail a set of users all to the same directory if desired. This is done by matching the group that a user belongs to.

When a new user is created, as shown above, their default group will be the same as their username. You can, however, add or modify the group(s) assigned to the user after they are created if necessary.

Jail Everyone Not in the “Special-Group”

DefaultRoot ~ !special-group

Jail Group1 and Group2 to the Same Directory

DefaultRoot /path/to/uploads group1,group2

After making these changes to the proftpd.conf file you’ll need to restart the FTP service.

CentOS 6.x (init)

/etc/init.d/proftpd restart

CentOS 7.x (systemd)

systemctl restart proftpd
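On Ubuntu, the proftpd package from the default repositories also registers a systemd service on 16.04 and later, so the same command applies:

systemctl restart proftpd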

 

User Isolation with SFTP (SSH)

You can also isolate SFTP users or restrict a subset of SSH users to only have SFTP access. Again, this pertains to regular system users created using the useradd command.

While you can secure FTP communications using SSL, this is an extra level of setup and configuration. SFTP, by contrast, is used for file transfers over an SSH connection. SSH is an encrypted connection to the server and is secure by default. If you are concerned about security and are unsure about adding SSL to your FTP configuration, this may be another option to look into.

 

SFTP User Setup

Create the user and their home directory just like with the FTP user, but here we make sure to set the shell to disallow normal SSH logins. We are presuming that you are looking for SFTP-only users and not regular shell users, so we add the restriction on the shell to prevent non-SFTP logins. (Note that the nologin shell lives at /sbin/nologin on CentOS and /usr/sbin/nologin on Ubuntu.)

useradd -m -d /home/homedir/ -s /sbin/nologin username

passwd username

We need to make sure that permissions and ownership are set for the home directory to be owned by root, and the upload directory is owned by the user.

chmod 755 /home/homedir/

chown root. /home/homedir/

mkdir -p /home/homedir/upload-dir/

chown username. /home/homedir/upload-dir/

 

SFTP Configuration

Here, by setting the ChrootDirectory to the %h variable, we confine the user to their home directory as set up when the user was created. Using the ForceCommand directive also limits the user to the internal SFTP commands used for file transfers, again eliminating the possibility that the user could break out of the jail and into a normal shell environment.

/etc/ssh/sshd_config
Subsystem sftp internal-sftp
Match User user1,user2,user3
ChrootDirectory %h
ForceCommand internal-sftp

Jail Multiple FTP Users to a Location

Alternatively, if you wanted to have multiple users all jailed to the same location, you can set them all to be in the same group, have the same home directory, and then use a Match Group directive within the SSH configuration.

vim /etc/ssh/sshd_config

Subsystem sftp internal-sftp
Match Group groupname
ChrootDirectory %h
ForceCommand internal-sftp

After making these changes to the sshd_config file, restart the SSH service. One of the following commands should work for you.

CentOS 6.x (init)

/etc/init.d/sshd restart

CentOS 7.x (systemd)

systemctl restart sshd
