MySQL Performance: Converting MySQL to MariaDB

As we explored in the previous article of our MySQL Performance Series, MySQL vs. MariaDB, there are very few downsides to using MariaDB over standard MySQL. Our high-availability MariaDB servers have proven themselves a worthy successor with easily mitigated drawbacks. As the last article in our series, we will focus on upgrading to various MySQL and MariaDB versions on the following servers:

CentOS 6/7

Ubuntu 14.04/16.04

Choosing a MariaDB Upgrade Plan

Selecting the right upgrade plan requires determining which version of MySQL is running on the intended system. A universal method is to query the server itself to report its version. This works the same regardless of the Linux distribution running on the server:

mysql -h localhost -e 'SHOW VARIABLES LIKE "version"\G'

Example Output:

~ $ mysql -h localhost -p -e 'SHOW VARIABLES LIKE "version"\G'
*************************** 1. row ***************************
Variable_name: version
Value: 5.5.61-MariaDB
Be sure you are connecting to the local server when performing this check.

Once you’ve identified your MySQL version, use the key below to locate the correct upgrade procedure for your particular server. Some versions of MySQL require upgrading to MariaDB 5.5 before upgrading to MariaDB 10.x versions.

Upgrading from MySQL to MariaDB

The upgrade process is similar between most server types. There are differences with the specific commands that are executed, but the overall procedure follows this General Upgrade Procedure:

  1. Schedule accurate downtime.
  2. Ensure adequate backups.
  3. Remove the existing MySQL binaries/packages.
  4. Install the correct MariaDB repository.
  5. Clear repository caches to expunge old package data.
  6. Install MariaDB packages via the system’s package manager.
  7. Start MariaDB by starting the MySQL service.
  8. Run MySQL upgrade script.
  9. Confirm MySQL databases are accessible and data is correct.

We’ve included specific procedures below for various Liquid Web systems. Before choosing your applicable upgrade procedure, let us review the first two steps in the above General Upgrade Procedure. These steps are generally universal and are critical to the success of any maintenance plan.

Schedule Accurate Downtime

Scheduling proper maintenance windows to perform changes affecting production environments can be tricky. The maintenance event window must include enough time to perform all necessary tasks in the maintenance plan, plus any troubleshooting and extra time to complete the Reversion Plan if needed.

When calculating a MySQL maintenance window, double your estimated time and add 20 minutes. For example, if the tasks are estimated at 40 minutes, schedule (40 × 2) + 20 = 100 minutes.

Ensure Adequate Backups

There is more to a maintenance backup plan than just backing up files and databases. It’s also necessary to form a Reversion Plan, which outlines the process to restore the environment to its original state before the start of the maintenance.
Maintenance events can sometimes lead to unforeseen issues where the only remaining option is reverting to backups. Having both a valid backup plan and a valid reversion plan to restore services is a critical component of successful maintenance. Below are outlines of both a Sample Backup Plan and a Sample Reversion Plan which can be used in conjunction with this article's upgrade procedures.

Sample Backup Plan

Create a backup of all databases.

mysqldump --all-databases --add-drop-database > all-databases-backup.sql

This command has no visible output unless errors occur. The above command creates a single-file backup of all databases; the --add-drop-database flag makes restoration easy. This .sql file can be re-imported into MySQL to delete and re-create all databases. The file created is named all-databases-backup.sql and will be needed later in the Sample Reversion Plan. Be sure you have enough free space when making a MySQL backup of all your databases. Table locking occurs when performing a MySQL backup.
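To confirm the dump finished before moving on, a quick check like the following can help. This is a minimal sketch, assuming the backup was written to all-databases-backup.sql in the current directory; a successful mysqldump ends with a "Dump completed" comment.

# List the backup file to confirm it exists and has a reasonable size.
ls -lh all-databases-backup.sql

# The last line of a successful dump is a "-- Dump completed on ..." comment.
tail -n 1 all-databases-backup.sql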

Create a backup of MySQL’s configuration file.

cp -p /etc/my.cnf{,.bak}

This command has no visible output unless errors occur. The above command creates an exact copy of /etc/my.cnf named /etc/my.cnf.bak, which can be used to restore the configuration if needed.

Sample Reversion Plan
Step 1: Restore all databases into MySQL.

mysql < all-databases-backup.sql

This command has no visible output unless errors occur.

Step 2: Preserve the existing config by renaming /etc/my.cnf to /etc/my.cnf.bad.

mv /etc/my.cnf{,.bad}

This command has no visible output unless errors occur.

Step 3: Copy the /etc/my.cnf.bak file back to /etc/my.cnf.

cp -p /etc/my.cnf{.bak,}

This command has no visible output unless errors occur.

Step 4: Restart the MySQL service.

service mysqld restart

Example Output:
~ $ service mysqld restart
Stopping mysqld: [ OK ]
Starting mysqld: [ OK ]

Note: once the server has been switched to MariaDB, the service name changes from mysqld to mysql.
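After a reversion, a quick sanity check helps confirm the restore worked. This is a minimal sketch; mydatabase is a placeholder for one of your own databases.

# Confirm the server is answering and the expected databases exist.
mysql -e 'SHOW DATABASES;'

# Spot-check one database's tables (mydatabase is a placeholder).
mysql -e 'SHOW TABLES;' mydatabase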

MySQL to MariaDB on CentOS 6/7 with cPanel

All Liquid Web cPanel server images include MariaDB 5.5 already installed. This includes both CentOS 6 and CentOS 7 server images. There is no need to upgrade unless the MariaDB 10.x series is required. For a manual upgrade, follow the procedure provided by MariaDB here: Upgrading from MariaDB 5.5 to MariaDB 10.0. Otherwise, cPanel provides an easy point-and-click interface which does all the heavy lifting for you. Follow one of the procedures below in either Text Only form or Graphical form.

With upgrades, it is best to schedule the work outside of peak hours.

cPanel MariaDB Upgrade Instructions:

Step A) Confirm backups are present before proceeding at the scheduled time.

Step B) Login to WHM and load the MySQL/MariaDB Upgrades page:

WHM allows you to upgrade your MySQL versions with just a few clicks.

① Use the quick search box on the left and type in: mysql upgrade

② Locate and click the MySQL/MariaDB Upgrades link.

③ Select your desired version of MariaDB.

④ Click the Next button.

Step C) Wait for the "Upgrade Warnings" page to finish loading.

WHM will warn you of any potential consequences before upgrading MySQL.

① Read & acknowledge each warning message by enabling each checkbox.

② Click the Continue button.

Step D) Wait for the "Upgrade Type" page to finish loading.

WHM prompts for the upgrade type for MySQL.

① Select Unattended Upgrade for a seamless upgrade experience.

② Click the Continue button.

Step E) Wait for the Upgrade Process to complete.

The WHM browser window can be closed without interrupting the MySQL update.

MySQL to MariaDB on CentOS 7 with Plesk Onyx 17

All Liquid Web CentOS 7 server images include MariaDB 5.5 already installed, including our CentOS 7 Plesk Onyx 17 servers. There is no need to upgrade unless the MariaDB 10.x series is required. In these cases, follow the standard MariaDB Plesk Upgrade Procedures here: How to upgrade MySQL 5.5 to 5.6/5.7 or MariaDB 5.5 to 10.0/10.1/10.2 on Linux

 

MySQL 5.1-5.5 to MariaDB 5.5 on CentOS 6

Due to compatibility limitations, MySQL 5.1, 5.2, 5.3, and 5.5.x must be upgraded to MariaDB 5.5 first and cannot be upgraded to MariaDB 10.0 directly. This upgrade plan walks through getting any one of these deprecated MySQL versions upgraded to MariaDB 5.5. Once the upgrade to MariaDB 5.5 is complete, follow the How to upgrade MariaDB 5.5 to MariaDB 10.0 on CentOS 6 instructions to continue the upgrade process to MariaDB 10.

Step 1: Confirm backups are present before proceeding at the scheduled time.

Step 2: Stop the MySQL service.

service mysqld stop

Example Output:
~ $ service mysqld stop
Stopping mysqld: [ OK ]

Note: once MariaDB is installed, the service name changes from mysqld to mysql.

Step 3: Install the MariaDB 5.5 repository for CentOS 6.

cat <<EOF > /etc/yum.repos.d/MariaDB.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
EOF

This command has no visible output unless errors occur.
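Before moving on, you can optionally confirm that yum now sees the MariaDB repository. A minimal sketch:

# The mariadb repository should appear in the list of enabled repos.
yum repolist enabled | grep -i mariadb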

Step 4: Clean the yum repository cache

yum clean all

Example Output:

~ $ yum clean all
Loaded plugins: fastestmirror
Cleaning repos: base extras mariadb updates
Cleaning up Everything
Cleaning up list of fastest mirrors

Step 5: Remove MySQL packages

yum remove mysql-server mysql

Example Output:

Running Transaction Test
Transaction Test Succeeded
Running Transaction
Erasing : mysql-server-5.1.73-8.el6_8.x86_64 1/1
warning: /var/log/mysqld.log saved as /var/log/mysqld.log.rpmsave
Verifying : mysql-server-5.1.73-8.el6_8.x86_64 1/1
Removed:
mysql-server.x86_64 0:5.1.73-8.el6_8
Complete!
Truncated for brevity.

Step 6: Install the MariaDB-server and MariaDB-client packages via yum.

yum install MariaDB-server MariaDB-client -y

Example Output:
Installed:
MariaDB-client.x86_64 0:10.1.35-1.el6 MariaDB-compat.x86_64 0:10.1.35-1.el6 MariaDB-server.x86_64 0:10.1.35-1.el6
Dependency Installed:
MariaDB-common.x86_64 0:10.1.35-1.el6 boost-program-options.x86_64 0:1.41.0-28.el6
galera.x86_64 0:25.3.23-1.rhel6.el6 jemalloc.x86_64 0:3.6.0-1.el6
Replaced:
mysql-libs.x86_64 0:5.1.73-8.el6_8
Complete!
Truncated for brevity.

Step 7: Start the MySQL service.

service mysql start

Example Output:

~ $ service mysql start
Starting MariaDB.180808 18:21:13 mysqld_safe Logging to '/var/lib/mysql/cent6.hostname.err'.
180808 18:21:13 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
SUCCESS!
Note: the service name has switched from mysqld to mysql.

Step 8: Run the MySQL upgrade tool.

mysql_upgrade

Example Output:

~ $ mysql_upgrade
MySQL upgrade detected
Phase 1/4: Fixing views from mysql
mysql
mysql.columns_priv OK
mysql.db OK
mysql.event OK
mysql.func OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.user OK
Phase 4/4: Running 'mysql_fix_privilege_tables'
OK
Truncated for brevity.

Step 9: Confirm MariaDB server is running as expected

mysql -e 'SHOW VARIABLES LIKE "version"\G'

Example Output:

~ $ mysql -e 'SHOW VARIABLES LIKE "version"\G'
*************************** 1. row ***************************
Variable_name: version
Value: 5.5.61-MariaDB

Step 10: The upgrade is complete; check that your databases exist and are working.

[ Optional ] follow the How to upgrade MariaDB 5.5 to MariaDB 10.0 on CentOS 6 instructions.

MariaDB 5.5 to MariaDB 10.0 on CentOS 6

Step 1:  Confirm backups are present before proceeding at the scheduled time.

Step 2: Remove all MariaDB packages.

yum remove MariaDB-*

Example Output:
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Erasing : MariaDB-server-5.5.61-1.el6.x86_64 1/2
Erasing : MariaDB-client-5.5.61-1.el6.x86_64 2/2
Verifying : MariaDB-server-5.5.61-1.el6.x86_64 1/2
Verifying : MariaDB-client-5.5.61-1.el6.x86_64 2/2
Removed:
MariaDB-client.x86_64 0:5.5.61-1.el6 MariaDB-server.x86_64 0:5.5.61-1.el6
Complete!
Truncated for brevity.

Step 3: Remove existing MariaDB repository.

rm -f /etc/yum.repos.d/MariaDB*.repo

This command has no visible output unless errors occur.

Step 4: Create the MariaDB 10.0 Yum repository file.

cat <<EOF > /etc/yum.repos.d/MariaDB.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.0/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
EOF

This command has no visible output unless errors occur.

Step 5: Clean the Yum cache.

yum clean all

Example Output:

~ $ yum clean all
Loaded plugins: fastestmirror
Cleaning repos: base extras mariadb updates
Cleaning up Everything
Cleaning up list of fastest mirrors

Step 6: Install the MariaDB-server and MariaDB-client packages via Yum.

yum install MariaDB-server MariaDB-client

Example Output:

Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : MariaDB-client-10.0.36-1.el6.x86_64 1/2
Installing : MariaDB-server-10.0.36-1.el6.x86_64 2/2
Verifying : MariaDB-server-10.0.36-1.el6.x86_64 1/2
Verifying : MariaDB-client-10.0.36-1.el6.x86_64 2/2
Installed:
MariaDB-client.x86_64 0:10.0.36-1.el6 MariaDB-server.x86_64 0:10.0.36-1.el6
Complete!
Truncated for brevity.

Step 7: Start the MySQL Service.

service mysql start

Step 8: Run the MySQL Upgrade script.

mysql_upgrade

Step 9: The upgrade is complete; check that your databases exist and are working.
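As in the previous procedure, a quick check confirms the new version and that the databases are present. A minimal sketch:

# The Value line should now report a 10.0.x-MariaDB version string.
mysql -e 'SHOW VARIABLES LIKE "version"\G'

# Confirm the expected databases are still listed.
mysql -e 'SHOW DATABASES;'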

 

MySQL 5.1-5.5 to MariaDB 5.5 on Ubuntu 14.04

Step 1:  Confirm backups are present before proceeding at the scheduled time.

Step 2: Install the MariaDB 5.5 repository. The following commands install the required tools (if missing), import the Ubuntu 14.04 GPG key, add the MariaDB 5.5 repository, and update the apt package caches.

sudo apt-get install software-properties-common -y
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
sudo add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://ftp.osuosl.org/pub/mariadb/repo/5.5/ubuntu trusty main'
sudo apt-get update

Example Output:
~ $ sudo apt-get install software-properties-common -y > /dev/null
~ $ sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.PtpHIXMNvY --no-auto-check-trustdb --trust-model always --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
gpg: requesting key 1BB943DB from hkp server keyserver.ubuntu.com
gpg: key 1BB943DB: public key "MariaDB Package Signing Key <package-signing-key@mariadb.org>" imported
gpg: Total number processed: 1
gpg: imported: 1
~ $ sudo add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://ftp.osuosl.org/pub/mariadb/repo/5.5/ubuntu trusty main'
~ $ sudo apt-get update > /dev/null
Truncated for brevity.
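Optionally, you can confirm that apt will now pull mariadb-server from the new repository before touching the running service. A minimal sketch:

# The candidate version should come from the MariaDB 5.5 repository added above.
apt-cache policy mariadb-server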

Step 3: Stop the mysql service.

sudo service mysql stop
Example Output:
~ $ sudo service mysql stop
mysql stop/waiting

Step 4: Rename MySQL config file /etc/mysql/my.cnf to /etc/mysql/my.cnf.bak. This step is necessary to ensure all incompatible directives/variables have been removed from the configuration to prevent problems with the install.

sudo mv /etc/mysql/my.cnf{,.bak}

Check out incompatible directives.

Step 5: Install the mariadb-server package. This will simultaneously remove the existing MySQL binaries and then install the new MariaDB binaries.
sudo apt-get install mariadb-server -y

Example Output:
~ $ sudo apt-get install mariadb-server -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
linux-headers-3.13.0-153 linux-headers-3.13.0-153-generic
linux-headers-3.13.0-24 linux-headers-3.13.0-24-generic
linux-headers-3.13.0-91 linux-headers-3.13.0-91-generic
linux-image-3.13.0-153-generic linux-image-3.13.0-24-generic
linux-image-3.13.0-91-generic
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
libmariadbclient18 libmysqlclient18 mariadb-client-5.5
mariadb-client-core-5.5 mariadb-common mariadb-server-5.5
mariadb-server-core-5.5
Suggested packages:
tinyca mailx mariadb-test
The following packages will be REMOVED:
mysql-client-5.5 mysql-client-core-5.5 mysql-server mysql-server-5.5
mysql-server-core-5.5
The following NEW packages will be installed:
libmariadbclient18 mariadb-client-5.5 mariadb-client-core-5.5 mariadb-common
mariadb-server mariadb-server-5.5 mariadb-server-core-5.5
The following packages will be upgraded:
libmysqlclient18
1 upgraded, 7 newly installed, 5 to remove and 1 not upgraded.
Need to get 10.4 MB of archives.
After this operation, 22.1 MB of additional disk space will be used.
Get:1 http://ftp.osuosl.org/pub/mariadb/repo/5.5/ubuntu/ trusty/main mariadb-common all 5.5.61+maria-1~trusty [3,180 B] Get:2 http://ftp.osuosl.org/pub/mariadb/repo/5.5/ubuntu/ trusty/main libmysqlclient18 amd64 5.5.61+maria-1~trusty [2,862 B] Get:3 http://ftp.osuosl.org/pub/mariadb/repo/5.5/ubuntu/ trusty/main libmariadbclient18 amd64 5.5.61+maria-1~trusty [521 kB] Get:4 http://ftp.osuosl.org/pub/mariadb/repo/5.5/ubuntu/ trusty/main mariadb-client-core-5.5 amd64 5.5.61+maria-1~trusty [622 kB] Get:5 http://ftp.osuosl.org/pub/mariadb/repo/5.5/ubuntu/ trusty/main mariadb-client-5.5 amd64 5.5.61+maria-1~trusty [961 kB] Get:6 http://ftp.osuosl.org/pub/mariadb/repo/5.5/ubuntu/ trusty/main mariadb-server-core-5.5 amd64 5.5.61+maria-1~trusty [3,831 kB] Get:7 http://ftp.osuosl.org/pub/mariadb/repo/5.5/ubuntu/ trusty/main mariadb-server-5.5 amd64 5.5.61+maria-1~trusty [4,413 kB] Get:8 http://ftp.osuosl.org/pub/mariadb/repo/5.5/ubuntu/ trusty/main mariadb-server all 5.5.61+maria-1~trusty [2,896 B] Fetched 10.4 MB in 1s (7,992 kB/s)
Preconfiguring packages ...
(Reading database ... 165598 files and directories currently installed.)
Removing mysql-server (5.5.61-0ubuntu0.14.04.1) ...
Removing mysql-server-5.5 (5.5.61-0ubuntu0.14.04.1) ...
mysql stop/waiting
Removing mysql-client-5.5 (5.5.61-0ubuntu0.14.04.1) ...
Removing mysql-client-core-5.5 (5.5.61-0ubuntu0.14.04.1) ...
Removing mysql-server-core-5.5 (5.5.61-0ubuntu0.14.04.1) ...
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
Selecting previously unselected package mariadb-common.
(Reading database ... 165377 files and directories currently installed.)
Preparing to unpack .../mariadb-common_5.5.61+maria-1~trusty_all.deb ...
Unpacking mariadb-common (5.5.61+maria-1~trusty) ...
Preparing to unpack .../libmysqlclient18_5.5.61+maria-1~trusty_amd64.deb ...
Unpacking libmysqlclient18 (5.5.61+maria-1~trusty) over (5.5.61-0ubuntu0.14.04.1) ...
Selecting previously unselected package libmariadbclient18.
Preparing to unpack .../libmariadbclient18_5.5.61+maria-1~trusty_amd64.deb ...
Unpacking libmariadbclient18 (5.5.61+maria-1~trusty) ...
Selecting previously unselected package mariadb-client-core-5.5.
Preparing to unpack .../mariadb-client-core-5.5_5.5.61+maria-1~trusty_amd64.deb ...
Unpacking mariadb-client-core-5.5 (5.5.61+maria-1~trusty) ...
Selecting previously unselected package mariadb-client-5.5.
Preparing to unpack .../mariadb-client-5.5_5.5.61+maria-1~trusty_amd64.deb ...
Unpacking mariadb-client-5.5 (5.5.61+maria-1~trusty) ...
Selecting previously unselected package mariadb-server-core-5.5.
Preparing to unpack .../mariadb-server-core-5.5_5.5.61+maria-1~trusty_amd64.deb ...
Unpacking mariadb-server-core-5.5 (5.5.61+maria-1~trusty) ...
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
Setting up mariadb-common (5.5.61+maria-1~trusty) ...
Selecting previously unselected package mariadb-server-5.5.
(Reading database ... 165514 files and directories currently installed.)
Preparing to unpack .../mariadb-server-5.5_5.5.61+maria-1~trusty_amd64.deb ...
Unpacking mariadb-server-5.5 (5.5.61+maria-1~trusty) ...
Selecting previously unselected package mariadb-server.
Preparing to unpack .../mariadb-server_5.5.61+maria-1~trusty_all.deb ...
Unpacking mariadb-server (5.5.61+maria-1~trusty) ...
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
Processing triggers for ureadahead (0.100.0-16) ...
Setting up libmysqlclient18 (5.5.61+maria-1~trusty) ...
Setting up libmariadbclient18 (5.5.61+maria-1~trusty) ...
Setting up mariadb-client-core-5.5 (5.5.61+maria-1~trusty) ...
Setting up mariadb-client-5.5 (5.5.61+maria-1~trusty) ...
Setting up mariadb-server-core-5.5 (5.5.61+maria-1~trusty) ...
Setting up mariadb-server-5.5 (5.5.61+maria-1~trusty) ...
Installing new version of config file /etc/apparmor.d/usr.sbin.mysqld ...
Installing new version of config file /etc/init.d/mysql ...
Installing new version of config file /etc/logrotate.d/mysql-server ...
Installing new version of config file /etc/mysql/debian-start ...
mysql start/running, process 7255
Processing triggers for ureadahead (0.100.0-16) ...
Setting up mariadb-server (5.5.61+maria-1~trusty) ...
Processing triggers for libc-bin (2.19-0ubuntu6.14) ...

Step 6: During the installation, an on-screen dialog will appear prompting you to set the MySQL root password (you will be asked to enter it twice). Follow the instructions to either change the password or skip the change by providing a blank password.

Changing the MySQL root password is advised when upgrading.
Step 7: When the install completes successfully, the mysql service should start automatically, and the mysql_upgrade script is run automatically during the install process.

A) Check that the mysql service is running.

sudo service mysql status
Example Output:
~ $ sudo service mysql status
mysql start/running, process 4745

B) Check that the mysql_upgrade was successful.

sudo mysql_upgrade

Example Output:
~ $ sudo mysql_upgrade
This installation of MySQL is already upgraded to 5.5.61-MariaDB, use --force if you still need to run mysql_upgrade

Step 8: The upgrade is complete; check that your databases exist and are working.
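A quick version check confirms the server is now reporting a MariaDB 5.5 build. A minimal sketch:

# The Value line should report a 5.5.x-MariaDB version string.
mysql -e 'SHOW VARIABLES LIKE "version"\G'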

MySQL 5.7 to MariaDB 10.2 on Ubuntu 16.04

Step 1:  Confirm backups are present before proceeding at the scheduled time.

Step 2: Check that the local root account has a proper password. Before upgrading to MariaDB 10.2, check the 'root'@'localhost' user to ensure a password is assigned; otherwise, the auth_socket plugin is used with an empty password. This is common practice in older setups and causes problems with the upgrade process because MariaDB 10.2 defaults to Strict SQL mode.

The following command will print the local root user details.

mysql -e 'select User,Host,Plugin,authentication_string from user where authentication_string = "" and plugin = "auth_socket";' mysql
Example Output:
~ $ mysql -e 'select User,Host,Plugin,authentication_string from user where authentication_string = "" and plugin = "auth_socket";' mysql
+------+-----------+-------------+-----------------------+
| User | Host | Plugin | authentication_string |
+------+-----------+-------------+-----------------------+
| root | localhost | auth_socket | |
+------+-----------+-------------+-----------------------+

Below are two examples of how to update these entries.

Automagic Method: The following one-liner reads the password from /root/.my.cnf and uses that password to update the MySQL user entry, if the file exists.

sudo test -e /root/.my.cnf && sudo mysql -e "ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '$(awk -F"[=']" 'tolower($1)~/password/{print $(NF-1)}' /root/.my.cnf)'; select User,Host,Plugin,authentication_string from user where user = 'root' and host = 'localhost';" mysql || echo "ERROR: /root/.my.cnf does not exist, are you root?"
Example Output:
~ $ sudo test -e /root/.my.cnf && sudo mysql -e "ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '$(awk -F"[=']" 'tolower($1)~/password/{print $(NF-1)}' /root/.my.cnf)'; select User,Host,Plugin,authentication_string from user where user = 'root' and host = 'localhost';" mysql || echo "ERROR: /root/.my.cnf does not exist, are you root?"
+------+-----------+-----------------------+-------------------------------------------+
| User | Host | Plugin | authentication_string |
+------+-----------+-----------------------+-------------------------------------------+
| root | localhost | mysql_native_password | *40C5E49F0CC7BDC637FEEDFBF14FF100C37619D7 |
+------+-----------+-----------------------+-------------------------------------------+
If the Automagic method does not work, try the Manual method.

Manual Method: The following statement, run from the MySQL prompt, adds a proper password to the root user account. Don't forget to update the /root/.my.cnf file with the new password, or your upgrade may encounter errors. Be sure to substitute your own password for SuperSecretPass.

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'SuperSecretPass';
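If you prefer to handle the manual method from the shell, the following is a minimal sketch; SuperSecretPass is a placeholder, and the trailing verification query is optional.

# Set a native password for root@localhost (SuperSecretPass is a placeholder).
sudo mysql -e "ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'SuperSecretPass'; FLUSH PRIVILEGES;"

# Confirm the change; you will be prompted for the new password.
mysql -u root -p -e "select User,Host,Plugin from user where User='root' and Host='localhost';" mysql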

Step 3: Install the MariaDB 10.2 repository. This is a multi-command stanza that installs the required tools, the GPG key, and the repository, and then updates the package caches.

sudo apt-get install software-properties-common -y > /dev/null
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8
sudo add-apt-repository 'deb [arch=amd64,arm64,i386,ppc64el] http://ftp.osuosl.org/pub/mariadb/repo/10.2/ubuntu xenial main'
sudo apt-get update > /dev/null

Example Output:
~ $ sudo apt-get install software-properties-common -y
~ $ sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8
Executing: /tmp/tmp.J8PdET9w5B/gpg.1.sh --recv-keys
--keyserver
hkp://keyserver.ubuntu.com:80
0xF1656F24C74CD1D8
gpg: requesting key C74CD1D8 from hkp server keyserver.ubuntu.com
gpg: key C74CD1D8: public key "MariaDB Signing Key <signing-key@mariadb.org>" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
~ $ sudo add-apt-repository 'deb [arch=amd64,arm64,i386,ppc64el] http://ftp.osuosl.org/pub/mariadb/repo/10.2/ubuntu xenial main'
~ $ sudo apt-get update > /dev/null
Truncated for brevity.

Step 4: Stop the mysql service.

sudo service mysql stop
Example Output:
~ $ sudo service mysql stop
mysql stop/waiting

Step 5: Rename MySQL config file /etc/mysql/my.cnf to /etc/mysql/my.cnf.bak. This step is necessary to ensure all incompatible directives/variables have been removed from the configuration to prevent problems with the install.
sudo mv /etc/mysql/my.cnf{,.bak}

Check out incompatible directives.
Step 6: Install the mariadb-server package. This will simultaneously remove the existing MySQL binaries and then install the new MariaDB binaries.

sudo apt-get install mariadb-server -y

Example Output:
~ $ sudo apt-get install mariadb-server -y

Step 7: During the installation, an on-screen dialog will appear stating that the root password could not be changed because one is already in place. Press Enter to proceed.

During the upgrade, MariaDB will state that a password already exists.

Step 8: When completed successfully, the mysql service should automatically start.
sudo service mysql status

Example Output:
~ $ sudo service mysql status
mysql start/running, process 4745

Step 9: Finally, run mysql_upgrade to complete the upgrade.
sudo mysql_upgrade

The output below illustrates errors that are normal when upgrading.

Example Output:

~ $ sudo mysql_upgrade
MySQL upgrade detected
Phase 1/7: Checking and upgrading mysql database
Processing databases
mysql
mysql.column_stats OK
mysql.columns_priv OK
mysql.db OK
mysql.engine_cost OK
mysql.event OK
mysql.func OK
mysql.gtid_executed OK
mysql.gtid_slave_pos OK
mysql.help_category OK
mysql.help_keyword OK
mysql.help_relation OK
mysql.help_topic OK
mysql.host OK
mysql.index_stats OK
mysql.innodb_index_stats OK
mysql.innodb_table_stats OK
mysql.plugin OK
mysql.proc OK
mysql.procs_priv OK
mysql.proxies_priv OK
mysql.roles_mapping OK
mysql.server_cost OK
mysql.servers OK
mysql.slave_master_info OK
mysql.slave_relay_log_info OK
mysql.slave_worker_info OK
mysql.table_stats OK
mysql.tables_priv OK
mysql.time_zone OK
mysql.time_zone_leap_second OK
mysql.time_zone_name OK
mysql.time_zone_transition OK
mysql.time_zone_transition_type OK
mysql.user OK
Upgrading from a version before MariaDB-10.1
Phase 2/7: Installing used storage engines
Checking for tables with unknown storage engine
Phase 3/7: Fixing views from mysql
sys.host_summary
Error : Table 'performance_schema.memory_summary_by_host_by_event_name' doesn't exist
Error : View 'sys.host_summary' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.host_summary_by_file_io
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.host_summary_by_file_io_type
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.host_summary_by_stages
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.host_summary_by_statement_latency
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.host_summary_by_statement_type
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.innodb_buffer_stats_by_schema
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.innodb_buffer_stats_by_table
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.innodb_lock_waits
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.io_by_thread_by_latency
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.io_global_by_file_by_bytes
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.io_global_by_file_by_latency
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.io_global_by_wait_by_bytes
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.io_global_by_wait_by_latency
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.latest_file_io
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.memory_by_host_by_current_bytes
Error : Table 'performance_schema.memory_summary_by_host_by_event_name' doesn't exist
Error : View 'sys.memory_by_host_by_current_bytes' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.memory_by_thread_by_current_bytes
Error : Table 'performance_schema.memory_summary_by_thread_by_event_name' doesn't exist
Error : View 'sys.memory_by_thread_by_current_bytes' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.memory_by_user_by_current_bytes
Error : Table 'performance_schema.memory_summary_by_user_by_event_name' doesn't exist
Error : View 'sys.memory_by_user_by_current_bytes' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.memory_global_by_current_bytes
Error : Table 'performance_schema.memory_summary_global_by_event_name' doesn't exist
Error : View 'sys.memory_global_by_current_bytes' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.memory_global_total
Error : Table 'performance_schema.memory_summary_global_by_event_name' doesn't exist
Error : View 'sys.memory_global_total' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.metrics
Error : Table 'performance_schema.global_status' doesn't exist
Error : View 'sys.metrics' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.processlist
Error : Table 'performance_schema.events_transactions_current' doesn't exist
Error : View 'sys.processlist' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.ps_check_lost_instrumentation
Error : Table 'performance_schema.global_status' doesn't exist
Error : View 'sys.ps_check_lost_instrumentation' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.schema_auto_increment_columns OK
sys.schema_index_statistics
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.schema_object_overview OK
sys.schema_redundant_indexes OK
sys.schema_table_lock_waits
Error : Table 'performance_schema.metadata_locks' doesn't exist
Error : View 'sys.schema_table_lock_waits' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.schema_table_statistics
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.schema_table_statistics_with_buffer
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.schema_tables_with_full_table_scans
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.schema_unused_indexes OK
sys.session
Error : Table 'performance_schema.events_transactions_current' doesn't exist
Error : View 'sys.session' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.session_ssl_status
Error : Table 'performance_schema.status_by_thread' doesn't exist
Error : View 'sys.session_ssl_status' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.statement_analysis
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.statements_with_errors_or_warnings
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.statements_with_full_table_scans
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.statements_with_runtimes_in_95th_percentile
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.statements_with_sorting
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.statements_with_temp_tables
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.user_summary
Error : Table 'performance_schema.memory_summary_by_user_by_event_name' doesn't exist
Error : View 'sys.user_summary' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.user_summary_by_file_io
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.user_summary_by_file_io_type
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.user_summary_by_stages
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.user_summary_by_statement_latency
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.user_summary_by_statement_type
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.version OK
sys.wait_classes_global_by_avg_latency
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.wait_classes_global_by_latency
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.waits_by_host_by_latency
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.waits_by_user_by_latency
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.waits_global_by_latency
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.x$host_summary
Error : Table 'performance_schema.memory_summary_by_host_by_event_name' doesn't exist
Error : View 'sys.x$host_summary' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.x$host_summary_by_file_io OK
sys.x$host_summary_by_file_io_type OK
sys.x$host_summary_by_stages OK
sys.x$host_summary_by_statement_latency OK
sys.x$host_summary_by_statement_type OK
sys.x$innodb_buffer_stats_by_schema OK
sys.x$innodb_buffer_stats_by_table OK
sys.x$innodb_lock_waits OK
sys.x$io_by_thread_by_latency OK
sys.x$io_global_by_file_by_bytes OK
sys.x$io_global_by_file_by_latency OK
sys.x$io_global_by_wait_by_bytes OK
sys.x$io_global_by_wait_by_latency OK
sys.x$latest_file_io OK
sys.x$memory_by_host_by_current_bytes
Error : Table 'performance_schema.memory_summary_by_host_by_event_name' doesn't exist
Error : View 'sys.x$memory_by_host_by_current_bytes' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.x$memory_by_thread_by_current_bytes
Error : Table 'performance_schema.memory_summary_by_thread_by_event_name' doesn't exist
Error : View 'sys.x$memory_by_thread_by_current_bytes' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.x$memory_by_user_by_current_bytes
Error : Table 'performance_schema.memory_summary_by_user_by_event_name' doesn't exist
Error : View 'sys.x$memory_by_user_by_current_bytes' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.x$memory_global_by_current_bytes
Error : Table 'performance_schema.memory_summary_global_by_event_name' doesn't exist
Error : View 'sys.x$memory_global_by_current_bytes' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.x$memory_global_total
Error : Table 'performance_schema.memory_summary_global_by_event_name' doesn't exist
Error : View 'sys.x$memory_global_total' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.x$processlist
Error : Table 'performance_schema.events_transactions_current' doesn't exist
Error : View 'sys.x$processlist' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.x$ps_digest_95th_percentile_by_avg_us OK
sys.x$ps_digest_avg_latency_distribution OK
sys.x$ps_schema_table_statistics_io
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.x$schema_flattened_keys OK
sys.x$schema_index_statistics OK
sys.x$schema_table_lock_waits
Error : Table 'performance_schema.metadata_locks' doesn't exist
Error : View 'sys.x$schema_table_lock_waits' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.x$schema_table_statistics
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.x$schema_table_statistics_with_buffer
Error : Cannot load from mysql.proc. The table is probably corrupted
error : Corrupt
sys.x$schema_tables_with_full_table_scans OK
sys.x$session
Error : Table 'performance_schema.events_transactions_current' doesn't exist
Error : View 'sys.x$session' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.x$statement_analysis OK
sys.x$statements_with_errors_or_warnings OK
sys.x$statements_with_full_table_scans OK
sys.x$statements_with_runtimes_in_95th_percentile OK
sys.x$statements_with_sorting OK
sys.x$statements_with_temp_tables OK
sys.x$user_summary
Error : Table 'performance_schema.memory_summary_by_user_by_event_name' doesn't exist
Error : View 'sys.x$user_summary' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
error : Corrupt
sys.x$user_summary_by_file_io OK
sys.x$user_summary_by_file_io_type OK
sys.x$user_summary_by_stages OK
sys.x$user_summary_by_statement_latency OK
sys.x$user_summary_by_statement_type OK
sys.x$wait_classes_global_by_avg_latency OK
sys.x$wait_classes_global_by_latency OK
sys.x$waits_by_host_by_latency OK
sys.x$waits_by_user_by_latency OK
sys.x$waits_global_by_latency OK
Phase 4/7: Running 'mysql_fix_privilege_tables'
Phase 5/7: Fixing table and database names
Phase 6/7: Checking and upgrading tables
Processing databases
information_schema
performance_schema
sys
sys.sys_config OK
Phase 7/7: Running 'FLUSH PRIVILEGES'
OK

The performance_schema and mysql.proc errors shown above are normal when upgrading from MySQL.

Step 10: The upgrade is complete; check that your databases exist and are working.
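As with the earlier procedures, a quick version check confirms the switch. A minimal sketch; the root password set in Step 2 will be requested:

# The Value line should now report a 10.2.x-MariaDB version string.
mysql -u root -p -e 'SHOW VARIABLES LIKE "version"\G'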

MySQL Performance: MySQL vs. MariaDB

MariaDB and MySQL logos.

We continue our series on MySQL Optimization by focusing on the differences and similarities between MariaDB and MySQL. MariaDB is a popular alternative to using traditional MySQL for database hosting. The free market of hosting allows additional actors to compete with the long-standing database solution, MySQL. MariaDB is the most prominent alternative to MySQL and is a favorite among industry giants such as Google, Craigslist, Wikipedia, Arch Linux, RedHat, CentOS, Fedora, and cPanel. In a direct performance comparison, MariaDB leaps to the head of the pack through performance optimizations, ease of conversion, and compatibility.

Why Switch to MariaDB?

MariaDB is mostly a clone of MySQL, but it also provides several improvements over running the standard MySQL instance. MariaDB aims to be a drop-in replacement for MySQL, which makes the switch painless and straightforward. Combine the simplicity of switching with a long list of improvements, and MariaDB brings both performance and cutting-edge features to the table. Below are some of its selling points over standard MySQL.

More Storage Engine Options: There are 12 new storage engines built into MariaDB. Among these are CONNECT, Spider, and SphinxSE. Visit the MariaDB Storage Engines page for a complete list of these engines, how they function, and ways to leverage them to optimize your database.

Speed improvements: MariaDB sports many new speed improvements compared directly with standard MySQL. This improved performance makes MariaDB stand apart from the baseline performance of traditional MySQL servers. Like MySQL, MariaDB has dozens of features for speed optimization including disk access, JOIN and EXPLAIN improvements, subquery, derived tables/views, execution control, and optimizer control.

Faster Indexes/Cache: When using the MEMORY storage engine, MariaDB can complete INSERT statements up to 24% faster than traditional MySQL servers, along with CHECKSUM TABLE and MyISAM Segment Key Cache being 4x faster.

Speedier and Larger Connection Pool: MariaDB benefits from an improved pool of threads that run faster and support up to 200,000+ connections where standard MySQL falls short.

Improved Replication: MariaDB sports faster and safer replication, with updates being up to 2x faster than with traditional MySQL replication setups. Parallel replication is now possible, allowing Active/Active or Master/Master configurations. MariaDB replication is backward compatible with MySQL servers, so migrating your cluster to MariaDB is possible by converting one node at a time.

New Extensions/Features: There are several new extensions and features; to name a few, the WITH, JSON, and KILL statements. DECIMAL sees an increase from 30 to 38 decimal places, and KILL ALL can terminate all queries for a specified user.

List of Features: Available on MariaDB's website is a comprehensive list of improvements and features, located here: MariaDB versus MySQL-Features.

 

What is the Downside to Using MariaDB?

The MariaDB project is entirely open-source and free, unlike MySQL which uses dual licensing to keep its Enterprise edition specific features proprietary. The MariaDB developers actively update and enhance with cutting-edge features that standard MySQL does not carry.

Missing Features: MySQL uses some proprietary code in its Enterprise Edition. MariaDB does not have access to this proprietary, closed-source content, meaning those features are only available to MySQL Enterprise users.

Mitigation: MariaDB addresses this concern by providing viable open-source alternative plugins, delivering much of the same functionality that the MySQL Enterprise Edition offers. This gives the vast majority of standard MySQL and MySQL Enterprise Edition users the option to switch to MariaDB and take advantage of its robust performance-enhancing features.

Delayed MySQL Compatibility: MariaDB originally forked from MySQL 5.5, so this is the base starting point for the MariaDB source code. This means that newer features and bug fixes developed for standard MySQL after version 5.5 are not part of the original MariaDB source code.

Mitigation: MariaDB conducts monthly merges of the standard MySQL source code to ensure both compatibility and feature/bug-fix adoption. This kind of regular code merging allows MariaDB to maintain its allure as a drop-in replacement for practically any existing version of standard MySQL. However, there will always be a delay, causing MariaDB to lag behind MySQL's newest features and bug patches.

MySQL Performance: How To Leverage MySQL Database Indexing

A Mysql Indexing Logo

Throughout this tutorial, we will cover some of the fundamentals of indexing. As part of the MySQL series, we will introduce capabilities of MySQL indexing and the role it plays in optimizing database performance. Liquid Web recommends consulting with a DBA before making any changes to your production level application.

What is Indexing?

Indexing is a powerful structure in MySQL which can be leveraged to get the fastest response times from common queries. MySQL queries achieve efficiency by generating a smaller table, called an index, from a specified column or set of columns. These columns, called a key, can be used to enforce uniqueness. Below is a simple visualization of an example index using two columns as a key.

+------+----------+----------+
| ROW | COLUMN_1 | COLUMN_2 |
+------+----------+----------+
| 1 | data1 | data2 |
+------+----------+----------+
| 2 | data1 | data1 |
+------+----------+----------+
| 3 | data1 | data1 |
+------+----------+----------+
| 4 | data1 | data1 |
+------+----------+----------+
| 5 | data1 | data1 |
+------+----------+----------+

Queries utilize indexes to identify and retrieve the targeted data, even if they are a combination of keys. Without an index, running that same query results in an inspection of every row for the needed data. Indexing produces a shortcut, with much faster query times on expansive tables. A textbook analogy may provide another common way to visualize how indexes function.
This analogy compares MySQL indexing to indexing in the back of a book.

When to Enable Indexing?

Indexing is only advantageous for huge tables with regularly accessed information. For instance, to continue with our textbook analogy, it makes little sense to index a children’s storybook with only a dozen pages. It’s more efficient to simply read the book to find each occurrence of the word “turtle” than it would be to set up and maintain indexes, query for those indexes, and then review each page provided. In the computing world, those extra tasks surrounding indexing represent wasted resources which would be better purposed by not indexing.

Without indexes, when tables grow to enormous proportions, response times suffer from queries targeting those obtuse tables. Inefficient queries manifest into latency within application or website performance. We commonly identify this latency by using the MySQL slow query log feature. You can find more details about using the slow query log feature in the first article in this series: MySQL Performance: Identifying Long Queries.
Once a colossal table hits its tipping point, it reaches the potential to cause downtime for applications and websites. Conducting routine evaluations of growing databases establishes optimal database performance and sidesteps the interruptions inherent in long queries.

MySQL Indexing Pros vs. Cons

There are benefits and downsides to using MySQL indexing, and we’ll discuss the significant pros and cons for your consideration. These aspects will guide you to decide whether indexing is an appropriate choice for your situation.

In short, indexes provide quick data retrieval and are ideal for read-heavy, OLAP-style workloads, at the cost of slower writes and additional disk space.

What Information Does One Index?

Selecting what to index is probably the most challenging part of indexing your databases: determining what is important enough to index and what is benign enough not to index. Generally speaking, indexing works best on those columns that are the subject of the WHERE clauses in your commonly executed queries. Consider the following simplified table:

ID, TITLE, LAST_NAME, FIRST_NAME, MAIDEN_NAME, DOB, GENDER, AGE, DESCRIPTION, HISTORY, ETC...

If your queries rely on testing the WHERE clause using LAST_NAME and FIRST_NAME then indexing by these two columns would significantly increase query response times. Alternately, if your queries rely on a simple ID lookup, indexing by ID would be the better choice.

These examples are merely rudimentary, and there are several types of indexing structures built into MySQL. The following MySQL page discusses these index types in greater detail and is a recommended read for anyone considering indexing: How MySQL Uses Indexes
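To see whether a given query would actually use an index, EXPLAIN is the standard tool. A minimal sketch, assuming a hypothetical people table in a database named mydatabase with an index on (LAST_NAME, FIRST_NAME):

# The key column in the EXPLAIN output shows which index, if any, the query will use.
mysql -e "EXPLAIN SELECT * FROM people WHERE LAST_NAME = 'Smith' AND FIRST_NAME = 'Jane'\G" mydatabase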

What is a Unique Index?

Another point for consideration when evaluating which columns to serve as the key in your index is whether to use the UNIQUE constraint. Setting the UNIQUE constraint will enforce uniqueness based on the configured indexing key. As with any key, this can be a single column or a concatenation of multiple columns. The function of this constraint ensures that there are no duplicate entries in the table based on the configured key.

UNIQUE constraints add a small tax to write speed, since every insert or update must be checked against the existing keys.

What is a Primary Key Index?

As commonly invoked as the UNIQUE constraint, the PRIMARY KEY constraint is also used to optimize indexes. This constraint ensures that the designated PRIMARY KEY cannot contain null values. As a result, a performance boost occurs when the table in question runs on the InnoDB storage engine. This boost is due to how InnoDB physically stores data, which would otherwise place null-valued rows in the key out of contiguous sequence with rows that have values. Enabling this constraint ensures the rows of the table are kept in contiguous order for quicker responses.

Primary Key Index is absolutely necessary for large tables.

Managing Indexes

Now we will cover some of the basics of manipulating indexes using MySQL syntax. The examples will include the creation, deletion, and listing of indexes.

Keep in mind, these examples use placeholder entries for the specific keywords: dbName, tableName, and indexName. These keywords are self-explanatory by nature for easy reading.

Instead of tableName you can use dbName.tableName.

Listing/Showing Indexes

Tables can have multiple indexes. Managing indexes will inevitably require being able to list the existing indexes on a table. The syntax for viewing an index is below.

SHOW INDEX FROM tableName;

SHOW INDEX FROM tableName; shows all indexes.

Indexes are present on 3 different columns.

Creating Indexes

Index creation has a simple syntax. The difficulty is in determining what columns need indexing and whether enforcing uniqueness is necessary. Below we will illustrate how to create indexes with and without a PRIMARY KEY and UNIQUE constraints.

As previously mentioned, tables can have multiple indexes. Multiple indexes are useful for creating indexes attuned to the queries required by your application or website. The default settings allow for up to 16 indexes per table; this number can be increased, but 16 is generally more than is necessary. Indexes can be created during a table's creation or added to the table later on. We will go over both methods below.

Creating too many indexes can add latency; if you must use many, increase the relevant buffers in the MySQL configuration.

Example: Create a Table with a Standard Index

CREATE TABLE tableName (
ID int,
LName varchar(255),
FName varchar(255),
DOB varchar(255),
LOC varchar(255),
INDEX ( ID )
);
You can create a table with several columns, using ID as the index.

Example: Create a Table with Unique Index & Primary Key

CREATE TABLE tableName (
ID int,
LName varchar(255),
FName varchar(255),
DOB varchar(255),
LOC varchar(255),
PRIMARY KEY (ID),
UNIQUE INDEX ( ID )
);
You can create a Primary Key and a UNIQUE constraint over one or more columns.

Example: Add an Index to Existing Table

CREATE INDEX indexName ON tableName (ID, LName, FName, LOC);

The CREATE INDEX statement creates an index and names it.

Example: Add a Unique Index to an Existing Table

CREATE UNIQUE INDEX indexName ON tableName (ID, LName, FName, LOC);

The CREATE UNIQUE INDEX command adds an index to a table while ensuring no duplicate data.

Deleting Indexes

While managing indexes, you may find it necessary to remove some. Deleting indexes is also a very simple process, see the example below:

DROP INDEX indexName ON tableName;

The DROP INDEX command lets us drop a named index from a particular table.

There are many ways to optimize your database for true efficiency. If you would like to learn more about the storage engine types available in MySQL, read through our MyISAM vs. InnoDB tutorial. Or, if you are in need of high-functioning databases, check out our MySQL product page to view different options.

Kubernetes Tutorial

What is Kubernetes?

The name Kubernetes has its origins in the Greek term for helmsman or pilot. Kubernetes, or 'k8s' (pronounced "Kate's") as it's sometimes referred to, is an open-source software tool that was originally created by Google and is now maintained by the Cloud Native Computing Foundation. Kubernetes is used for arranging and coordinating the containers that an application needs to run into easy-to-handle groups.

In order to manage your Kubernetes cluster effectively, we recommend using kubectl as the command-line tool of choice. Basically, kubectl communicates with the master node (or server) which in turn submits those commands to the worker nodes to manage the cluster.

The Kubernetes cluster consists of two basic types of resources:

  • Master server – a master server organizes the cluster
  • Node server – Nodes are the workers that contain and run the applications

Each node contains a Kubelet, which is the agent for managing the node and communicating with the master. You can use kubectl to deploy, explore, review and remove Kubernetes objects (like nodes, images or containers).

Let’s next look at setting up kubectl.

The Master communicates with containers through the worker node.

Note:
This tutorial assumes you have a Kubernetes cluster already setup and running.

In order to setup kubectl, we will need the following:

Prerequisites

  1. A working internet connection
  2. The cURL or wget utilities installed
  3. Basic knowledge of the Linux command line

Installing kubectl

On an Ubuntu 16.04 LTS server, here are the commands to use if logged in as root to install kubectl:

apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
touch /etc/apt/sources.list.d/kubernetes.list
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubectl
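
Once installed, a quick check confirms the kubectl binary is available. A minimal sketch:

# Print the client version to confirm kubectl installed correctly.
kubectl version --client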

Kubectl Commands for Basic Kubernetes Functions

Now that we have kubectl up and running, let’s review a few of the basic commands available. Here are the five most basic commands we will be reviewing along with their fundamental definition:

  • kubectl create – the create command constructs a resource from a configuration file or stdin (standard input); see the example after this list. A resource is defined as something that can be "requested by", "allocated to", or "consumed by" a pod or container.
  • kubectl get – the get command displays a table of the most relevant information about one or multiple relevant resources.
  • kubectl run – the run command will kick off one or more instances of a container in the cluster.
  • kubectl expose – the expose command will start to load balance inbound traffic across your running instances. This command can also create a High Availability proxy for the client to access the running containers from outside the cluster.
  • kubectl delete – the delete command removes defined resources by
    • filenames
    • stdin
    • resources and names
    • resources and label selector
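
Putting a few of these commands together, here is a minimal sketch of a typical workflow; the deployment name hello-nginx and the nginx image are placeholders chosen for illustration.

# Create a deployment running the nginx image.
kubectl create deployment hello-nginx --image=nginx

# Expose the deployment inside the cluster on port 80.
kubectl expose deployment hello-nginx --port=80

# List the pods and services that were created.
kubectl get pods
kubectl get services

# Clean up the example resources.
kubectl delete service hello-nginx
kubectl delete deployment hello-nginx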

Kubectl App Management

  • kubectl edit – Alters the characteristics of a resource on a server using the default editor.
  • kubectl apply – Applies a change to a resource from a file or stdin.
  • kubectl label – Adds or updates a specific attribute to specifically identify an object

Working with Apps Using Kubectl

  • kubectl exec – Runs a command on a container in a pod
  • kubectl logs – Prints a container log
  • kubectl describe – Displays the status or state of the resources.

Kubectl Cluster Management

  • kubectl cluster-info – Displays information about the master and services in the cluster.
  • kubectl drain – removes pods in preparation for maintenance
  • kubectl certificate – Approves a CSR or certificate signing request

Kubectl Settings and Usage

  • kubectl api-resources – Lists all of the supported resources (e.g., Pods and Services) along with their shortnames, API group, whether they are namespaced, and their Kind
  • kubectl config –  Changes or alters kubeconfig files
  • kubectl version – Displays the Kubernetes version

 

These are just some of the many basic command examples that are available for use in setting up and maintaining your Kubernetes environment.

 

Managed Dedicated Server Overview

Are you looking for hosting? You may be asking yourself what is the best solution for your website or online application. One that will provide the uptime and reliability you need with the best support in the industry?  Let’s go through a breakdown of our traditional Dedicated Server options to get a full scope on the aspects of these types of servers.

A traditional Dedicated Server means pure hardware. It’s customized and built to the specifications you select on our website. All resources (CPU, RAM, Disk) are “dedicated” to only you and no other customer.

What does it mean to be a Fully Managed customer?

Tired of struggling with server configuration and maintenance? That is where our Fully Managed Heroic Support comes in. Full Management offers a variety of support options for your server:

  1. Fully Managed (CentOS – WHM/cPanel and/or Windows – Plesk)
  2. Heroic Support Accessible 24/7/365
  3. Fully Managed Network Infrastructure
  4. Fully Managed Hardware
  5. Server housed in our completely owned Liquid Web Data Centers
  6. Level 3 Technicians On-Site 24/7/365
  7. System Level Monitoring Alerts & Notifications
  8. System Level Health Monitoring and Graphing
  9. 100% Uptime SLA on These Items
  10. Installation and Full Support of Core Software Package
  11. Core Operating System Updates & Patches
  12. Security Enhancements
  13. Virus and Spam Protection
  14. Full Control Panel Support
  15. Control Panel Updates and Patches
    To see a full support comparison, see the Support Comparison Chart.



Experiencing performance issues? Large growth? These are early signs that you may need a Dedicated Server to handle your workload. Let’s talk about the key indicators that call for a Dedicated Server.

Performance Issues
Have an e-commerce site that suffers slow load times when traffic is heavy? Consider upgrading to a Dedicated Server to really kick your website into high gear. An influx of traffic to your site demands more from your server’s resources.

Each time a user submits an order via your e-commerce site, a read/write request is made to the MySQL database. Multiply this process by 1,500+ users performing the same action and your hard disk read/write activity spikes. On a regular VPS, this spike may leave the server struggling to serve visitors. Dedicated Servers using Solid State Drives (SSDs) deliver much faster read/write speeds.

Increased Traffic
Having a popular site is a great problem to have! Traffic spikes can happen for various reasons. A Dedicated or Cloud Dedicated Server is your first step in ensuring you are ready for increased traffic. If you require scalability, our Cloud products give you the flexibility to resize, upgrade, or downgrade your server with a simple click of a button. We don’t stop there; you may be in a position to explore our Enterprise options to handle high-demand traffic!

Server Errors
Pulling up your website only to see a “500 Internal Server Error” can be disastrous for your online business. The most common errors come from a lack of resources to keep your website online and stable. Placing yourself on Dedicated hardware is the first step to avoiding this type of error. This goes hand in hand with increased traffic: a larger flow of traffic can also cause issues when Dedicated hardware is lacking.

Server Specifications
Choosing a Dedicated Server can be quite the task if you are not familiar with the hardware. Let’s break down each server component and talk about how it can benefit your application or website.

Server Central Processing Unit (CPU) and Cores
On our Dedicated Servers page, you’ll see that our initial offering is a Single CPU, Quad Core (Intel E3-1230 v5). This means you have a single CPU socket on the motherboard with 4 cores at 3.4GHz. Each server CPU provides a different clock speed, which is important for performance: the higher the clock speed, the more efficiently the CPU can process data for your website.

Server Memory (RAM)
Liquid Web provides you with 16GB of DDR4 memory for Traditional Dedicated Servers. The majority of large websites built around a Content Management System use Apache, PHP, and a MySQL database. Ensuring your server has enough RAM is crucial to maintaining performance, as all of these software components consume RAM.

For example, a small WordPress site with less than 1,000 visitors per day would be fine to start off with 16GB of RAM. Yet a large e-commerce site with 5,000+ users per day will need extra RAM to handle simultaneous requests.

Hard Disk Drives and Solid State Drives
When configuring a Dedicated Server on our website, you’ll notice that we provide Solid State Drives for your primary hard disk. We also provide you with a free 1TB backup drive (SATA, 7,200 RPM)! Your Linux/Apache/PHP/MySQL stack lives on SSDs for quicker response times, so your CPU and RAM spend less time waiting on disk I/O. If a SATA drive were used to run your LAMP stack, performance would decrease because a physical disk spinning at 7,200 revolutions per minute (RPM) is far slower. Comparing SSD vs. SATA, our SSDs deliver roughly 10x the performance of our SATA drives. SATA drives are ideal for massive amounts of storage and backups: files that do not need frequent reading or writing.

RAID
A Redundant Array of Independent Disks (RAID) is an easy method to combine several disks into a big array. There are different levels of RAID. Each that offers their own fault-tolerance, performance, and capacity. There are also different storage techniques such as striping, mirroring, and parity. If your server already has Solid State Drives, adding a RAID array is beneficial to further increase performance and redundancy.

For example, having 2x 250GB SSD drives in RAID 1 is a great start. If demand from your users grows, that can call for a RAID 10 array of 4x 250GB SSD drives. This increases storage size and performance for your server, with more drives mirroring and working together on top of the added redundancy.

Bandwidth
One of the largest benefits of a Liquid Web Dedicated Server is 5 Terabytes of bandwidth at no added cost, and only outgoing bandwidth counts against you! There are two types of bandwidth transfer: incoming and outgoing. When you upload files via FTP to your server, that is inbound bandwidth. When a user visits your website or application, the data your server sends back in response is outbound. We offer higher bandwidth packages along with unlimited options if required.

What Kind Of Dedicated Server Is Right For Me?
The big question: how do I know what I need? A few recommendations can help you get an idea of your needs. If your website is with another hosting provider, contact that provider and request your current server’s specifications. Not all companies will give you the details you need, but this can be very helpful when comparing apples-to-apples services across hosts.
If you are about to start a new online project or want to switch to a Dedicated Server, we recommend you chat or speak with one of our Hosting Advisors. They can assist you with a fitted package recommendation or answer any technical questions. Check out our Dedicated or Cloud Dedicated solutions to order yours today!

 

An Overview of Managed WordPress

WordPress is open source software for building unique and powerful websites! It is quickly becoming the easiest and most popular way to create blogs, business sites, portfolios, forums, memberships, and e-commerce websites.

Liquid Web’s Managed WordPress Hosting is a complete solution for your web publishing needs. With pre-installed plugins, streamlined plugin updates, website staging area, nightly backups, iThemes sync, and customizable website stencils, it’s a must-have for any WordPress developer.

 

MWP Features

Let’s get right into it with a look at some of Managed WordPress Hosting’s best features and ease of use!


Pre-installed Plugins:

Managed WordPress (MWP) comes with quite a few money-saving, pre-installed plugins, curated for maximum performance:

Akismet
Akismet: The number one plugin for spam filtering on blogs and forum pages.  This indispensable plugin helps comment-heavy sites by preventing spam comments from being posted to your site. Akismet protects your website from being marked negatively thus helping your Google SEO standing. A free plan is included with every site as well as having the option to subscribe to their Plus and Enterprise plans.
Async JavaScript
Async JavaScript: With a 4.5-star rating from its users, Async JavaScript increases site speed and search engine ranking by only loading the JavaScript viewable by the user.
Autoptimize
Autoptimize: If speed is essential to your site, then you can’t go wrong with the Autoptimize plugin. Autoptimize takes the legwork out of site optimization by aggregating, minifying and caching scripts. For the CSS and JavaScript programmer, it can inject CSS into your page header, async non-aggregated JavaScript, and minimize HTML.
BJ Lazy Load
BJ Lazy Load: This plugin comes in handy for sites with lots of images. The idea is only to load those videos/pictures that are viewable on the browser and not those below the screen view (or “fold” as it’s called) until the client scrolls down the page. Thus increasing site speed and performance by reducing the resources loaded at a given time.
iThemes Sync
iThemes Sync: Ever wish you had a portal where you could update all your plugins and themes for multiple websites in one spot? Look no further than iThemes Sync. iThemes Sync provides you with one central dashboard for all your WordPress admin tasks, saving you time to focus on development.
TinyPNG
TinyPNG: Minimizes load times for your site by compressing your images, while still maintaining photo resolution. Compression increases site performance especially if you have a large number of images.

WP Forms Lite
WP Forms Lite: Easily create contact forms with intuitive tools that allow for drag and drop construction.

Liquid Web’s Managed WordPress staging site allows you to experiment with themes, plugins, or any variety of other changes you might want to make, all without affecting your live site! It works by creating a temporary clone of your site to test your changes, giving you a chance to get things exactly the way you want them before applying them to your live site. So feel free to test away!

Nightly backups of your sites are included in each plan. The backups allow you to roll back to an older version by clicking the restore option, or you can download a copy. No need to install an extra plugin or stress when a development error occurs.

Core updates help secure your site from hackers and malware by keeping your website up to date. Once available, core WordPress and plugin updates are tested by MWP before being implemented on your site. If the update is compatible with your site, it is pushed automatically to the live site. If the new update is not compatible, MWP will let you know via email, allowing you time to inspect it at your leisure.

The stencil feature is useful for developers who use the same themes and plugins across multiple sites. You can create as many stencils as you like with the click of a button! Click here for information about how you can set up your stencil.

Managed WordPress allows you to access your content via SSH and FTP. Once logged in over SSH, Liquid Web’s Managed WordPress comes with WP-CLI pre-installed, so you can run simple commands from the command line to fine-tune users, plugins, and theme settings.
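
For instance, once connected over SSH, a few common WP-CLI commands look like this (run them from your site's document root; output will vary per site):

wp plugin list           # show installed plugins and their status
wp plugin update --all   # update every installed plugin at once
wp theme status          # review installed themes
wp user list             # list users and their roles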

Let your users know their information is safe! Managed WordPress includes free SSL for all sites on your server to help keep your sites secure.  With automatic SSLs you no longer have to purchase certificates for your websites!

Our managed WordPress product has a 24/7 operations team that manages routine server maintenance and monitors for DDoS attacks, so you don’t have to, leaving you free to develop your website’s content.

Experience a streamlined way of managing your sites through Liquid Web’s Managed WordPress platform.

 

How To Install Docker on Ubuntu 16.04

Adding Docker to an Ubuntu server.

Docker is an open-source software tool designed to automate and ease the process of creating, packaging, and deploying applications using an environment called a container. The use of Linux containers to deploy applications is called containerization. A container lets us package an application with all of the parts it needs to run (code, system tools, logs, libraries, configuration settings and other dependencies) and ship it as a single standalone package, deployable here on Ubuntu 16.04 LTS. Docker can be installed on other platforms as well. Currently, the Docker software is maintained by the Docker community and Docker Inc. Check out the official documentation to find more specifics on Docker.

Docker Terms and Concepts

Docker is made up of several components:

  • Docker for Linux: Software which runs Docker containers on the Ubuntu Linux OS.
  • Docker Engine: Used for building Docker images and creating Docker containers.
  • Docker Registry: Used to store various Docker images.
  • Docker Compose: Used to define applications using multiple Docker containers.

 

Some of the other essential terms and concepts you will come into contact with are:

  • Containerization: Containerization is a lightweight alternative to full machine virtualization (like VMWare) that involves encapsulating an application within a container with its own operating environment.

Docker also uses images and containers. The two ideas are closely related, but very distinct.

  • Docker Image: A Docker Image is the basic unit for deploying a Docker container. A Docker image is essentially a static snapshot of a container, incorporating all of the objects needed to run a container.  
  • Docker Container: A Docker Container encapsulates a Docker image and when live and running, is considered a container. Each container runs isolated in the host machine.
  • Docker Registry: The Docker Registry is a stateless, highly scalable server-side application that stores and distributes Docker images. This registry holds Docker images along with their versions, and it can provide both public and private storage locations. There is a public Docker registry called Docker Hub which provides a free-to-use, hosted Registry, plus additional features like organization accounts, automated builds, and more. Users interact with a registry by using docker push or pull commands. Example:

docker pull registry-1.docker.io/distribution/registry:2.1

  • Docker Engine: The Docker Engine is a layer which exists between containers and the Linux kernel and runs the containers. It is also known as the Docker daemon. Any Docker container can run on any server that has the Docker-daemon enabled, regardless of the underlying operating system.
  • Docker Compose: Docker Compose is a tool that defines, manages and controls multi-container Docker applications. With Compose, a single configuration file is used to set up all of your application’s services. Then, using a single command, you can create and start all the services from that file.
  • Dockerfiles: Dockerfiles are plain text documents that contain all of the configuration information and commands needed to assemble a container image. With a Dockerfile, the Docker daemon can automatically build the container image.

    Example: The following basic Dockerfile sets up an SSHd service in a container that you can use to connect to and inspect other containers’ volumes, or to get quick access to a test container.

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
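
To try this Dockerfile out, you could build and run it roughly as follows; the image and container names eg_sshd and test_sshd are arbitrary examples, not names used elsewhere in this article:

docker build -t eg_sshd .                   # build an image from the Dockerfile in the current directory
docker run -d -P --name test_sshd eg_sshd   # run it detached, publishing the exposed SSH port to a random host port
docker port test_sshd 22                    # show which host port maps to the container's port 22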

Docker Versions

There are three versions of Docker available, each with its own unique use:

  • Docker CE is the simple, classic Docker Engine.
  • Docker EE is Docker CE with certification on some systems and support by Docker Inc.
  • Docker CS (Commercially Supported) is the older commercially supported bundle that preceded Docker EE, covering versions <= 1.13.

We will be installing Docker CE.

 

Step 1 — Checking Prerequisites

To begin, start with the following server environment: 

  1. 64-bit Ubuntu 16.04 server
  2. Logged in as the root user
Important:
Docker on Ubuntu requires a 64-bit architecture, and the Linux kernel version must be 3.10 or above.
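
You can verify both requirements from the command line before proceeding:

uname -m   # architecture; should report x86_64 on a 64-bit system
uname -r   # kernel version; should be 3.10 or above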

Before installing Docker, we need to set up the repository which contains the latest version of the software (Docker is unavailable in the standard Ubuntu 16.04 repository). Adding the repository allows us to easily update the software later as well.

Step 2 — Installing Docker

The next step is to remove any default Docker packages from the existing system before installing Docker on a Linux VPS. Execute the following commands to start this process:

root@test:~# apt-get remove docker docker-engine docker.io lxc-docker
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package 'docker-engine' is not installed, so not removed
Package 'docker' is not installed, so not removed
Package 'docker.io' is not installed, so not removed
E: Unable to locate package lxc-docker

Note:
In certain instances, a specific variant of the linux kernel is slimmed down by removing less common modules (or drivers). If this is the case, the “linux-image-extra” package contains all of the “extra” kernel modules which were left out. Use this command to re-add them: root@test:~# sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual

Step 3 — Add required packages

Now, we need to install some required packages on your system. Run the commands below to accomplish this:

root@test:~# apt-get install curl apt-transport-https ca-certificates software-properties-common

Note:
If you get the error “E: Unable to locate package curl”, use the command curl -V to see if curl is already installed; if so, move on to Step 4.

Step 4 — Verify, Add and Update Repositories

Add the Docker GPG key to your system:

root@test:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK

Next, update the APT sources to add the source:

root@test:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"

Run the update again so the Docker packages are recognized:

root@test:~# apt-get update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [107 kB]
Hit:2 http://us.archive.ubuntu.com/ubuntu xenial InRelease                              
Get:3 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]             
Get:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB]                 
Fetched 323 kB in 0s (827 kB/s)                             
Reading package lists... Done
E: The method driver /usr/lib/apt/methods/https could not be found.
N: Is the package apt-transport-https installed?
E: Failed to fetch https://download.docker.com/linux/ubuntu/dists/xenial/InRelease  
E: Some index files failed to download. They have been ignored, or old ones used instead.

Note:
If you get the error seen above: “N: Is the package apt-transport-https installed?”, Use the following command to correct this. root@test:~# sudo apt-get install apt-transport-https

Let’s rerun the update:

root@test:~# apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [107 kB]
Get:3 http://us.archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]        
Get:4 http://us.archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB]                 
Hit:5 https://download.docker.com/linux/ubuntu xenial InRelease
Fetched 323 kB in 0s (656 kB/s)
Reading package lists... Done

Success! Now, verify we are installing Docker from the correct repo instead of the default Ubuntu 16.04 repo:

root@test:~# apt-cache policy docker-ce
docker-ce:
 Installed: (none)
 Candidate: 18.06.0~ce~3-0~ubuntu
 Version table:
    18.06.0~ce~3-0~ubuntu 500
       500 https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages

Step 5 — Install Docker

Finally, let’s start the Docker install:

root@test:~# apt-get install -y docker-ce
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
 aufs-tools cgroupfs-mount libltdl7 pigz
Suggested packages:
 mountall
The following NEW packages will be installed:
 aufs-tools cgroupfs-mount docker-ce libltdl7 pigz
0 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.
Need to get 40.3 MB of archives.
After this operation, 198 MB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu xenial/universe amd64 pigz amd64 2.3.1-2 [61.1 kB]
Get:2 http://us.archive.ubuntu.com/ubuntu xenial/universe amd64 aufs-tools amd64 1:3.2+20130722-1.1ubuntu1 [92.9 kB]
Get:3 http://us.archive.ubuntu.com/ubuntu xenial/universe amd64 cgroupfs-mount all 1.2 [4,970 B]
Get:4 http://us.archive.ubuntu.com/ubuntu xenial/main amd64 libltdl7 amd64 2.4.6-0.1 [38.3 kB]
Get:5 https://download.docker.com/linux/ubuntu xenial/stable amd64 docker-ce amd64 18.06.0~ce~3-0~ubuntu [40.1 MB]
Fetched 40.3 MB in 1s (38.4 MB/s)    
...
...

Docker should now be installed, the daemon started, and the process enabled to start on boot. Let’s check to see if it’s running:

root@test:~# systemctl status docker
* docker.service - Docker Application Container Engine
  Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
  Active: active (running) since Wed 2018-08-08 13:51:22 EDT; 2min 13s ago
    Docs: https://docs.docker.com
Main PID: 6519 (dockerd)
  CGroup: /system.slice/docker.service
          |-6519 /usr/bin/dockerd -H fd://
          `-6529 docker-containerd --config /var/run/docker/containerd/containerd.toml

Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.192600502-04:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.192630873-04:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc42020f6a0, CONNECTING" module=grpc
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.192854891-04:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc42020f6a0, READY" module=grpc
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.192867421-04:00" level=info msg="Loading containers: start."
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.340349000-04:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.397715134-04:00" level=info msg="Loading containers: done."
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.424005987-04:00" level=info msg="Docker daemon" commit=0ffa825 graphdriver(s)=overlay2 version=18.06.0-ce
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.424168214-04:00" level=info msg="Daemon has completed initialization"
Aug 08 13:51:22 test.docker.com dockerd[6519]: time="2018-08-08T13:51:22.448805942-04:00" level=info msg="API listen on /var/run/docker.sock"
Aug 08 13:51:22 test.docker.com systemd[1]: Started Docker Application Container Engine.
~
~
~
(press q to quit)

Excellent! Good to go!

If Docker is not started automatically after the installation, run the following commands:

root@test:~# systemctl start docker.service
root@test:~# systemctl enable docker.service

Step 6 — Test Docker

Let’s check the new Docker build by downloading the hello-world test image.
To start testing, issue the following command:


root@test:~# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9db2ca6ccae0: Pull complete
Digest: sha256:4b8ff392a12ed9ea17784bd3c9a8b1fa3299cac44aca35a85c90c5e3c7afacdc
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
   (amd64)
3. The Docker daemon created a new container from that image which runs the
   executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
   to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/

Step 7 — The ‘Docker’ Command

With Docker installed and working, now is the time to become familiar with the command line utility. Using docker consists of passing it a chain of options and commands followed by arguments. The syntax takes this form:

root@test:~# docker
Usage: docker [OPTIONS] COMMAND
A self-sufficient runtime for containers
Run 'docker COMMAND --help' for more information on a command.


To view all of the available Options and Management Commands, simply type:

docker

To view the switches available for a specific command, type:

docker docker-subcommand --help

Lastly, To view system-wide information about Docker, use:

docker info
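
As a quick sketch of day-to-day usage once the engine is running (these commands only report what is already on the system and change nothing):

docker images      # list images stored locally; hello-world should appear after the test above
docker ps -a       # list all containers, including the exited hello-world run
docker system df   # summarize disk usage by images, containers, and volumes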

Docker is a dynamic, robust and responsive tool that makes it very simple to run applications within a containerized environment. Containers are portable and less resource-intensive than full virtual machines because they share the host operating system’s kernel, which opens them up to a wide variety of uses. Overall, Docker is a ‘must have’ and should be included in your toolkit for automation, deployment, and scaling of your applications!

Our Support Teams are filled with talented admins with an intimate knowledge of multiple web hosting technologies, especially those discussed in this article. If you are uncomfortable walking through the steps outlined here, we are a phone call, chat or ticket away from assisting you with this process. If you’re running one of our fully Managed Cloud VPS Servers, we can provide more information on directly implementing the software described in this article.

 

SQL Databases Migration with Command Line

What if you have dozens of SQL databases and manually backing up/restoring each database is too time-consuming for your project? No problem! We can script out a method that will export and import all databases at once without needing manual intervention. For help with transferring SQL Logins and Stored Procedures & Views take a look at our MSSQL Migration with SSMS article.

1. Open SSMS (Microsoft SQL Server Management Studio) on the source server, log in to the SQL instance and open a New Query window. Run the following query:

SELECT name FROM master.sys.databases

This command will output a list of all MSSQL databases on your server. To copy this list out, click anywhere in the results and use the keyboard shortcut CTRL+A (Command + A for Mac users) to select all databases. After highlighting all the databases right click and select copy.

2. Open Notepad, paste in your results and delete all databases (in the newly copied notepad text) you do NOT wish to migrate, as well as deleting the following entries:

  • master
  • tempdb
  • model
  • msdb

These entries are the system databases, and copying them is not necessary. Make sure to delete everything except the databases you explicitly need to migrate (a query variation that filters out the system databases automatically is shown after the example list below). You should now have a list of all required databases, one per line, i.e.

  • AdventureWorks2012
  • AdventureWorks2014
  • AdventureWorks2016
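
Alternatively, if you would rather have SQL Server skip the system databases for you, this variation of the step 1 query filters them out by their fixed database IDs (1 through 4):

SELECT name FROM master.sys.databases WHERE database_id > 4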

3. Save this result on the computer as C:\databases.txt.

4. Create a new Notepad window, copy/paste the following into the document and save it as C:\db-backup.bat

mkdir %systemdrive%\dbbackups
for /F "tokens=*" %%a in (databases.txt) do ( sqlcmd.exe -Slocalhost -Q"BACKUP DATABASE %%a TO DISK ='%systemdrive%\dbbackups\%%a.bak' WITH STATS" )

5. Now that you’ve saved the file as C:\db-backup.bat, navigate to the Start menu, type cmd, and right click on Command Prompt to select Run as Administrator. Type the following command:

cd C:\

And hit enter. Afterward, type db-backup.bat and hit enter once again.

At this point, your databases have begun exporting, and you will see the percentage progress of each database’s export (pictured below).

Command line shows the process of each database that is exported.

Take note of any failed databases, as you can re-run the batch file when it’s done, using only the databases that may have failed. If the databases are failing to back up, take note of the error message displayed in the command prompt, address the error by modifying the existing C:\databases.txt file to include only the failed databases and re-run db-backup.bat until all databases are successfully exported.

 

By now you have the folder C:\dbbackups\ that contains .bak files for each database you want to migrate. You will need to copy this folder and your C:\databases.txt file to the destination server. There are numerous ways to move your data to the destination server; you can use USB, Robocopy or FTP. The folder on the C drive of the destination server should be called C:\dbbackups. It’s important to name the folder accurately, as our script will be looking for the .bak files there. Be sure that the destination server has your C:\databases.txt file as well, as our script will be looking for the database names there.
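
For example, Robocopy can push the backup folder and the database list across the network; the server name DESTINATION-SERVER and the administrative C$ share below are placeholders for your own environment:

robocopy C:\dbbackups \\DESTINATION-SERVER\C$\dbbackups /E /Z
robocopy C:\ \\DESTINATION-SERVER\C$\ databases.txt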

 

1. Open a Notepad and copy/paste the following into the document and save it as C:\db-restore.bat

for /F "tokens=*" %%a in (C:\databases.txt) do (
sqlcmd.exe -E -Slocalhost -Q"RESTORE DATABASE %%a FROM DISK='%systemdrive%\dbbackups\%%a.bak' WITH RECOVERY"
)

2. Save the file as C:\db-restore.bat 

3. Navigate to the Start menu and type cmd.

4. Right click on Command Prompt and select Run as Administrator. Type the following command:

cd C:\

and hit Enter. Now type db-restore.bat and hit Enter.

Your databases have now begun importing. You will see the percentage progress of each database’s restoration and the message “RESTORE DATABASE successfully processed” for each database that has been successfully processed.

Take note of any failed databases, as you can re-run the batch file when it’s done, using only the databases that failed. If databases are failing to restore, take note of the error message displayed in the command prompt, address the error (you can change the batch file as necessary), modify C:\databases.txt to include only the failed databases and re-run db-restore.bat until all databases are successfully restored.

Congratulations, you have now backed up and restored all of your databases to the new server. If you have any login issues while testing the SQL connections on the destination server, refer to the Migrating Microsoft SQL Logins section of this article and follow the steps therein. To migrate views or stored procedures, please refer to the Migrating Views and Stored Procedures section. Every SQL server will have its own configuration and obstacles to face, but we hope this article has given you a strong foundation for your Microsoft SQL Server migration.

 

Protecting against CVE-2018-14634 (Mutagen Astronomy)

A new exploit named Mutagen Astronomy, rated at a 7.8 severity level, affects major Linux distributions including Red Hat Enterprise Linux, CentOS, and Debian 8. Mutagen Astronomy exploits an integer overflow vulnerability in the Linux kernel to grant root access (admin privileges) to unauthorized users on the targeted server. The flaw affects Linux kernels released from July 2007 to July 2017. Living in the kernel’s create_elf_tables() function, the memory table can be manipulated to overflow; after overwhelming the server, the attacker can then take it over for their own malicious ends.

As mentioned, this vulnerability is present in Red Hat, Debian 8, and CentOS distributions but only affects 64-bit versions, as the 32-bit versions do not have the address space needed to overwhelm the server. Along with 64-bit versions, the exploit is limited to Linux kernel versions 2.6.x, 3.10.x, and 4.14.x. (Read our article How To Check the Kernel Version to see which version you are running.) A proof of concept was reported on August 31, 2018, and although remediation from a one-year-old patch was backported to most LTS (long-term support) kernels, CentOS and Debian 8 remain vulnerable.

Red Hat has recently released a patch, and updating to kernel-3.10.0-862.14.4.el7.x86_64.rpm will keep you safe from Mutagen Astronomy. Unfortunately, CentOS and Debian 8 have not yet released a patch for the Mutagen Astronomy vulnerability, but stay tuned as we will be updating this article once information is released.
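
On a Red Hat Enterprise Linux 7 server, a minimal check-and-update pass looks like the following (CentOS and Debian 8 will need the same treatment once their patches ship, and a reboot is required to load the new kernel):

uname -r               # show the currently running kernel version
yum update kernel -y   # install the latest patched kernel from the distribution repositories
reboot                 # boot into the new kernel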

 

SQL Database Migration with SSMS

Migrating MSSQL between servers can be challenging without the proper guidelines to keep you on track. In this article, I will be outlining the various ways to migrate Microsoft SQL Server databases between servers or instances. Whether you need to move a single database,  many databases, logins or stored procedures and views we have you covered!

There are many circumstances where you will need to move a database or restore databases. The most common reasons are:

  • Moving to an entirely new server.
  • Moving to a different instance of SQL.
  • Creating a development server or going live to a production server.
  • Restoring databases from a backup.

 

There are two main ways to move SQL databases. Manually with Microsoft SQL Server Management Studio (SSMS) or with the command line. The method you choose depends on what you need to accomplish. If you are moving a single database or just a few, manually backing up and restoring the databases with SSMS will be the easiest approach. If you are moving a lot of databases (think more than 10) then using the command line method will speed up the process. The command line method takes more prep work beforehand, but if you are transferring dozens of databases, then it is well worth the time spent configuring the script instead of migrating each database individually. If you aren’t sure which method to use, try the manual approach first while you get comfortable with the process. I recommend reading all the way through for a deeper understanding of the methodology.

 

Useful References for Terminology

SSMS – An acronym for Microsoft SQL Server Management Studio.

Source Server – The server or instance you are moving databases from or off.

Destination Server – The server or instance you are moving databases to.

 

Moving SQL databases with the manual method can be very easy. It is the preferred process for transferring a few or smaller databases. To follow this part of the guide, you must have MSSQL, and Microsoft SQL Server Management Studio (SSMS) installed.

 

1. Begin by logging into the Source server (the server you are moving databases from or off of). You will want to open Microsoft SQL Server Management Studio by selecting Start > Microsoft SQL Server >  Microsoft SQL Server Management Studio.

2. Log into the SQL server using Windows Authentication or SQL Authentication.

3. Expand the server (in our case SQL01), expand Databases, and select the first database you want to move (pictured below).

Select your database within Microsoft SQL Server Management Studio.

4. Right click on your database and select Tasks then click Back Up.

Back up button in Microsoft SQL Server Management Studio.

5. From here you are now at the Back Up Database screen. You can choose a Backup Type such as Full or Differential, make sure the correct database is selected, and set the destination for the SQL backup. For our example, we can leave the Backup Type as Full.

6. Under Backup Type, check the box for “Copy-only backup.” If you are running DPM or another form of server backup, backing up without the Copy-Only flag will cause a break in the backup log chain.

7. You will see a location under Destination for the path of the new backup. Typically you will Remove this entry then Add a new one to select a folder that SQL has read/write access. Adding a new Backup Destination shows a path similar to the following:

C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\

This C:\ path is where your stored database backup is. Note this location for later reference, as this is the default path to stored backups and will have to have proper read/write access for SQL services.

Note:
Advanced users may be comfortable leaving the destination as is, provided the permissions are correct on the output folder.

8. Next, append a filename to the end of this path, such as AdventureWorks2012-081418.bak. Be sure to end the filename with the extension .bak and select OK.
Set the file name with the .bak extension in Microsoft SQL Server Management Studio

10. Once you have pressed OK on the Select Backup Destination prompt, you are ready to back up the database! All you need to do now is hit OK, and the database will begin backing up. You will see a progress bar in the bottom left-hand corner, and when the backup is complete, a window will appear saying ‘The backup of database ‘AdventureWorks2012’ completed successfully.

Navigate to the destination path noted earlier (in this case C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\) and you will see your newly created file (in this case AdventureWorks2012-081418.bak) – congratulations! This file is the full export of your database and is ready to be imported to the new server. If you have more databases, repeat the steps above for each database you are moving. After backing up all databases, proceed to the next step of restoring them on the destination server.

 

You should now have a .bak file for each of your databases on the source server. These database files need to be transferred to the destination server. There are numerous ways to move your data to the destination server; you can use USB, Robocopy or FTP. After copying the backups, store them on the destination server; for our example, we have stored them on the C drive in a folder named C:\dbbackups.

1. Open Microsoft SQL Server Management Studio.

2. Log in to the SQL server using Windows Authentication or SQL Authentication.

3. Expand the server and right click on Databases and select Restore Database.

4. The Restore Database screen looks very similar to the Back Up Database screen. Under Source, you will want to select Device instead of Database. Selecting Device allows you to restore directly from a file. Once you’ve chosen Device, click the browse icon […]

5. Select Add, then navigate to the folder in which your .bak files live (in this case, C:\dbbackups).

6. Select the first database .bak you would like to restore and click OK.

Select the .bak file to import your database into the destination server via SSMS.

7. Click OK and now you are ready to import the database. Before importing, let’s take a look at the Options section on the left-hand side. Under Options, you will see other configurations for restoring databases, such as Overwrite the Existing Database, Preserve the Replication Settings, and Restrict Access to the Restored Database. In this case, we are not replacing an existing database, so I will leave all these options unchecked. If you wanted to replace an existing database (for example, the backed up database has newer data than on the destination server, or you are replacing a development or production database) then simply select Overwrite the Existing Database.


8. Clicking OK  begins the restore process as indicated by the popup window that reads ‘Database ‘AdventureWorks2012′ restored successfully.’ You have migrated your database from the source to the destination server.

Repeat this process for each database that you are migrating. You can then update path references in your scripts/application to point to the new server and verify that the migration was successful.

 

After importing your databases, if you are unable to connect using your SQL login, you may receive the error ‘Login failed for user ‘example.’ (Microsoft SQL Server, Error: 18456).‘ Because the database is in the Traditional Login and User Model, logins are stored separately on the source server, and credentials are not contained within the database itself. Going forward, the destination server can be configured to use the Contained Database User Model, which keeps the logins in your database and out of the server. (You can read more about this here.) Until then, we will have to move and interact with the users as part of the Traditional model. Continue below to proceed with the migration of your SQL users.

Backing up and restoring the databases did preserve your SQL logins’ relation to the databases (your logins are still associated with the correct databases with the correct permissions), but the logins themselves did not transfer to the new server. You can verify this by opening SSMS on the destination server and navigating to Server > Security > Logins. You will notice that any custom SQL logins you created on the previous server did not transfer over, but if you go to Server > Databases > Your Database (AdventureWorks2012 in this case) > Security > Users you’ll see the correct login associated with the database.

If you have one or two SQL users, you can just delete the user’s association to the database in Servers > Databases > AdventureWorks2012 > Security > Users, re-create the user in Server > Security > Logins and map it to the proper database.

If you have a lot of logins, you will have to follow an additional process outlined below. To migrate all SQL users, open a New Query window on the source server and run the following script:

SQL Login Script
USE master
GO
IF OBJECT_ID ('sp_hexadecimal') IS NOT NULL
DROP PROCEDURE sp_hexadecimal
GO
CREATE PROCEDURE sp_hexadecimal
@binvalue varbinary(256),
@hexvalue varchar (514) OUTPUT
AS
DECLARE @charvalue varchar (514)
DECLARE @i int
DECLARE @length int
DECLARE @hexstring char(16)
SELECT @charvalue = '0x'
SELECT @i = 1
SELECT @length = DATALENGTH (@binvalue)
SELECT @hexstring = '0123456789ABCDEF'
WHILE (@i <= @length)
BEGIN
DECLARE @tempint int
DECLARE @firstint int
DECLARE @secondint int
SELECT @tempint = CONVERT(int, SUBSTRING(@binvalue,@i,1))
SELECT @firstint = FLOOR(@tempint/16)
SELECT @secondint = @tempint - (@firstint*16)
SELECT @charvalue = @charvalue +
SUBSTRING(@hexstring, @firstint+1, 1) +
SUBSTRING(@hexstring, @secondint+1, 1)
SELECT @i = @i + 1
END
SELECT @hexvalue = @charvalue
GO
IF OBJECT_ID ('sp_help_revlogin') IS NOT NULL
DROP PROCEDURE sp_help_revlogin
GO
CREATE PROCEDURE sp_help_revlogin @login_name sysname = NULL AS
DECLARE @name sysname
DECLARE @type varchar (1)
DECLARE @hasaccess int
DECLARE @denylogin int
DECLARE @is_disabled int
DECLARE @PWD_varbinary varbinary (256)
DECLARE @PWD_string varchar (514)
DECLARE @SID_varbinary varbinary (85)
DECLARE @SID_string varchar (514)
DECLARE @tmpstr varchar (1024)
DECLARE @is_policy_checked varchar (3)
DECLARE @is_expiration_checked varchar (3)
DECLARE @defaultdb sysname
IF (@login_name IS NULL)
DECLARE login_curs CURSOR FOR
SELECT p.sid, p.name, p.type, p.is_disabled, p.default_database_name, l.hasaccess, l.denylogin FROM
sys.server_principals p LEFT JOIN sys.syslogins l
ON ( l.name = p.name ) WHERE p.type IN ( 'S', 'G', 'U' ) AND p.name <> 'sa'
ELSE
DECLARE login_curs CURSOR FOR
SELECT p.sid, p.name, p.type, p.is_disabled, p.default_database_name, l.hasaccess, l.denylogin FROM
sys.server_principals p LEFT JOIN sys.syslogins l
ON ( l.name = p.name ) WHERE p.type IN ( 'S', 'G', 'U' ) AND p.name = @login_name
OPEN login_curs
FETCH NEXT FROM login_curs INTO @SID_varbinary, @name, @type, @is_disabled, @defaultdb, @hasaccess, @denylogin
IF (@@fetch_status = -1)
BEGIN
PRINT 'No login(s) found.'
CLOSE login_curs
DEALLOCATE login_curs
RETURN -1
END
SET @tmpstr = '/* sp_help_revlogin script '
PRINT @tmpstr
SET @tmpstr = '** Generated ' + CONVERT (varchar, GETDATE()) + ' on ' + @@SERVERNAME + ' */'
PRINT @tmpstr
PRINT ''
WHILE (@@fetch_status <> -1)
BEGIN
IF (@@fetch_status <> -2)
BEGIN
PRINT ''
SET @tmpstr = '-- Login: ' + @name
PRINT @tmpstr
IF (@type IN ( 'G', 'U'))
BEGIN -- NT authenticated account/group
SET @tmpstr = 'CREATE LOGIN ' + QUOTENAME( @name ) + ' FROM WINDOWS WITH DEFAULT_DATABASE = [' + @defaultdb + ']'
END
ELSE BEGIN -- SQL Server authentication
-- obtain password and sid
SET @PWD_varbinary = CAST( LOGINPROPERTY( @name, 'PasswordHash' ) AS varbinary (256) )
EXEC sp_hexadecimal @PWD_varbinary, @PWD_string OUT
EXEC sp_hexadecimal @SID_varbinary,@SID_string OUT
-- obtain password policy state
SELECT @is_policy_checked = CASE is_policy_checked WHEN 1 THEN 'ON' WHEN 0 THEN 'OFF' ELSE NULL END FROM sys.sql_logins WHERE name = @name
SELECT @is_expiration_checked = CASE is_expiration_checked WHEN 1 THEN 'ON' WHEN 0 THEN 'OFF' ELSE NULL END FROM sys.sql_logins WHERE name = @name
SET @tmpstr = 'CREATE LOGIN ' + QUOTENAME( @name ) + ' WITH PASSWORD = ' + @PWD_string + ' HASHED, SID = ' + @SID_string + ', DEFAULT_DATABASE = [' + @defaultdb + ']'
IF ( @is_policy_checked IS NOT NULL )
BEGIN
SET @tmpstr = @tmpstr + ', CHECK_POLICY = ' + @is_policy_checked
END
IF ( @is_expiration_checked IS NOT NULL )
BEGIN
SET @tmpstr = @tmpstr + ', CHECK_EXPIRATION = ' + @is_expiration_checked
END
END
IF (@denylogin = 1)
BEGIN -- login is denied access
SET @tmpstr = @tmpstr + '; DENY CONNECT SQL TO ' + QUOTENAME( @name )
END
ELSE IF (@hasaccess = 0)
BEGIN -- login exists but does not have access
SET @tmpstr = @tmpstr + '; REVOKE CONNECT SQL TO ' + QUOTENAME( @name )
END
IF (@is_disabled = 1)
BEGIN -- login is disabled
SET @tmpstr = @tmpstr + '; ALTER LOGIN ' + QUOTENAME( @name ) + ' DISABLE'
END
PRINT @tmpstr
END
FETCH NEXT FROM login_curs INTO @SID_varbinary, @name, @type, @is_disabled, @defaultdb, @hasaccess, @denylogin
END
CLOSE login_curs
DEALLOCATE login_curs
RETURN 0
GO

This script creates two stored procedures on the source server which help with migrating these logins. Open a New Query window and run the following:
EXEC sp_help_revlogin

This query outputs a script that creates new logins for the destination server. Copy the output of this query and save it for later. You will need to run this on the destination server.

Once you’ve copied the output of this query, login to SSMS on the destination server and open a New Query window. Paste the contents from the previous script (it should have a series of lines that look similar to — Login: BUILTIN\Administrators
CREATE LOGIN [BUILTIN\Administrators] FROM WINDOWS WITH DEFAULT_DATABASE = [master]) and hit Execute.

You have now successfully imported all SQL logins and can now verify that the databases have been migrated to the destination server by using your previous credentials.

Views and stored procedures migrate with the database when you use the typical SQL backups (.bak files) described above. Follow the instructions below if you need to migrate views and stored procedures independently.

  1. Open Microsoft SQL Management Studio on the Source server.
  2. Log in to your SQL server.
  3. Expand the server and as well as Databases.
  4. Right click on the name of your database and go to Tasks > Generate Scripts.
  5. Click Next.
  6. We will change Script entire database and all database objects to Select specific database objects and only check Views and Stored Procedures.
Transfer Stored Procedures and Views within Microsoft SQL Server Management Studio
  7. Click Next, notice the Save to File option. Take note of the file path listed. In my case, it is C:\Users\Administrator\Documents\script.sql – The path of saved views and stored procedures.
  8. Click Next >> Next >>Finish, and select C:\Users\Administrator\Documents\script.sql and copy it to the destination server.
  9. Go to the destination server, open SSMS and log in to the SQL server.
  10. Go to File > Open > File or use the keyboard shortcut CTRL+O to open the SQL script. Select the file C:\Users\Administrator\Documents\script.sql to open it.
  11. You will see the script generated from the source server containing all views and stored procedures. Click Execute or use the keyboard shortcut F5 and run the script.
Note:
Unfortunately, there is no built-in way to do this with the command line. There are 3rd party tools and even a tool by Microsoft called mssql-scripter for more advanced scripting.
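
As a rough illustration of the mssql-scripter approach (it is installed separately via pip, and the server, database, and output path below are only examples):

mssql-scripter -S localhost -d AdventureWorks2012 > C:\Users\Administrator\Documents\script.sql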

You have now migrated the views and stored procedures to your destination server! Repeat this process for each database you are migrating. A little guidance goes a long way in database administration. Every SQL server will have its own configuration and obstacles to face, but we hope this article has given you a strong foundation for your Microsoft SQL Server migration.

Looking for a High Availability, platform-independent SQL service that is easily scalable and can grow with your business? Check out our SQL as a Service product offered at Liquid Web. Speak with one of our amazing Hosting Advisers to find the perfect solution for you!