SQL Database Migration with the Command Line

What if you have dozens of SQL databases and manually backing up and restoring each one is too time-consuming for your project? No problem! We can script a method that exports and imports all databases at once without manual intervention. For help with transferring SQL logins, stored procedures, and views, take a look at our MSSQL Migration with SSMS article.

1. Open SSMS (Microsoft SQL Server Management Studio) on the source server, log in to the SQL instance and open a New Query window. Run the following query:

SELECT name FROM master.sys.databases

This query outputs a list of all MSSQL databases on your server. To copy this list out, click anywhere in the results and use the keyboard shortcut CTRL+A (Command+A for Mac users) to select all rows. After highlighting all the databases, right click and select Copy.

2. Open Notepad, paste in your results, and delete any databases you do NOT wish to migrate, as well as the following entries:

  • master
  • tempdb
  • model
  • msdb

These entries are system databases and do not need to be copied. Delete everything except the databases you need to migrate. You should now have a list of all required databases, one per line, e.g.:

  • AdventureWorks2012
  • AdventureWorks2014
  • AdventureWorks2016

3. Save this result on the computer as C:\databases.txt.

4. Create a new Notepad window, copy/paste the following into the document and save it as C:\db-backup.bat

REM Back up every database listed in C:\databases.txt to %systemdrive%\dbbackups
mkdir %systemdrive%\dbbackups
for /F "tokens=*" %%a in (C:\databases.txt) do (
sqlcmd.exe -S localhost -Q "BACKUP DATABASE [%%a] TO DISK='%systemdrive%\dbbackups\%%a.bak' WITH STATS"
)

5. Now that you’ve saved the file as C:\db-backup.bat, navigate to the Start menu, type cmd, then right click on Command Prompt and select Run as Administrator. Type the following command:

cd C:\

and press Enter. Afterward, type db-backup.bat and press Enter once again.

At this point, your databases have begun exporting, and you will see the percentage progress of each database’s export (pictured below).

Command line shows the process of each database that is exported.

Take note of any failed databases, as you can re-run the batch file when it’s done using only the databases that failed. If databases are failing to back up, note the error message displayed in the command prompt, address the error, modify the existing C:\databases.txt file to include only the failed databases, and re-run db-backup.bat until all databases are successfully exported.

 

By now you have the folder C:\dbbackups\ containing a .bak file for each database you want to migrate. You will need to copy this folder and your C:\databases.txt file to the destination server. There are numerous ways to move your data to the destination server; you can use USB, Robocopy, or FTP. The folder on the C drive of the destination server should be named C:\dbbackups. It’s important to name the folder accurately, as our script will look for the .bak files there. Be sure the destination server has your C:\databases.txt file as well, as our script will read the database names from it.
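If both servers can reach each other over the network, Robocopy can handle the transfer in one pass. A minimal sketch, assuming a hypothetical destination host named DEST01 whose administrative share is accessible from the source server:

robocopy C:\dbbackups \\DEST01\c$\dbbackups *.bak /Z
copy C:\databases.txt \\DEST01\c$\databases.txt

The /Z flag runs the copy in restartable mode, which is helpful for large .bak files over an unreliable link.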

 

1. Open Notepad, copy/paste the following into the document, and save it as C:\db-restore.bat

for /F "tokens=*" %%a in (C:\databases.txt) do (
sqlcmd.exe -E -Slocalhost -Q"RESTORE DATABASE %%a FROM DISK='%systemdrive%\dbbackups\%%a.bak' WITH RECOVERY"
)

2. Save the file as C:\db-restore.bat 

3. Navigate to the Start menu and type cmd.

4. Right click on Command Prompt and select Run as Administrator. Type the following command:

cd C:\

and hit Enter. Now type db-restore.bat and hit Enter.

Your databases have now begun importing. You will see the percentage progress of each database’s restoration and the message “RESTORE DATABASE successfully processed” for each database that completes successfully.

Take note of any failed databases, as you can re-run the batch file when it’s done using only the databases that failed. If databases are failing to restore, note the error message displayed in the command prompt, address the error (you can change the batch file as necessary), modify C:\databases.txt to include only the failed databases, and re-run db-restore.bat until all databases are successfully imported.
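One common cause of restore failures is that the destination server keeps its data and log files on different paths than the source did. In that case, you can restore the affected database manually with the MOVE option. A hedged sketch, assuming hypothetical logical file names and target paths (run RESTORE FILELISTONLY against the .bak first to see the actual logical names):

RESTORE FILELISTONLY FROM DISK = 'C:\dbbackups\AdventureWorks2012.bak';

RESTORE DATABASE AdventureWorks2012
FROM DISK = 'C:\dbbackups\AdventureWorks2012.bak'
WITH RECOVERY,
MOVE 'AdventureWorks2012' TO 'D:\SQLData\AdventureWorks2012.mdf',
MOVE 'AdventureWorks2012_log' TO 'D:\SQLLogs\AdventureWorks2012_log.ldf';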

Congratulations, you have now backed up and restored all of your databases to the new server. If you have any login issues while testing the SQL connections on the destination server, refer to the Migrating Microsoft SQL Logins section of this article and follow the steps therein. To migrate views or stored procedures, please refer to the Migrating Views and Stored Procedures section. Every SQL server will have its own configurations and obstacles to face, but we hope this article has given you a strong foundation for your Microsoft SQL Server migration.

 

SQL Database Migration with SSMS

Migrating MSSQL between servers can be challenging without the proper guidelines to keep you on track. In this article, I will be outlining the various ways to migrate Microsoft SQL Server databases between servers or instances. Whether you need to move a single database, many databases, logins, or stored procedures and views, we have you covered!

There are many circumstances where you will need to move a database or restore databases. The most common reasons are:

 

  • Moving to an entirely new server.
  • Moving to a different instance of SQL.
  • Creating a development server or going live to a production server.
  • Restoring databases from a backup.

 

There are two main ways to move SQL databases: manually with Microsoft SQL Server Management Studio (SSMS) or with the command line. The method you choose depends on what you need to accomplish. If you are moving a single database or just a few, manually backing up and restoring the databases with SSMS will be the easiest approach. If you are moving a lot of databases (think more than 10), the command line method will speed up the process. The command line method takes more prep work beforehand, but if you are transferring dozens of databases, it is well worth the time spent configuring the script instead of migrating each database individually. If you aren’t sure which method to use, try the manual approach first while you get comfortable with the process. I recommend reading all the way through for a deeper understanding of the methodology.

 

Useful References for Terminology

SSMS – An acronym for Microsoft SQL Server Management Studio.

Source Server – The server or instance you are moving databases from or off.

Destination Server – The server or instance you are moving databases to.

 

Moving SQL databases with the manual method can be very easy. It is the preferred process for transferring a few or smaller databases. To follow this part of the guide, you must have MSSQL and Microsoft SQL Server Management Studio (SSMS) installed.

 

1. Begin by logging into the Source server (the server you are moving databases from or off of). You will want to open Microsoft SQL Server Management Studio by selecting Start > Microsoft SQL Server >  Microsoft SQL Server Management Studio.

2. Log in to the SQL server using Windows Authentication or SQL Authentication.

3. Expand the server (in our case, SQL01), expand Databases, and select the first database you want to move (pictured below).

Select your database within Microsoft SQL Server Management Studio.

4. Right click on your database and select Tasks then click Back Up.

Back up button in Microsoft SQL Server Management Studio.

5. From here you are now at the Back Up Database screen. You can choose a Backup Type such as Full or Differential, make sure the correct database is selected, and set the destination for the SQL backup. For our example, we can leave the Backup Type as Full.

6. Under Backup Type, check the box for “Copy-only backup.” If you are running DPM or another form of server backup, backing up without the Copy-Only flag will cause a break in the backup log chain.
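For reference, the same copy-only backup can be taken in T-SQL. A minimal sketch using our example database and a hypothetical output path:

BACKUP DATABASE [AdventureWorks2012]
TO DISK = 'C:\dbbackups\AdventureWorks2012.bak'
WITH COPY_ONLY, STATS;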

7. You will see a location under Destination for the path of the new backup. Typically, you will Remove this entry and then Add a new one to select a folder to which SQL has read/write access. Adding a new backup destination shows a path similar to the following:

C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\

This path is where your database backup will be stored. Note this location for later reference; it is the default path for stored backups, and the SQL services must have proper read/write access to it.

Note:
Advanced users may be comfortable leaving the destination as is, provided the permissions are correct on the output folder.

8. Next, append a filename to the end of this path, such as AdventureWorks2012-081418.bak. Be sure to end the filename with the extension .bak and select OK.

Set the file name with the .bak extension in Microsoft SQL Server Management Studio.

9. Once you have pressed OK on the Select Backup Destination prompt, you are ready to back up the database! All you need to do now is hit OK, and the database will begin backing up. You will see a progress bar in the bottom left-hand corner, and when the backup is complete, a window will appear saying ‘The backup of database ‘AdventureWorks2012’ completed successfully.’

Navigate to the destination path noted earlier (in this case, C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup\) and you will see your newly created file (in this case, AdventureWorks2012-081418.bak). Congratulations! This file is the full export of your database and is ready to be imported to the new server. If you have more databases, repeat the steps above for each database you are moving. After backing up all databases, proceed to the next step of restoring them on the destination server.

 

You should now have a .bak file for each of your databases on the source server. These database files need to be transferred to the destination server. There are numerous ways to move your data to the destination server; you can use USB, Robocopy, or FTP. After copying the databases to the destination server, store them in a known location; for our example, we have stored them on the C drive in a folder named C:\dbbackups.

1. Open Microsoft SQL Server Management Studio.

2. Log in to the SQL server using Windows Authentication or SQL Authentication.

3. Expand the server and right click on Databases and select Restore Database.

4. The Restore Database screen looks very similar to the Back Up Database screen. Under Source, you will want to select Device instead of Database. Selecting Device allows you to restore directly from a file. Once you’ve chosen Device, click the browse icon […]

5. Select Add, then navigate to the folder in which your .bak files live (in this case, C:\dbbackups).

6. Select the first database .bak you would like to restore and click OK.

Select the .bak file to import your database into the destination server via SSMS.

7. Click OK, and now you are ready to import the database. Before importing, let’s take a look at the Options section on the left-hand side. Under Options, you will see other configurations for restoring databases, such as Overwrite the Existing Database, Preserve the Replication Settings, and Restrict Access to the Restored Database. In this case, we are not replacing an existing database, so I will leave all these options unchecked. If you wanted to replace an existing database (for example, the backed up database has newer data than the destination server, or you are replacing a development or production database), simply select Overwrite the Existing Database.


8. Clicking OK begins the restore process, as indicated by the popup window that reads ‘Database ‘AdventureWorks2012’ restored successfully.’ You have migrated your database from the source to the destination server.

Repeat this process for each database that you are migrating. You can then update path references in your scripts/application to point to the new server and verify that the migration was successful.

 

After importing your databases, if you are unable to connect using your SQL login, you may receive the error ‘Login failed for user ‘example’. (Microsoft SQL Server, Error: 18456)’. Because the database is in the Traditional Login and User Model, logins are stored separately on the source server, and credentials are not contained within the database itself. Going forward, the destination server can be configured to use the Contained Database User Model, which keeps the logins in your database rather than at the server level. Until then, we will have to move and interact with the users as part of the Traditional model. Continue below to proceed with the migration of your SQL users.

Backing up and restoring the databases did preserve your SQL logins’ relation to the databases (your logins are still associated with the correct databases with the correct permissions), but the actual logins themselves did not transfer to the new server. You can verify this by opening SSMS on the destination server and navigating to Server > Security > Logins. You will notice that any custom SQL logins you created on the previous server did not transfer over, but if you go to Server > Databases > Your Database (AdventureWorks2012 in this case) > Security > Users, you’ll see the correct login associated with the database.

If you have one or two SQL users, you can simply delete the user’s association to the database in Server > Databases > AdventureWorks2012 > Security > Users, re-create the user in Server > Security > Logins, and map it to the proper database.
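Alternatively, rather than deleting the database user, you can re-create the login and then re-map the orphaned database user to it with ALTER USER. A sketch with a hypothetical user name:

USE AdventureWorks2012;
ALTER USER [example_user] WITH LOGIN = [example_user];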

If you have a lot of logins, you will have to follow an additional process outlined below. To migrate all SQL users, open a New Query window on the source server and run the following script:

SQL Login Script
USE master
GO
IF OBJECT_ID ('sp_hexadecimal') IS NOT NULL
DROP PROCEDURE sp_hexadecimal
GO
CREATE PROCEDURE sp_hexadecimal
@binvalue varbinary(256),
@hexvalue varchar (514) OUTPUT
AS
DECLARE @charvalue varchar (514)
DECLARE @i int
DECLARE @length int
DECLARE @hexstring char(16)
SELECT @charvalue = '0x'
SELECT @i = 1
SELECT @length = DATALENGTH (@binvalue)
SELECT @hexstring = '0123456789ABCDEF'
WHILE (@i <= @length)
BEGIN
DECLARE @tempint int
DECLARE @firstint int
DECLARE @secondint int
SELECT @tempint = CONVERT(int, SUBSTRING(@binvalue,@i,1))
SELECT @firstint = FLOOR(@tempint/16)
SELECT @secondint = @tempint - (@firstint*16)
SELECT @charvalue = @charvalue +
SUBSTRING(@hexstring, @firstint+1, 1) +
SUBSTRING(@hexstring, @secondint+1, 1)
SELECT @i = @i + 1
END
SELECT @hexvalue = @charvalue
GO
IF OBJECT_ID ('sp_help_revlogin') IS NOT NULL
DROP PROCEDURE sp_help_revlogin
GO
CREATE PROCEDURE sp_help_revlogin @login_name sysname = NULL AS
DECLARE @name sysname
DECLARE @type varchar (1)
DECLARE @hasaccess int
DECLARE @denylogin int
DECLARE @is_disabled int
DECLARE @PWD_varbinary varbinary (256)
DECLARE @PWD_string varchar (514)
DECLARE @SID_varbinary varbinary (85)
DECLARE @SID_string varchar (514)
DECLARE @tmpstr varchar (1024)
DECLARE @is_policy_checked varchar (3)
DECLARE @is_expiration_checked varchar (3)
DECLARE @defaultdb sysname
IF (@login_name IS NULL)
DECLARE login_curs CURSOR FOR
SELECT p.sid, p.name, p.type, p.is_disabled, p.default_database_name, l.hasaccess, l.denylogin FROM
sys.server_principals p LEFT JOIN sys.syslogins l
ON ( l.name = p.name ) WHERE p.type IN ( 'S', 'G', 'U' ) AND p.name <> 'sa'
ELSE
DECLARE login_curs CURSOR FOR
SELECT p.sid, p.name, p.type, p.is_disabled, p.default_database_name, l.hasaccess, l.denylogin FROM
sys.server_principals p LEFT JOIN sys.syslogins l
ON ( l.name = p.name ) WHERE p.type IN ( 'S', 'G', 'U' ) AND p.name = @login_name
OPEN login_curs
FETCH NEXT FROM login_curs INTO @SID_varbinary, @name, @type, @is_disabled, @defaultdb, @hasaccess, @denylogin
IF (@@fetch_status = -1)
BEGIN
PRINT 'No login(s) found.'
CLOSE login_curs
DEALLOCATE login_curs
RETURN -1
END
SET @tmpstr = '/* sp_help_revlogin script '
PRINT @tmpstr
SET @tmpstr = '** Generated ' + CONVERT (varchar, GETDATE()) + ' on ' + @@SERVERNAME + ' */'
PRINT @tmpstr
PRINT ''
WHILE (@@fetch_status <> -1)
BEGIN
IF (@@fetch_status <> -2)
BEGIN
PRINT ''
SET @tmpstr = '-- Login: ' + @name
PRINT @tmpstr
IF (@type IN ( 'G', 'U'))
BEGIN -- NT authenticated account/group
SET @tmpstr = 'CREATE LOGIN ' + QUOTENAME( @name ) + ' FROM WINDOWS WITH DEFAULT_DATABASE = [' + @defaultdb + ']'
END
ELSE BEGIN -- SQL Server authentication
-- obtain password and sid
SET @PWD_varbinary = CAST( LOGINPROPERTY( @name, 'PasswordHash' ) AS varbinary (256) )
EXEC sp_hexadecimal @PWD_varbinary, @PWD_string OUT
EXEC sp_hexadecimal @SID_varbinary,@SID_string OUT
-- obtain password policy state
SELECT @is_policy_checked = CASE is_policy_checked WHEN 1 THEN 'ON' WHEN 0 THEN 'OFF' ELSE NULL END FROM sys.sql_logins WHERE name = @name
SELECT @is_expiration_checked = CASE is_expiration_checked WHEN 1 THEN 'ON' WHEN 0 THEN 'OFF' ELSE NULL END FROM sys.sql_logins WHERE name = @name
SET @tmpstr = 'CREATE LOGIN ' + QUOTENAME( @name ) + ' WITH PASSWORD = ' + @PWD_string + ' HASHED, SID = ' + @SID_string + ', DEFAULT_DATABASE = [' + @defaultdb + ']'
IF ( @is_policy_checked IS NOT NULL )
BEGIN
SET @tmpstr = @tmpstr + ', CHECK_POLICY = ' + @is_policy_checked
END
IF ( @is_expiration_checked IS NOT NULL )
BEGIN
SET @tmpstr = @tmpstr + ', CHECK_EXPIRATION = ' + @is_expiration_checked
END
END
IF (@denylogin = 1)
BEGIN -- login is denied access
SET @tmpstr = @tmpstr + '; DENY CONNECT SQL TO ' + QUOTENAME( @name )
END
ELSE IF (@hasaccess = 0)
BEGIN -- login exists but does not have access
SET @tmpstr = @tmpstr + '; REVOKE CONNECT SQL TO ' + QUOTENAME( @name )
END
IF (@is_disabled = 1)
BEGIN -- login is disabled
SET @tmpstr = @tmpstr + '; ALTER LOGIN ' + QUOTENAME( @name ) + ' DISABLE'
END
PRINT @tmpstr
END
FETCH NEXT FROM login_curs INTO @SID_varbinary, @name, @type, @is_disabled, @defaultdb, @hasaccess, @denylogin
END
CLOSE login_curs
DEALLOCATE login_curs
RETURN 0
GO

This script creates two stored procedures in the master database on the source server that help with migrating these logins. Open a New Query window and run the following:
EXEC sp_help_revlogin

This query outputs a script that creates new logins for the destination server. Copy the output of this query and save it for later. You will need to run this on the destination server.

Once you’ve copied the output of this query, log in to SSMS on the destination server and open a New Query window. Paste the contents of the previous script’s output (it should contain a series of lines similar to the following) and hit Execute.

-- Login: BUILTIN\Administrators
CREATE LOGIN [BUILTIN\Administrators] FROM WINDOWS WITH DEFAULT_DATABASE = [master]

You have now successfully imported all SQL logins and can verify that the databases have been migrated to the destination server by using your previous credentials.

Views and stored procedures migrate with the database when you use the typical SQL backups described above. Follow the instructions below if you need to migrate views and stored procedures independently.

  1. Open Microsoft SQL Management Studio on the Source server.
  2. Log in to your SQL server.
  3. Expand the server and as well as Databases.
  4. Right click on the name of your database and go to Tasks > Generate Scripts.
  5. Click Next.
  6. We will change Script entire database and all database objects to Select specific database objects and check only Views and Stored Procedures. (Transfer Stored Procedures and Views within Microsoft SQL Server Management Studio.)
  7. Click Next and notice the Save to File option. Take note of the file path listed; in my case, it is C:\Users\Administrator\Documents\script.sql.
  8. Click Next > Next > Finish, then locate C:\Users\Administrator\Documents\script.sql and copy it to the destination server.
  9. Go to the destination server, open SSMS and log in to the SQL server.
  10. Go to File > Open > File or use the keyboard shortcut CTRL+O to open the SQL script. Select the file C:\Users\Administrator\Documents\script.sql to open it.
  11. You will see the script generated from the source server containing all views and stored procedures. Click Execute or use the keyboard shortcut F5 and run the script.
Note:
Unfortunately, there is no built-in way to do this with the command line. There are 3rd party tools and even a tool by Microsoft called mssql-scripter for more advanced scripting.

You have now migrated the views and stored procedures to your destination server! Repeat this process for each database you are migrating. A little guidance goes a long way in database administration. Every SQL server will have its own configurations and obstacles to face, but we hope this article has given you a strong foundation for your Microsoft SQL Server migration.

Looking for a High Availability, platform-independent SQL service that is easily scalable and can grow with your business? Check out our SQL as a Service product offered at Liquid Web. Speak with one of our amazing Hosting Advisers to find the perfect solution for you!

 

Install Memcached on Ubuntu 16.04

Memcached enhances performance by keeping a copy of commonly used data in the server’s memory, in a form the server can read quickly, thus reducing response time. A bonus feature of this object cacher is its ability to decrease the number of connections to your database. In this tutorial, we show how to install Memcached, but it’s important to note that an application must be specifically coded or configured to store and retrieve this cached data.

Learn more about caching from our dedicated article or visit our series for database optimization.

Pre-flight

  • We are logged in as root on an Ubuntu 16.04 VPS powered by Liquid Web!
  • Apache and PHP 7 installed and running.

Installation of Memcached

Step 1:
Following best practices, we will do a quick package update by using the following command:
apt-get update
Step 2:
Install the Memcached daemon using
apt-get install memcached -y
Step 3:
Install the Memcached module for PHP functionality:
apt-get install php-memcached -y

Verify installation of Memcached

Use the php -m flag to list compiled PHP modules, and grep the output specifically for memcached.

php -m | grep memcached
memcached

Optional Configurations

At some point, you may find that you need to change the default settings of Memcached. These include adjusting the port number, memory for your cache, and the listening IP address.
vim /etc/memcached.conf

Adjust these configurations by keeping the same flags (-m, -p, -u, -l) and changing the value after each flag, then save the file by typing :wq .
# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
# Note that the daemon will grow to this size, but does not start out holding this much
# memory
-m 64
 
# Default connection port is 11211
-p 11211
 
# Run the daemon as root. The start-memcached will default to running as root if no
# -u command is present in this config file
-u memcache
 
# Specify which IP address to listen on. The default is to listen on all IP addresses
# This parameter is one of the only security measures that memcached has, so make sure
# it's listening on a firewalled interface.
-l 127.0.0.1

 

Restart your Memcached service to recognize the changes to this file:
systemctl restart memcached
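To confirm the daemon picked up your changes, you can query it directly on its listening port. A quick check, assuming the netcat (nc) utility is installed:

printf "stats settings\r\nquit\r\n" | nc 127.0.0.1 11211

The output includes the active port, listening address, and memory cap, which should match the values in /etc/memcached.conf.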

MySQL Performance: MyISAM vs InnoDB

 

A major factor in database performance is the storage engine used by the database, and more specifically, its tables. Different storage engines provide better performance in one situation over another. For general use, there are two contenders to be considered: MyISAM, the default MySQL storage engine, and InnoDB, an alternative engine built into MySQL and intended for high-performance databases. Before we can understand the difference between the two storage engines, we need to understand the term “locking.”

To protect the integrity of the data stored within databases, MySQL employs locking. Locking, simply put, means restricting access to data while it is being changed. When a lock is applied, the data cannot be modified except by the query that initiated the lock. Locking is a necessary component to ensure the accuracy of the stored information. Each storage engine uses a different locking method. Depending on your data and query practices, one engine can outperform another. In this series, we will look at the two most common types of locking employed by our two storage engines.

 

Table locking:  The technique of locking an entire table when one or more cells within the table need to be updated or deleted. Table locking is the default method employed by the default storage engine, MyISAM.

Example: MyISAM Table Locking

Query                  Row     Column A   Column B   Column C
Query 1 UPDATE         Row 1   Writing    data       data
Query 2 SELECT (Wait)  Row 2   data       data       data
Query 3 UPDATE (Wait)  Row 3   data       data       data
Query 4 SELECT (Wait)  Row 4   data       data       data
Query 5 SELECT (Wait)  Row 5   data       data       data

The example illustrates how a single write operation locks the entire table, causing other queries to wait for the UPDATE query to finish.

 

Row-level locking: The act of locking an effective range of rows in a table while one or more cells within the range are modified or deleted. Row-level locking is the method used by the InnoDB storage engine and is intended for high-performance databases.

Example: InnoDB Row-Level Locking

Query            Row     Column A   Column B   Column C
Query 1 UPDATE   Row 1   Writing    data       data
Query 2 SELECT   Row 2   Reading    data       data
Query 3 UPDATE   Row 3   data       Writing    data
Query 4 SELECT   Row 4   Reading    Reading    Reading
Query 5 SELECT   Row 5   Reading    data       Reading

The example shows how row-level locking allows multiple queries to run against individual rows by locking only the rows being updated instead of the entire table.

 

By comparing the two storage engines, we get to the crux of the argument between using InnoDB over MyISAM. An application or website that has a frequently used table works exceptionally well using the InnoDB storage engine by resolving table-locking bottlenecks. However, the question of using one over the other is subjective, as neither of them is perfect in all situations. There are strengths and limitations to both storage engines. Intimate knowledge of the database structure and query practices is critical for selecting the best storage engine for your tables.

MyISAM will out-perform InnoDB on large tables that require vastly more read activity than write activity. MyISAM’s read performance outshines InnoDB because locking the entire table is quicker than figuring out which rows are locked in the table. The more information in the table, the more time it takes InnoDB to figure out which rows are not accessible. If your application relies on huge tables that do not change data frequently, then MyISAM will out-perform InnoDB. Conversely, InnoDB outperforms MyISAM when data within the table changes frequently, since a frequently changing table performs far more writes per second than reads. In these situations, InnoDB can keep up with large volumes of requests more easily than locking the entire table for each one.

 

Should I use InnoDB with WordPress, Magento or Joomla Sites?

The short answer here is yes, in most cases. Liquid Web’s Most Helpful Humans in Hosting Support Teams have encountered several table-locking bottlenecks when clients are using some of the standard web applications of today. Most users of popular third-party applications like WordPress, Magento, and Joomla have limited knowledge of the underlying database components or code involved to make an informed decision on storage engines. Most table-locking bottlenecks from these content management systems (CMS) are generally resolved by changing all the tables for the site over to InnoDB instead of the default MyISAM. If you are hosting many of these types of CMS on your server, it would be beneficial to change the default storage engine in MySQL so that any new tables start off with InnoDB.

 

Set your default storage engine to InnoDB by adding default_storage_engine=InnoDB to the [mysqld] section of the system config file located at:  /etc/my.cnf . Restarting the MySQL service is necessary for the server to detect changes to the file.

~ $ cat /etc/my.cnf
[mysqld]
log-error=/var/lib/mysql/mysql.err
innodb_file_per_table=1
default-storage-engine=innodb
innodb_buffer_pool_size=128M
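After restarting MySQL, you can confirm the new default took effect. A quick check:

mysql -e "SHOW VARIABLES LIKE 'default_storage_engine';"

The Value column should read InnoDB.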

 

Unfortunately, MySQL does not inherently have an option to convert tables, leaving each table to be changed individually. Liquid Web’s support team has put together an easy to follow maintenance plan for this process. The script, which you can run on the necessary server via shell access (SSH) will convert all tables between storage engines.

Note
Plan accordingly when performing batch operations of this nature just in case downtime occurs. Best practice is to backup all your MySQL Databases before implementing a change of this magnitude, doing so provides an easy recovery point to prevent any data loss.

Step 1: Prep

Plan to start at a time of day where downtime would have minimal consequences. This process itself does not require any downtime, however, downtime may be necessary to recover from unforeseen circumstances.  

 

Step 2: Backup All Databases To A File

The command below creates a single-file backup of all databases named all-databases-backup.sql, which can be deleted once the conversion has succeeded and there are no apparent problems.
mysqldump --all-databases > all-databases-backup.sql

 

Step 3: Record Existing Table Engines To A File

Run the following script to record the existing table engines to a file named table-engine-backup.sql. You can then “import” or “run” this file later to convert the tables back to their original engines if necessary.

mysql -Bse 'SELECT CONCAT("ALTER TABLE ",table_schema,".",table_name," ENGINE=",Engine,";") FROM information_schema.tables WHERE table_schema NOT IN("mysql","information_schema","performance_schema");' | tee table-engine-backup.sql

If you need to revert the table engines back for any reason, run:
mysql < table-engine-backup.sql

 

Step 4a: Convert MyISAM Tables To InnoDB

The below command will proceed even if a table fails and lets you know which tables failed to convert. The output is saved to the file named convert-to-innodb.log for later review.
mysql -Bse 'SELECT CONCAT("ALTER TABLE ",table_schema,".",table_name," ENGINE=InnoDB;") FROM information_schema.tables WHERE table_schema NOT IN ("mysql","information_schema","performance_schema") AND Engine = "MyISAM";' | while read -r i; do echo $i; mysql -e "$i"; done | tee convert-to-innodb.log

 

Step 4b: Convert All InnoDB Tables To MyISAM

This command will proceed even if a table fails and lets you know which tables failed to convert. The output is also saved to the file named convert-to-myisam.log for later review.

mysql -Bse 'SELECT CONCAT("ALTER TABLE ",table_schema,".",table_name," ENGINE=MyISAM;") FROM information_schema.tables WHERE table_schema NOT IN ("mysql","information_schema","performance_schema") AND Engine = "InnoDB";' | while read -r i; do echo $i; mysql -e "$i"; done | tee convert-to-myisam.log

 

The following commands illustrate how converting a single table is accomplished.

Note
Replace database_name with the proper database name and table_name with the correct table name. Make sure you have a valid backup of the table in question before proceeding. 

Backup A Single Table To A File
mysqldump database_name table_name > backup-table_name.sql
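Should you need that table back, import the dump into the same database. A sketch using the same placeholder names:

mysql database_name < backup-table_name.sql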

 

Convert A Single Table To InnoDB

mysql -Bse 'ALTER TABLE database_name.table_name ENGINE=InnoDB;'

 

Convert A Single Table To MyISAM:

mysql -Bse 'ALTER TABLE database_name.table_name ENGINE=MyISAM;'

 

Check out our other articles in this series, MySQL Performance: Identifying Long Queries, to pinpoint slow queries within your database.  Stay tuned for our next article where we will cover caching and optimization.

MySQL Performance: Identifying Long Queries

Every MySQL backed application can benefit from a finely tuned database server. The Liquid Web Heroic Support team has encountered numerous situations over the years where some minor adjustments have made a world of difference in website and application performance. In this series of articles, we have outlined some of the more common recommendations that have had the largest impact on performance.

Preflight Check

This article applies to most Linux based MySQL servers. This includes, but is not limited to, both Traditional Dedicated and Cloud VPS servers running a variety of common Linux distributions. The article can be used with the following Liquid Web system types:

  • Core-managed CentOS 6x/7x
  • Core-managed Ubuntu 14.04/16.04
  • Fully-managed CentOS 6/7 cPanel
  • Fully-managed CentOS 7 Plesk Onyx 17
  • Self-managed Linux servers
Note
Self-managed systems, which have opted out of direct support can take advantage of the techniques discussed here, however, the Liquid Web Heroic Support Team cannot offer direct aid on these server types.

This series of articles assumes familiarity with basic system administration concepts.

 

What is MySQL Optimization?

There is no single, clearly defined meaning for the term MySQL Optimization. It can mean something different depending on the person, administrator, group, or company. For the sake of this series of articles on MySQL Optimization, we will define MySQL Optimization as: the configuration of a MySQL or MariaDB server to avoid the commonly encountered bottlenecks discussed in this series of articles.

What is a bottleneck?

Very similar to the neck on a soda bottle, a bottleneck, as a technical term, is a point in an application or server configuration where a small amount of traffic or data can pass through without issue, while a larger volume of the same type of traffic or data is hindered or blocked and cannot operate successfully as-is. See the following example of a configuration bottleneck:

Visual difference between an optimized and non-optimized database.

In this example, the server is capable of handling 10 connections simultaneously. However, the configuration only accepts 5 connections. This issue would not manifest so long as there were 5 or fewer connections at one time. However, when traffic ramps up to 10 connections, half of them start to fail, even though the server has resources to spare, because the configuration does not allow their use. The example illustrates the bottleneck shape from which the term derives its name, versus an optimized configuration which corrects the bottleneck.

When Should I Optimize My MySQL database?

Ideally, database performance tuning should occur regularly and before productivity is affected. It is best practice to conduct weekly or monthly audits of database performance to prevent issues from adversely affecting applications. The most obvious symptoms of performance problems are:

  • Queries stack up and never complete in the MySQL process table.
  • Applications or websites using the database become sluggish.
  • Connection timeout errors, especially during peak hours.

While it is normal for there to be several concurrent queries running at one time on a busy system, it becomes a problem when these queries are taking too long to finish on a regular basis. Although the specific threshold varies per system and per application, average query times exceeding several seconds will manifest as a slowdown within attached websites and applications. These slowdowns can sometimes start out small and go unnoticed until a large traffic surge hits a particular bottleneck.

Identifying Performance Issues

Knowing how to examine the MySQL process table is vital for diagnosing the specific bottleneck being encountered. There are a number of ways to view the process table depending on your particular server and preference. For the sake of brevity, this series will focus on the most common methods used via Secure Shell (SSH) access:

 

Using The MySQL Process Table: Method 1

Use the ‘mysqladmin’ command line tool with the flag ‘processlist’ or ‘proc’ for short. (Adding the flag ‘statistics’ or ‘stat’ for short will show running statistics for queries since MySQL’s last restart.)

Command:

mysqladmin proc stat

Output:

+-------+------+-----------+-----------+---------+------+-------+--------------------+----------+
| Id    | User | Host      | db        | Command | Time | State | Info               | Progress |
+-------+------+-----------+-----------+---------+------+-------+--------------------+----------+
| 77255 | root | localhost | employees | Query   | 150  |       | call While_Loop2() | 0.000    |
| 77285 | root | localhost |           | Query   | 0    | init  | show processlist   | 0.000    |
+-------+------+-----------+-----------+---------+------+-------+--------------------+----------+
Uptime: 861755 Threads: 2 Questions: 20961045 Slow queries: 0 Opens: 2976 Flush tables: 1 Open tables: 1011 Queries per second avg: 24.323

Pro: Used on the shell interface, this makes piping output to other scripts and tools very easy.
Con: The process table’s Info column is always truncated, so it does not show the full text of longer queries.

Using The MySQL Process Table: Method 2

Run the ‘show processlist;’ query from within the MySQL interactive mode prompt. (Adding the ‘full’ modifier to the command disables truncation of the Info column. This is necessary when viewing long queries.)

 

Command:

show processlist;

Output:
MariaDB [(none)]> show full processlist;
+-------+------+-----------+-----------+---------+------+-------+-----------------------+----------+
| Id    | User | Host      | db        | Command | Time | State | Info                  | Progress |
+-------+------+-----------+-----------+---------+------+-------+-----------------------+----------+
| 77006 | root | localhost | employees | Query   | 151  | NULL  | call While_Loop2()    | 0.000    |
| 77021 | root | localhost | NULL      | Query   | 0    | init  | show full processlist | 0.000    |
+-------+------+-----------+-----------+---------+------+-------+-----------------------+----------+

Pro: Using the full modifier allows for seeing the full query on longer queries.
Con: MySQL Interactive mode cannot access scripts and tools available in the shell interface.

Using The Slow Query Log

Another valuable tool in MySQL is the included slow query logging feature. This feature is the preferred method for finding long-running queries on a regular basis. There are several directives available to adjust this feature. However, the most commonly needed settings are:

 

slow_query_log: enable/disable the slow query log
slow_query_log_file: name and path of the slow query log file
long_query_time: time in seconds/microseconds defining a slow query

These directives are set within the [mysqld] section of the MySQL configuration file located at /etc/my.cnf and will require a MySQL service restart before they take effect. See the example below for formatting:

Caution
There is a large disk space concern with the slow query log file, which needs to be attended to continually until the slow query log feature is disabled. Keep in mind, the lower your long_query_time directive the faster the slow query log fills up a disk partition.
[mysqld]
log-error=/var/lib/mysql/mysql.err
innodb_file_per_table=1
default-storage-engine=innodb
innodb_buffer_pool_size=128M
innodb_log_file_size=128M
max_connections=300
key_buffer_size = 8M
slow_query_log=1
slow_query_log_file=/var/lib/mysql/slowquery.log
long_query_time=5
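After restarting the MySQL service, you can verify that the settings are active. A quick check:

mysql -e "SHOW VARIABLES LIKE 'slow_query_log%';"
mysql -e "SHOW VARIABLES LIKE 'long_query_time';"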

Once the slow query log is enabled, you will need to follow up on it periodically to review unruly queries that need to be adjusted for better performance. To analyze the slow query log file, you can parse it directly to review its contents. The following example shows the statistics for a sample query that ran longer than the configured 5 seconds:

Caution
There is a performance hit taken by enabling the slow query log feature. This is due to the additional routines needed to analyze each query as well as the I/O needed to write the necessary queries to the log file. Because of this, it is considered best practice on production systems to disable the slow query log. The slow query log should only remain enabled for a specific duration when actively looking for troublesome queries that may be impacting the application or website.
# Time: 180717 0:23:28
# User@Host: root[root] @ localhost [] # Thread_id: 32 Schema: employees QC_hit: No
# Query_time: 627.163085 Lock_time: 0.000021 Rows_sent: 0 Rows_examined: 0
# Rows_affected: 0
use employees;
SET timestamp=1531801408;
call While_Loop2();

Optionally, you can use the mysqldumpslow command line tool, which parses the slow query log file and groups similar queries together, abstracting specific number and string values (the -a flag shown below preserves those actual values instead):
~ $ mysqldumpslow -a /var/lib/mysql/slowquery.log
Reading mysql slow query log from /var/lib/mysql/slowquery.log
Count: 2 Time=316.67s (633s) Lock=0.00s (0s) Rows_sent=0.5 (1), Rows_examined=0.0 (0), Rows_affected=0.0 (0), root[root]@localhost
call While_Loop2()
(For usage information visit MySQL documentation here: mysqldumpslow – Summarize Slow Query Log Files)

This concludes the first part of our Database Optimization series and gives us a solid basis to refer back to for benchmarking purposes. Though database issues can be complicated, our series will break these concepts down to provide means to optimize your database through database conversion, table conversion, and indexing.

 

Installing SQL Express

MSSQL Express 2017 on a Dedicated Server

Microsoft SQL Server is a powerful database that is commonly used with ASP.NET and other website programming languages. However, licensing for MSSQL can be expensive and is sometimes prohibitive for smaller businesses and applications. Fortunately, Microsoft provides a free version of MSSQL server called MSSQL Express. Installing MSSQL Express on your dedicated server is quick and easy, especially with the new features included in MSSQL Express 2017.


Restore Your WordPress Database with WP-CLI

This article is a follow up to a previous article on the process of backing up a WordPress database with wp-cli. You may want to read that article before this one.

In this article, you will learn how to restore a WordPress database backup using the WP-CLI tool (a minimal sketch of the core command appears after the checklist below). Having this skill at your disposal is crucial for situations where you need to restore a backup in a pinch. It can be particularly helpful if you are testing major changes and need to revert.

Pre-flight Check:

  • These instructions were created with a cPanel-based server in mind.
  • Command line access via SSH will be necessary to follow along.
  • The server must have WP-CLI installed, for installation directions see this tutorial.
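While the full article walks through the process in detail, the core of the restore is a single WP-CLI command run from the WordPress installation directory. A minimal sketch, with a hypothetical path and backup file name:

cd /home/username/public_html
wp db import database-backup.sql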


How to Disable MySQL Strict Mode

MySQL’s (and MariaDB’s) strict mode controls how invalid or missing values in data-changing queries are handled; this includes INSERT, UPDATE, and CREATE TABLE statements. With MySQL strict mode enabled, which is the default state, invalid or missing data may cause warnings or errors when attempting to process the query.

When strict mode is disabled, the same query would have its invalid or missing values adjusted and would produce a simple warning. This may seem like the preferred result; however, with strict mode disabled, certain actions may cause unexpected results. For instance, when a value being inserted exceeds the maximum character limit, it will be truncated to fit the limit.
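At its core, disabling strict mode means removing the STRICT_TRANS_TABLES (and, if set, STRICT_ALL_TABLES) flags from the server’s sql_mode. A hedged sketch of checking the current value and overriding it in the configuration file:

mysql -e "SELECT @@GLOBAL.sql_mode;"

# In /etc/my.cnf, under [mysqld], set a mode string without the strict flags, e.g.:
# sql_mode="NO_ENGINE_SUBSTITUTION"
# Then restart MySQL for the change to take effect.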

Modifying Fields in Database Tables with phpMyAdmin

 

  1. This tutorial assumes you’ve already logged in to phpMyAdmin.
  2. Now let’s learn how to modify fields in database tables.
  3. Select the details table.
  4. Let’s modify the address field. Click the Change icon.
  5. Make the changes you want, then click Save.
  6. That’s it! We’ve successfully changed the name of the address field to city. The equivalent SQL appears below.
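Behind the scenes, phpMyAdmin issues an ALTER TABLE statement for a change like this. A sketch of the equivalent SQL, assuming a hypothetical VARCHAR column definition:

ALTER TABLE details CHANGE address city VARCHAR(255);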