Dump MySQL Table and Import
In this tutorial we will learn how to export a single database table using the mysqlpump utility instead of dumping an entire database, and we will also cover several ways of importing that table. Exporting a single table comes in handy when you need to update or replace a table on another server that is not part of a cluster, or when you simply need it in your development environment for some quick changes or tests. In this example we will use MySQL 5.7 and the MySQL command-line client on the CentOS 7 operating system; the steps can easily be reproduced on other Unix platforms and on newer MySQL versions.
Prepare dump folder
As a first step we need to make sure that the source server has enough free space on the partition where we want to export our database table. We'll need a dedicated folder for this, so let's create a new folder and change into it as shown in the example below.
$ mkdir /path/to/table-backup
$ cd /path/to/table-backup/
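Before dumping anything, it is worth confirming how much space is actually available on that partition. A minimal sketch using the standard df utility, run from inside the backup folder:

```shell
# Show free space on the filesystem holding the current directory.
df -h .

# Extract just the available space (in 1K blocks), handy in scripts;
# -P forces POSIX output so the value is always in the fourth column.
avail_kb=$(df -P . | awk 'NR==2 {print $4}')
echo "Available: ${avail_kb} KB"
```

As a rule of thumb, a plain SQL dump needs roughly as much space as the table's data size, so compare the value above against the table size before starting.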
Access MySQL console
Now that we have a folder for our table export, we can connect to the MySQL service via the MySQL client, which is a command-line client. Below are three ways of accessing MySQL. The first option is to access MySQL without a password, which works if you have a setup similar to the one explained in our tutorial called Access MySQL without password.
The second option is to access MySQL locally using a username and password, which is done by passing the
-u and -p arguments as shown below.
$ mysql -u my_username -p
Finally, the third option is to access MySQL remotely; this time we add an extra argument for the remote hostname,
-h, along with the username and password from the previous example.
$ mysql -h remote_host -u my_username -p
Once we are connected to the MySQL console we can list our databases; nothing to worry about here, we just want to make sure that we are using the right database.
mysql> SHOW DATABASES;
A successful run of the
SHOW DATABASES query should produce output similar to this:
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| my_database_name   |
+--------------------+
5 rows in set (0.01 sec)
In the listing above we can clearly see that the first four databases are MySQL's default databases, while the fifth one, called
my_database_name in this tutorial, is our test database.
List database tables
We have managed to connect to the MySQL console and list the databases; now it is time to list all tables from our
my_database_name test database using two MySQL queries.
mysql> USE my_database_name;
mysql> SHOW TABLES;
First we instructed MySQL to use the
my_database_name database, then to show its tables; the output of these two queries will look similar to this:
+----------------------------+
| Tables_in_my_database_name |
+----------------------------+
| my_1st_table_name          |
| my_2nd_table_name          |
| my_3rd_table_name          |
| my_4th_table_name          |
| my_5th_table_name          |
+----------------------------+
5 rows in set (0.01 sec)
Knowing now that we have 5 tables within our database, we can safely quit and start the next step, where we will dump a single MySQL table using two methods: plain SQL and compressed.
Dump database table
Before MySQL 5.7.8 the standard utility for safely exporting databases or tables was
mysqldump, but starting with 5.7.8 MySQL introduced
mysqlpump, a more advanced tool that supports multiple parallel threads to boost dump speed. It is not quite a game changer yet in terms of backups, and it is not generally recommended for production use.
The first method we will use to export a single MySQL table from our database is the plain one, named this way because the file is exported in plain SQL format; no extra options or compression tools are involved, so you can easily alter its contents with a text editor if needed. Just be very careful when manually editing a .sql file, as mistakes can lead to data loss.
This method is best suited when you have no disk space constraints and the table does not need to be copied between geographically distant data centres (e.g. US to DE) over an Internet connection; transferring an uncompressed file that way can lead to higher ingress/egress costs than the compressed option described in the next step, and on top of that it is time consuming.
So let's export the third table from our database, called
my_3rd_table_name, to a plain SQL file called
my_3rd_table_name.sql, as shown in the example below:
$ mysqlpump my_database_name my_3rd_table_name > my_3rd_table_name.sql
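It can be worth peeking at the first lines of the resulting file to confirm the dump contains the expected CREATE TABLE statement before shipping it anywhere. A quick sketch, assuming the dump from the previous command sits in the current directory:

```shell
# Show the beginning of the dump; mysqlpump writes its SQL header
# and the table's CREATE TABLE statement near the top of the file.
if [ -f my_3rd_table_name.sql ]; then
    head -n 20 my_3rd_table_name.sql
fi
```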
The second option for exporting a MySQL table is recommended when you have a large amount of data, or when you need to move the data over an Internet link. It is not much different from the previous method, but this time we pipe the output of
mysqlpump into a
.gz file; the output is compressed on the fly by the
gzip utility, which ships with the CentOS 7 operating system by default.
$ mysqlpump my_database_name my_3rd_table_name | gzip > my_3rd_table_name.sql.gz
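Before copying the archive anywhere, it is a good idea to verify that it is not corrupted. A minimal sketch using gzip's built-in integrity test, assuming the archive from the previous command exists:

```shell
# gzip -t tests archive integrity without extracting anything;
# it prints nothing and exits 0 on success.
if [ -f my_3rd_table_name.sql.gz ]; then
    gzip -t my_3rd_table_name.sql.gz && echo "archive OK"
    # List compressed vs. uncompressed size and the ratio:
    gzip -l my_3rd_table_name.sql.gz
fi
```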
Copy MySQL dump file
In this step we will copy our database table file to a remote host, once again using two methods; it is up to you which one fits your case better. Our first transfer option is the
scp utility, which performs a copy over SSH. This is the basic way of moving files between two hosts, mainly recommended on a local network and when the file is not that big, let's say under 100GB. Let's see how
scp works for our file transfer:
$ scp my_3rd_table_name.sql my_ssh_user@remote_server:/path/to-save/table-backup-file/
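Whichever transfer method you pick, recording a checksum before the copy lets you verify the file on the remote side with the same tool. A small sketch using sha256sum, assuming the plain dump exists in the current directory:

```shell
# Write the checksum next to the dump; copy both files, then run
# "sha256sum -c my_3rd_table_name.sql.sha256" on the remote host
# to confirm the transfer was not corrupted.
if [ -f my_3rd_table_name.sql ]; then
    sha256sum my_3rd_table_name.sql > my_3rd_table_name.sql.sha256
    cat my_3rd_table_name.sql.sha256
fi
```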
The second option is usually preferred because rsync can resume an interrupted transfer when run with the --partial flag (included in -P), in case the connection between the two hosts is lost for whatever reason. In the example below we perform the file transfer using the
rsync utility, this time involving a different user than the one we normally log into the server with.
$ rsync -avzP -e "ssh -i /home/2nd_user/.ssh/id_rsa" 2nd_user@remote_server:/table-backup/ .
You can see that we used
2nd_user for our file transfer instead of the account we are logged in with; this is quite handy if
2nd_user is a general-purpose user, for example one dedicated to backups.
Either way, compressed or plain, the file can easily be copied to a remote host; which method you use is entirely up to you, as the goal remains the same.
Import MySQL table
We are getting quite close to finishing our short Dump MySQL Table and Import tutorial; in this last step we perform the import of our database table on the remote MySQL host.
Assuming that we are already logged into our remote host, we access the MySQL console as we did on the source server at the beginning of the tutorial.
If the remote host has the same database and the same table structure as our source server, it is time to drop the
my_3rd_table_name table there. Depending on the options used for the export, this step may already be handled by the dump file itself: mysqldump adds DROP TABLE statements by default, while mysqlpump only does so when invoked with --add-drop-table. For the purpose of this tutorial we will run the SQL queries ourselves, just to get more familiar with the SQL language; it's easy, and no worries, the import won't fail unless the database is missing.
mysql> USE my_database_name;
mysql> DROP TABLE my_3rd_table_name;
Once the table has been dropped on the remote host, we can carry on with the import, where again we use one of two methods depending on the export format chosen earlier.
Below we use the
SOURCE command to perform the import, which is usually recommended when the amount of data is quite big; it has proven to be one of the fastest ways of importing a plain SQL file, whether for an entire database or for a single table as in our tutorial.
mysql> SOURCE my_3rd_table_name.sql
The second option, shown below, imports the table directly into the
my_database_name database. The command is executed from the terminal and starts by decompressing the contents of the
my_3rd_table_name.sql.gz file, piping the output straight into the MySQL service.
$ gunzip < my_3rd_table_name.sql.gz | mysql my_database_name
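Once the import finishes, a quick sanity check from the MySQL console confirms that the table arrived and contains data; compare the row count against the same query run on the source server. A sketch using the database and table names from this tutorial:

```sql
mysql> USE my_database_name;
mysql> SHOW TABLES LIKE 'my_3rd_table_name';
mysql> SELECT COUNT(*) FROM my_3rd_table_name;
```

If the counts match on both hosts, the dump and import completed cleanly.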
That's all, we are finally done; we have managed to dump a MySQL table and import it on a remote host.
If you are dealing with large imports or exports that may take a couple of hours, the best way to run them is inside a screen session. You can perform a quick search on Tufora.com for "screen" and you will find a few tutorials that use this utility. Introducing screen into your tasks means you won't lose any work during a connection blip or when changing location, as you can easily resume your session at any time.