Create MySQL slave by cloning

 
 

In this tutorial, create MySQL slave by cloning, we’ll guide you through a detailed yet simple process for creating a MySQL slave, also known as a MySQL replica, using a fresh backup taken from an existing MySQL server, referred to here as the MySQL master server. Our master database is 11TB large and we’ll have to copy its data content from the US, where the master server is located, to another server in a UK data centre; not an easy task with this amount of data, or without using a costly commercial solution. In terms of software, both servers run MySQL 5.7 on top of the CentOS 7 operating system, plus some additional tools provided by the Percona and EPEL repositories. Both servers have an identical hardware configuration: 24 CPUs, 128 GB of RAM, one 12TB disk for data, one 500 GB disk for MySQL binary logs and a third 100GB disk for temporary files. Now that we have roughly described our environment, we can split this small project into the tasks detailed below in our tutorial.

Table of Contents

Create MySQL slave by cloning

Repositories installation
Tools installation
Create MySQL backup
Compress MySQL backup
Split .tar.gz file
Transfer backup files
Combine split files
Decompress MySQL backup
Apply MySQL logs
Get binary log position
Change my.cnf configuration
Start MySQL service
Start MySQL slave replication
Check MySQL replication status

Repositories installation

As mentioned in the tutorial overview, we first need to install the appropriate repositories, also commonly referred to as repos. These repos are the source of our tools, and we have to install two trusted and reputable ones, epel-release and percona-release, on both servers: on our future MySQL master server located in the US and on our future MySQL slave server located in the UK. Please check first whether a newer version of percona-release is available for download, as in this tutorial we will be using version 0.1-6.


$ yum install -y epel-release
$ yum install -y http://www.percona.com/downloads/percona-release/redhat/0.1-6/percona-release-0.1-6.noarch.rpm

Tools installation

With the epel and percona release repos installed on our servers, we can now move to the next step and install the tools as described below, once again on both servers.


$ yum install -y percona-xtrabackup screen pigz

In short, percona-xtrabackup will help us take a hot backup without stopping or locking our source server in the US, screen will let us keep a session permanently connected to the source server during compression or data transfer, and pigz will be used for backup compression.

Please make sure that the slave server has the same MySQL version installed as the master server. There is no need to start the service, but it does have to be installed: we won’t use it as such, since we’ll restore a full backup from our master server, but we need to make sure that all dependencies are installed and deployed for the right MySQL version.
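
A quick sanity check is to confirm that both servers report the same MySQL build, for example by asking the server binary and the package manager directly (the grep pattern below is just an example):


$ mysqld --version
$ rpm -qa | grep -iE 'mysql|percona'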

Create MySQL backup

We can now say that we have all the needed tools and we’re in a good position to start the MySQL backup. In this tutorial we will initiate a screen session first, just to make sure that our MySQL backup completes even if our local session is destroyed due to the long execution time or a link blip between our terminal and the remote server. Also, before proceeding with this step, please make sure that you have enough free disk space left on your target backup location (disk, bucket etc.). Please be aware that the backup process is initiated directly on the MySQL master server.


$ screen -S backup-session

With the screen session initiated, we can move forward and start the backup process inside it. The next command starts the backup with 18 threads out of the 24 available, but you can decrease or increase this value according to your server load or available CPUs.


$ xtrabackup --backup --compress --compress-threads=18 --target-dir=/mnt/mysql-backup/lastbackup
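
If xtrabackup cannot pick up your MySQL credentials from a defaults file such as ~/.my.cnf, you can pass them explicitly; the user and password below are placeholders, so adjust them to your own setup:


$ xtrabackup --backup --user=backup-user --password='A-Very-Complex-Password' --compress --compress-threads=18 --target-dir=/mnt/mysql-backup/lastbackup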

Compress MySQL backup

The backup process for this amount of data can take a few hours, but once it completes we can move to the next step: wrapping all backup files into a single .tar file before compressing it with the pigz utility.


$ cd /mnt/mysql-backup/
$ tar -cvf lastbackup.tar lastbackup/

Once the lastbackup.tar file is created, we can initiate the second compression layer of our backup, as shown in the example below, once again using 18 threads for the compression process. For this operation we can use the master server in the US or another server sitting in the same data centre as the master; this way we reduce the response time between the storage and the server performing the compression.


$ pigz --best -k -p 18 -v lastbackup.tar

Split .tar.gz file

Why do we need to split the backup? You can avoid this, but bear in mind that in our tutorial we have to transfer the files between the US and the UK, and the network latency can be a problem, or the connection may at times be limited due to the amount of data transferred. We’ll try to avoid these problems by sending small files over the internet, which also gives us the flexibility to stop and restart the transfer at any time in case of limitations or similar issues. In our example we’ll split lastbackup.tar.gz into multiple files, each 5GB in size.


$ split -b 5000M lastbackup.tar.gz lastbackup.tar.gz-

The split command would generate multiple 5GB files named lastbackup.tar.gz-aa, lastbackup.tar.gz-ab and so on until the split is fully completed.
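
Optionally, before moving on, you can record a checksum for every piece so the transfer can be verified later on the slave; the lastbackup.sha256 file name below is just a suggestion:


$ sha256sum lastbackup.tar.gz-* > lastbackup.sha256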

Once the split command completes and we get our shell prompt back, we can safely close our backup-session screen by simply executing the exit command. We can now say that we are done with all backup tasks on our MySQL master server in the US, all without locking it, stopping the mysql service or disrupting it in any other way.


$ exit

Transfer backup files

Now that the backup preparation is done, we can begin transferring the files by pulling them to our slave server in the UK. We’ll once again use screen, together with the scp utility, to run the transfer between our US backup location and the UK destination server. You are free to use any other utility such as rsync, lsyncd etc., but in this tutorial we’ll stick with scp.


$ scp username@masterhost:/path/to/backup/lastbackup.tar.gz-* /mnt/mysql-data/

The above command pulls all the files from the source server, but if you see that the internet connection is too slow you can cancel the transfer and rethink the pull method. We can go for multiple screen sessions, each one managing just a subset of the files, as in the example below:


$ screen -S transfer-a-and-b-files
$ scp username@masterhost:/path/to/backup/lastbackup.tar.gz-[a-b]* /mnt/mysql-data/

And once again, let’s start another session to handle the transfer of the next sets of files, lastbackup.tar.gz-c* and lastbackup.tar.gz-d*:


$ screen -S transfer-c-and-d-files
$ scp username@masterhost:/path/to/backup/lastbackup.tar.gz-[c-d]* /mnt/mysql-data/

If this approach helps in terms of transfer speed, just do the math and open as many sessions as you consider right for your specific scenario.
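
If you generated checksums on the master as suggested earlier, now is a good moment to pull that file too and verify every transferred piece before combining them; a failed line means the corresponding piece has to be transferred again:


$ scp username@masterhost:/path/to/backup/lastbackup.sha256 /mnt/mysql-data/
$ cd /mnt/mysql-data && sha256sum -c lastbackup.sha256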

Combine split files

Once all the files have been transferred to our new MySQL slave server in the UK, we have to combine them back into a single large .tar.gz file so that we can later start the first decompression layer. Let’s combine the files using the cat command.


$ cat lastbackup.tar.gz-* > lastbackup.tar.gz

Decompress MySQL backup

As soon as the cat command finishes joining all our files, we can start the first decompression layer for the lastbackup.tar.gz file using the same pigz utility we used to compress the backup, but this time we’re performing the decompression on the slave server in the UK. Please note that at this point the decompression speed depends mostly on our disk speed, using only a small amount of CPU and RAM.


$ pigz -d lastbackup.tar.gz

At the end of the pigz decompression we get our lastbackup.tar file, which needs to be unpacked as well; let’s unpack it by executing the next command in our terminal window:


$ tar -xvf lastbackup.tar

At this point we should be able to see the lastbackup folder and its content as compressed by the xtrabackup utility. All files end in the .qp extension, which means we need to initiate the second and last decompression layer:


$ for i in $(find . -name "*.qp"); do qpress -vd "$i" "$(dirname "$i")" && rm -f "$i"; done

The above command searches for all .qp files, decompresses each file individually and removes the compressed copy from disk once its decompression is complete. This way we make sure that our disk won’t fill up with unused files or compressed duplicates; we keep only the decompressed files, because in the end that’s what we need.
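
As an alternative to the loop above, recent percona-xtrabackup 2.4 releases can drive qpress for you through the --decompress option (newer builds also accept --remove-original to drop the .qp files as they go); a minimal sketch, assuming the backup sits under /mnt/mysql-data/lastbackup as in our example:


$ xtrabackup --decompress --target-dir=/mnt/mysql-data/lastbackup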

Apply MySQL logs

So far we have managed to take the backup from our US source server, compress it, transfer it and decompress it on our destination server in the UK. Now it’s time for the next step, where we have to apply the log changes to our hot backup. For this specific task we will use the innobackupex utility, provided by the percona-release repo as part of the percona-xtrabackup package. In the next command we instruct innobackupex to use 64GiB of RAM out of the 128GiB available on our MySQL slave server; you can adjust this value according to your system resources, bearing in mind that it should not exceed about 80% of your total available RAM. This argument acts much like the innodb-buffer-pool-size setting from my.cnf.


$ innobackupex --apply-log --use-memory=64G .

Your terminal output should look similar to this one:


xtrabackup:   innodb_data_home_dir = .
xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
...
...
...
180709 04:13:18 completed OK!

Get binary log position

Once innobackupex finishes applying the logs, we need to find out the position from which to start our slave server. We’ll extract this information by simply running the cat command over the xtrabackup_binlog_pos_innodb file located under our decompressed backup folder.


$ cat xtrabackup_binlog_pos_innodb
bin.000759      666807930

We need to make a note of the binary log file name, bin.000759 in our case, and also of the position, 666807930; these two values will be used later when we start the replication.
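
For reference, the backup folder also contains an xtrabackup_binlog_info file with the binary log coordinates recorded for the whole backup; it’s worth printing it as well and checking that the two files agree before relying on the position:


$ cat xtrabackup_binlog_info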

Change my.cnf configuration

On the master node, located in the US in our tutorial, we have to add a new configuration setting to the my.cnf file specifying the server’s ID. Every server taking part in replication needs a unique server-id; in this tutorial the master gets ID 1.


[mysqld]
...
server-id = 1

This change requires a service restart in order to take effect, so let’s restart the mysql service on the master node now.


$ systemctl restart mysql

On the slave node we have to change the default my.cnf file, but we won’t remove it; we’ll simply keep it under a different name and create a new file as shown below:


$ mv /etc/my.cnf /etc/my.cnf_default
$ touch /etc/my.cnf

Now we have to copy the entire my.cnf content from our master server to our slave. Remember that it is very important to keep the same configuration on both nodes. Let’s get the configuration from the master node:


$ cat /etc/my.cnf

Copy the output of the above command into the newly created file on our slave server, and bear in mind that we have to change the server-id value to 2:


[mysqld]
...
server-id = 2
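
If you prefer not to copy and paste by hand, a quick alternative is to pull the file straight from the master with scp, reusing the placeholder hostname from the transfer step, and then edit only the server-id line:


$ scp username@masterhost:/etc/my.cnf /etc/my.cnf
$ vi /etc/my.cnf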

Start MySQL service

Despite the chapter title, the next step is to stop the default mysql service on our slave node, if it is running, simply because we’ll be using the data that we’ve copied from our master node along with our backup. Let’s stop the default service on the slave first:


$ systemctl stop mysql

Next we need to set aside the default mysql data directory. We don’t have to remove the installation, as we’ll need all its dependencies in place, so instead of deleting it from our system we’ll keep it around without actually using it:


$ mv /var/lib/mysql /var/lib/mysql_default

Assuming that our decompressed backup is located under /mnt/mysql-data/lastbackup, we have to rename this lastbackup folder so that we have a proper folder structure in place, and also set the right ownership for all of its content, as shown in the commands below:


$ mv /mnt/mysql-data/lastbackup /mnt/mysql-data/mysql
$ chown -R mysql:mysql /mnt/mysql-data/mysql

Next we’ll put in place a symlink for our mysql folder:


$ ln -s /mnt/mysql-data/mysql /var/lib/mysql
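
One more thing to keep in mind on CentOS 7: if SELinux is enforcing, mysql may be denied access to data living under /mnt/mysql-data even through the symlink. A minimal sketch to label the new location, assuming the policycoreutils-python package that provides semanage is installed:


$ semanage fcontext -a -t mysqld_db_t "/mnt/mysql-data/mysql(/.*)?"
$ restorecon -Rv /mnt/mysql-data/mysql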

Now that we have the symlink in place and the right ownership, we can start the mysql service on our slave node:


$ systemctl start mysql
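
Before going any further it’s worth confirming that the service really came up and is serving the restored data:


$ systemctl status mysql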

At this point we should have a fully functional mysql server, a full copy of the master node running on a different server, but not yet an actual slave. No worries, we’ll take care of this in the next chapter.

Start MySQL slave replication

Before performing any changes on the slave node, we need to make sure that no stale replication settings are kept inside the system, so we’ll reset them, if any are stored:


mysql> RESET MASTER; RESET SLAVE;

With everything now in place, it’s time to start our slave replication using the binary log file and position that we obtained earlier, in the chapter named get binary log position:


mysql> CHANGE MASTER TO MASTER_HOST='10.1.1.1', MASTER_USER='replication-user', MASTER_PASSWORD='A-Very-Complex-Password', MASTER_LOG_FILE='bin.000759', MASTER_LOG_POS=666807930;

The above mysql command is fairly self-explanatory, but we’ll briefly explain what it really means anyway.

The first clause, CHANGE MASTER TO, executed on the slave node, instructs the slave to set a master to pull data from, continuing with MASTER_HOST='10.1.1.1', which in our example is the US master’s IP address. This can be a public address, or a private one if a VPN tunnel is in place between the two locations, US and UK. In terms of security, a VPN tunnel is the wise choice and is always recommended over using a public IP address, no matter what other security measures are in place.

Next, MASTER_USER='replication-user' refers to the user that our slave node must use to connect to the master node, a user whose password is specified by MASTER_PASSWORD='A-Very-Complex-Password' in our query. The user only needs to be added on the master node; there’s no need to add it on the slave, as it will be created there automatically through replication, since by default all master data is replicated onto the slave node. We could replicate only specific databases, but in this tutorial we’ll stick with the default replication, which implies a continuous 1:1 master-slave copy. Don’t think of data replication as an actual data copy: in short, replication relies on a set of SQL statements recorded in the binary log files, which are executed on the slave in exactly the same order as they were executed on the master. Think of replication as mimicking; the slave node will always mimic its master. As we said at the beginning of the tutorial we’ll keep it simple, so we’ll stop here rather than write a book.
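
If the replication user doesn’t exist on the master yet, a minimal sketch for creating it looks like the one below; run it on the master and, where possible, replace the '%' wildcard with the slave’s address to restrict where the account can connect from:


mysql> CREATE USER 'replication-user'@'%' IDENTIFIED BY 'A-Very-Complex-Password';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication-user'@'%';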

The last two clauses, MASTER_LOG_FILE='bin.000759' and MASTER_LOG_POS=666807930, instruct the slave node to read the bin.000759 file from the master node and to start the replication from position 666807930. These two values, file and position, are bound together, so please make sure that you start the replication from the right file and the right position; otherwise you will end up with data inconsistency. In short, you would not have a 1:1 copy and you could not trust your slave node’s data at all, except maybe for read-only tests.

With the above query we have only told our slave node where to start from; now it is time to actually start the master-slave replication with the next query:


mysql> START SLAVE;

If everything went fine and no errors were thrown in our mysql logs, we can say that we’ve managed to set up and start the replication successfully.

Check MySQL replication status

At this stage all that’s left is to check how the replication works; let’s check its status:


mysql> SHOW SLAVE STATUS \G

A successful output will look similar to the one below:


*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.1.1.1
                  Master_User: replication-user
...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
...
        Seconds_Behind_Master: 335985
...

The above truncated output tells us that our slave node is set up to connect to Master_Host: 10.1.1.1 using Master_User: replication-user.

Slave_IO_Running: Yes tells us that the communication between master and slave works as expected, and Slave_SQL_Running: Yes that all SQL statements received from the master are being executed.

The value Seconds_Behind_Master: 335985 tells us how much lag there actually is between the master and the slave. This doesn’t mean the slave will take 335985 seconds to catch up before a full sync; it represents how much time’s worth of data still needs to be recovered, and catching up can take an hour, ten or even more depending on the number of transactions. We need to check the slave status periodically until this value goes down to 0; once Seconds_Behind_Master reaches 0 seconds we can safely say that we have full replication in place and, most importantly, in sync.
