Create a Backup Server on Linux

 
 

How to create a backup server on Linux is another quick and easy tutorial from Tufora.com that will guide you, step by step, through building your own Linux backup server in just a few minutes, based on the CentOS 7 operating system and the System Storage Manager (ssm) utility. Even though we are living in the age of cloud technology, with plenty of nice tools available out there such as storage buckets, blob stores and many other trendy names, some of us may still need to keep data safe somewhere on premises. This could be due to data sensitivity, confidentiality, internal policies, local law or any other reason; it doesn't really matter, because in this tutorial we'll focus on building a backup server on Linux.

Table of Contents

Environment Description
Tools Installation
Initial System Check
Creating Logical Volume (LVM) using SSM
Resizing Logical Volume (LVM) using SSM
Mounting Logical Volume (LVM)

Environment Description

In this tutorial we'll build a backup server using CentOS 7 and System Storage Manager as the software components; for the hardware we'll use 12 x 2TB disks. Overall, our backup server will have 24TB of storage for backups, and it has to be extremely flexible and manageable, meaning we should be able to easily increase or decrease the storage whenever required.

Tools Installation

In the very first step of this tutorial we'll install the epel-release repository; this reputable repo, provided by the Fedora Project, contains the ssm package needed later for our disk management operations. Let's start by installing the epel-release repo and the ssm package as shown in the example below.


$ yum install -y epel-release
$ yum install -y system-storage-manager

Once these two components are installed we can proceed to the next step of our tutorial, the initial system check.
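Before moving on, it's worth a quick sanity check that the installation actually succeeded. A minimal sketch, assuming yum finished without errors:

```shell
# Confirm the package is installed and the ssm binary is on the PATH
rpm -q system-storage-manager
command -v ssm
```

If either command reports nothing found, repeat the yum installation before continuing.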

Initial System Check

Assuming we have already added the 12 x 2TB disks to our backup server, we can start the initial system check to verify how many active disks and partitions we have. We'll perform this quick check by executing the df command.


$ df -h

The output of the df command will look similar to the one listed below, where we can clearly see that our system doesn't have any 24TB disk or partition defined.


Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   26G  1.5G   25G   6% /
devtmpfs                 7.8G     0  7.8G   0% /dev
tmpfs                    7.8G     0  7.8G   0% /dev/shm
tmpfs                    7.8G  8.9M  7.8G   1% /run
tmpfs                    7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda1               1014M  226M  789M  23% /boot
tmpfs                    1.6G     0  1.6G   0% /run/user/1000

To see all attached and unpartitioned disks on our system we'll use the ssm utility that we installed in the previous step, Tools Installation; ssm is a very handy tool for disk management. Let's get a full list of disks and partitions on our backup server by running the next command in our terminal window:


$ ssm list

The output of ssm list command will look similar to this:


-----------------------------------------------------------
Device        Free      Used     Total  Pool    Mount point
-----------------------------------------------------------
/dev/sda                      30.00 GB          PARTITIONED
/dev/sda1                      1.00 GB          /boot
/dev/sda2  0.00 KB  29.00 GB  29.00 GB  centos
/dev/sdb                       2.00 TB
/dev/sdc                       2.00 TB
/dev/sdd                       2.00 TB
/dev/sde                       2.00 TB
/dev/sdf                       2.00 TB
/dev/sdg                       2.00 TB
/dev/sdh                       2.00 TB
/dev/sdi                       2.00 TB
/dev/sdj                       2.00 TB
/dev/sdk                       2.00 TB
/dev/sdl                       2.00 TB
/dev/sdm                       2.00 TB
-----------------------------------------------------------
--------------------------------------------------
Pool    Type  Devices     Free      Used     Total
--------------------------------------------------
centos  lvm   1        0.00 KB  29.00 GB  29.00 GB
--------------------------------------------------
--------------------------------------------------------------------------------------
Volume            Pool    Volume size  FS      FS size       Free  Type    Mount point
--------------------------------------------------------------------------------------
/dev/centos/root  centos     26.00 GB  xfs    25.98 GB   24.59 GB  linear  /
/dev/centos/swap  centos      3.00 GB                              linear
/dev/sda1                     1.00 GB  xfs  1014.00 MB  820.67 MB  part    /boot
--------------------------------------------------------------------------------------

From the above output we can now see everything that df -h listed, plus all 12 of our 2TB disks, which are not in use at all: there are no volumes or partitions defined on any of them. No worries, we'll take care of that in the next step, where we'll define a pool of disks and a logical volume using LVM.
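If you prefer the core utilities, lsblk from util-linux (shipped with CentOS 7) gives a similar overview:

```shell
# -d lists whole disks only, hiding partitions, so the 12 new 2TB
# drives should show up as bare "disk" entries
lsblk -d -o NAME,SIZE,TYPE
```
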

Creating Logical Volume (LVM) using SSM

Knowing that we have 12 unallocated disks on our backup server, it's time to create a 23TB logical volume. Wait, why 23TB when we have allocated 24TB? Simply because we also want to learn how to resize a pool and a logical volume when needed. We touched on this earlier when we pointed out terms like storage flexibility and manageability.

Let's create our 23TB logical volume by running the following ssm create command:


$ ssm create -s 23T -n pool001 --fstype xfs -p bkp /dev/sd[b-m] /mnt/mysql-backups

In short, we have instructed the ssm utility to create a 23TB volume named pool001, formatted with the xfs file system, in a pool called bkp. To this bkp pool we have added all 12 disks, from /dev/sdb to /dev/sdm, and with the last argument we have also instructed ssm to mount the newly created volume pool001 at /mnt/mysql-backups.
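Roughly, that single ssm create does behind the scenes what the classic LVM tool chain does in five steps. A sketch, assuming the same device names, pool name and mount point as above:

```shell
pvcreate /dev/sd[b-m]                      # register each disk as a physical volume
vgcreate bkp /dev/sd[b-m]                  # group them into the "bkp" volume group
lvcreate -L 23T -n pool001 bkp             # carve out a 23TB logical volume
mkfs.xfs /dev/bkp/pool001                  # format it with xfs
mkdir -p /mnt/mysql-backups                # create the mount point
mount /dev/bkp/pool001 /mnt/mysql-backups  # mount the new volume
```

This is also why ssm is so convenient here: one command instead of six.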

By executing the above ssm create command, our console output will look pretty much like the one below, which is quite self-explanatory:


  Physical volume "/dev/sdb" successfully created.
  Physical volume "/dev/sdc" successfully created.
  Physical volume "/dev/sdd" successfully created.
  Physical volume "/dev/sde" successfully created.
  Physical volume "/dev/sdf" successfully created.
  Physical volume "/dev/sdg" successfully created.
  Physical volume "/dev/sdh" successfully created.
  Physical volume "/dev/sdi" successfully created.
  Physical volume "/dev/sdj" successfully created.
  Physical volume "/dev/sdk" successfully created.
  Physical volume "/dev/sdl" successfully created.
  Physical volume "/dev/sdm" successfully created.
  Volume group "bkp" successfully created
  Logical volume "pool001" created.
meta-data=/dev/bkp/pool001       isize=512    agcount=23, agsize=268435455 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=6174015465, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Let's run the ssm list command one more time to see what has changed on our backup server in terms of storage:


$ ssm list

In this console output we can now see that all 12 disks have been assigned to our bkp pool, and that our /dev/bkp/pool001 logical volume size is 23TB, exactly as we wished.


--------------------------------------------------------------
Device           Free      Used     Total  Pool    Mount point
--------------------------------------------------------------
/dev/sda                         30.00 GB          PARTITIONED
/dev/sda1                         1.00 GB          /boot
/dev/sda2     0.00 KB  29.00 GB  29.00 GB  centos
/dev/sdb      0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdc      0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdd      0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sde      0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdf      0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdg      0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdh      0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdi      0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdj      0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdk      0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdl      0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdm   1023.95 GB   1.00 TB   2.00 TB  bkp
--------------------------------------------------------------
-----------------------------------------------------
Pool    Type  Devices        Free      Used     Total
-----------------------------------------------------
bkp     lvm   12       1023.95 GB  23.00 TB  24.00 TB
centos  lvm   1           0.00 KB  29.00 GB  29.00 GB
-----------------------------------------------------
---------------------------------------------------------------------------------------------
Volume            Pool    Volume size  FS      FS size       Free  Type    Mount point
---------------------------------------------------------------------------------------------
/dev/centos/root  centos     26.00 GB  xfs    25.98 GB   24.56 GB  linear  /
/dev/centos/swap  centos      3.00 GB                              linear
/dev/bkp/pool001  bkp        23.00 TB  xfs    23.00 TB   23.00 TB  linear  /mnt/mysql-backups
/dev/sda1                     1.00 GB  xfs  1014.00 MB  820.67 MB  part    /boot
---------------------------------------------------------------------------------------------

Resizing Logical Volume (LVM) using SSM

In this step we will resize our 23TB logical volume by adding the remaining 1023.95G to it; this is the available, unused disk space left over from the ssm create command executed in the previous step.

Let's proceed with the volume resize by running the next ssm command:


$ ssm resize -s +1023.95G /dev/bkp/pool001

The output of the command will look something like this:


  Rounding size to boundary between physical extents: <24.00 TiB.
  Size of logical volume bkp/pool001 changed from 23.00 TiB (6029312 extents) to <24.00 TiB (6291444 extents).
  Logical volume bkp/pool001 successfully resized.
meta-data=/dev/mapper/bkp-pool001 isize=512    agcount=23, agsize=268435455 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=6174015465, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 6174015465 to 6442438656

We can see that our bkp/pool001 volume has been successfully resized to 24TB, as expected.
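For reference, the plain-LVM equivalent of the ssm resize above is a two-step operation (a sketch, assuming the same volume and mount point): grow the logical volume into all remaining free space in the pool, then grow the xfs filesystem online to match.

```shell
lvextend -l +100%FREE /dev/bkp/pool001   # grow the LV into all remaining pool space
xfs_growfs /mnt/mysql-backups            # grow the xfs filesystem to match (online)
```

Note that xfs filesystems can only be grown, not shrunk, so plan sizes accordingly.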

Let's confirm all these changes on our backup server by executing ssm list once again:


$ ssm list

The output below confirms that all the space has been allocated and our logical volume is now indeed 24TB; that's all usable space we can fill with our future backups.


-----------------------------------------------------------
Device        Free      Used     Total  Pool    Mount point
-----------------------------------------------------------
/dev/sda                      30.00 GB          PARTITIONED
/dev/sda1                      1.00 GB          /boot
/dev/sda2  0.00 KB  29.00 GB  29.00 GB  centos
/dev/sdb   0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdc   0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdd   0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sde   0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdf   0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdg   0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdh   0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdi   0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdj   0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdk   0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdl   0.00 KB   2.00 TB   2.00 TB  bkp
/dev/sdm   0.00 KB   2.00 TB   2.00 TB  bkp
-----------------------------------------------------------
--------------------------------------------------
Pool    Type  Devices     Free      Used     Total
--------------------------------------------------
bkp     lvm   12       0.00 KB  24.00 TB  24.00 TB
centos  lvm   1        0.00 KB  29.00 GB  29.00 GB
--------------------------------------------------
---------------------------------------------------------------------------------------------
Volume            Pool    Volume size  FS      FS size       Free  Type    Mount point
---------------------------------------------------------------------------------------------
/dev/centos/root  centos     26.00 GB  xfs    25.98 GB   24.56 GB  linear  /
/dev/centos/swap  centos      3.00 GB                              linear
/dev/bkp/pool001  bkp        24.00 TB  xfs    24.00 TB   24.00 TB  linear  /mnt/mysql-backups
/dev/sda1                     1.00 GB  xfs  1014.00 MB  820.67 MB  part    /boot
---------------------------------------------------------------------------------------------

All good so far: ssm shows us the right amount of space, but what about the df output? Is our volume mounted? Let's check that as well, so we can make sure we can actually use it:


$ df -h

A successful df output will look like the one below, confirming that we can now use the volume:


Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   26G  1.5G   25G   6% /
devtmpfs                 7.8G     0  7.8G   0% /dev
tmpfs                    7.8G     0  7.8G   0% /dev/shm
tmpfs                    7.8G  8.9M  7.8G   1% /run
tmpfs                    7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda1               1014M  226M  789M  23% /boot
/dev/mapper/bkp-pool001   24T   33M   24T   1% /mnt/mysql-backups
tmpfs                    1.6G     0  1.6G   0% /run/user/1000

We are now one step closer to finishing our tutorial on how to create a backup server on Linux.

Mounting Logical Volume (LVM)

In this last step we have to make sure that our backup volume is automatically mounted every time the backup server is restarted, for whatever reason. First we need to find out the UUID (Universally Unique Identifier) of our pool001 volume by running the blkid command as shown below:


$ blkid /dev/bkp/pool001

The output of the blkid command will provide us with a set of details about our /dev/bkp/pool001 volume:


/dev/mapper/bkp-pool001: UUID="8dae10d0-8616-40e3-916d-293bf14de34a" TYPE="xfs"

All we need from the above output is the UUID of the pool001 volume, which is UUID="8dae10d0-8616-40e3-916d-293bf14de34a". We will use this UUID to amend the fstab file, the file responsible for mounting volumes: each correct entry line is read by the operating system at boot time and mounted automatically.
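If you prefer not to edit the file by hand, the same entry can be appended from the shell. A sketch, assuming the same volume and mount point; blkid -s UUID -o value prints just the bare UUID string:

```shell
# Look up the UUID and append a matching fstab entry in one go
UUID_VAL=$(blkid -s UUID -o value /dev/bkp/pool001)
echo "UUID=${UUID_VAL} /mnt/mysql-backups xfs discard,defaults 0 0" >> /etc/fstab
```
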

Please be very careful when editing the fstab file: do not remove any entries automatically generated by the operating system, as this can lead to a non-bootable server. Amend only those entries that you know you have added manually.

Let's edit the fstab file:


$ vi /etc/fstab

And now let's add the next line right at the end of the file:


...
UUID="8dae10d0-8616-40e3-916d-293bf14de34a" /mnt/mysql-backups 	xfs  discard,defaults  0 0

You can now safely save and close the fstab file; we're done.
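It's worth testing the new fstab entry before the next reboot rather than discovering a typo at boot time. A sketch: unmount the volume, let mount -a remount everything fstab knows about, then confirm with findmnt that the volume came back.

```shell
umount /mnt/mysql-backups    # drop the current mount
mount -a                     # remount everything listed in /etc/fstab
findmnt /mnt/mysql-backups   # confirm the volume is mounted again
```

If findmnt prints the mount point, source and fstype, the entry is good and will survive a reboot.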

At this stage we can proudly say that we have managed to create a backup server on Linux using just the ssm utility and CentOS 7. All that's left is to populate this space wisely; the Set up NFS Server and NFS Client on CentOS 7 tutorial may help you with your next step.


About this page

Article: Create a Backup Server on Linux
Published: 06/07/2018
Updated: 05/11/2018
