Resize RAID 0 array by adding new disks


In this tutorial we will learn how to extend an existing RAID 0 array in just a few easy steps via the CLI. Before resizing your RAID 0 array, please take a backup of your data; this is very important, as the resize operation can lead to data loss. RAID 0 is by design very fast, but it is also the least safe level: if any single disk fails, the entire array is lost. It is advisable to avoid RAID 0 on production servers unless you have a solid failover mechanism or hot backups in place to quickly restore affected services. For production workloads we would recommend RAID 10 instead, as it usually offers a better balance between speed and data safety.
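
As a minimal sketch of such a backup, assuming a separate volume with enough free space is mounted at /backup and that the application writing to the data volume has been stopped or quiesced first (both paths below are only placeholders), a simple rsync copy could look like this:


# rsync -a /path/to/data/ /backup/data/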

Table of contents

Context
Getting devices list
Adding new disks to server
Getting RAID 0 current configuration
Adding new disks to the array
Grow RAID 0 array
Check RAID reshaping status
Resize the filesystem
Check volume size
Troubleshooting

Context

In this particular example we will assume that we have a MySQL server whose data disk is served by a RAID 0 array built from 36 physical disks. This data disk is slowly running out of space, and we have to add a few more physical disks to the array to cover its future growth.

As a first step we will identify the data disk and its current size, as shown in the example below:


# df -h

From the truncated output below we can clearly see that our data disk is 18T in size and is hitting the warning threshold, which in our case is set to 90%. Depending on your monitoring policies this threshold may be lower or higher, but that is a different story entirely. We can also see that the data disk is backed by the /dev/md0 array; we will need this later in the tutorial.


Filesystem    Size  Used Avail Use% Mounted on
...
/dev/md0       18T   15T  1.6T  91% /mnt/mysql-data
...
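
If you want to double-check that the mount point is really backed by /dev/md0 before going any further, findmnt (part of util-linux) prints the source device and filesystem type for a given mount point:


# findmnt /mnt/mysql-data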

Getting devices list

Now let's get the list of the devices that are currently attached to our server by using the ls command with the -ltr options, which lists the devices sorted by modification date.


# ls -ltr /dev/sd*

As you can see, our first device, /dev/sda, was added on November 30th, and the last device, /dev/sdas, was added on January 9th.


brw-rw---- 1 root disk  8,   0 Nov 30 04:13 /dev/sda
...
brw-rw---- 1 root disk 66, 192 Jan  9 13:59 /dev/sdas
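
Purely as a convenience, lsblk can also list just the physical disks together with their sizes, which makes it easier to spot devices that are not yet part of the array; the column list below is only one reasonable choice:


# lsblk -d -o NAME,SIZE,MODEL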

Adding new disks to server

Knowing the name and date of the most recently added device, we can now add four new disks to the server. Right after doing so, we should check that they are visible and recognised by the server using the same ls command as in the previous step:


# ls -ltr /dev/sd*

After adding the four new disks, they show up under /dev/ when we run the ls command again.


...
brw-rw---- 1 root disk 66, 208 Jan 10 02:41 /dev/sdat
brw-rw---- 1 root disk 66, 224 Jan 10 02:41 /dev/sdau
brw-rw---- 1 root disk 66, 240 Jan 10 02:41 /dev/sdav
brw-rw---- 1 root disk 67,   0 Jan 10 02:41 /dev/sdaw

We now know that the last four disks were successfully added on January 10th and that their names are /dev/sda{t,u,v,w}, so we are ready to jump to the next step, where we will gather the RAID details.
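
If the new disks were hot-added and do not show up under /dev/ right away, a rescan of the SCSI hosts usually makes them appear; the sketch below simply loops over every SCSI host present on the system rather than naming a specific one:


# for h in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$h"; done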

Getting RAID 0 current configuration

In this step we will check the details of the current RAID array using the mdadm utility against our array device, /dev/md0, which we identified with the df command in the Context step.


# mdadm --detail /dev/md0

On success, the output should look similar to the one listed below:


/dev/md0:
           Version : 1.2
     Creation Time : Thu Oct  4 17:57:25 2018
        Raid Level : raid0
        Array Size : 18785858560 (17915.59 GiB 19236.72 GB)
     Used Dev Size : 536738816 (511.87 GiB 549.62 GB)
      Raid Devices : 36
     Total Devices : 36
       Persistence : Superblock is persistent

       Update Time : Thu Jan 10 02:23:39 2019
             State : active
    Active Devices : 36
   Working Devices : 36
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:md0  (local to host localhost)
              UUID : 341f2893:b7721c3c:c2c11504:23a096b1
            Events : 2662

    Number   Major   Minor   RaidDevice State
       0      65       64        0      active sync   /dev/sdu
       1      65       96        1      active sync   /dev/sdw
      ...
      35      66      176       34      active sync   /dev/sdar
      34      66      160       35      active sync   /dev/sdaq

Now that we have the array details, we are looking for the Raid Devices and Total Devices fields, which tell us how many devices the RAID array contains overall; in this case we have 36 disks, also visible in the RaidDevice column, where they are numbered from 0 to 35. The next field to check is Raid Level, which clearly shows raid0, and State, which is marked simply as active.
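
If you only need those few fields and not the whole report, a quick filter on the mdadm output keeps the check readable; the patterns below match the field names in the output shown above:


# mdadm --detail /dev/md0 | grep -E 'Raid Level|Raid Devices|Total Devices|State :'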

Adding new disks to the array

Now that we have all the details we need, let's add the four new disks to our existing /dev/md0 array with the mdadm utility, as shown in the example below:


# mdadm --add /dev/md0 /dev/sda[tuvw]

If the command was successful, we should see output similar to this in the terminal:


mdadm: added /dev/sdat
mdadm: added /dev/sdau
mdadm: added /dev/sdav
mdadm: added /dev/sdaw

Let's check the status of our RAID array once again by invoking mdadm:


# mdadm --detail /dev/md0

We can notice from the output listed below that a few values have changed in our RAID configuration: the Raid Level is now reported as raid4 instead of raid0 (mdadm temporarily converts the array while it is being grown), and the Total Devices value has increased from the original 36 to 40.

Along the same lines, we can see that our new disks, displayed at the bottom of the output, are marked as spare: they have been added to the configuration but are not yet part of the array itself. We will take care of that in the next step.


/dev/md0:
           Version : 1.2
     Creation Time : Thu Oct  4 17:57:25 2018
        Raid Level : raid4
        Array Size : 18785858560 (17915.59 GiB 19236.72 GB)
     Used Dev Size : 536738816 (511.87 GiB 549.62 GB)
      Raid Devices : 36
     Total Devices : 40
       Persistence : Superblock is persistent

       Update Time : Thu Jan 10 02:46:18 2019
             State : active
    Active Devices : 36
   Working Devices : 40
    Failed Devices : 0
     Spare Devices : 4

        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost:md0  (local to host localhost)
              UUID : 341f2893:b7721c3c:c2c11504:23a096b1
            Events : 2668

    Number   Major   Minor   RaidDevice State
       0      65       64        0      active sync   /dev/sdu
       1      65       96        1      active sync   /dev/sdw
      ...
      35      66      176       34      active sync   /dev/sdar
      34      66      160       35      active sync   /dev/sdaq

      37      66      208        -      spare   /dev/sdat
      38      66      224        -      spare   /dev/sdau
      39      66      240        -      spare   /dev/sdav
      40      67        0        -      spare   /dev/sdaw
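
Before growing the array it does not hurt to confirm that exactly four members are flagged as spares; a simple case-sensitive filter on the same mdadm output is enough and should print 4 in this example:


# mdadm --detail /dev/md0 | grep -c spare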

Grow RAID 0 array

With the new disks registered against the array, we can now grow it from 36 to 40 devices:


# mdadm --grow /dev/md0 --raid-devices=40

Please note that the newly added disks won't be usable immediately, as the array first needs to be reshaped.

Check RAID reshaping status

To check the status of the array reshaping we have two options. The first one is mdstat, which can be read by running cat /proc/mdstat, ideally wrapped in a watch session as shown below:


# watch cat /proc/mdstat

Once the command is running, we should see output similar to the one listed below:


Every 2.0s: cat /proc/mdstat                                                                                                                                                                Thu Jan 10 03:10:51 2019

Personalities : [raid6] [raid5] [raid4]
md0 : active raid4 sdak[13] sdaf[8] sdp[27] sdav[34] sdz[4] sdo[24] sdag[10] sdaq[35] sdv[0] sdah[12] sdac[9] sdae[7] sdau[40](S) sdab[5] sdd[20] sdap[33] sde[18] sdar[37](S) sdl[23] sdaj[16] sdal[15] sdat[39](S)
 sdas[38](S) sdam[17] sdu[31] sdai[14] sdaw[36] sdaa[6] sdad[11] sdt[30] sdc[19] sdn[22] sdy[2] sdw[3] sdx[1] sdr[28] sdm[25] sds[29] sdk[21] sdq[26]
      18785858560 blocks super 1.2 level 4, 512k chunk, algorithm 0 [36/36] [UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUU]
      [>....................]  resync =  3.0% (16343588/536738816) finish=352.6min speed=24594K/sec

unused devices: <none>

Alternatively, we can use mdadm and periodically check the State field, which now reports not just active but also resyncing. The second field to check is Resync Status, which shows the current progress; this command can also be wrapped in a watch session:


# mdadm --detail /dev/md0

Once the reshape process has been initiated, we should get output similar to the one below:


/dev/md0:
           Version : 1.2
     Creation Time : Thu Oct  4 17:57:25 2018
        Raid Level : raid4
        Array Size : 18785858560 (17915.59 GiB 19236.72 GB)
     Used Dev Size : 536738816 (511.87 GiB 549.62 GB)
      Raid Devices : 36
     Total Devices : 40
       Persistence : Superblock is persistent

       Update Time : Thu Jan 10 03:08:42 2019
             State : active, resyncing
    Active Devices : 36
   Working Devices : 40
    Failed Devices : 0
     Spare Devices : 4

        Chunk Size : 512K

Consistency Policy : resync

     Resync Status : 3% complete

              Name : localhost:md0  (local to host localhost)
              UUID : 341f2893:b7721c3c:c2c11504:23a096b1
            Events : 2931
                  ...

Please note that the reshaping process can take hours, depending on the number of disks in your RAID 0 array and its total size.
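
Allowing the reshape to run faster makes it finish sooner, at the cost of competing with production I/O. The md resync/reshape throughput is governed by two kernel tunables expressed in KB/s, and the value below is only an illustration; pick one that suits your workload:


# sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
# sysctl -w dev.raid.speed_limit_min=100000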

Resize the filesystem

Once the reshaping process is over and mdadm --detail shows Active Devices : 40, we can safely proceed to extend the filesystem using the resize2fs utility, as shown here:


# resize2fs /dev/md0

The filesystem resize should not take more than a few minutes, after which we can check the new volume size, as shown in the next step.
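
Keep in mind that resize2fs only handles ext2/ext3/ext4 filesystems; if the volume happened to be formatted with XFS, xfs_growfs against the mount point would be the tool to use instead. A quick way to confirm the filesystem type before resizing is:


# df -Th /mnt/mysql-data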

Check volume size

With everything now in place, we can check the size of our volume by using the df command with the -h argument, as shown here:


# df -h

In the output below we can see that the total size of the volume has increased from 18T to 20T, which means our resize operation was successful.


Filesystem       Size  Used Avail Use% Mounted on
...
/dev/md0          20T   16T  3.4T  82% /mnt/mysql-data
...

Troubleshooting

If df does not report the new size, make sure that no services are using the volume, then remount the filesystem as shown in the example below:


# mount -o remount /mnt/mysql-data/
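
If the remount is refused because the volume is busy, fuser can show which processes still hold it open; once they are stopped, a full unmount and mount cycle can be performed. The mount command below assumes the device and mount point used throughout this tutorial:


# fuser -vm /mnt/mysql-data
# umount /mnt/mysql-data
# mount /dev/md0 /mnt/mysql-data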

