How to back up Linux VMs with large data disks (>4 TB)

Recently I ran into an issue on a project I’m working on. The customer has a Linux virtual machine running on Azure with a large data disk (20 TB). I knew – but had forgotten – that Azure Backup doesn’t support disks larger than 4 TB (more info here). The disk is used for logging, so for a moment I thought Azure Files could be a solution, but the specific Linux version (RHEL 6.7) isn’t supported for secure transfer to Azure Files. So I found another solution.

I decided to add multiple data disks to this virtual machine (splitting the required size over the number of disks). In my example I added 3 disks to this virtual machine.

[Image: largedisk1.png]
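If you prefer scripting over the portal, the extra data disks can also be attached with the Azure CLI. A rough sketch (the resource group, VM name and disk size below are placeholder values for this example):

# attach three new data disks to the VM (example names and sizes)
for i in 1 2 3; do
  az vm disk attach \
    --resource-group myResourceGroup \
    --vm-name myLinuxVM \
    --name logging-disk-$i \
    --new \
    --size-gb 3072
done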

Now log on to the CLI of that specific virtual machine. The 3 data disks were made available as /dev/sdc, /dev/sdd and /dev/sde.
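If you’re not sure about the device names on your own machine, you can list the block devices first (the names above are from my demo VM, yours may differ):

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT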

First we have to create physical volumes on top of /dev/sdc, /dev/sdd and /dev/sde using the following command:

pvcreate /dev/sdc /dev/sdd /dev/sde

You can check this using the following command:

pvs

or, for detailed information:

pvdisplay /dev/sdc

Now we are going to create a volume group named logging using the 3 physical volumes with this command:

vgcreate logging /dev/sdc /dev/sdd /dev/sde
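You can verify the volume group (and the combined size of the three disks) with:

vgs logging

or vgdisplay logging for more detail.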

Now we create a logical volume using the following command:

lvcreate -n logs -l 100%FREE logging
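Again you can verify the result, this time with:

lvs logging

or, for more detail, lvdisplay /dev/logging/logs.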

Now let’s format the volume:

mkfs.ext4 /dev/logging/logs
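To double-check that the filesystem was created on the logical volume:

blkid /dev/logging/logs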

Now we have to edit the /etc/fstab file. In my (demo) case I add the following line:

/dev/logging/logs /var/logging ext4 defaults 0 0

My fstab file looks as follows:

[Image: largedisk2]

After rebooting, the new volume is available on /var/logging (in my demo case).
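If you’d rather not reboot, you can create the mount point and mount everything from fstab straight away (the mount point is my demo path, adjust it to your own). Note that the mount point directory has to exist in either case:

mkdir -p /var/logging
mount -a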

As you can see, there is one 9 TB volume (in my demo) which I can access:

[Image: largedisk3]
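From the shell you can confirm the size of the mounted volume with:

df -h /var/logging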

Now we are able to use Azure Backup to back up this machine:

[Image: largedisk4]
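Enabling the backup can also be done from the Azure CLI. A minimal sketch, assuming a Recovery Services vault and backup policy already exist (all names below are placeholders):

az backup protection enable-for-vm \
  --resource-group myResourceGroup \
  --vault-name myRecoveryVault \
  --vm myLinuxVM \
  --policy-name DefaultPolicy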

Thanks to BM for the feedback! #TheManWithTheSleeve
