Integrating LVM with Hadoop Cluster providing Elasticity to DataNode Storage

Sweta Sardar
Dec 2, 2020

In this article I’m going to show you how we can use the Logical Volume Management (LVM) concept to provide elastic storage to a Hadoop DataNode.

Task Description 📄

🌀 7.1: Elasticity Task

🔅Integrating LVM with Hadoop and providing Elasticity to DataNode Storage

🔅Increase or Decrease the Size of Static Partition in Linux.

In my previous article I already showed how we can create a partition and contribute a limited amount of storage to the cluster. Now I’m going to show you the concept of Logical Volume Management (LVM). With normal (MBR) partitioning we can create at most 4 partitions on a disk, either 4 primary partitions or 3 primary partitions plus 1 extended partition. In the LVM world we can create as many volumes as we want, with any size we want, and resize them later.

That flexibility is the sweetest thing about LVM. Now let’s jump to the task; here is what we have to do:
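Just as a quick roadmap, these are the commands we will run below, in order (the names /dev/sdb, /dev/sdc, swetavg, mylv and /data1 are simply what I chose for my setup):

pvcreate /dev/sdb /dev/sdc                    # turn the raw disks into physical volumes
vgcreate swetavg /dev/sdb /dev/sdc            # pool them into one volume group
lvcreate --size 5G --name mylv /dev/swetavg   # carve out a logical volume
mkfs.ext4 /dev/swetavg/mylv                   # format it with an ext4 filesystem
mount /dev/swetavg/mylv /data1                # mount it on the DataNode's data folder
lvextend --size +3G /dev/swetavg/mylv         # later: grow the volume online
resize2fs /dev/swetavg/mylv                   # later: grow the filesystem to match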

🔅Let’s first integrate LVM with Hadoop and provide elasticity to the DataNode storage.

💠First we have to check which hard disks are attached to our system:

Command is : fdisk -l

We have attached two hard disks: sdb, which is 10 GiB, and sdc, which is 4 GiB.
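If you prefer a quicker view than fdisk -l, lsblk also lists the block devices with their sizes; I’m assuming the same device names here:

lsblk /dev/sdb /dev/sdc    # confirm the two new disks and their sizes before touching them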

💠Now we have to convert these two hard disks into physical volumes (PVs):

Command is :
pvcreate /dev/sdb
pvcreate /dev/sdc

💠We can verify with this command: pvdisplay

💠Now we have to combine these two physical volumes into a new volume group (VG). The two disks are pooled together and behave like one new hard disk of about 14 GiB, so let’s see:

Command is : vgcreate swetavg /dev/sdb /dev/sdc

💠We can verify with this command: vgdisplay

And here we can see that the new volume group has been created with a size of 13.99 GiB.
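The small difference from the raw 14 GiB is expected: LVM keeps a little space for its own metadata and rounds the usable capacity down to whole physical extents (4 MiB each by default). A compact way to see the same numbers is:

vgs swetavg    # VSize and VFree show the group's total and free capacity in one line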

💠Now, to make this new space usable so we can store our data on it, we always have to start with the “first step”: create a partition. In LVM we do that by creating a logical volume (LV) of whatever size we want from the volume group; here I’m creating one of 5 GiB:

Command is : lvcreate --size 5G --name mylv /dev/swetavg

And we can check whether the logical volume (LV) has been created with the lvdisplay command:

💠Now the “second step” is to format the partition, which creates the inode table; it works like the index table of our file system:

Command is : mkfs.ext4 /dev/swetavg/mylv

💠The “third step” is to mount that partition on a particular folder, so here I have created one folder named /data1 (with mkdir /data1):

Command is : mount /dev/swetavg/mylv /data1

We have successfully mounted our partition.
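To actually hand this storage to Hadoop, the DataNode’s data directory has to point at the mounted folder. A minimal sketch, assuming a standard hdfs-site.xml (the property is dfs.datanode.data.dir in Hadoop 2.x and later; older 1.x releases call it dfs.data.dir):

<property>
    <name>dfs.datanode.data.dir</name>
    <value>/data1</value>
</property>

After restarting the DataNode, the capacity of /data1 is what this node contributes to the cluster. And if you want the mount to survive a reboot, an /etc/fstab entry like this one (again, just a sketch with my names) would do it:

/dev/swetavg/mylv  /data1  ext4  defaults  0  0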

🔅So now let’s see the second task: here I’m increasing the size of the partition in Linux.

💠We can easily increase the size of the partition without going offline; on the fly, while it stays mounted, we can grow it and give more storage to the DataNode. I have taken 3 GiB to extend by, but you can increase it by as much as you want, as long as your volume group has free space (you can check that first, as shown below). Here are the steps to increase the size; you can follow along:
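Before extending, you can check how much free space is left in the volume group (using my VG name as an example):

vgs swetavg          # the VFree column shows how much you can still allocate
vgdisplay swetavg    # "Free  PE / Size" shows the same thing in more detail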

Command is : lvextend --size +3G /dev/swetavg/mylv

Our LV has been extended successfully.

💠As the next step, we have to resize the filesystem so that the inode table covers the newly allocated space as well. Note that this does not reformat anything; resize2fs grows the existing ext4 filesystem in place, so the data already stored on it stays untouched:

Command is : resize2fs /dev/swetavg/mylv

Here we can see that our filesystem has grown from 4.9G to 7.9G. We have successfully done it.
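As a side note, the two steps can usually be combined: lvextend has a --resizefs (-r) option that grows the filesystem right after extending the LV. A sketch with the same names:

lvextend --resizefs --size +3G /dev/swetavg/mylv    # extend the LV and resize the ext4 filesystem in one go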

We can now see that our Hadoop cluster has 7.81 GiB of storage contributed by this DataNode.
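You can confirm this from the Hadoop side too; the exact syntax depends on your Hadoop version, but the admin report shows the configured capacity per DataNode:

hdfs dfsadmin -report      # Hadoop 2.x / 3.x
hadoop dfsadmin -report    # older Hadoop 1.x style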

Thank you for reading my article. I hope it helps you create LVM partitions and extend them easily, without going offline.

🌞Keep Learning Keep Sharing 🌞

🌞Happy Learning 🌞

🌞Thank You !! 🌞
