Praji’s Weblog


LVM


LVM is a method of allocating hard drive space into logical volumes, which, unlike fixed-size partitions, can be easily resized.

With LVM, a hard drive or set of hard drives is allocated to one or more physical volumes.

The physical volumes are combined into logical volume groups, with the exception of the /boot/ partition.

The /boot/ partition cannot be on a logical volume group because the boot loader cannot read it. If the root (/) partition is on a logical volume, create a separate /boot/ partition which is not a part of a volume group.

The logical volume group is divided into logical volumes, which are assigned mount points, such as /home and /, and file system types, such as ext2 or ext3.

When “partitions” reach their full capacity, free space from the logical volume group can be added to the logical volume to increase the size of the partition.

When a new hard drive is added to the system, it can be added to the logical volume group, and partitions that are logical volumes can be expanded.
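
As a rough sketch of what this looks like in practice (the device and volume names here are illustrative; the detailed steps follow below), growing a logical volume after adding a disk comes down to three commands, followed by a file system resize:

# pvcreate /dev/sdX
# vgextend my_volume_group /dev/sdX
# lvextend -L+1G /dev/my_volume_group/my_logical_volume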

If a system is partitioned with the ext3 file system, the hard drive is divided into partitions of defined sizes. If a partition becomes full, it is not easy to expand the size of the partition. Even if the partition is moved to another hard drive, the original hard drive space has to be reallocated as a different partition or not used.

Setting up LVM
———————

We will consider setting up LVM using three SCSI disks.

The disks are at /dev/sda, /dev/sdb, and /dev/sdc.

Step 1
———
Run pvcreate on the disks

# pvcreate /dev/sda
# pvcreate /dev/sdb
# pvcreate /dev/sdc

This creates a volume group descriptor area (VGDA) at the start of the disks.
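
To confirm that the disks were initialised as expected, you can inspect any of them (a quick sanity check):

# pvdisplay /dev/sda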

Step 2
———–
Create a volume group

# vgcreate my_volume_group /dev/sda /dev/sdb /dev/sdc

Run vgdisplay to verify the volume group

# vgdisplay

The most important things to verify are that the first three items are correct and that the VG Size item is the proper size for the amount of space in all three of your disks.

Step 3
———-

Creating the Logical Volume

If the volume group looks correct, it is time to create a logical volume on top of the volume group.

You can make the logical volume any size you like. (It is similar to a partition on a non-LVM setup.) For this example we will create a single logical volume of 1 GB on the volume group. We will not use striping because it is not currently possible to add a disk to a stripe set after the logical volume is created.

# lvcreate -L1G -nmy_logical_volume my_volume_group
lvcreate — doing automatic backup of “my_volume_group”
lvcreate — logical volume “/dev/my_volume_group/my_logical_volume” successfully created
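
You can verify the new volume with lvdisplay:

# lvdisplay /dev/my_volume_group/my_logical_volume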

Step 4
———
Create an ext2 file system on the logical volume

# mke2fs /dev/my_volume_group/my_logical_volume
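
If you would rather have the journalled ext3 file system mentioned earlier, the same tool can create it by adding the -j flag (a minimal variation):

# mke2fs -j /dev/my_volume_group/my_logical_volume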

Step 5
———
Test the file system

Mount the logical volume and check to make sure everything looks correct

# mount /dev/my_volume_group/my_logical_volume /mnt
# df

If everything worked properly, you should now have a logical volume with an ext2 file system mounted at /mnt.

Step 6
———–

Setting up LVM on three SCSI disks with striping
Creating the physical volumes and the volume group is the same for a striped setup.

Creating the Logical Volume
————————————-
# lvcreate -i3 -I4 -L1G -n<logical_volume> <volume_group>

-i  number of stripes
-I  stripe size (in kilobytes)
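
For example, a 1 GB volume striped across our three disks, reusing the names from the earlier steps (the volume name here is just an illustration):

# lvcreate -i3 -I4 -L1G -nmy_stripe_volume my_volume_group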

Adding a new disk to a multi-disk SCSI system
————————————————————

# pvscan
pvscan — ACTIVE   PV “/dev/sda”  of VG “dev”   [1.95 GB / 0 free]
pvscan — ACTIVE   PV “/dev/sdb”  of VG “sales” [1.95 GB / 0 free]
pvscan — ACTIVE   PV “/dev/sdc”  of VG “ops”   [1.95 GB / 44 MB free]
pvscan — ACTIVE   PV “/dev/sdd”  of VG “dev”   [1.95 GB / 0 free]
pvscan — ACTIVE   PV “/dev/sde1” of VG “ops”   [996 MB / 52 MB free]
pvscan — ACTIVE   PV “/dev/sde2” of VG “sales” [996 MB / 944 MB free]
pvscan — ACTIVE   PV “/dev/sdf1” of VG “ops”   [996 MB / 0 free]
pvscan — ACTIVE   PV “/dev/sdf2” of VG “dev”   [996 MB / 72 MB free]
pvscan — total: 8 [11.72 GB] / in use: 8 [11.72 GB] / in no VG: 0 [0]
# df
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/dev/cvs           1342492    516468    757828  41% /mnt/dev/cvs
/dev/dev/users         2064208   2060036      4172 100% /mnt/dev/users
/dev/dev/build         1548144   1023041    525103  66% /mnt/dev/build
/dev/ops/databases     2890692   2302417    588275  79% /mnt/ops/databases
/dev/sales/users       2064208    871214   1192994  42% /mnt/sales/users
/dev/ops/batch         1032088    897122    134966  86% /mnt/ops/batch

As you can see, the “dev” and “ops” groups are getting full, so a new disk is purchased and added to the system. It becomes /dev/sdg.

Prepare the disk partitions

The new disk is to be shared equally between ops and dev, so it is partitioned into two physical volumes, /dev/sdg1 and /dev/sdg2:

# fdisk /dev/sdg
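
Inside fdisk, create the two partitions and set their type to 8e (Linux LVM) with the t command, then write the table with w. You can double-check the resulting layout before initialising the partitions:

# fdisk -l /dev/sdg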

# pvcreate /dev/sdg1
pvcreate — physical volume “/dev/sdg1” successfully created
# pvcreate /dev/sdg2
pvcreate — physical volume “/dev/sdg2” successfully created

——————————————–
Removing an Old Disk

Say you have an old IDE drive on /dev/hdb. You want to remove that old disk, but a lot of files are on it.

Caution: Back Up Your System

You should always back up your system before attempting a pvmove operation.

Distributing Old Extents to Existing Disks in the Volume Group

If you have enough free extents on the other disks in the volume group, you have it easy. Simply run:

# pvmove /dev/hdb
pvmove — moving physical extents in active volume group “dev”
pvmove — WARNING: moving of active logical volumes may cause data loss!
pvmove — do you want to continue? [y/n] y
pvmove — 249 extents of physical volume “/dev/hdb” successfully moved

This will move the allocated physical extents from /dev/hdb onto the rest of the disks in the volume group.
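
Before reducing the volume group, you can confirm that the old disk no longer holds any allocated extents (a quick check):

# pvdisplay /dev/hdb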

Note: pvmove is Slow

Be aware that pvmove is quite slow, as it has to copy the contents of a disk block by block to one or more disks. If you want steadier status reports from pvmove, use the -v flag.
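
For example, the same move with verbose progress reporting:

# pvmove -v /dev/hdb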

Remove the unused disk

We can now remove the old IDE disk from the volume group.

# vgreduce dev /dev/hdb
vgreduce — doing automatic backup of volume group “dev”
vgreduce — volume group “dev” successfully reduced by physical volume:
vgreduce — /dev/hdb

The drive can now be either physically removed when the machine is next powered down or reallocated to other users.

Distributing Old Extents to a New Replacement Disk

If you do not have enough free physical extents to distribute the old physical extents to, you will have to add a disk to the volume group and move the extents to it.

Prepare the disk

First, you need to pvcreate the new disk to make it available to LVM. In this recipe we show that you don’t need to partition a disk to be able to use it.

# pvcreate /dev/sdf
pvcreate — physical volume “/dev/sdf” successfully created

Add it to the volume group

As developers use a lot of disk space, this is a good volume group to add it to.

# vgextend dev /dev/sdf
vgextend — INFO: maximum logical volume size is 255.99 Gigabyte
vgextend — doing automatic backup of volume group “dev”
vgextend — volume group “dev” successfully extended

Move the data

Next we move the data from the old disk onto the new one. Note that it is not necessary to unmount the file system before doing this, although it is *highly* recommended that you take a full backup before attempting the operation, in case a power outage or some other problem interrupts it. The pvmove command can take a considerable amount of time to complete, and it also exacts a performance hit on the two volumes, so, although it isn’t necessary, it is advisable to do this when the volumes are not too busy.

# pvmove /dev/hdb /dev/sdf
pvmove — moving physical extents in active volume group “dev”
pvmove — WARNING: moving of active logical volumes may cause data loss!
pvmove — do you want to continue? [y/n] y
pvmove — 249 extents of physical volume “/dev/hdb” successfully moved

Remove the unused disk

We can now remove the old IDE disk from the volume group.

# vgreduce dev /dev/hdb
vgreduce — doing automatic backup of volume group “dev”
vgreduce — volume group “dev” successfully reduced by physical volume:
vgreduce — /dev/hdb

The drive can now be either physically removed when the machine is next powered down or reallocated to other users.

Add the new disks to the volume groups
————————————————————

The volumes are then added to the dev and ops volume groups:

# vgextend ops /dev/sdg1
vgextend — INFO: maximum logical volume size is 255.99 Gigabyte
vgextend — doing automatic backup of volume group “ops”
vgextend — volume group “ops” successfully extended
# vgextend dev /dev/sdg2
vgextend — INFO: maximum logical volume size is 255.99 Gigabyte
vgextend — doing automatic backup of volume group “dev”
vgextend — volume group “dev” successfully extended
# pvscan
pvscan — reading all physical volumes (this may take a while…)
pvscan — ACTIVE   PV “/dev/sda”  of VG “dev”   [1.95 GB / 0 free]
pvscan — ACTIVE   PV “/dev/sdb”  of VG “sales” [1.95 GB / 0 free]
pvscan — ACTIVE   PV “/dev/sdc”  of VG “ops”   [1.95 GB / 44 MB free]
pvscan — ACTIVE   PV “/dev/sdd”  of VG “dev”   [1.95 GB / 0 free]
pvscan — ACTIVE   PV “/dev/sde1” of VG “ops”   [996 MB / 52 MB free]
pvscan — ACTIVE   PV “/dev/sde2” of VG “sales” [996 MB / 944 MB free]
pvscan — ACTIVE   PV “/dev/sdf1” of VG “ops”   [996 MB / 0 free]
pvscan — ACTIVE   PV “/dev/sdf2” of VG “dev”   [996 MB / 72 MB free]
pvscan — ACTIVE   PV “/dev/sdg1” of VG “ops”   [996 MB / 996 MB free]
pvscan — ACTIVE   PV “/dev/sdg2” of VG “dev”   [996 MB / 996 MB free]
pvscan — total: 10 [13.67 GB] / in use: 10 [13.67 GB] / in no VG: 0 [0]

Extend the file systems
———————————

The next thing to do is to extend the file systems so that the users can make use of the extra space.

There are tools that allow online resizing of ext2 file systems, but here we take the safe route and unmount the two file systems before resizing them:

# umount /mnt/ops/batch
# umount /mnt/dev/users

We then use the e2fsadm command to resize the logical volume and the ext2 file system in one operation. We are using ext2resize instead of resize2fs (which is the default command for e2fsadm), so we define the environment variable E2FSADM_RESIZE_CMD to tell e2fsadm to use that command.

# export E2FSADM_RESIZE_CMD=ext2resize
# e2fsadm /dev/ops/batch -L+500M
# e2fsadm /dev/dev/users -L+900M
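
If the e2fsadm wrapper is not available on your system, the same growth can be done in two steps per volume, extending the logical volume and then resizing the still-unmounted file system; a sketch for the first volume:

# lvextend -L+500M /dev/ops/batch
# resize2fs /dev/ops/batch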

Remount the extended volumes
——————————————

We can now remount the file systems and see that there is plenty of space.

# mount /dev/ops/batch
# mount /dev/dev/users
# df
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/dev/cvs           1342492    516468    757828  41% /mnt/dev/cvs
/dev/dev/users         2969360   2060036    909324  69% /mnt/dev/users
/dev/dev/build         1548144   1023041    525103  66% /mnt/dev/build
/dev/ops/databases     2890692   2302417    588275  79% /mnt/ops/databases
/dev/sales/users       2064208    871214   1192994  42% /mnt/sales/users
/dev/ops/batch         1535856    897122    638734  58% /mnt/ops/batch

Taking a Backup Using Snapshots
————————————————

Following on from the previous example we now want to use the extra space in the “ops” volume group to make a database backup every evening. To ensure that the data that goes onto the tape is consistent we use an LVM snapshot logical volume.

A snapshot volume is a special type of volume that presents all the data that was in the original volume at the time the snapshot was created. This means we can back up that volume without having to worry about data being changed while the backup is going on, and we don’t have to take the database volume offline while the backup is taking place.

Create the snapshot volume
————————————-

There is a little over 500 megabytes of free space in the “ops” volume group, so we will use all of it to allocate space for the snapshot logical volume. A snapshot volume can be as large or as small as you like, but it must be large enough to hold all the changes that are likely to happen to the original volume during the lifetime of the snapshot. Here we allow for around 500 megabytes of changes to the database volume, which should be plenty.

# lvcreate -L592M -s -n dbbackup /dev/ops/databases

If the snapshot logical volume becomes full, it will become unusable, so it is vitally important to allocate enough space.
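
While the backup is running, you can keep an eye on how much of the snapshot has been consumed (lvdisplay reports the snapshot allocation):

# lvdisplay /dev/ops/dbbackup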

Mount the snapshot volume
—————————————

We can now create a mount point and mount the volume:

# mkdir /mnt/ops/dbbackup
# mount /dev/ops/dbbackup /mnt/ops/dbbackup
mount: block device /dev/ops/dbbackup is write-protected, mounting read-only

If you are using XFS as the file system, you will need to add the nouuid option to the mount command:

# mount /dev/ops/dbbackup /mnt/ops/dbbackup -onouuid,ro

Do the backup
———————
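
With the snapshot mounted read-only, any archiving tool will do; here we assume a tape device at /dev/rmt0 (adjust the target and tool for your setup):

# tar -cf /dev/rmt0 /mnt/ops/dbbackup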

Remove the snapshot
————————————

When the backup has finished you can unmount the volume and remove it from the system. You should remove snapshot volumes when you have finished with them, because they keep a copy of all data written to the original volume, and this can hurt performance.

# umount /mnt/ops/dbbackup
# lvremove /dev/ops/dbbackup
lvremove — do you really want to remove “/dev/ops/dbbackup”? [y/n]: y
lvremove — doing automatic backup of volume group “ops”
lvremove — logical volume “/dev/ops/dbbackup” successfully removed

Written by praji

January 8, 2008 at 8:27 am
