Weekend Project: Migrate from Direct Partitions to LVM Volumes


Has learning about Linux’s Logical Volume Manager (LVM) been on your to-do list for too long?  Then set aside some time this weekend for a little project: migrating a Linux system from traditional disk partitions to LVM’s logical volumes.

If you’ve ever run out of space in one partition on a hard disk containing several, then you’ve seen the problem. Depending on the filesystem you use, you might be able to grow the filesystem into empty space, but you are limited to what’s available in between the other partitions on disk.

LVM implements a layer of abstraction between physical disks and the “logical” disks that the filesystem sees. To keep the terminology clear, LVM uses the term “volume” to describe its building blocks. The advantage of LVM is that you can add or remove disks from the system, expand and shrink volumes as needed, create volumes that span multiple disks, and make many other changes that are not possible with partitions created directly on physical disks.

Many Linux distributions support LVM when performing a new install, but offer no tool for converting a running system. You can convert an existing installation to LVM, though, provided you have enough spare storage to make a backup. Converting a disk to LVM involves overwriting its partition tables, so you must have enough excess capacity to make a full backup copy of the disk (or disks) you are converting. The perfect time is when you are adding a new, high-capacity drive.

For example, if I have a 100GB SATA disk at /dev/sdb filled to capacity with a /home and a /var/media partition each using 50GB, I could add a new 250GB drive and move either /home or /var/media onto it, but I must choose in advance which partition gets more space. If I switch to LVM instead, I can not only make that decision later, but I can also add more capacity later just as easily.

The Game Plan

The idea is to combine both the 100GB and 250GB disks into a single logical entity using LVM. We can then re-create the /home and /var/media partitions without worrying about the boundary between the two disks, and we can leave part of the spare capacity unused, so that we can add to either partition as needed in the future. Once we’ve filled all 350GB, however, we can add another disk to the mix with LVM and continue to expand.

To do this, we will turn each disk into an LVM “physical volume,” starting with the new 250GB disk. We’ll then create a “volume group” on top of the “physical volume” — a volume group is like a virtual hardware disk in LVM land; we only need one, but you can create several if you need to manage different filesystems according to different rules. On top of this volume group, we’ll create two “logical volumes” — one to use for /home, and one to use for /var/media. Just as the volume group is a virtualization of the hardware disk, the logical volume is a virtual container for a partition; thus the need for two.

We will then move our partitions from the 100GB disk onto the new logical volumes, and, since the 100GB disk is no longer in use, we will turn it into its own physical volume and add it to our volume group. If you stop to think about it, we could reverse the order of the disks, copying everything from the 100GB disk onto the 250GB disk and converting the 100GB disk to LVM first. But that just adds time.

If it seems like LVM has one layer of abstraction too many, don’t worry. It’s the extra layer that allows us to add and remove disks from the mix at will. Yes, it’s more complicated, but the long-term flexibility is the payoff.

Making the Physical Volume

To start, let’s assume that our new 250GB disk is at /dev/sdc, and that it checks out free of errors. We’ll need to erase any existing partitions on the disk and create a new partition of type “Linux LVM.” Your distribution may have graphical tools for the job, but command-line fdisk works just as well. Run # fdisk /dev/sdc (as root) to start.

At the prompts, type n to create a new partition, p to make it primary, 1 to assign it partition number 1, Enter to accept the default starting cylinder, and +250000M to mark it 250GB in size. Then type t to set the partition type; LVM partitions are type 8e, so enter 8e at the prompt. Finally, type w to write the new partition table to the disk.
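The whole exchange looks roughly like this; the exact prompts and cylinder counts vary with your fdisk version and disk geometry:

    Command (m for help): n
    Command action
       e   extended
       p   primary partition (1-4)
    p
    Partition number (1-4): 1
    First cylinder (1-30401, default 1):
    Last cylinder or +size or +sizeM or +sizeK: +250000M
    Command (m for help): t
    Selected partition 1
    Hex code (type L to list codes): 8e
    Changed system type of partition 1 to 8e (Linux LVM)
    Command (m for help): w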

You can check your work with # fdisk -l; the output should include a partition at /dev/sdc1 of type “Linux LVM.”

Now that the disk is ready, we need to make LVM aware of it by declaring it a “physical volume.” Do this with # pvcreate /dev/sdc1, then double-check with # pvdisplay; the output should show /dev/sdc1 as well as list its size, UUID, and other information. The “VG Name” field will be blank, because we have not yet added this volume to a volume group. That’s next.
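Trimmed to the interesting fields, the exchange looks something like this; sizes and UUIDs will differ on your system:

    # pvcreate /dev/sdc1
      Physical volume "/dev/sdc1" successfully created
    # pvdisplay
      --- NEW Physical volume ---
      PV Name               /dev/sdc1
      VG Name
      PV Size               232.88 GB
      Allocatable           NO
      PV UUID               (a freshly generated identifier)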

The Volume Group

The volume group is the baseline on top of which we add the logical volumes and partitions. The logical volumes above can be moved, resized, added and removed, and the disks below can be added and removed, but the volume group layer remains the same.

In our example case, we will just create one volume group, named “mydata.” We do this with # vgcreate mydata /dev/sdc1. We can run # vgdisplay to get a report on the newly-minted volume group, including its size, the number of physical volumes attached, and the number of logical volumes implemented on it (which should be zero, at the moment). We can also run # pvdisplay again, and see that /dev/sdc1 is now listed as belonging to the mydata volume group.
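Abbreviated, those checks look something like this (your sizes will differ):

    # vgcreate mydata /dev/sdc1
      Volume group "mydata" successfully created
    # vgdisplay mydata
      --- Volume group ---
      VG Name               mydata
      Cur LV                0
      Cur PV                1
      VG Size               232.88 GB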

The Logical Volumes

Now we’ll create our two logical volumes, one for /home and one for /var/media. The command to do this is lvcreate; we will need to supply a name and a starting size for each volume. We do not need to allocate the full 250GB of mydata, although we should give each more than the 50GB we know we have already filled. Run # lvcreate --name home --size 100G mydata followed by # lvcreate --name media --size 100G mydata to assign 100GB to each.

These commands create device nodes in our system named /dev/mydata/home and /dev/mydata/media. From now on, these devices take the place of hardware nodes like /dev/sdb4 and /dev/sdb5 when working with filesystem tools, and they are easier to remember, too. You can check their status with # lvdisplay.
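You can also look at the nodes directly; on most systems they are symlinks into the device-mapper tree (output trimmed here, and the exact link targets vary by distribution):

    # ls -l /dev/mydata/
    home -> /dev/mapper/mydata-home
    media -> /dev/mapper/mydata-media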

If we want to change our minds about our initial disk allocation, it is easy to do so now, before we have moved over any data. We could run # lvextend -L125G /dev/mydata/media to bump the media volume up to 125GB in size, or # lvreduce -L75G /dev/mydata/media to shrink it down to 75GB.

The Filesystems (and the files)

At long last, we can create filesystems on our new virtual devices. At this point, the process is no different from creating a new filesystem without LVM. You can run mkfs.xfs to create an XFS filesystem, mkfs.ext3 to create an Ext3 filesystem, or use any other filesystem supported by Linux. Just remember that you create the filesystem on the logical volume device, for example # mkfs.xfs /dev/mydata/media.
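For our two volumes, that might look like this, picking Ext3 for home and XFS for media purely as examples:

    # mkfs.ext3 /dev/mydata/home
    # mkfs.xfs /dev/mydata/media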

You will now need to move your data over from its old location on the 100GB disk onto the new LVM filesystems. You can do this in several ways: by creating temporary mount points for the new filesystems (such as, say, /mnt/lvmhome), or the reverse, unmounting the original filesystems from their locations and mounting the new filesystems in their place. Either approach works.
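A sketch of the temporary mount point approach, using hypothetical /mnt/lvmhome and /mnt/lvmmedia directories:

    # mkdir -p /mnt/lvmhome /mnt/lvmmedia
    # mount /dev/mydata/home /mnt/lvmhome
    # mount /dev/mydata/media /mnt/lvmmedia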

Likewise, to actually copy the contents of the filesystems from their old locations to the new ones, you can use any tool you wish: go it alone with cp, get error checking with rsync, or, if your filesystem supports it, use a specialized tool like xfs_copy. In any case, it is of course advisable to verify the integrity of the copies before you decommission the old 100GB disk and lose its contents entirely.
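With the temporary mount points above, an rsync-based copy might look like this; -a preserves permissions and timestamps, -H preserves hard links, and -A and -X preserve ACLs and extended attributes (drop those two if your rsync build lacks them):

    # rsync -aHAX /home/ /mnt/lvmhome/
    # rsync -aHAX /var/media/ /mnt/lvmmedia/
    # rsync -aHAXvn --checksum /home/ /mnt/lvmhome/

The last command is a dry run (-n) that re-reads both trees with full checksums; if it lists no files left to transfer, the copy matches the original.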

Finally, don’t forget to update /etc/fstab to reflect the new LVM volumes that will be mounted by the system at startup.
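Assuming Ext3 on home and XFS on media, as in the examples above, the new entries might read:

    /dev/mydata/home     /home         ext3    defaults    1 2
    /dev/mydata/media    /var/media    xfs     defaults    0 0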

The Payoff: Adding More Storage, Painlessly

The preceding steps give you more storage for /home and /var/media than you had with the old disk, but they don’t do anything special. In fact, it is the same sequence you would follow to create LVM storage on a completely new system that you intend to run without further intervention. But to really see the power of LVM’s abstraction layer, we can add the now-unused 100GB disk to our volume group and give ourselves more flexible storage options.

The first few steps should be familiar. First, create a Linux LVM partition on the disk (which we’ll assume is /dev/sdb) with fdisk. Next, use # pvcreate /dev/sdb1 to declare /dev/sdb1 as an LVM physical volume.

At this point, however, we don’t need to create a new volume group; we just want to add /dev/sdb1 to our existing volume group, mydata. To do so, we run # vgextend mydata /dev/sdb1. We can check our work with # vgdisplay, which will show us an updated PV count and a new total VG size. That’s all there is to it.
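End to end, the entire expansion is only a few commands; the fdisk step is the same dialogue we used for /dev/sdc, creating a single partition of type 8e:

    # fdisk /dev/sdb
    # pvcreate /dev/sdb1
    # vgextend mydata /dev/sdb1
    # vgdisplay mydata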

Of course, /home and /var/media cannot yet make use of the increased capacity of mydata, because each sits inside a logical volume container and a filesystem, each of which has its own size. To increase the capacity available to /home or /var/media, we must first unmount the filesystem, then enlarge the underlying logical volume, and finally grow the filesystem itself.

For instance, # umount /home unmounts /home. Then, running # lvextend -L150G /dev/mydata/home enlarges the underlying logical volume from 100GB to 150GB. Finally, # resize2fs /dev/mydata/home automatically resizes the filesystem to fill the new size of the logical volume beneath it.

This final step, of course, is filesystem-dependent. XFS and some other filesystems do not need to be unmounted to be grown (XFS, in fact, is grown with xfs_growfs while mounted), and the syntax of the appropriate tools will vary. Make sure you work out the details before you dive in.
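Putting the Ext3 growth sequence together in one place (resize2fs may insist on a fresh check when run against an unmounted filesystem, so we include an e2fsck pass):

    # umount /home
    # lvextend -L150G /dev/mydata/home
    # e2fsck -f /dev/mydata/home
    # resize2fs /dev/mydata/home
    # mount /home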

Going Further: Shrinking Storage, Snapshots, and RAID

Once you have mastered the basic use cases of LVM, you begin to see other interesting possibilities. For example, we rarely want to decrease filesystem capacity, but in some cases you may find that you overestimated the amount of space you needed for /opt, /usr/local, or /var. Whatever the situation, if your filesystems sit on top of LVM, you can shrink the offending filesystem and then its logical volume (using lvreduce) and move on. One caution: shrink the filesystem first, and to the same target size you hand to lvreduce, or the end of the filesystem will be cut off and data destroyed; the sketch below shows the safe order.
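As a concrete sketch, suppose a hypothetical Ext3 volume /dev/mydata/opt, mounted at /opt, turned out to be oversized and we want it down to 75GB:

    # umount /opt
    # e2fsck -f /dev/mydata/opt
    # resize2fs /dev/mydata/opt 75G
    # lvreduce -L75G /dev/mydata/opt
    # mount /opt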

And just as easily as you added overall capacity with a new disk, if you have been running with more disks than you need in a particular volume group, you can migrate a disk’s data off with pvmove and remove the disk with vgreduce, then add it to a needier volume group or to a different system entirely. Other LVM tools perform further maintenance tasks, such as taking a snapshot of a logical volume to serve as an easy backup source.
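For instance, retiring /dev/sdb1 from mydata looks like this (pvmove needs enough free space on the group’s other disks to absorb the data), and a snapshot is just another lvcreate; the name homesnap and the 5GB size here are hypothetical, since a snapshot only needs room for the blocks that change while it exists:

    # pvmove /dev/sdb1
    # vgreduce mydata /dev/sdb1
    # pvremove /dev/sdb1
    # lvcreate --snapshot --name homesnap --size 5G /dev/mydata/home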

The question of when to set up multiple volume groups is an interesting one; the answer may depend on system policy, physical access to hardware, or other esoteric concerns. One interesting option, though, is that LVM physical volumes and volume groups can be assembled out of RAID arrays. Setting up a RAID system is a topic for another how-to, but you can declare a RAID device (such as /dev/md0) as a physical volume and build a volume group on top of it. Just don’t combine a RAID array and non-RAID physical volumes in a single volume group; any logical volume that lands on the plain disk loses the redundancy the array provides.
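The commands are the same as for a plain disk. Assuming an existing array at /dev/md0 and a hypothetical volume group name of myraid:

    # pvcreate /dev/md0
    # vgcreate myraid /dev/md0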

LVM is a powerful and flexible system, but don’t be intimidated by the layers of virtual storage it provides. Once you actually take the time to implement a filesystem on LVM, its design reveals itself to be quite straightforward. There are some important limitations, such as the fact that /boot cannot reside on an LVM volume on systems that boot with the original GRUB, so you should always read the documentation thoroughly, but a little LVM work now could save you a lot of headaches the next time you add another hard disk.