1. Creating the VM guest using virt-manager
2. Creating the Thin LV
4. Spawning Clone VMs
5.1. Corrupted Storage Device
Here we build a lightweight VM for kernel and device development by leveraging LVM's thin logical volumes. Thin volumes are easy to copy, destroy, and clone. They facilitate a "stateless" VM whose rootfs is immutable between runs -- like you'd see in a container environment.
Statelessness provides all the benefits of a sanitary development environment while still allowing us to pass in a guest kernel that we can recompile in the host.
Here's how to set it all up.
Open up virt-manager and create a new VM. You can use Fedora or Ubuntu or anything that uses a traditional install disk in ISO format.
When you get to storage, just uncheck the box that says Enable storage for this virtual machine.
Give it the name kdev (the cloning steps later in this article assume this name) and check Customize configuration before install.
When you click Finish here, it'll bring you to the screen that lets you customize further. Minimize this for now, and we'll come back to it after we create the LVM thin volume.
If you haven't already done so, you'll need to create a thin pool for your thin LV to reside in. Assuming an LVM volume group named vgmain, the command for creating the pool looks like this:
# lvcreate --type thin-pool -L 100G --name lvVMPool vgmain
With the thin-pool created, the thin LV itself is created thus:
# lvcreate --type thin -V 35G --name lvkdev --thin-pool lvVMPool vgmain
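If you'd like to confirm the result, lvs can report the pool and the thin LV together with how much pool space each has actually consumed (using the vgmain/lvVMPool/lvkdev example names from above):

```shell
# List the pool and thin LV; Data% shows actual pool usage,
# which for a freshly created thin LV should be at or near zero.
lvs -o lv_name,pool_lv,lv_size,data_percent vgmain
```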
Now go back to the virt-manager Customize before Install screen.
Add a hardware device, specifically a storage device.
Check the button that says Select or create custom storage, then click on the Manage... button.
While you're on this screen, make sure the device type is Disk device and the bus type is set to VirtIO.
Now, if your LVM volume group is on the lefthand side as I have here, great. If not, you can click [+] in the lower left to tell libvirt about it. Once the VG is known to libvirt, you'll see a list of its LVs on the right.
Notice that the thin LV does not appear on the right. This is why we needed to create it outside of virt-manager: this interface will only allow the creation and selection of normal LVs, not thin ones.
Click Browse Local at the bottom and enter the pathname of the thin LV manually. Using the names we chose above (VG vgmain, thin pool lvVMPool, thin LV lvkdev), the path you'd enter is /dev/vgmain/lvkdev.
Click on Finish, then Apply, and you've got yourself a new guest VM.
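If you want to sanity-check the attachment from the host's command line, the disk should now appear in the libvirt domain XML (substitute your VM's name if it differs):

```shell
# Show the disk definition in the domain XML; the <source> element
# should point at the thin LV path, /dev/vgmain/lvkdev in our example.
virsh dumpxml kdev | grep -A 4 '<disk'
```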
(I love this expression).
The default graphical terminal that libvirt provides isn't necessary for the kind of command-line testing we'll be doing.
Let's get rid of the graphical terminal. We'll also configure things so that the Linux guest's console is redirected to the terminal so that we can do all our work there.
This will make our startup time faster -- a great improvement.
Use virt-manager to delete the Graphics object, the Video Device, and the SPICE channel. We won't be using these.
Now add console=ttyS0 to your kernel parameters. For example, on Fedora I'd append it to the kernel command line in /etc/default/grub and then run grub2-mkconfig to regenerate the grub configuration.
Then reboot the VM.
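Inside the guest, the change might look like this (a sketch assuming Fedora's traditional BIOS-boot grub.cfg path; EFI installs keep the file elsewhere):

```shell
# Append console=ttyS0 to the kernel command line in /etc/default/grub,
# then regenerate the grub configuration and reboot.
sudo sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 console=ttyS0"/' \
    /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot
```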
Making a clone VM with its own copy of the data is straightforward. We create a snapshot of the thin volume (a "thin snapshot"), then use the virt-clone command, passing it the names of the original VM and the newly created thin volume:
# lvcreate -n lvkdev0 -s vgmain/lvkdev
We'll also need to activate it. Note the use of the -K switch here, since snapshots are created with the activation-skip flag set:
# lvchange -ay -K /dev/vgmain/lvkdev0
This is really all we need for the storage device, and it's a fast operation. Now we'll create the VM using virt-clone:
# virt-clone -o kdev -n kdev0 --preserve-data -f /dev/vgmain/lvkdev0
The newly created clone VM has its own COW storage backed by the original, and its own independent libvirt record.
# virsh list --all
 Id   Name    State
----------------------------
 -    kdev    shut off
 -    kdev0   shut off
Once automated, this entire process is very brief: approximately 2 seconds on my Ryzen 5 2600 with an NVMe PCIe 3.0 Seagate FireCuda drive. That's less time than it takes to boot the VM itself.
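The snapshot, activation, and clone steps above can be collected into one small script. This is a sketch using this article's example names (vgmain, lvkdev, kdev); the commands are echoed so you can inspect the plan first, and the real steps require root:

```shell
#!/bin/bash
# Clone the kdev VM: thin snapshot, activate, virt-clone.
# Commands are echoed; replace the echo with "$@" (and run as root)
# to execute them for real.
set -eu

VG=vgmain
BASE_LV=lvkdev
BASE_VM=kdev
N=${1:-0}               # clone number, e.g. 0 -> lvkdev0 / kdev0

run() { echo "+ $*"; }  # swap body for: "$@"

run lvcreate -n "${BASE_LV}${N}" -s "${VG}/${BASE_LV}"
run lvchange -ay -K "/dev/${VG}/${BASE_LV}${N}"
run virt-clone -o "${BASE_VM}" -n "${BASE_VM}${N}" --preserve-data \
    -f "/dev/${VG}/${BASE_LV}${N}"
```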
Here are some pointers in case you get stuck while using one of these VMs.
If you're doing kernel development, odds are good that you'll be playing with kernel parameters -- for instance, altering your grub kernel command line to pass arguments in. If you get this wrong, your kernel might fail to boot from the device, or might mount it read-only, preventing you from changing the arguments back.
To fix this type of problem, you want to be able to mount the guest partitions from the host.
Your distribution will usually partition the device you gave it as if it were a single physical disk. Your own host OS will not "look into" the thin volume to discover any such partitions. However, they are still accessible. To see these partitions within your thin LV, try:
# partx -s /dev/vgmain/lvkdev
This tool is reading the same information that other disk partitioning tools do, like fdisk and gdisk. To make these partitions "known" to the kernel and accessible for mounting, we can use:
# partprobe /dev/vgmain/lvkdev
You can now confirm that the kernel is aware of the new partitions using lsblk.
# lsblk
...
nvme0n1                         259:0    0 931.5G  0 disk
├─nvme0n1p1                     259:1    0     1G  0 part /boot
├─nvme0n1p2                     259:2    0    64G  0 part
└─nvme0n1p3                     259:3    0 866.5G  0 part
  └─vgmain-lvfastpool_tmeta     253:1    0   500M  0 lvm
    └─vgmain-lvfastpool-tpool   253:3    0   500G  0 lvm
      ├─vgmain-lvfastpool       253:4    0   500G  1 lvm
      └─vgmain-lvkdev           253:15   0    35G  0 lvm
        ├─vgmain-lvkdev1        253:16   0     1G  0 part
        └─vgmain-lvkdev2        253:17   0    34G  0 part
...
Now you can mount as usual:
# mount /dev/mapper/vgmain-lvkdev1 /mnt/kdevboot
# mount /dev/mapper/vgmain-lvkdev2 /mnt/kdevroot
In this case my Fedora 33 install used btrfs for the rootfs, so there will be subvolumes within the second partition, but you get the idea.
Now you can make changes to the guest bootfs/rootfs from the host.
To clear knowledge of these partitions from the kernel, you'd first unmount them and then:
# partx -d --nr 1:2 /dev/vgmain/lvkdev
This will leave behind the device mapper links, which you can remove with:
# dmsetup remove /dev/mapper/vgmain-lvkdev1
# dmsetup remove /dev/mapper/vgmain-lvkdev2
Once again, checking with lsblk will confirm that they are removed. These would be necessary steps if you were going to deactivate the LV or VG, for example.
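Pulled together, the cleanup sequence (using this article's example names and mountpoints, all run as root) looks like this:

```shell
# Unmount the guest filesystems first.
umount /mnt/kdevroot /mnt/kdevboot
# Drop the kernel's knowledge of partitions 1 and 2 of the thin LV.
partx -d --nr 1:2 /dev/vgmain/lvkdev
# Remove the leftover device-mapper links.
dmsetup remove /dev/mapper/vgmain-lvkdev1
dmsetup remove /dev/mapper/vgmain-lvkdev2
# The LV can now be deactivated cleanly if desired.
lvchange -an /dev/vgmain/lvkdev
```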