I just posted a new set of patches upstream which add hotplugging support for libguestfs.
Since the project started, most libguestfs programs have looked like this:
/* create the handle */
g = guestfs_create ();

/* add 1 or more disks to examine */
guestfs_add_drive (g, "disk.img");

/* launch the handle */
guestfs_launch (g);
It was not possible to add further disks after launch, because of the way libguestfs works: once the small appliance we use had started up, no more disks could be attached to it.
Except that you can: Linux and qemu have supported disk hotplugging for a long time, but we didn't expose this through libguestfs.
Now that you can use libvirt to run the appliance, it becomes relatively easy for us to add hotplugging, implemented via libvirt's virDomainAttachDevice API. (The actual implementation inside libvirt is very complex, which is why we didn't try to reimplement it in libguestfs.) From the point of view of libguestfs callers, you can now call the add-drive* APIs after launch.
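As a minimal sketch of what this looks like for a caller (assuming a recent libguestfs with the libvirt backend available; the disk filename and label here are made up for illustration):

```c
#include <stdio.h>
#include <stdlib.h>
#include <guestfs.h>

int
main (void)
{
  guestfs_h *g = guestfs_create ();
  if (g == NULL)
    exit (EXIT_FAILURE);

  /* Hotplugging only works with the libvirt backend. */
  if (guestfs_set_backend (g, "libvirt") == -1)
    exit (EXIT_FAILURE);

  /* Launch the appliance first, with no disks attached yet. */
  if (guestfs_launch (g) == -1)
    exit (EXIT_FAILURE);

  /* Now hotplug a disk after launch.  Hotplugged disks must be
   * given a label so they can be referred to later.
   */
  if (guestfs_add_drive_opts (g, "disk.img",
                              GUESTFS_ADD_DRIVE_OPTS_READONLY, 1,
                              GUESTFS_ADD_DRIVE_OPTS_LABEL, "data",
                              -1) == -1)
    exit (EXIT_FAILURE);

  /* ... inspect or read the disk here ... */

  /* Hot-unplug it again by label, ready for the next disk. */
  if (guestfs_remove_drive (g, "data") == -1)
    exit (EXIT_FAILURE);

  guestfs_close (g);
  return 0;
}
```

The point of the add/remove pair is that the same running appliance can be reused across many disks, which is where the performance win (and the security trade-off discussed below) comes from.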
All good news? Apart from the obvious limitation that you have to be using the libvirt backend for it to work at all, is it a good idea to use hotplugging?
From a security point of view, there is a trade-off: if you have to modify lots of guests, hotplugging is faster because you save having to start a new appliance for each guest. But it is also significantly less secure, because one guest may be able to exploit and hijack the appliance and interfere with the disks of the other guests.
In virt-df we accept this risk. It is largely mitigated because the disks are added read-only, so an exploit from one guest cannot make a permanent change to any other guest.
But if you are considering using hotplugging to modify lots of mutually untrustworthy disks, don’t do it. Either use one appliance per group of mutually trusting guests, or (easier) just launch an appliance per guest. (Reading this page may help you to get the best performance in this case).