[conspire] Lots more virtual machine (VM) fun. :-) Re: Yummy, yummy, virtual within virtual! :-)

Michael Paoli Michael.Paoli at cal.berkeley.edu
Fri Jun 7 08:12:24 PDT 2019


So ... did all the following, using qemu-kvm & virsh & friends on Debian
stable (currently using Debian GNU/Linux 9.9 (stretch) x86_64),
virtual within virtual (a virtual machine ("guest") within another
"guest"), and live migration mania without common shared storage!

So ... teensy bit of background/terminology, and at least some of what I
did.

Virtual Machine (VM) (sometimes also called "guest") - the hardware is
virtual (in software), not physical.

Physical - the physical machine, sometimes called "host", as it may host
virtual machine(s).

So, with VMs, we can do things like install and run operating systems on
VMs.  I'm (at least mostly) using qemu-kvm (at least that's how Debian
names the package).  Some years back, these were two separate code bases:
QEMU - which does VMs purely by software emulation
KVM - does VMs too, but takes advantage of physical hardware that has
   virtualization support - most notably, for VMs whose virtual
   hardware is the same as/similar to the physical's, this allows the
   VMs to achieve near native (direct on physical) levels of
   performance.
Anyway, the qemu-kvm fork was eventually merged back into QEMU;
Debian calls the package qemu-kvm, other distros may name it
differently.
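
(Quick way to check for that hardware virtualization support - at
least on x86, Intel's shows up as the vmx CPU flag, and AMD's as svm;
the flag names are standard, the rest is just one illustrative way to
check:
$ grep -E -c '(vmx|svm)' /proc/cpuinfo
A non-zero count means the physical CPU has the extensions; Debian's
cpu-checker package also has a kvm-ok utility for a similar check.)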

virsh - highly handy utility of libvirt(-clients).  It allows one to do
lots of things with VMs (notably managing them).  The especially cool
thing about virsh, libvirt, etc., is that they handle multiple
virtualization technologies/software.  So, rather than, e.g., using one
syntax for Xen (another virtualization technology) and a totally
different syntax for qemu-kvm, by using libvirt (e.g. virsh), one gets
to use a common syntax for any supported virtualization
technology/software.  About the only occasional difference is when one
gives an option or the like to specify which virtualization technology
to use - e.g. when creating a VM.
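
E.g., a wee illustrative sketch (not from my actual session) - same
virsh syntax, only the libvirt connection URI differs by hypervisor
(exact URI forms can vary a bit by libvirt version):
$ virsh --connect qemu:///system list --all
$ virsh --connect xen:///system list --all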

Live Migrations.  That's when one moves a VM from one physical host to
another, with the VM running the whole time.  This generally also
includes all networking connectivity persisting along the way - e.g.
existing or in progress TCP connections, etc., continue as usual.
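
In virsh terms, a basic live migration can be roughly as simple as,
e.g. (name_of_VM and desthost merely illustrative placeholders, not
from my actual session):
$ sudo virsh migrate --live --persistent name_of_VM qemu+ssh://desthost/system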

So ... classically, live migration of VMs using persistent storage
(disks/drives), most notably read-write (rw), needs common
storage - most notably, both physicals must have access to the same
storage of any VMs doing live migration (e.g. via
NAS, SAN, or other common storage such as SCSI with both hosts having
controllers on the same SCSI bus).  Well...

virsh's live migration capabilities have this wonderful option:
--copy-storage-all
What it does, when used on live migrations, is automagically
live copy the VM's storage between the two physicals - no need
for any common storage between the two physicals.  Behind
the scenes, it sets up and uses network block devices (another
cool Linux capability - one can have a block device, such as drive
storage, presented as a block device over the network).
It switches the running VM to use network block device(s),
then mirrors the storage between the two physicals until they're
maintained in sync, and after migration it "breaks" (separates)
the mirrors, leaving the VM running on the newly copied set on
the to/target physical - all the while, the VM consistently accessing
its storage, and the VM itself never really knowing the difference.
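
Roughly, e.g. (again, name_of_VM and desthost as illustrative
placeholders - and note, if I recall correctly, the target generally
needs a suitable disk image of matching size already in place, e.g.
pre-created via qemu-img create, before such a migration):
$ sudo virsh migrate --live --copy-storage-all --persistent --undefinesource name_of_VM qemu+ssh://desthost/system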

Virtual within virtual.  Maybe not exactly the Holy Grail of
virtualization, but if nothing else, it's a good test/demonstration of
thoroughness of one's virtualization technology.  If one can use the
virtualization technology to run a VM within a VM (of the same
virtualization technology) ... and notwithstanding resource limits,
do that nested arbitrarily deep, that's a fairly significant
milestone in virtualization.
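
(If one wants hardware-accelerated KVM *inside* a guest - rather than
software-only emulation, as I used for s - roughly, and just as a
sketch: the kvm_intel (or kvm_amd) module's nested parameter needs to
be enabled on the physical, e.g.:
$ cat /sys/module/kvm_intel/parameters/nested
Y
$ echo 'options kvm_intel nested=1' | sudo tee /etc/modprobe.d/kvm-nested.conf
(file name there arbitrary), and the guest's virtual CPU needs the
vmx/svm flag exposed, e.g. via <cpu mode='host-model'/> or
host-passthrough in its domain XML (virsh edit).)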

Anyway, did that fairly recently (VM within VM).
Also fairly recently got some kinks worked out on live migrations with
--copy-storage-all
(made some minor changes for that to work on the current Debian stable;
I'd earlier used it and had it (mostly? significantly(!)) working on
Debian oldstable (what had been the Debian stable before the current
Debian stable)).

Ah, but lots of fun with live migrations,
--copy-storage-all
and also virtual within virtual.

So ... what did I do?
Two physical hosts/machines, call them: p1 and p2
Three VMs, call the first two: vh1 and vh2
We call these vh1 and vh2, because they are VMs, and also
they themselves sometimes host other VM(s), so think *both* VM *and*
Host - "vh" as in Vm Host.
And call our 3rd VM:
s
I set that one up not using hardware virtualization support (think S for
Software (only) and Smaller), and it's a fair bit Smaller than
vh1 and vh2.
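
(In libvirt domain XML terms, that difference is essentially the
domain type attribute - a sketch, not my exact configs:
<domain type='kvm'>   - hardware virtualization support
<domain type='qemu'>  - pure software emulation
and virt-install has a corresponding --virt-type kvm|qemu option.)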

So, what did I do with all these? ;-)
Set up all the VMs: vh1, vh2, s
(and all of them also running Debian stable ... not that that need be
the case - I also have VMs that run Debian unstable, and Debian
oldstable).
Got all three of them (vh1, vh2, s) running on p1
Did lots 'o live migrations:
s to p2 & back
s to vh1
vh1 to p2
s to vh2
vh1 to p1
s to vh1
s to p1
And the entire time, all 3 VMs remained up and running,
and even persisted an active ssh connection to s throughout (it was
outputting the time, about every second, for me to watch it).
Note also that when s was running on vh1, and vh1 was moved to p2,
even in that case, s continued running atop vh1 the entire
time, even though its host vh1 (itself a VM) was live migrated to p2.
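
(That watching bit was approximately just, e.g.:
$ ssh root@s 'while :; do date; sleep 1; done'
with the timestamps ticking right along, uninterrupted, through all
of the above migrations.)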

> From: "Michael Paoli" <Michael.Paoli at cal.berkeley.edu>
> Subject: Yummy, yummy, virtual within virtual!  :-)
> Date: Mon, 03 Jun 2019 08:29:40 -0700

> Now, more typically I'd do:
> $ sudo virsh start name_of_VM --console
> But instead here I'm skipping all the console boot stuff
> that would otherwise be seen on the serial console (I also
> have the "quiet" option *not* present on the kernel options/arguments).
>
> $ sudo virsh start host1; sleep 30; sudo virsh console host1; sudo  
> virsh list --all
> Domain host1 started
>
> Connected to domain host1
> Escape character is ^]
>
> Debian GNU/Linux 9 debian ttyS0
>
> debian login: root
> Password:
> Last login: Mon Jun  3 06:32:36 UTC 2019 on ttyS0
> Linux debian 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u2 (2019-05-13) x86_64
>
> The programs included with the Debian GNU/Linux system are free software;
> the exact distribution terms for each program are described in the
> individual files in /usr/share/doc/*/copyright.
>
> Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
> permitted by applicable law.
> root@debian:~#
>
> virsh list --all
>  Id    Name                           State
> ----------------------------------------------------
>  -     swvirtonly                     shut off
>
> root@debian:~# virsh start swvirtonly; sleep 30; virsh console  
> swvirtonly; virsh list --all
> Connected to domain swvirtonly
> Escape character is ^]
> ... [lots 'o serial console output - should've allowed more than
> ... 30 seconds - virtual *without* hardware virtualization support
> ... is significantly slower]
>
> Debian GNU/Linux 9 debian ttyS0
>
> debian login: root
> Password:
> Last login: Mon Jun  3 14:14:20 UTC 2019 on ttyS0
> Linux debian 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u2 (2019-05-13) x86_64
>
> The programs included with the Debian GNU/Linux system are free software;
> the exact distribution terms for each program are described in the
> individual files in /usr/share/doc/*/copyright.
>
> Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
> permitted by applicable law.
> root@debian:~#
>
> And, yes, I do have a (testing) use case.
> I want to test some mucking about with the virtualization software
> itself ... but at least first without doing it on physical host,
> and see if I'm able to make the changes I want ... before doing
> it "for real" on the production physical hosts.
> If all appears well in (my) testing, then I'd next do
> physical that doesn't have VMs running on it,
> then move VMs to there, make sure they still run okay,
> then same changes to VM software on the other nominally
> primary physical host - while no VMs are running there ...
> then move the VMs back to the nominal primary.
> And also ... test in VM first?  Yes, the changes may be
> hazardous/difficult to back out of on the physical.
> On the virtual(s) dang easy - can always copy the full
> data and metadata first (disk image and virtual machine
> configuration data).



