Bug#800845: autopkgtest: Add support for nested VMs
christian at iwakd.de
Fri Mar 4 11:54:52 UTC 2016
On 03/04/2016 12:11 PM, Martin Pitt wrote:
>>> As I already wrote before, I really don't like this as a default, as
>>> it introduces unpredictability into testbeds. We've seen tests that
>>> behave differently with different CPU models (X.org's LLVM
>>> software render and mesa for example), and getting random test
>>> passes/failures depending on which hw you run it on is really
>> Isn't this then a bug of the testbed?
> How so? Host hardware having different capabilities is not a bug :-)
> But I'd like to abstract away from this as much as possible for
> production runs, i. e. make the test environment predictable as much
> as we can.
OK, so it depends on how you view testing. There are two ways of
looking at this: my perspective when I wrote that was that variation
in the hardware base helps you find bugs in your software more
easily, whereas you can also take the view that tests are primarily
for regression testing.
I think both views are valid.
>> Of course, we could pick SandyBridge for Intel and something else
>> for AMD (no idea what would be a good idea there, I haven't followed
>> AMD for a while) and reduce the possible CPUs to just 2.
> That sounds like a reasonable first start. So if the host has "vmx",
> use SandyBridge, if it has "svm" use Opteron_G5, and if it has
> neither, just continue to use whatever QEMU defaults to?
I actually have a better idea: QEMU can set individual CPU flags.
I just tried the following:
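(The exact command was scrubbed from the archive; judging from the
flags discussed below, it was presumably of roughly this shape, with
a placeholder image path, and +vmx assuming an Intel host:)

```
# hypothetical reconstruction; use +svm instead of +vmx on AMD
qemu-system-x86_64 -enable-kvm -cpu kvm64,+vmx \
    -m 1024 -drive file=/path/to/image,format=qcow2
```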
On my system, the booted instance contains /dev/kvm, the kvm_intel
driver is loaded (without me having to load it explicitly), and
running QEMU inside with KVM support works.
Technically, we should also add +lahf_lm there, because that's also
part of what KVM needs to do virtualization properly. Since this
feature is tied to virtualization, I don't think there are CPUs out
there that support vmx/svm but not lahf_lm. (But even if: QEMU will
then just print a warning and continue, it's not fatal.)
So we could maybe do the following:
- if x86_64 and default QEMU command:
  - if vmx in CPU flags:
      use -cpu kvm64,+vmx,+lahf_lm
  - else if svm in CPU flags:
      use -cpu kvm64,+svm,+lahf_lm
  - otherwise:
      don't pass -cpu
That way, you have the same CPU on any hardware, with the exception
of the vendor-specific VT extensions.
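The decision could be sketched like this (a sketch only, not the
actual autopkgtest code; the function name and the separate
architecture/flags-file parameters are mine, for illustration):

```shell
# Compute the -cpu argument to pass to QEMU, based on which
# virtualization flag (vmx or svm) the host CPU advertises.
cpu_args() {
    # $1: machine architecture (as from uname -m)
    # $2: file listing CPU flags (normally /proc/cpuinfo)
    [ "$1" = "x86_64" ] || return 0
    if grep -qw vmx "$2"; then
        echo "-cpu kvm64,+vmx,+lahf_lm"
    elif grep -qw svm "$2"; then
        echo "-cpu kvm64,+svm,+lahf_lm"
    else
        :  # neither flag present: don't pass -cpu at all
    fi
}
```

Usage would be something like `qemu-system-x86_64 $(cpu_args "$(uname -m)" /proc/cpuinfo) ...`.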
If you're agreeable to that, I'll prepare a patch.
> That said, at least on the machines I tested, nested kvm does work in
> principle (maybe some subtle bugs) with the default -cpu too.
No, it doesn't. Nested QEMU does (and hence adt-virt-qemu will run
now that you merged baseimage support), but the inner QEMU is not
using KVM then.
Try that on your system (with current autopkgtest):
- adt-run $SOMEPKG --shell --- adt-virt-qemu /path/to/image
- Log in to that instance
Then try doing:
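(The commands were scrubbed here; given that the complaint below is
about missing vmx/svm flags and a missing /dev/kvm, they were
presumably along these lines:)

```
modprobe kvm_intel    # on an Intel host
modprobe kvm_amd      # on an AMD host
```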
(Depending on your hardware.)
Either will fail because of missing CPU flags, and /dev/kvm will not
exist. (Unless you have a QEMU that defaults to some other CPU type,
or you explicitly set the CPU type, perhaps in some configuration you
forgot about.)
>> This is a tangent, but wasn't somebody working on qemu testbeds for
>> the Debian CI?
> I'm not aware of that. However, it's more realistic to use the ssh
> runner with the nova setup script, given that production debci already
> runs in the cloud. (This won't be able to use this nesting trick,
This is kind of a bummer for me, because I'd really like to have
continuous integration for actual functionality that relies on
nested VMs. But in principle QEMU should run in the cloud, right?
So it's only a question of whether it's somehow possible to get
a working base image in there?