autopkgtest: Generic Test Harness / Network Client Service testing
christian at iwakd.de
Thu Sep 17 16:34:46 UTC 2015
On 2015-09-16 09:56, Martin Pitt wrote:
> sorry for the ridiculously long time to reply -- we discussed that at
> debconf already, but for the sake of the mail archives I'll put that
> here too.
Actually, we didn't have an opportunity to discuss it at DebConf, but
thanks for answering nevertheless. :-)
> Christian Seiler [2015-06-26 15:32 +0200]:
>> The patch itself contains a file debian/tests/testlib.py, which is
>> basically a python library providing all sorts of useful helpers for
>> testing daemons etc. But it's also >1000 lines long and there are a
>> couple of things in there that I'd like to improve. A quick search
>> in Debian revealed that this library has already been copied to 3
>> Debian packages:
>> Personally, I think having an embedded copy of the whole thing is
>> probably a bad idea in the long run - especially if I start changing
>> it for the convenience of the package I'm currently working on,
>> effectively creating divergent versions of the same piece of code.
> Centralization is usually a good thing indeed. But I'm not really
> convinced that this particular testlib.py is a good idea -- a lot of
> the functions are implemented poorly.
Before starting this thread I only took a cursory glance at the
testlib.py that was attached to the bug report I mentioned. The Python
script that used testlib.py for open-iscsi seemed simple enough in the
usage of that library.
After DebConf I took a closer look at testlib.py (to also prepare a
patch for autopkgtest) and unfortunately I must agree with your
assessment: the code quality is rather lacking.
Therefore, I'm no longer advocating for including that specific library.
> Making this an official API would mean that this first has to be
> essentially rewritten from scratch in a more proper way and leaving
> out all the "abstraction for abstraction's sake" wrappers -- in tests
> it's almost always better to directly say what you want to do instead
> of relying on third-party abstractions which can change underneath
> you. It would also need to grow a test suite to guarantee API
> stability.
>> I think it would make sense to package that - and autopkgtest seems to
>> be the best place for that (maybe an additional binary package?). What
>> do you think?
> TBH, no. This is on a completely different abstraction level.
> autopkgtest is a test execution mechanism, and its interface to a test
> is a blackbox executable. It should *not* get into the business of
> *how* you write your tests -- that's the level of e. g. Python's
> unittest, shunit2, JUnit, etc. Conversely, the above kind of "test
> utilities" is in no way specific to autopkgtest -- it applies equally
> well to tests that you run during "make check", for example. Something
> like this would fit much better into e. g. python-testtools, or
> perhaps a new python library.
I disagree here. If you look at e.g. JUnit, it consists of two parts:
a base library that provides a lot of useful helper functions for
writing tests (mainly a lot of different assertions) and a framework
for running those unit tests. (And JUnit etc. have things like
aggregation of test statistics etc., so it's not like test execution is
something that's just added on top of the frameworks - it's an integral
part.)
To me unit testing frameworks are orthogonal to what autopkgtest does:
I would put autopkgtest more in the realm of "integration testing" and
not unit testing: unit tests are things that I run from the package
build directory, so that I can check that the compiler didn't introduce
any obvious bugs in the software - whereas autopkgtest tests the "final
product" that's installed on the users' systems - to make sure that it
works properly out of the box. Running standard unit tests from within
autopkgtest seems relatively pointless to me, they should rather be run
at build time directly.
Currently, autopkgtest only provides the execution environment, but
does not provide any helpers for writing tests. But for the testing of
packages there are several typical actions that many people will have
to perform repeatedly: checking if a daemon is running after installation,
checking if a daemon has actually opened the appropriate socket,
modifying configuration files while keeping backups so they may be
restored after test completion, interacting with debconf, etc. Those
types of operations are not operations you typically do in unit tests.
So I do think that something like that could be very useful.
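As an illustration of the kind of helpers I have in mind (these names are purely hypothetical, they exist in no packaged library), a minimal Python sketch:

```python
# Hypothetical helper sketch for integration tests - NOT an existing API.
import contextlib
import shutil
import socket
import time


def wait_for_port(host, port, timeout=30.0):
    """Poll until a daemon accepts TCP connections on host:port,
    returning True on success and False if the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            # A successful connect means the daemon opened its socket.
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    return False


@contextlib.contextmanager
def config_backup(path):
    """Back up a configuration file and restore it when the test is done,
    so the testbed is left in its original state."""
    backup = path + '.autopkgtest-backup'
    shutil.copy2(path, backup)
    try:
        yield path
    finally:
        shutil.move(backup, path)
```

A test script would then just say `assert wait_for_port('localhost', 3260)` after installing the daemon, instead of repeating the polling boilerplate in every package.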
As said above: after taking a closer look, I agree that testlib.py is
not it. But that doesn't mean that I think it would be pointless to
add something along those lines. And it would be something that would
have to be written from scratch - so I'm not proposing to add something
immediately - I'm just asking you to reconsider your opinion on this
matter.
Also note that providing such a library doesn't mean that people HAVE
to use it - it just means that boilerplate code for autopkgtests
of certain packages could be reduced.
>> Can one tell autopkgtest to provision two VMs? One which is used
>> to provide some generic services - and then a second VM that tests
>> the package itself (both can see each other and the IP addresses of
>> the host and other VMs are available for scripts) in order to test
>> whether it works properly against the service that's set up.
> We discussed that once specifically with the QEMU runner: There was a
> patch to export its own "outside" image into the VM, so that your test
> can depend on qemu-system and run a nested VM inside the main one. But
> at that time nested KVM was still rather brittle (I think it's much
> better these days).
I've used nested KVM since Wheezy without any (major) issues - the only
thing is that memory requirements are not necessarily trivial - and you
need to make sure the kvm_(intel|amd) modules have the nested=1 module
option set on the outermost host, which is NOT the default. Don't know
about other architectures. (And I've only tried one nesting level, no
idea as to how far one may go here.)
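For reference, whether nested KVM is enabled can be read from sysfs; the paths below are the standard module-parameter locations on Linux (they only exist while the respective module is loaded), and enabling it is a matter of e.g. `options kvm_intel nested=1` in a file under /etc/modprobe.d/. A small sketch:

```python
# Sketch: detect whether nested KVM is enabled on the host.
from pathlib import Path


def parse_nested(value):
    """Interpret the kvm_(intel|amd) 'nested' parameter: newer kernels
    report 'Y'/'N', older kvm_amd reported '1'/'0'."""
    return value.strip() in ('Y', 'y', '1')


def nested_kvm_enabled():
    """Return True if a loaded KVM module has nesting enabled."""
    for module in ('kvm_intel', 'kvm_amd'):
        param = Path('/sys/module') / module / 'parameters' / 'nested'
        if param.exists():
            return parse_nested(param.read_text())
    return False  # no KVM module loaded at all
```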
> However, that would limit the test to QEMU. These days we have
> containers, which are much more widely applicable. For these cases,
> would it be enough to put the server end (NFS/iscsi etc.) into a
> container that runs in the test environment, and then use that from
> the main test environment?
Unfortunately, that's not easily possible.
NFS: there were two user-space NFSv3 servers: nfs-user-server and
unfs3, but the last Debian version that still packaged them was Lenny
(Debian 5.0) - both were already dropped with Squeeze. While unfs3
doesn't appear to be completely dead (last release was 2009, but last
SVN commit was July 2015), both aren't going to help if you want to
test NFSv4, which is what you should use if you start a new setup.
The current NFS server implementation resides in the kernel - the only
user space components required with modern NFSv4 are for Kerberos
support and ID mapping (the latter not required for non-Kerberos NFSv4
with Kernel 3.4+ - so that you can run without any userspace component
except for utilities that tell the kernel to set things up). The NFS
client is completely namespace-aware nowadays, but the NFS server isn't
(unless something was merged _very_ recently), see Debian BTS #763192:
iSCSI: There are two software server ("target") implementations
available for Linux that are known to me: LIO (merged in the kernel
upstream, its userspace component is packaged in the Debian package
targetcli) and the iSCSI enterprise target (requires DKMS module build,
Debian package iscsitarget). As far as I am aware, nobody has ever
talked about namespace awareness for either (and a quick web search
doesn't reveal anything at first glance). Even the client ("initiator")
doesn't support namespaces currently (although support is planned and
maybe it landed in 4.2 or 4.3, I haven't checked). As far as I know,
there is no userspace iSCSI target ("server") for Linux. (There's no
technical reason that there couldn't be, just that nobody ever bothered
to implement one.)
Therefore: running server-side components in containers is currently
not possible. It might be possible at some point in the future, but
from my experience with other kernel technologies in relation to
containers it will take a while even after the initial implementation
of namespace awareness until I'd consider the support robust enough.
> It becomes more interesting if you need that server for booting the
> client -- in this case you'd need to construct a client-side container
> instead, and leave the server bits in the main test environment. That
> would obviously not work for testing PXE boot (as containers don't do
> that), but it should be sufficient for things like NFS or even iSCSI.
Well, nested VMs, where the outermost VM provides the target (or NFS
server or whatever) and the nested container then connects to the
target and tests the client package would definitely work. (Containers
for the clients not so much.) And since one has complete control over
the network configuration of the outer VM, it's also not a problem to
make sure to find a proper IP range for this stuff.
So thinking about this, the fallout of the discussion could be the
following:
- add an additional restriction 'kvm-runnable' that makes sure that
a KVM instance may be run from inside the testbed. Together with
'isolation-machine' this will mean that either
a) the test is run inside a KVM itself with nested KVM enabled, or
b) the test is run on bare-metal where KVM can trivially be run
(unlikely, but possible, and somebody at DebConf mentioned
that Canonical did some bare-metal tests)
This is relatively generic, because it only makes sure that KVM may
be run from within the environment - so this is relatively future-
proof. The docs for that option should explicitly mention the word
'nested' in the use case description though, to make it clear that
most people will just want nested KVM.
(Well, technically you could always run non-KVM Qemu anywhere, but
that is just far too slow to be of any use.)
Also, I don't know about current memory settings for adt-run, but
maybe 'kvm-runnable' implies that the outermost VM has enough RAM to
make sure that running two VMs (one nested) will not exhaust it.
Note that I see this as orthogonal to 'isolation-machine', so having
only 'kvm-runnable' but no 'isolation-machine' should be valid and
just imply that starting a KVM is possible - whether from within a
VM or a container shouldn't matter. (Of course, initially only the
qemu runner would support this flag, I'm just saying that
semantically I think one shouldn't imply the other.) In the distant
future this could mean that once e.g. the LIO target is namespace-
aware, I could just say I need 'isolation-container' and
'kvm-runnable' for the open-iscsi client package, start the target
in the container directly and run a single KVM for the client tests
from there - which would probably be faster than nested KVMs.
- maybe add a restriction 'needs-qcow2-baseimage' that makes sure that
a qcow2 base image of the same distribution that is bootable with
KVM (i.e. debootstrap + kernel + bootloader + sensible default
configuration, nothing else) is available in the filesystem - since
these kinds of base images are required for the VM where the test is
run anyway, this might save quite a bit of time during the test
(debootstrap does take quite a while to complete in my experience).
- add a note to autopkgtest that for running VMs one should enable
nested KVM and tell people how to do that.
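To make the proposal concrete, a debian/tests/control stanza using these restrictions might look like this ('kvm-runnable' and 'needs-qcow2-baseimage' are the hypothetical restrictions proposed above, which autopkgtest does not currently understand; the test name is made up):

```
Tests: client-against-target
Depends: open-iscsi, qemu-system-x86, qemu-utils
Restrictions: isolation-machine, kvm-runnable, needs-qcow2-baseimage
```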
I think that with these ingredients I should be able to construct
autopkgtests for open-iscsi that actually test the client in
various different ways against a real target. (Well, the
'needs-qcow2-baseimage' is strictly-speaking optional, I can just
install vmdebootstrap and use that - but tests should be as fast as
possible.)
What do you think?