Redesigning the autopkgtest controller/workers CI system for Ubuntu and Debian

Antonio Terceiro terceiro at debian.org
Fri Mar 14 09:26:10 UTC 2014


Hello all,

On Thu, Mar 13, 2014 at 11:51:55AM +0100, Martin Pitt wrote:
> Status quo in Debian
> --------------------
> http://ci.debian.net is still fairly young, and at the moment pretty
> much a one-man show from Antonio. At this point I want to say a big
> thank you to Antonio for his great work there! It's really nice to get
> regular autopkgtest runs in Debian itself, and thus get more
> attention to it by the developers (e. g. it's shown in DDPO now). It
> is also a goal in Debian to eventually gate unstable→ testing with
> autopkgtests.
>
> Antonio, correct me if I'm wrong, but the current debci setup is
> rather simple: Everything runs on just one machine, just for amd64,
> and this uses adt-virt-schroot for everything. So this doesn't scale,
> and also doesn't extend to other architectures.

The current setup on ci.debian.net is the simplest setup that could
possibly work, but debci was designed from the beginning to make scaling
possible with small extensions. See below.

> debci has a lean homegrown UI which makes it straightforward to get to
> logs of individual package versions. It also provides machine-readable
> json files for everything. It'll need to be extended to
> cover multiple architectures at least, but that doesn't seem too
> complex.

I have made quite a bit of progress compared to the version that is
currently published on ci.debian.net, and I intend to deploy the latest
code really soon now.

The codebase already supports:

- multiple architectures (and multiple suites FWIW).

  The only bit missing here is the UI: I still need to figure out how
  to organize the information from multiple arch/suite pairs. Should it
  be organized in a package-centric way, or in a suite-centric way?

  But right now you can already drive test runs for multiple
  architectures from a single debci setup. For instance, you could
  already run tests for any architecture supported by qemu user
  emulation -- although I haven't tested that yet, nor do I think it is
  useful as a test scenario.

- multiple backends, so it is now possible to implement a new backend
  that runs the tests in ways other than just using adt-virt-schroot
  locally.

  Implementing a new backend is fairly simple: it basically boils down
  to keeping a pristine testbed up to date and, when requested to run a
  test, making the right adt-run call against that testbed in a
  non-destructive way (e.g. using overlays in the case of schroot); see
  the sketch right after this list.
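
To make that concrete, here is a very rough sketch in Python of what a
backend's "run one test" step could look like. The function name, the
schroot name and the exact adt-run arguments are assumptions for
illustration, not debci's actual interface:

    import subprocess

    # Hypothetical backend hook: run the tests for one package against an
    # up-to-date schroot. The schroot itself is assumed to be configured
    # with overlays, so the pristine testbed is never modified.
    def test_package(package, suite="unstable", arch="amd64", output_dir="."):
        cmd = ["adt-run", "--output-dir", output_dir, package,
               "---", "adt-virt-schroot", "%s-%s" % (suite, arch)]
        return subprocess.call(cmd)  # adt-run's exit status encodes the result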

> Suggested new architecture
> --------------------------
> So this obviously needs some cleanup and robustification. We want to
> be able to scale effortlessly in both the numbers of workers we have
> (even dynamically add and remove them according to the current load),
> as well as support new architectures to run tests on. We also want to
> robustify the communication between britney, the autopkgtest
> controller, and the test execution workers.
> 
> So our CI team proposed to build upon the "new world technologies"
> OpenStack's swift (for distributed/redundant network data storage),
> and RabbitMQ (for distributed task queue management). Both of these
> technologies were new to me, but I played around with them a bit and I
> am convinced that these are much more robust, leaner, and easier to
> use than what we have now.
> 
> rabbitmq-server is delightfully simple to set up (apt-get install,
> that's it), and I recently figured out how to locally set up swift
> [3]. rabbitmq would replace the state file rsyncing and Jenkins' job
> control, as well as having to run jenkins-slave on the workers (which
> is quite heavy in terms of dependencies), workers would store the
> logs/artifacts into swift, and the web UI would read and present these
> from swift.
> 
> So we could have the following new architecture:
> 
>  * One host for the "controller" which runs rabbitmq-server. This can
>    (but doesn't need to be) the same server that britney is already
>    running on.
> 
>    britney sends a "test mypkg_1.1" request to the
>    autopkgtest_amd64/autopkgtest_i386/autopkgtest_armhf etc. rabbit
>    queues.
> 
>  * A dynamic set of worker nodes which do the test execution. They
>    read from the autopkgtest_* queue which they are capable of
>    processing, run the test, and store the results in swift. This
>    should have a predictable directory hierarchy, like
>    /autopkgtest/trusty/armhf/foopkg/1.2.3-4/, so that we can avoid
>    having to send back a result pointer.
> 
>  * A swift installation, providing sufficient storage space and
>    redundancy. We already have one for CI/QA in Ubuntu, and we'll need
>    to set up one for Debian (that's the only bit that actually
>    requires some thought and knowledge).

How much space do you expect this to require? Is something more
complicated than a simple filesystem location really needed?

I mean, that filesystem location could be backed by any distributed FS,
really, but do the tools really need to care?
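
To illustrate what I mean: with the predictable hierarchy you describe
above, checking for a result does not need to know what kind of storage
is behind the path. A tiny sketch (paths and helper name are made up):

    import os

    # Hypothetical check: does a result already exist for this
    # package/version? The storage root could be a local directory, an
    # NFS/distributed-FS mount, or some mounted view of a swift container.
    def has_result(storage_root, suite, arch, package, version):
        return os.path.isdir(os.path.join(
            storage_root, "autopkgtest", suite, arch, package, version))

    print(has_result("/srv/ci-results", "trusty", "armhf", "foopkg", "1.2.3-4"))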

>  * Each time britney runs, it checks whether there's a result for the
>    package it requested a test for in swift. That's much better than
>    reading a "results" rabbit queue, as it is resilient against race
>    failures/interruptions in britney (i. e. you can read test results
>    not just once) and generally plays better with the stateless
>    architecture of britney.
> 
>  * We extend debci (i. e. http://ci.debian.net) for multiple
>    architectures and perhaps other missing things, and move to that as
>    a developer-facing frontend for showing artifacts and results. For
>    the Ubuntu CI dashboard etc. we could just read the files from
>    swift directly, or read the aggregated debci .json files.
> 
> I have some throwaway scripts [4] and 3 containers (swift, adt
> controller, adt slave) to evaluate how rabbitmq and swift work and how
> to use them from Python to glue all the components together. This is
> just for learning, but it shows that these APIs are quite pleasant and
> simple, and at the same time robust.

I think an architecture very similar to this could be implemented by
extending debci:

- add a new `remote` backend, which will implement test runs as
  follows:

  - publish a message to an autopkgtest_* queue asking for a given package to
    be tested
  - wait for the test to finish
  - collect results (log file + adt-run exit status)

- add a remote worker daemon that would listen on the queue, run the
  tests against a local backend (schroot, or lxc/kvm when those are
  available), and send the results back on a results queue (log file +
  autopkgtest exit status); a rough sketch of such a daemon follows this
  list

- make britney read from the debci data API (which AFAICT is the plan
  for using the autopkgtest results in Debian testing migration)
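
To make the worker side a bit more concrete, here is a very rough sketch
in Python of what such a daemon could look like. It assumes RabbitMQ via
the pika library (1.x-style API), JSON messages, and an
adt-run/adt-virt-schroot invocation; the queue names, message format,
schroot name and adt-run arguments are all made up for illustration, not
an agreed protocol:

    import json
    import subprocess
    import tempfile

    import pika  # assumption: RabbitMQ client, pika >= 1.0 API

    ARCH = "amd64"
    REQUEST_QUEUE = "autopkgtest_%s" % ARCH  # hypothetical queue name
    RESULTS_QUEUE = "results"                # hypothetical queue name

    def run_test(package):
        """Run the package's tests in a throwaway schroot session and
        return (exit status, log text)."""
        outdir = tempfile.mkdtemp(prefix="debci-")
        cmd = ["adt-run", "--output-dir", outdir, package,
               "---", "adt-virt-schroot", "unstable-%s" % ARCH]
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        log, _ = proc.communicate()
        return proc.returncode, log.decode("utf-8", "replace")

    def on_request(channel, method, properties, body):
        request = json.loads(body.decode())
        status, log = run_test(request["package"])
        result = {"package": request["package"],
                  "exit_status": status, "log": log}
        channel.basic_publish(exchange="", routing_key=RESULTS_QUEUE,
                              body=json.dumps(result))
        channel.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue=REQUEST_QUEUE)
    channel.queue_declare(queue=RESULTS_QUEUE)
    channel.basic_consume(queue=REQUEST_QUEUE, on_message_callback=on_request)
    channel.start_consuming()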

The only point that doesn't fit your ideas is the fact that each debci
run currently _waits_ for all package tests to finish. The reason for
that is very prosaic: it is just to be able to generate the needed
indexes (i.e. the consolidated .json files in /data/$suite-$arch/) in a
concurrency-free way (the overall run is also inside a critical section,
so you cannot have two concurrent debci runs for the same suite/arch
pair).

So even though debci already supports testing packages in parallel, the
total test run time is limited by the slowest package.

Depending on the latency you need, that may not even be a problem. For
Debian, given that the archive updates 4 times a day, with enough
workers running in parallel that would not be a problem IMO. How often
does the Ubuntu archive update?

If that turns out to be a problem, I think it's not too hard to redesign
the index generation to take concurrency into account.
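
For instance (just a sketch of one possible approach; the file names,
index format and locking scheme are assumptions, not current debci
code), index updates could be serialized with a lock file and each new
index published atomically, so readers never see a partial file:

    import fcntl
    import json
    import os

    # Hypothetical: merge fresh results into the consolidated index for one
    # suite/arch pair while tolerating concurrent debci runs.
    def update_index(data_dir, suite, arch, new_results):
        base = os.path.join(data_dir, "%s-%s" % (suite, arch))
        index = os.path.join(base, "packages.json")
        with open(os.path.join(base, ".lock"), "w") as lock:
            fcntl.flock(lock, fcntl.LOCK_EX)  # serialize concurrent writers
            try:
                with open(index) as f:
                    results = json.load(f)
            except IOError:
                results = {}
            results.update(new_results)       # package -> latest result
            tmp = index + ".new"
            with open(tmp, "w") as f:
                json.dump(results, f)
            os.rename(tmp, index)             # atomic replace for readers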

> Discussion
> ----------
> I'd like to know from all of you what you think about this redesign,
> whether you think it's sound, whether you already thought about/worked
> on this problem, and what is missing from this.
> 
> I'm quite happy to work on the implementation (now that the basic
> building blocks are all there this actually shouldn't take long), but
> I'd really like us to get to a common agreement how to design this for
> Debian and Ubuntu.

I hope I have managed to put my vision across in an understandable way.
I am spending most of my free time on debci nowadays; I am quite
invested in it and badly want it to "succeed", so even though I do
believe debci could implement this architecture, please take my points
with a grain of salt.

> If you think it's helpful, we can also organize a Google Hangout and
> talk face to face sometime soon?

That would be nice, I think. Late next week should work for me.

-- 
Antonio Terceiro <terceiro at debian.org>