
[Xen-devel] [PATCH OSSTEST v5 00/24] add distro domU testing flight



Hi,

It's been a long time since v3 of this series, which was
http://article.gmane.org/gmane.comp.emulators.xen.devel/224415; that
contains the background, which I won't repeat here. v4 was
http://article.gmane.org/gmane.comp.emulators.xen.devel/238558.

Since v4 I've addressed the review feedback and tweaked the slack space
for non-LVM volumes based on some test runs (1M wasn't enough for
raw).
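
To make the "slack" idea concrete, here is a minimal illustrative
sketch in Python (this is not the actual osstest code; all names and
values are hypothetical, except the observation above that 1M is too
little for raw):

    # Hypothetical sketch: headroom added on top of the filesystem
    # size when sizing a guest disk volume, varying by volume type.
    SLACK_MB = {
        'lvm':   1,    # minimal slack was fine here
        'qcow2': 16,   # hypothetical value
        'vhd':   16,   # hypothetical value
        'raw':   64,   # 1M was observed to be insufficient for raw
    }

    def volume_size_mb(fs_size_mb, voltype):
        # Total size to allocate for the guest's disk volume.
        return fs_size_mb + SLACK_MB.get(voltype, 64)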

As last time, some patches in here should be useful to the Intel folks
doing the nested virt testing; in particular, the refactoring of how
overlays and ssh host keys are handled will help with installing a
guest to be treated as the L1 host. LongTao, I've CCd you only on the
3 patches which I think will be of interest, rather than the full
24-patch series. Note that some of this differs this time around based
on the review feedback.

I've run some more adhoc jobs on the Cambridge instance, the results of
the last of which are at:
http://www.chiark.greenend.org.uk/~xensrcts/logs/37076/

I've also run an adhoc xen-unstable test of the new jobs there; results
are at http://www.chiark.greenend.org.uk/~xensrcts/logs/37077/. The
interesting jobs are *-pvgrub and *-pygrub for the bootloader tests and
*-{qcow2,raw,vhd} for the image format tests; LVM is covered by the
existing -{xl,libvirt} jobs.

When I originally started this work I envisioned a flight running on
the main production instance (in Cambridge at the time). Now that we
have the new colo I still consider the main production instance the
best home; however, given that the new colo is not yet up to full
capacity, we could also consider running this flight in Cambridge for
the time being (once the control VM is no longer so sick). What do you
think?

Also, when running the adhoc tests, the sheer number of jobs involved
(and my hope is that there will be more in the future as other distros
get in on the act) made me consider splitting this into multiple
distros-debian-{squeeze,wheezy,jessie,sid,daily} flights, as sketched
below. Thoughts?
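
By way of illustration only (this is not make-flight code, and the job
names are made up), the split would mean one flight per suite, each
carrying only that suite's jobs:

    # Hypothetical sketch of the proposed per-suite split.
    SUITES = ['squeeze', 'wheezy', 'jessie', 'sid', 'daily']
    ARCHES = ['i386', 'amd64']  # illustrative job axis only

    def jobs_for(suite):
        # Each flight carries only one suite's jobs, keeping the
        # per-flight job count manageable as more distros join in.
        return ['test-%s-%s-pv' % (arch, suite) for arch in ARCHES]

    for suite in SUITES:
        print('distros-debian-%s: %s'
              % (suite, ', '.join(jobs_for(suite))))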

Summary of (A)cks and (M)odified:

M        TestSupport: Add helper to fetch a URL on a host
 A       TestSupport: allow caller of prepareguest_part_xencfg to specify viftype
 A       create_webfile: Support use with guests as well as hosts.
M        Debian: refactor code to add preseed commands to the preseed file
M        Debian: refactor preseeding of .ssh directories
MA       Debian: Refactor installation of overlays, so it can be used for guests too
M        Debian: add preseed_create_guest helper
 A       make-flight: Handle $BUILD_LVEXTEND_MAX in mfi-common:create_build_jobs()
         distros: add support for installing Debian PV guests via d-i, flight and jobs
         distros: support booting Debian PV (d-i installed) guests with pvgrub.
         distros: Support pvgrub for Wheezy too.
         distros: support PV guest install from Debian netinst media.
         Test pygrub and pvgrub on the regular flights
         distros: add branch infrastructure
 A       distros: Run a flight over the weekend.
         Debian: Handle lack of bootloader support in d-i on ARM.
         standalone: propagate result of command from with_logging
         ts-debian-di-install: Refactor root_disk specification
         make-flight: refactor PV debian tests
M        Add testing of non-LVM/phy disk backends.
         mfi-common: Allow make-*flight to filter the set of build jobs to include
         make-distros-flight: don't bother building for XSM.
         distros: email only me on play flights
         ts-debian-di-install: Use ftp.debian.org directly

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


