[Xen-devel] [xen-4.1-testing test] 17243: regressions - FAIL

flight 17243 xen-4.1-testing real [real]

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 17208
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 17208

Tests which did not succeed, but are not blocking:
 build-armhf                   4 xen-build                    fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  b56fe0b46f98afe918455bae193d543d3ffc4598
baseline version:
 xen                  ab01d96364683bb10619b794611b9244ba5ea227

People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  George Dunlap <george.dunlap@xxxxxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
  Keir Fraser <keir@xxxxxxx>
  Tim Deegan <tim@xxxxxxx>

 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    

sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at

Test harness code can be found at

Not pushing.

commit b56fe0b46f98afe918455bae193d543d3ffc4598
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Mar 12 16:46:09 2013 +0100

    x86/MSI: add mechanism to fully protect MSI-X table from PV guest accesses

    This adds two new physdev operations for Dom0 to invoke when resource
    allocation for devices is known to be complete, so that the hypervisor
    can arrange for the respective MMIO ranges to be marked read-only
    before an eventual guest getting such a device assigned even gets
    started, such that it won't be able to set up writable mappings for
    these MMIO ranges before Xen has a chance to protect them.

    This also addresses another issue with the code being modified here,
    in that so far write protection for the address ranges in question got
    set up only once during the lifetime of a device (i.e. until either
    system shutdown or device hot removal), while teardown happened when
    the last interrupt was disposed of by the guest (which at least allowed
    the tables to be writable when the device got assigned to a second
    guest [instance] after the first terminated).
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    master changeset: 4245d331e0e75de8d1bddbbb518f3a8ce6d0bb7e
    master date: 2013-03-08 14:05:34 +0100

commit 672c075ab39e2dea517a75d5e6091e6df82c3f71
Author: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
Date:   Tue Mar 12 16:30:41 2013 +0100

    credit1: Use atomic bit operations for the flags structure

    The flags structure is not protected by locks (or more precisely,
    it is protected using an inconsistent set of locks); we therefore need
    to make sure that all accesses are atomic-safe.  This is particularly
    important in the case of the PARKED flag, which if clobbered while
    changing the YIELD bit will leave a vcpu wedged in an offline state.

    Using the atomic bitops also requires us to change the size of the
    "flags" field.
    Spotted-by: Igor Pavlikevich <ipavlikevich@xxxxxxxxx>
    Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
    master changeset: be6507509454adf3bb5a50b9406c88504e996d5a
    master date: 2013-03-04 13:37:39 +0100

commit d8f9d5b8b98de7d035c5064fe99a44f0a383842f
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Mar 12 16:30:04 2013 +0100

    x86: defer processing events on the NMI exit path

    Otherwise, we may end up in the scheduler, keeping NMIs masked for a
    possibly unbounded period of time (until whenever the next IRET gets
    executed). Enforce timely event processing by sending a self IPI.

    Of course it's open for discussion whether to always use the straight
    exit path from handle_ist_exception.
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Keir Fraser <keir@xxxxxxx>
    master changeset: d463b005bbd6475ed930a302821efe239e1b2cf9
    master date: 2013-03-04 10:19:34 +0100

commit 16c0caad7d44e80535d44c0690a691d22f74d378
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Mar 12 16:29:11 2013 +0100

    SEDF: avoid gathering vCPU-s on pCPU0

    The introduction of vcpu_force_reschedule() in 14320:215b799fa181 was
    incompatible with the SEDF scheduler: Any vCPU using
    VCPUOP_stop_periodic_timer (e.g. any vCPU of half way modern PV Linux
    guests) ends up on pCPU0 after that call. Obviously, running all PV
    guests' (and namely Dom0's) vCPU-s on pCPU0 causes problems for those
    guests rather sooner than later.

    So the main thing that was clearly wrong (and bogus from the beginning)
    was the use of cpumask_first() in sedf_pick_cpu(). It is being replaced
    by a construct that prefers to put the vCPU back on the pCPU that it
    got launched on.

    However, there's one more glitch: When reducing the affinity of a vCPU
    temporarily, and then widening it again to a set that includes the pCPU
    that the vCPU was last running on, the generic scheduler code would not
    force a migration of that vCPU, and hence it would forever stay on the
    pCPU it last ran on. Since that can again create a load imbalance, the
    SEDF scheduler wants a migration to happen regardless of it being
    apparently unnecessary.

    Of course, an alternative to checking for SEDF explicitly in
    vcpu_set_affinity() would be to introduce a flags field in struct
    scheduler, and have SEDF set an "always-migrate-on-affinity-change"
    flag.
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Keir Fraser <keir@xxxxxxx>
    master changeset: e6a6fd63652814e5c36a0016c082032f798ced1f
    master date: 2013-03-04 10:17:52 +0100

commit 44e4100a138b2caf6a0947ad7ddef65c7678dbf3
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Mar 12 16:24:58 2013 +0100

    x86: make certain memory sub-ops return valid values

    When a domain's shared info field "max_pfn" is zero,
    domain_get_maximum_gpfn() so far returned ULONG_MAX, which
    do_memory_op() in turn converted to -1 (i.e. -EPERM). Make the former
    always return a sensible number (i.e. zero if the field was zero) and
    have the latter no longer truncate return values.
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Tim Deegan <tim@xxxxxxx>
    master changeset: 7ffc9779aa5120c5098d938cb88f69a1dda9a0fe
    master date: 2013-03-04 10:16:04 +0100

commit 6b2f399604ba0d46203855658b72b9fcba76a7ac
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Mar 12 16:23:55 2013 +0100

    fix compat memory exchange op splitting

    A shift with a negative count was erroneously used here, yielding
    undefined behavior.
    Reported-by: Xi Wang <xi@xxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Keir Fraser <keir@xxxxxxx>
    master changeset: 53decd322157e922cac2988e07da6d39538c8033
    master date: 2013-03-01 16:59:49 +0100

commit 0ad719247a5072f3a1af6c77faf203e724acf2fd
Author: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Date:   Tue Mar 12 16:23:15 2013 +0100

    Avoid stale pointer when moving domain to another cpupool

    When a domain is moved to another cpupool, the scheduler private data
    in the vcpu and domain structures must never point to already freed
    memory.

    While at it, simplify sched_init_vcpu() by using DOM2OP instead of
    VCPU2OP.
    Signed-off-by: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
    This also required commit dbfa7bba0f213b1802e1900b71bc34837c30ee52,
    "xen, cpupools: Fix cpupool-move to make more consistent":

    The full order for creating new private data structures when moving
    from one pool to another is now:

    * Allocate all new structures
      - Allocate a new private domain structure (but don't point there yet)
      - Allocate per-vcpu data structures (but don't point there yet)
    * Remove old structures
      - Remove each vcpu, freeing the associated data structure
      - Free the domain data structure
    * Switch to the new structures
      - Set the domain to the new cpupool, with the new private domain
      - Set each vcpu to the respective new structure, and insert

    This is in line with a (fairly reasonable) assumption in credit2 that
    the private structure of the domain will be the private structure
    pointed to by the per-vcpu private structure.

    Also fix a bug in which insert_vcpu was called with the *old* vcpu
    ops rather than the new ones.
    Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
    master changeset: 482300def7d08e773ccd2a0d978bcb9469fdd810
    master date: 2013-02-28 14:56:45 +0000

commit 677c9126023e79cb8e0a74bc002bd7af06b53214
Author: Tim Deegan <tim@xxxxxxx>
Date:   Tue Mar 12 16:22:03 2013 +0100

    vmx: fix handling of NMI VMEXIT.

    Call do_nmi() directly and explicitly re-enable NMIs rather than
    raising an NMI through the APIC. Since NMIs are disabled after the
    VMEXIT, the raised NMI would be blocked until the next IRET
    instruction (i.e. the next real interrupt, or after scheduling a PV
    guest) and in the meantime the guest will spin taking NMI VMEXITs.

    Also, handle NMIs before re-enabling interrupts, since if we handle an
    interrupt (and therefore IRET) before calling do_nmi(), we may end up
    running the NMI handler with NMIs enabled.
    Signed-off-by: Tim Deegan <tim@xxxxxxx>
    Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
    master changeset: 7dd3b06ff031c9a8c727df16c5def2afb382101c
    master date: 2013-02-28 14:00:18 +0000
(qemu changes not included)

Xen-devel mailing list


