
[Xen-devel] [xen-4.2-testing test] 17604: regressions - FAIL



flight 17604 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/17604/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build        fail in 17601 REGR. vs. 17481

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10      fail pass in 17601

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install fail in 17601 like 17481

Tests which did not succeed, but are not blocking:
 build-armhf                   4 xen-build                    fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  0984c2b63a7c3e9dfa770b29dac51a5124aecaab
baseline version:
 xen                  2bebeac00164b8fd6fdc98db74df943d927aab06

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Ben Guthro <benjamin.guthro@xxxxxxxxxx>
  Dongxiao Xu <dongxiao.xu@xxxxxxxxx>
  George Dunlap <george.dunlap@xxxxxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Keir Fraser <keir@xxxxxxx>
  Stefan Bader <stefan.bader@xxxxxxxxxxxxx>
  Tim Deegan <tim@xxxxxxx>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0984c2b63a7c3e9dfa770b29dac51a5124aecaab
Author: Tim Deegan <tim@xxxxxxx>
Date:   Tue Apr 9 16:07:23 2013 +0200

    x86/mm/shadow: spurious warning when unmapping xenheap pages.
    
    Xenheap pages will always have an extra typecount, taken in
    share_xen_page_with_guest(), which doesn't come from a shadow PTE.
    Adjust the warning in sh_remove_all_mappings() to account for it.
    
    Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Signed-off-by: Tim Deegan <tim@xxxxxxx>
    Tested-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: cfc515dabe91e3d6c690c68c6a669d6d77fb7ac4
    master date: 2013-04-04 10:14:30 +0100
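
    As a rough sketch of the idea (not the verbatim patch; the
    expected-count logic and message text are assumptions layered on
    real Xen helpers such as is_xen_heap_page()):

        /* Xenheap pages shared with a guest carry one extra typecount,
         * taken in share_xen_page_with_guest(); allow for it before
         * warning that not all mappings could be found. */
        int expected_type_refs = 1;          /* the expected shadow ref */
        if ( is_xen_heap_page(page) )
            expected_type_refs++;            /* the sharing typecount   */
        if ( (page->u.inuse.type_info & PGT_count_mask) > expected_type_refs )
            SHADOW_ERROR("can't find all mappings of mfn %lx\n", mfn_x(mfn));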

commit d5483523365e85c324d06e8e2b025829aae95f41
Author: Stefan Bader <stefan.bader@xxxxxxxxxxxxx>
Date:   Tue Apr 9 16:06:44 2013 +0200

    VMX: Always disable SMEP when guest is in non-paging mode
    
    commit e7dda8ec9fc9020e4f53345cdbb18a2e82e54a65
      VMX: disable SMEP feature when guest is in non-paging mode
    
    disabled the SMEP bit if a guest VCPU was using HAP and was not
    in paging mode. However I could observe VCPUs getting stuck in
    the trampoline after the following patch in the Linux kernel
    changed the way CR4 gets set up:
      x86, realmode: read cr4 and EFER from kernel for 64-bit trampoline
    
    That change sets CR4 from already-set flags, which include the
    SMEP bit. On bare metal this does not matter, as the CPU is in
    non-paging mode at that time. But Xen seems to use the emulated
    non-paging mode regardless of HAP (I verified that HAP was not
    used on the guests where I was seeing the issue).
    
    Therefore it seems right to unset the SMEP bit for a VCPU that is
    not in paging mode, regardless of its HAP usage.
    
    Signed-off-by: Stefan Bader <stefan.bader@xxxxxxxxxxxxx>
    Acked-by: Keir Fraser <keir@xxxxxxx>
    Acked-by: Dongxiao Xu <dongxiao.xu@xxxxxxxxx>
    master commit: 0d2e673a763bc7c2ddf97fed074eb691d325ecc5
    master date: 2013-04-04 10:37:19 +0200
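
    The core of the change can be sketched as follows (illustrative
    only; hvm_paging_enabled() and the hw_cr[] shadow of guest CR4 are
    real Xen constructs, but the surrounding context is simplified):

        /* When the vcpu is in (emulated) non-paging mode, mask SMEP out
         * of the CR4 value loaded into the VMCS -- unconditionally, not
         * only when HAP is in use. */
        if ( !hvm_paging_enabled(v) )
            v->arch.hvm_vcpu.hw_cr[4] &= ~X86_CR4_SMEP;
        __vmwrite(GUEST_CR4, v->arch.hvm_vcpu.hw_cr[4]);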

commit 77cb78851429a5a8509e3dfed466b2580ad5c60d
Author: Ben Guthro <benjamin.guthro@xxxxxxxxxx>
Date:   Tue Apr 9 16:05:52 2013 +0200

    x86/S3: Restore broken vcpu affinity on resume
    
    When in SYS_STATE_suspend, and going through the cpu_disable_scheduler
    path, save a copy of the current cpu affinity, and mark a flag to
    restore it later.
    
    Later, during resume, when the nonboot cpus are brought back online,
    restore these affinities.
    
    Signed-off-by: Ben Guthro <benjamin.guthro@xxxxxxxxxx>
    Acked-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
    Acked-by: Keir Fraser <keir@xxxxxxx>
    master commit: 41e71c2607e036f1ac00df898b8f4acb2d4df7ee
    master date: 2013-04-02 09:52:32 +0200
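
    A minimal sketch of the two halves (the cpu_affinity_saved and
    affinity_broken names are assumptions for illustration, not
    necessarily the patch's own):

        /* Suspend: in the cpu_disable_scheduler() path, remember the
         * affinity about to be broken and flag it for restoration. */
        if ( system_state == SYS_STATE_suspend )
        {
            cpumask_copy(v->cpu_affinity_saved, v->cpu_affinity);
            v->affinity_broken = 1;
        }

        /* Resume: once the nonboot CPUs are back online, put it back. */
        if ( v->affinity_broken )
        {
            cpumask_copy(v->cpu_affinity, v->cpu_affinity_saved);
            v->affinity_broken = 0;
        }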

commit 5da556675105169807ae2bfe232a2cc4c3f2ae1e
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Tue Apr 9 16:04:25 2013 +0200

    x86: irq_move_cleanup_interrupt() must ignore legacy vectors
    
    Since the main loop in the function includes legacy vectors, and since
    vector_irq[] gets set up for legacy vectors regardless of whether those
    get handled through the IO-APIC, it must not do anything on this vector
    range. In fact, we should never get past the move_cleanup_count check
    for IRQs not handled through the IO-APIC. Adding a corresponding
    assertion would make those iterations more expensive (due to the lock
    acquisition).
    
    For such an assertion not to trigger false positives, however, we
    ought to suppress setting up IRQ2 as an 8259A interrupt (which wasn't
    correct anyway); this is done here even though the assertion itself
    is not being added.
    
    Furthermore, there's no point iterating over the vectors past
    LAST_HIPRIORITY_VECTOR, so terminate the loop accordingly.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Keir Fraser <keir@xxxxxxx>
    master commit: af699220ad6d111ba76fc3040342184e423cc9a1
    master date: 2013-04-02 08:30:03 +0200
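
    In sketch form (the vector-range macros are Xen's existing ones; the
    per-vector cleanup body is elided):

        /* Skip legacy (8259A) vectors entirely -- vector_irq[] is set up
         * for them even when they are not routed through the IO-APIC --
         * and stop iterating past LAST_HIPRIORITY_VECTOR. */
        for ( vector = FIRST_DYNAMIC_VECTOR;
              vector <= LAST_HIPRIORITY_VECTOR; ++vector )
        {
            if ( vector >= FIRST_LEGACY_VECTOR &&
                 vector <= LAST_LEGACY_VECTOR )
                continue;
            /* ... normal move-cleanup handling for this vector ... */
        }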
(qemu changes not included)
