
[Xen-devel] [xen-unstable test] 25286: regressions - FAIL



flight 25286 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25286/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install    fail REGR. vs. 25281

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     15 guest-localmigrate/x10    fail REGR. vs. 25281

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  9b51591c330672765d866a5ed4ed8e40c75bb1cf
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Aravind Gopalakrishnan <aravind.gopalakrishnan@xxxxxxx>
  Fabio Fantoni <fabio.fantoni@xxxxxxx>
  Frediano Ziglio <frediano.ziglio@xxxxxxxxxx>
  Ian Campbell <ian.campbell@xxxxxxxxxx>
  Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Julien Grall <julien.grall@xxxxxxxxxx>
  Liu Jinsong <jinsong.liu@xxxxxxxxx>
  Xiantao Zhang <xiantao.zhang@xxxxxxxxx>
  Xudong Hao <xudong.hao@xxxxxxxxx>
  Yang Zhang <yang.z.zhang@xxxxxxxxx>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9b51591c330672765d866a5ed4ed8e40c75bb1cf
Author: Julien Grall <julien.grall@xxxxxxxxxx>
Date:   Mon Feb 24 12:33:00 2014 +0100

    iommu: don't need to map dom0 page when the PT is shared
    
    Currently iommu_init_dom0 browses the page list and calls the map_page
    callback on each page.
    
    On both the AMD and VT-d drivers, the function returns immediately if the
    page table is shared with the processor, so Xen can safely avoid running
    through the page list.
    
    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
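
    The optimisation described above is essentially hoisting a per-iteration
    early return out of the loop. A toy sketch of that idea (all names here
    are invented stand-ins, not the real Xen symbols):

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Stand-in for iommu_use_hap_pt(d): true when the IOMMU shares the
     * CPU page tables. */
    static bool pt_shared;
    static int  map_calls;

    static void map_page(size_t pfn)
    {
        (void)pfn;
        if (pt_shared)
            return;          /* what the AMD/VT-d map callbacks effectively did */
        map_calls++;
    }

    static void init_dom0(size_t npages)
    {
        if (pt_shared)
            return;          /* the optimisation: skip the whole walk up front */
        for (size_t pfn = 0; pfn < npages; pfn++)
            map_page(pfn);
    }

    int main(void)
    {
        pt_shared = true;
        init_dom0(1000);
        assert(map_calls == 0);      /* no per-page work when tables are shared */

        pt_shared = false;
        init_dom0(1000);
        assert(map_calls == 1000);   /* normal path still maps every page */
        return 0;
    }
    ```

    The behaviour is unchanged either way; the patch just avoids a page-list
    walk whose body would have been a no-op on every iteration.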

commit 2608662379d50e69b3bba4e6827fc910db9f64f8
Author: Julien Grall <julien.grall@xxxxxxxxxx>
Date:   Mon Feb 24 12:32:00 2014 +0100

    vtd: don't export iommu_set_pgd
    
    iommu_set_pgd is only used internally in
    xen/drivers/passthrough/vtd/iommu.c
    
    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Acked-by: Xiantao Zhang <xiantao.zhang@xxxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>

commit b0566814a400f436056faac286d2804edb5791c0
Merge: 24e0a36... 21eaf5b...
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Mon Feb 24 12:31:28 2014 +0100

    Merge branch 'staging' of xenbits.xen.org:/home/xen/git/xen into staging

commit 24e0a36d63a6bac1e2777b7265c8add3f7895e58
Author: Julien Grall <julien.grall@xxxxxxxxxx>
Date:   Mon Feb 24 12:21:54 2014 +0100

    vtd: don't export iommu_domain_teardown
    
    iommu_domain_teardown is only used internally in
    xen/drivers/passthrough/vtd/iommu.c
    
    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>

commit 21eaf5b06a41c787e1441523f7b22e57565bb292
Author: Fabio Fantoni <fabio.fantoni@xxxxxxx>
Date:   Sat Feb 22 11:35:54 2014 +0100

    libxl: comments cleanup on libxl_dm.c
    
    Removed some useless comment lines.
    
    Signed-off-by: Fabio Fantoni <fabio.fantoni@xxxxxxx>
    Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>

commit 9625f4ef05031773a52ebe3ca84db64af68956d6
Author: Xudong Hao <xudong.hao@xxxxxxxxx>
Date:   Mon Feb 24 12:11:53 2014 +0100

    x86: expose RDSEED, ADX, and PREFETCHW to dom0
    
    This patch explicitly exposes new Intel features to dom0, including
    RDSEED and ADX. As for PREFETCHW, it doesn't need to be exposed explicitly.
    
    Signed-off-by: Xudong Hao <xudong.hao@xxxxxxxxx>
    Signed-off-by: Liu Jinsong <jinsong.liu@xxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

commit 5d160d913e03b581bdddde73535c18ac670cf0a9
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Mon Feb 24 12:11:01 2014 +0100

    x86/MSI: don't risk division by zero
    
    The check in question is redundant with the one in the immediately
    following if(), where dividing by zero gets carefully avoided.
    
    Spotted-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>

commit fd1864f48d8914fb8eeb6841cd08c2c09b368909
Author: Yang Zhang <yang.z.zhang@xxxxxxxxx>
Date:   Mon Feb 24 12:09:52 2014 +0100

    Nested VMX: update nested paging mode on vmexit
    
    Since SVM and VMX use different mechanisms to emulate virtual vmentry
    and virtual vmexit, it is hard to update the nested paging mode correctly
    in common code, so it has to be updated in their respective code paths.
    SVM already updates the nested paging mode on vmexit; this patch adds the
    same logic on the VMX side.
    
    Previous discussion is here:
    http://lists.xen.org/archives/html/xen-devel/2013-12/msg01759.html
    
    Signed-off-by: Yang Zhang <yang.z.zhang@xxxxxxxxx>
    Reviewed-by: Christoph Egger <chegger@xxxxxxxxx>

commit 199a0878195f3a271394d126c66e8030c461ebd3
Author: Aravind Gopalakrishnan <aravind.gopalakrishnan@xxxxxxx>
Date:   Mon Feb 24 12:09:14 2014 +0100

    vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
    
    The vmce_amd_[rd|wr]msr functions can handle accesses to the AMD
    thresholding registers. But due to this statement:
    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
    we were wrongly masking off the top two bits, which meant the register
    accesses never made it to the vmce_amd_* functions.
    
    This patch corrects the problem by modifying the mask so that the AMD
    thresholding registers fall through to the 'default' case, which in turn
    lets the vmce_amd_* functions handle accesses to those registers.
    
    While at it, remove some clutter in the vmce_amd_* functions. The current
    policy of returning zero for reads and ignoring writes is retained.
    
    Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@xxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Liu Jinsong <jinsong.liu@xxxxxxxxx>
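
    The aliasing the commit describes can be shown with a short computation.
    The AMD MSR address below is hypothetical and the corrected mask is only
    one plausible shape of the fix, not a quote of the actual patch:

    ```c
    #include <assert.h>
    #include <stdint.h>

    #define MSR_IA32_MC0_CTL   0x400u
    /* Hypothetical AMD thresholding MSR address, for illustration only. */
    #define MSR_AMD_THRESHOLD  0xC0000408u

    int main(void)
    {
        /* The old mask keeps only bit 10 and the low two bits, discarding
         * the high address bits, so the AMD register aliases onto the
         * MSR_IA32_MC0_CTL case and never reaches 'default'. */
        uint32_t old_case = MSR_AMD_THRESHOLD & (MSR_IA32_MC0_CTL | 3);
        assert(old_case == MSR_IA32_MC0_CTL);

        /* A mask that preserves all bits at and above MSR_IA32_MC0_CTL
         * lets the AMD MSR miss every IA32 bank case and fall through. */
        uint32_t new_case = MSR_AMD_THRESHOLD & (-MSR_IA32_MC0_CTL | 3);
        assert(new_case != MSR_IA32_MC0_CTL);
        return 0;
    }
    ```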

commit 60ea3a3ac3d2bcd8e85b250fdbfc46b3b9dc7085
Author: Frediano Ziglio <frediano.ziglio@xxxxxxxxxx>
Date:   Mon Feb 24 12:07:41 2014 +0100

    x86/MCE: Fix race condition in mctelem_reserve
    
    These lines (in mctelem_reserve)
    
            newhead = oldhead->mcte_next;
            if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
    
    are racy. After the newhead pointer has been read, another flow (a thread
    or a recursive invocation) can change the entire list and yet leave the
    head with the same value. oldhead then still equals *freelp, but the new
    head being installed may point to any element, even one already in use.
    
    This patch instead uses a bit array and atomic bit operations.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@xxxxxxxxxx>
    Reviewed-by: Liu Jinsong <jinsong.liu@xxxxxxxxx>
(qemu changes not included)
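
    This is the classic ABA problem with cmpxchg-based free lists. The
    bit-array approach avoids it because ownership is encoded in a single
    bit rather than a pointer that can be recycled. A toy C11 sketch of the
    idea (names invented; the real mctelem code differs):

    ```c
    #include <assert.h>
    #include <stdatomic.h>

    #define NSLOTS 8
    static atomic_uint slot_bits;   /* set bit = slot reserved */

    /* Reserve a free slot via atomic test-and-set on its bit. No pointer
     * is read and then re-checked, so there is no ABA window. */
    static int slot_reserve(void)
    {
        for (int i = 0; i < NSLOTS; i++) {
            unsigned int mask = 1u << i;
            /* fetch_or sets the bit; if it was clear before, we own it. */
            if (!(atomic_fetch_or(&slot_bits, mask) & mask))
                return i;
        }
        return -1;                  /* all slots in use */
    }

    static void slot_release(int i)
    {
        atomic_fetch_and(&slot_bits, ~(1u << i));
    }

    int main(void)
    {
        int a = slot_reserve();
        int b = slot_reserve();
        assert(a == 0 && b == 1);
        slot_release(a);
        assert(slot_reserve() == 0);    /* released slot is reusable */
        return 0;
    }
    ```

    Reservation can no longer hand out an element that another flow already
    took, since flipping a bit from 0 to 1 is the single atomic claim step.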

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

