
[Xen-devel] [xen-unstable test] 18113: regressions - FAIL



flight 18113 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/18113/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              4 kernel-build              fail REGR. vs. 18111
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 18111
 build-i386                    4 xen-build                 fail REGR. vs. 18111

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  2caac1caa19bdaeb9ab14b2baf1342e00c4d0495
baseline version:
 xen                  61c6dfce3296da2643c4c4f90eaab6fa3c1cf8b3

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@xxxxxxxxxx>
  Julien Grall <julien.grall@xxxxxxxxxx>
  Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
  Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 2caac1caa19bdaeb9ab14b2baf1342e00c4d0495
Author: Julien Grall <julien.grall@xxxxxxxxxx>
Date:   Thu Jun 13 15:52:49 2013 +0100

    xen/arm: Use the right GICD register to initialize IRQs routing
    
    Currently IRQ routing is initialized via the wrong register and overwrites
    the interrupt configuration register (ICFGRn).
    
    Reported-by: Sander Bogaert <sander.bogaert@xxxxxxxxxxxxx>
    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>

commit c4cf4c30a8a282ee874a0ab8aed43493cf6a928c
Author: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
Date:   Tue Jun 4 16:18:17 2013 +0100

    xen/arm: define PAGE_HYPERVISOR as WRITEALLOC
    
    Use stage 1 attribute indexes for PAGE_HYPERVISOR; the appropriate one
    for normal memory hypervisor mappings in Xen is WRITEALLOC.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>

commit 97e3e84c6dfa94f09b97d21454d4252fc3c190d8
Author: Ian Campbell <ian.campbell@xxxxxxxxxx>
Date:   Tue Jun 4 11:54:10 2013 +0100

    xen/arm64: fix stack dump in show_trace
    
    On aarch64 the frame pointer points to the next frame pointer and the return
    address is the previous stack slot (so below on the downward growing stack,
    therefore above in memory):
    
           |<RETURN ADDR>      ^addresses grow up
     FP -> |<NEXT FP>          |
           |                   |
           v                   |
           stack grows down.
    
    This is contrary to aarch32 where the frame pointer points to the return
    address and the next frame pointer is the next stack slot (so above on the
    downward growing stack, below in memory):
    
     FP -> |<RETURN ADDR>       ^addresses grow up
           |<NEXT FP>           |
           |                    |
           v                    |
           stack grows down.
    
    In addition, print out LR as part of the trace, since it may contain the
    penultimate return address, e.g. if the innermost function is a leaf function.
    
    Lastly nuke some unnecessary braces.
    
    Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Acked-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
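
(Illustrative sketch, not the actual show_trace code: the commit above describes the aarch64 frame-record layout, where FP points at the saved caller FP and the return address sits one slot above it in memory. The struct and function names below are hypothetical, chosen only to model that layout in a self-contained, host-runnable form.)

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Model of an aarch64 frame record as described in the commit message:
     * record[0] holds the next (caller's) frame pointer, and record[1],
     * one slot above in memory, holds the return address.
     */
    struct frame_record {
        uint64_t next_fp;      /* what FP points at on aarch64      */
        uint64_t return_addr;  /* the slot above FP in memory       */
    };

    /* Walk a chain of frame records, printing each return address.
     * A next_fp of 0 terminates the walk. */
    static void walk_aarch64_frames(const struct frame_record *fp)
    {
        while (fp) {
            printf("return addr: %#llx\n",
                   (unsigned long long)fp->return_addr);
            fp = (const struct frame_record *)fp->next_fp;
        }
    }

    int main(void)
    {
        /* Fake stack: leaf -> middle -> outermost frame. */
        struct frame_record outer = { 0,                0x400100 };
        struct frame_record mid   = { (uint64_t)&outer, 0x400200 };
        struct frame_record leaf  = { (uint64_t)&mid,   0x400300 };

        walk_aarch64_frames(&leaf);
        return 0;
    }

On aarch32, per the commit, the two slots are the other way around (FP points at the return address, with the next FP below it in memory), which is why a single walker cannot serve both without knowing the layout.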

commit 0c6781ee6a5f0df55eab6be8a92853d3154c0c7b
Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date:   Thu Jun 13 16:09:03 2013 +0200

    tmem: Don't use map_domain_page for long-life-time pages
    
    When using tmem with Xen 4.3 (and debug build) we end up with:
    
    (XEN) Xen BUG at domain_page.c:143
    (XEN) ----[ Xen-4.3-unstable  x86_64  debug=y  Not tainted ]----
    (XEN) CPU:    3
    (XEN) RIP:    e008:[<ffff82c4c01606a7>] map_domain_page+0x61d/0x6e1
    ..
    (XEN) Xen call trace:
    (XEN)    [<ffff82c4c01606a7>] map_domain_page+0x61d/0x6e1
    (XEN)    [<ffff82c4c01373de>] cli_get_page+0x15e/0x17b
    (XEN)    [<ffff82c4c01377c4>] tmh_copy_from_client+0x150/0x284
    (XEN)    [<ffff82c4c0135929>] do_tmem_put+0x323/0x5c4
    (XEN)    [<ffff82c4c0136510>] do_tmem_op+0x5a0/0xbd0
    (XEN)    [<ffff82c4c022391b>] syscall_enter+0xeb/0x145
    (XEN)
    
    A bit of debugging revealed that map_domain_page and unmap_domain_page
    are meant for short-lived mappings, and that those mappings are finite.
    In the 2-VCPU guest we only have 32 entries, and once we have exhausted
    those we trigger the BUG_ON condition.
    
    The two functions, tmh_persistent_pool_page_[get,put], are used by the
    xmem_pool when xmem_pool_[alloc,free] are called. These xmem_pool_*
    functions are wrapped in macros and functions; the entry points are
    tmem_malloc and tmem_page_alloc. In both cases the users are in the
    hypervisor and they do not seem to suffer from using the hypervisor
    virtual addresses.
    
    Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
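
(Illustrative sketch only, not Xen's actual domain_page.c: the commit above explains that the short-lived mapping pool has a fixed number of entries (32 in the 2-VCPU case) and that long-lived mappings which never release their slot eventually exhaust it. The function names `map_page_model`/`unmap_page_model` are hypothetical, modelling that exhaustion on a host.)

    #include <assert.h>
    #include <stdio.h>

    #define NR_MAP_SLOTS 32  /* per the commit: 32 entries in a 2-VCPU guest */

    static int slots_in_use;

    /* Model of taking a short-lived mapping slot; returns 0 on success,
     * -1 where the real code would hit its BUG_ON. */
    static int map_page_model(void)
    {
        if (slots_in_use >= NR_MAP_SLOTS)
            return -1;
        slots_in_use++;
        return 0;
    }

    /* Model of releasing a slot. */
    static void unmap_page_model(void)
    {
        slots_in_use--;
    }

    int main(void)
    {
        /* Short-lived use: map/unmap in pairs; the pool never fills. */
        for (int i = 0; i < 1000; i++) {
            assert(map_page_model() == 0);
            unmap_page_model();
        }

        /* Long-lived use (what the persistent pool effectively did):
         * slots are held forever, so allocation fails once the pool
         * runs dry. */
        int failures = 0;
        for (int i = 0; i < 64; i++)
            if (map_page_model() != 0)
                failures++;
        printf("failures after %d held mappings: %d\n",
               NR_MAP_SLOTS, failures);
        return 0;
    }

This is why the fix moves long-lived tmem pool pages off the short-lived mapping interface and onto hypervisor virtual addresses instead.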
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
