
[Xen-devel] [xen-4.6-testing baseline-only test] 38229: regressions - FAIL



This run is configured for baseline tests only.

flight 38229 xen-4.6-testing real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/38229/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-win7-amd64  9 windows-install    fail REGR. vs. 38211

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-rumpuserxen-amd64 15 rumpuserxen-demo-xenstorels/xenstorels.repeat fail like 38211
 test-amd64-amd64-xl-qemut-winxpsp3  9 windows-install          fail like 38211
 test-amd64-amd64-xl-qemuu-winxpsp3  9 windows-install          fail like 38211

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-qcow2  9 debian-di-install            fail never pass
 test-armhf-armhf-xl-vhd       9 debian-di-install            fail   never pass
 test-armhf-armhf-libvirt-raw  9 debian-di-install            fail   never pass
 test-armhf-armhf-libvirt     14 guest-saverestore            fail   never pass
 test-armhf-armhf-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 9 debian-hvm-install fail never pass
 test-armhf-armhf-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-xsm 14 guest-saverestore            fail   never pass
 test-amd64-amd64-xl-pvh-amd  11 guest-start                  fail   never pass
 test-amd64-amd64-xl-pvh-intel 11 guest-start                  fail  never pass
 test-armhf-armhf-xl-xsm      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-xsm      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-multivcpu 12 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-midway   13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-midway   12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-libvirt-vhd 11 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 migrate-support-check fail never pass

version targeted for testing:
 xen                  bdc9fdf9d468cb94ca0fbed1b969c20bf173dc9b
baseline version:
 xen                  e4a1dcbfaeb4aa30101b2c9befaca9d460713396

Last test of basis    38211  2015-10-25 19:19:08 Z    5 days
Testing same since    38229  2015-10-30 17:26:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  David Vrabel <david.vrabel@xxxxxxxxxx>
  Ian Campbell <ian.campbell@xxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Julien Grall <julien.grall@xxxxxxxxxx>

jobs:
 build-amd64-xsm                                              pass
 build-armhf-xsm                                              pass
 build-i386-xsm                                               pass
 build-amd64                                                  pass
 build-armhf                                                  pass
 build-i386                                                   pass
 build-amd64-libvirt                                          pass
 build-armhf-libvirt                                          pass
 build-i386-libvirt                                           pass
 build-amd64-prev                                             pass
 build-i386-prev                                              pass
 build-amd64-pvops                                            pass
 build-armhf-pvops                                            pass
 build-i386-pvops                                             pass
 build-amd64-rumpuserxen                                      pass
 build-i386-rumpuserxen                                       pass
 test-amd64-amd64-xl                                          pass
 test-armhf-armhf-xl                                          pass
 test-amd64-i386-xl                                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm                pass
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail
 test-amd64-amd64-libvirt-xsm                                 pass
 test-armhf-armhf-libvirt-xsm                                 fail
 test-amd64-i386-libvirt-xsm                                  pass
 test-amd64-amd64-xl-xsm                                      pass
 test-armhf-armhf-xl-xsm                                      pass
 test-amd64-i386-xl-xsm                                       pass
 test-amd64-amd64-xl-pvh-amd                                  fail
 test-amd64-i386-qemut-rhel6hvm-amd                           pass
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass
 test-amd64-i386-freebsd10-amd64                              pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass
 test-amd64-amd64-rumpuserxen-amd64                           fail
 test-amd64-amd64-xl-qemut-win7-amd64                         fail
 test-amd64-i386-xl-qemut-win7-amd64                          fail
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail
 test-amd64-i386-xl-qemuu-win7-amd64                          fail
 test-amd64-amd64-xl-credit2                                  pass
 test-armhf-armhf-xl-credit2                                  pass
 test-amd64-i386-freebsd10-i386                               pass
 test-amd64-i386-rumpuserxen-i386                             pass
 test-amd64-amd64-xl-pvh-intel                                fail
 test-amd64-i386-qemut-rhel6hvm-intel                         pass
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass
 test-amd64-amd64-libvirt                                     pass
 test-armhf-armhf-libvirt                                     fail
 test-amd64-i386-libvirt                                      pass
 test-armhf-armhf-xl-midway                                   pass
 test-amd64-amd64-migrupgrade                                 pass
 test-amd64-i386-migrupgrade                                  pass
 test-amd64-amd64-xl-multivcpu                                pass
 test-armhf-armhf-xl-multivcpu                                pass
 test-amd64-amd64-pair                                        pass
 test-amd64-i386-pair                                         pass
 test-amd64-amd64-libvirt-pair                                pass
 test-amd64-i386-libvirt-pair                                 pass
 test-amd64-amd64-amd64-pvgrub                                pass
 test-amd64-amd64-i386-pvgrub                                 pass
 test-amd64-amd64-pygrub                                      pass
 test-armhf-armhf-libvirt-qcow2                               fail
 test-amd64-amd64-xl-qcow2                                    pass
 test-armhf-armhf-libvirt-raw                                 fail
 test-amd64-i386-xl-raw                                       pass
 test-amd64-amd64-xl-rtds                                     pass
 test-armhf-armhf-xl-rtds                                     pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass
 test-amd64-amd64-libvirt-vhd                                 pass
 test-armhf-armhf-xl-vhd                                      fail
 test-amd64-amd64-xl-qemut-winxpsp3                           fail
 test-amd64-i386-xl-qemut-winxpsp3                            pass
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail
 test-amd64-i386-xl-qemuu-winxpsp3                            pass


------------------------------------------------------------
sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
    http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.

------------------------------------------------------------
commit bdc9fdf9d468cb94ca0fbed1b969c20bf173dc9b
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 29 13:52:02 2015 +0100

    x86: rate-limit logging in do_xen{oprof,pmu}_op()

    Some of the sub-ops are accessible to all guests, and hence should be
    rate-limited. In the xenoprof case, just like for XSA-146, include them
    only in debug builds. Since the vPMU code is rather new, allow them to
    be always present, but downgrade them to (rate limited) guest messages.

    This is CVE-2015-7971 / XSA-152.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    master commit: 95e7415843b94c346e5ba8682665f508f220e04b
    master date: 2015-10-29 13:37:19 +0100
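
The change above (like XSA-146 further down) rate-limits log messages that an
unprivileged guest can trigger at will. The plain C sketch below illustrates
only the general pattern; the helper name and the burst/window constants are
invented for the example and bear no relation to Xen's actual logging macros:

/*
 * Illustrative sketch only, not the actual XSA-152 patch.  A message that
 * an unprivileged guest can trigger at will is throttled to a fixed burst
 * per time window instead of being printed unconditionally.
 */
#include <stdio.h>
#include <time.h>

#define LOG_BURST   10   /* messages allowed per window */
#define LOG_WINDOW  5    /* window length in seconds    */

static void ratelimited_guest_log(const char *msg)
{
    static time_t window_start;
    static unsigned int count;
    time_t now = time(NULL);

    if (now - window_start >= LOG_WINDOW) {
        window_start = now;      /* start a new window */
        count = 0;
    }
    if (count++ < LOG_BURST)
        fprintf(stderr, "guest warning: %s\n", msg);
    /* otherwise the message is dropped, so a guest cannot flood the log */
}

int main(void)
{
    for (int i = 0; i < 1000; i++)
        ratelimited_guest_log("unknown sub-op requested");
    return 0;
}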

commit 429f0cd270851462783fc6d56d6bae9cbb40bdca
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 29 13:51:24 2015 +0100

    xenoprof: free domain's vcpu array

    This was overlooked in fb442e2171 ("x86_64: allow more vCPU-s per
    guest").

    This is CVE-2015-7969 / XSA-151.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    master commit: 6e97c4b37386c2d09e09e9b5d5d232e37728b960
    master date: 2015-10-29 13:36:52 +0100

commit 4a32fbd95af6503ea1314ff2aa9a0b0a473d46c0
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Thu Oct 29 13:50:59 2015 +0100

    x86/PoD: Eager sweep for zeroed pages

    Based on the contents of a guest's physical address space,
    p2m_pod_emergency_sweep() could degrade into a linear memcmp() from 0 to
    max_gfn, which runs non-preemptibly.

    As p2m_pod_emergency_sweep() runs behind the scenes in a number of contexts,
    making it preemptible is not feasible.

    Instead, a different approach is taken.  Recently-populated pages are
    eagerly checked for reclamation, which amortises the
    p2m_pod_emergency_sweep() operation across each
    p2m_pod_demand_populate() operation.

    Note that in the case that a 2M superpage can't be reclaimed as a superpage,
    it is shattered if 4K pages of zeros can be reclaimed.  This is unfortunate
    but matches the previous behaviour, and is required to avoid regressions
    (domain crash from PoD exhaustion) with VMs configured close to the limit.

    This is CVE-2015-7970 / XSA-150.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
    master commit: 101ce53266866144e724ed593173bc4098b300b9
    master date: 2015-10-29 13:36:25 +0100
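
The core idea of the commit above is amortisation: rather than one long,
non-preemptible sweep over the whole guest physical address space, each
populate operation opportunistically checks a few recently populated pages
for still being zero. The toy model below captures only that bookkeeping;
the structure and function names are invented and are not the real p2m/PoD
code:

/*
 * Toy model only, not the Xen PoD implementation.  A small ring remembers
 * recently populated pages; every populate operation evicts the oldest
 * entry and reclaims it if the guest never wrote to it, so the cost of
 * scanning for zeroed pages is spread across populate operations instead
 * of being paid in one long, non-preemptible sweep.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE    4096
#define RECLAIM_RING 32          /* how many recent pages we remember */

struct pod_cache {
    void  *recent[RECLAIM_RING]; /* recently populated pages */
    size_t next;                 /* next slot to overwrite   */
};

static bool page_is_zero(const void *page)
{
    static const unsigned char zeroes[PAGE_SIZE];
    return memcmp(page, zeroes, PAGE_SIZE) == 0;
}

/*
 * Called on every demand-populate: record the new page and return the
 * evicted one if it can be reclaimed (i.e. it is still all zeroes).
 */
static void *pod_record_and_sweep(struct pod_cache *c, void *new_page)
{
    void *victim = c->recent[c->next];

    c->recent[c->next] = new_page;
    c->next = (c->next + 1) % RECLAIM_RING;

    if (victim && page_is_zero(victim))
        return victim;           /* caller returns it to the PoD pool */
    return NULL;                 /* nothing to reclaim this time      */
}

int main(void)
{
    struct pod_cache cache = { { NULL }, 0 };

    for (int i = 0; i < 100; i++) {
        void *page = calloc(1, PAGE_SIZE);  /* freshly populated, all zero */
        free(pod_record_and_sweep(&cache, page));
    }
    for (size_t i = 0; i < RECLAIM_RING; i++)
        free(cache.recent[i]);
    return 0;
}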

commit 2c57108c36eaa10885b7d0daad534348717e4f9d
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 29 13:49:56 2015 +0100

    free domain's vcpu array

    This was overlooked in fb442e2171 ("x86_64: allow more vCPU-s per
    guest").

    This is CVE-2015-7969 / XSA-149.

    Reported-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    master commit: d46896ebbb23f3a9fef2eb6066ae614fd1acfd96
    master date: 2015-10-29 13:35:40 +0100

commit 2d094bd87072e26ac29b07917d31fcbf13892288
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 29 13:48:09 2015 +0100

    x86: guard against undue super page PTE creation

    When optional super page support got added (commit bd1cd81d64 "x86: PV
    support for hugepages"), two adjustments were missed: mod_l2_entry()
    needs to consider the PSE and RW bits when deciding whether to use the
    fast path, and the PSE bit must not be removed from L2_DISALLOW_MASK
    unconditionally.

    This is CVE-2015-7835 / XSA-148.

    Reported-by: "栾尚聪(好风)" <shangcong.lsc@xxxxxxxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Tim Deegan <tim@xxxxxxx>
    master commit: fe360c90ea13f309ef78810f1a2b92f2ae3b30b8
    master date: 2015-10-29 13:35:07 +0100
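
The first half of the fix above concerns when a page-table update may take
the quick path for an entry whose frame is unchanged. The sketch below uses
the standard x86 PTE bit positions but is otherwise an invented stand-in for
mod_l2_entry(); it shows a deliberately conservative version of the test,
refusing the fast path whenever anything other than the Accessed/Dirty bits
changes, which in particular covers RW and PSE:

/*
 * Sketch only, not the actual mod_l2_entry() change.  Only Accessed/Dirty
 * updates to an already-present entry with an unchanged frame may skip
 * full revalidation; flipping RW or PSE must go through the slow path.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_PRESENT  (UINT64_C(1) << 0)
#define PTE_RW       (UINT64_C(1) << 1)   /* writable     */
#define PTE_ACCESSED (UINT64_C(1) << 5)
#define PTE_DIRTY    (UINT64_C(1) << 6)
#define PTE_PSE      (UINT64_C(1) << 7)   /* 2M superpage */

#define PTE_FLAGS_MASK  UINT64_C(0xfff)
#define PTE_FRAME(e)    ((e) & ~PTE_FLAGS_MASK)

/* Flag bits allowed to differ on the fast path. */
#define FAST_PATH_SAFE_BITS  (PTE_ACCESSED | PTE_DIRTY)

static bool may_use_fast_path(uint64_t old_pte, uint64_t new_pte)
{
    if (!(old_pte & PTE_PRESENT) || !(new_pte & PTE_PRESENT))
        return false;                     /* presence changes: slow path */
    if (PTE_FRAME(old_pte) != PTE_FRAME(new_pte))
        return false;                     /* frame changes: slow path    */
    return ((old_pte ^ new_pte) & PTE_FLAGS_MASK & ~FAST_PATH_SAFE_BITS) == 0;
}

int main(void)
{
    uint64_t pte = UINT64_C(0x200000) | PTE_PRESENT;

    printf("set Accessed -> fast path: %d\n",
           may_use_fast_path(pte, pte | PTE_ACCESSED));   /* 1 */
    printf("set RW       -> fast path: %d\n",
           may_use_fast_path(pte, pte | PTE_RW));         /* 0 */
    printf("set PSE      -> fast path: %d\n",
           may_use_fast_path(pte, pte | PTE_PSE));        /* 0 */
    return 0;
}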

commit df6fa370865717ee51530c0102d1e983a70d37c3
Author: Ian Campbell <ian.campbell@xxxxxxxxxx>
Date:   Thu Oct 29 13:47:38 2015 +0100

    arm: handle races between relinquish_memory and free_domheap_pages

    Primarily this means XENMEM_decrease_reservation from a toolstack
    domain.

    Unlike x86, we have no requirement right now to queue such pages onto
    a separate list. If we hit this race, then the other code has already
    fully accepted responsibility for freeing this page, and therefore
    there is nothing more for relinquish_memory to do.

    This is CVE-2015-7814 / XSA-147.

    Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Reviewed-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 1ef01396fdff88b1c3331a09ca5c69619b90f4ea
    master date: 2015-10-29 13:34:17 +0100
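
The reasoning in the commit above is that whichever path manages to claim
the page accepts responsibility for freeing it, and the other path simply
moves on. The stand-alone model below expresses that rule with a C11 atomic
exchange in place of Xen's page reference machinery; all names are invented,
and the two sequential calls in main() merely stand in for the genuinely
concurrent paths:

/*
 * Minimal model only, not the ARM Xen code.  Two release paths may race
 * on the same page; an atomic test-and-set decides which one has accepted
 * responsibility for freeing it, and the loser skips the page.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct page {
    atomic_bool claimed;   /* set by whichever path releases the page */
};

/* Returns true if the caller now owns the page and must free it. */
static bool claim_page(struct page *pg)
{
    return !atomic_exchange(&pg->claimed, true);
}

static void decrease_reservation(struct page *pages, size_t n)
{
    for (size_t i = 0; i < n; i += 2)       /* pretend half were handed back */
        if (claim_page(&pages[i]))
            printf("decrease_reservation frees page %zu\n", i);
}

static void relinquish_memory(struct page *pages, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (claim_page(&pages[i]))          /* free only if we won the claim */
            printf("relinquish_memory frees page %zu\n", i);
}

int main(void)
{
    struct page pages[8];

    for (size_t i = 0; i < 8; i++)
        atomic_init(&pages[i].claimed, false);

    decrease_reservation(pages, 8);         /* races with the call below */
    relinquish_memory(pages, 8);
    return 0;
}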

commit b18d995ca341d07a38fec04aa137e9ef85ee4dd0
Author: Ian Campbell <ian.campbell@xxxxxxxxxx>
Date:   Thu Oct 29 13:47:10 2015 +0100

    arm: rate-limit logging from unimplemented PHYSDEVOP and HVMOP.

    These are guest-accessible and should therefore be rate-limited.
    Moreover, include them only in debug builds.

    This is CVE-2015-7813 / XSA-146.

    Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 1c0e59ff15764e7b0c59282365974f5b8924ce83
    master date: 2015-10-29 13:33:38 +0100

commit ea95ecb8bf30f83b52a079cdfc824a3ba6ffd4ef
Author: Julien Grall <julien.grall@xxxxxxxxxx>
Date:   Thu Oct 29 13:46:45 2015 +0100

    arm: Support hypercall_create_continuation for multicall

    Multicall for ARM has been supported since commit f0dbdc6 "xen: arm: fully
    implement multicall interface.". However, if a hypercall within a multicall
    requires preemption, it will crash the host:

    (XEN) Xen BUG at domain.c:347
    (XEN) ----[ Xen-4.7-unstable  arm64  debug=y  Tainted:    C ]----
    [...]
    (XEN) Xen call trace:
    (XEN)    [<00000000002420cc>] hypercall_create_continuation+0x64/0x380 (PC)
    (XEN)    [<0000000000217274>] do_memory_op+0x1b00/0x2334 (LR)
    (XEN)    [<0000000000250d2c>] do_multicall_call+0x114/0x124
    (XEN)    [<0000000000217ff0>] do_multicall+0x17c/0x23c
    (XEN)    [<000000000024f97c>] do_trap_hypercall+0x90/0x12c
    (XEN)    [<0000000000251ca8>] do_trap_hypervisor+0xd2c/0x1ba4
    (XEN)    [<00000000002582cc>] guest_sync+0x88/0xb8
    (XEN)
    (XEN)
    (XEN) ****************************************
    (XEN) Panic on CPU 5:
    (XEN) Xen BUG at domain.c:347
    (XEN) ****************************************
    (XEN)
    (XEN) Manual reset required ('noreboot' specified)

    Looking at the code, the multicall support looks valid to me, as we only
    need to fill call.args[...]. So drop the BUG().

    This is CVE-2015-7812 / XSA-145.

    Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    master commit: 29bcf64ce8bc0b1b7aacd00c8668f255c4f0686c
    master date: 2015-10-29 13:31:10 +0100
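
The conclusion of the commit above is that a continuation raised from inside
a multicall only needs the entry's argument slots (call.args[...]) refilled
so the sub-call can be re-issued on the next pass. The sketch below models
just that bookkeeping; the types and the create_continuation() helper are
invented and do not reflect the real ARM implementation:

/*
 * Invented model only, not Xen's hypercall_create_continuation().  When a
 * preemptible operation is reached from a multicall, the updated arguments
 * are written back into the multicall entry so the sub-call is simply
 * re-issued on the next pass; outside a multicall they would go back into
 * guest registers instead.
 */
#include <stdarg.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MULTICALL_ARGS 6

struct multicall_entry {
    uint64_t op;
    uint64_t args[MULTICALL_ARGS];
    int64_t  result;
};

struct call_state {
    bool in_multicall;                 /* inside do_multicall()?   */
    struct multicall_entry *current;   /* the entry being executed */
};

static void create_continuation(struct call_state *s,
                                unsigned int nr_args, ...)
{
    va_list ap;

    va_start(ap, nr_args);
    if (s->in_multicall) {
        /* The pre-fix code hit a BUG() here; refilling the entry's
         * argument array is all that is actually required. */
        for (unsigned int i = 0; i < nr_args && i < MULTICALL_ARGS; i++)
            s->current->args[i] = va_arg(ap, uint64_t);
    } else {
        /* Non-multicall path: arguments would be written back into the
         * guest's registers (omitted from this sketch). */
        for (unsigned int i = 0; i < nr_args; i++)
            (void)va_arg(ap, uint64_t);
    }
    va_end(ap);
}

int main(void)
{
    struct multicall_entry call = { .op = 12 };   /* e.g. a memory op */
    struct call_state state = { .in_multicall = true, .current = &call };

    /* Pretend the operation was preempted after processing 128 pages. */
    create_continuation(&state, 2, (uint64_t)0xdead0000, (uint64_t)128);
    printf("resume with args[0]=%#llx args[1]=%llu\n",
           (unsigned long long)call.args[0],
           (unsigned long long)call.args[1]);
    return 0;
}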

commit 566bfb1a00b3e3f6ed6610fe34a5b9ef42bcb81e
Author: Jan Beulich <jbeulich@xxxxxxxx>
Date:   Thu Oct 29 13:45:05 2015 +0100

    x86/PV: don't zero-map LDT

    This effectively reverts the LDT-related part of commit cf6d39f819
    ("x86/PV: properly populate descriptor tables"), which broke demand
    paged LDT handling in guests.

    Reported-by: David Vrabel <david.vrabel@xxxxxxxxxx>
    Diagnosed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Tested-by: David Vrabel <david.vrabel@xxxxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: 61031e64d3dafd2fb1953436444bf02eccb9b146
    master date: 2015-10-27 14:46:12 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
