[xen-unstable test] 174920: trouble: broken/fail/pass
flight 174920 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/174920/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-pvshim <job status> broken
 test-amd64-amd64-xl-credit2 <job status> broken
 test-amd64-amd64-libvirt-vhd <job status> broken
 test-amd64-amd64-xl-pvhv2-amd <job status> broken in 174906

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-pvhv2-amd 5 host-install(5) broken in 174906 pass in 174920
 test-amd64-amd64-examine-bios 5 host-install broken in 174906 pass in 174920
 test-amd64-amd64-xl-credit2 5 host-install(5) broken pass in 174906
 test-amd64-amd64-libvirt-vhd 5 host-install(5) broken pass in 174906
 test-amd64-i386-xl-pvshim 5 host-install(5) broken pass in 174906
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail in 174906 pass in 174920
 test-amd64-amd64-xl-xsm 12 debian-install fail pass in 174906

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim 14 guest-start fail in 174906 never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail in 174906 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop fail like 174797
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop fail like 174797
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop fail like 174797
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop fail like 174797
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop fail like 174797
 test-armhf-armhf-libvirt 16 saverestore-support-check fail like 174797
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 174797
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop fail like 174797
 test-armhf-armhf-libvirt-qcow2 15 saverestore-support-check fail like 174797
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail like 174797
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop fail like 174797
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop fail like 174797
 test-arm64-arm64-xl-seattle 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle 16 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt 15 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 15 migrate-support-check fail never pass
 test-amd64-i386-libvirt 15 migrate-support-check fail never pass
 test-arm64-arm64-xl 15 migrate-support-check fail never pass
 test-arm64-arm64-xl 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-xsm 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail never pass
 test-amd64-i386-libvirt-raw 14 migrate-support-check fail never pass
 test-armhf-armhf-xl 15 migrate-support-check fail never pass
 test-armhf-armhf-xl 16 saverestore-support-check fail never pass
 test-arm64-arm64-xl-vhd 14 migrate-support-check fail never pass
 test-arm64-arm64-xl-vhd 15 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 15 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail never pass
 test-armhf-armhf-xl-vhd 14 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 15 saverestore-support-check fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt-qcow2 14 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail never pass

version targeted for testing:
 xen                  345135942bf9632eba1409ba432cfcae3b7649c7
baseline version:
 xen                  f5d56f4b253072264efc0fece698a91779e362f5

Last test of basis   174797  2022-11-17 03:03:07 Z  5 days
Failing since        174809  2022-11-18 00:06:55 Z  4 days  13 attempts
Testing same since   174896  2022-11-21 20:43:42 Z  1 days   3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  Anthony PERARD <anthony.perard@xxxxxxxxxx>
  Daniel P. Smith <dpsmith@xxxxxxxxxxxxxxxxxxxx>
  Jan Beulich <jbeulich@xxxxxxxx>
  Roger Pau Monné <roger.pau@xxxxxxxxxx>

jobs:
 build-amd64-xsm pass
 build-arm64-xsm pass
 build-i386-xsm pass
 build-amd64-xtf pass
 build-amd64 pass
 build-arm64 pass
 build-armhf pass
 build-i386 pass
 build-amd64-libvirt pass
 build-arm64-libvirt pass
 build-armhf-libvirt pass
 build-i386-libvirt pass
 build-amd64-prev pass
 build-i386-prev pass
 build-amd64-pvops pass
 build-arm64-pvops pass
 build-armhf-pvops pass
 build-i386-pvops pass
 test-xtf-amd64-amd64-1 pass
 test-xtf-amd64-amd64-2 pass
 test-xtf-amd64-amd64-3 pass
 test-xtf-amd64-amd64-4 pass
 test-xtf-amd64-amd64-5 pass
 test-amd64-amd64-xl pass
 test-amd64-coresched-amd64-xl pass
 test-arm64-arm64-xl pass
 test-armhf-armhf-xl pass
 test-amd64-i386-xl pass
 test-amd64-coresched-i386-xl pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm pass
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm pass
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm pass
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm pass
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm pass
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm pass
 test-amd64-amd64-libvirt-xsm pass
 test-arm64-arm64-libvirt-xsm pass
 test-amd64-i386-libvirt-xsm pass
 test-amd64-amd64-xl-xsm fail
 test-arm64-arm64-xl-xsm pass
 test-amd64-i386-xl-xsm pass
 test-amd64-amd64-qemuu-nested-amd fail
 test-amd64-amd64-xl-pvhv2-amd pass
 test-amd64-i386-qemut-rhel6hvm-amd pass
 test-amd64-i386-qemuu-rhel6hvm-amd pass
 test-amd64-amd64-dom0pvh-xl-amd pass
 test-amd64-amd64-xl-qemut-debianhvm-amd64 pass
 test-amd64-i386-xl-qemut-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-i386-freebsd10-amd64 pass
 test-amd64-amd64-qemuu-freebsd11-amd64 pass
 test-amd64-amd64-qemuu-freebsd12-amd64 pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64 pass
 test-amd64-i386-xl-qemuu-ovmf-amd64 pass
 test-amd64-amd64-xl-qemut-win7-amd64 fail
 test-amd64-i386-xl-qemut-win7-amd64 fail
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64 fail
 test-amd64-amd64-xl-qemut-ws16-amd64 fail
 test-amd64-i386-xl-qemut-ws16-amd64 fail
 test-amd64-amd64-xl-qemuu-ws16-amd64 fail
 test-amd64-i386-xl-qemuu-ws16-amd64 fail
 test-armhf-armhf-xl-arndale pass
 test-amd64-amd64-examine-bios pass
 test-amd64-i386-examine-bios pass
 test-amd64-amd64-xl-credit1 pass
 test-arm64-arm64-xl-credit1 pass
 test-armhf-armhf-xl-credit1 pass
 test-amd64-amd64-xl-credit2 broken
 test-arm64-arm64-xl-credit2 pass
 test-armhf-armhf-xl-credit2 pass
 test-armhf-armhf-xl-cubietruck pass
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict pass
 test-amd64-amd64-examine pass
 test-arm64-arm64-examine pass
 test-armhf-armhf-examine pass
 test-amd64-i386-examine pass
 test-amd64-i386-freebsd10-i386 pass
 test-amd64-amd64-qemuu-nested-intel pass
 test-amd64-amd64-xl-pvhv2-intel pass
 test-amd64-i386-qemut-rhel6hvm-intel pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass
 test-amd64-amd64-dom0pvh-xl-intel pass
 test-amd64-amd64-libvirt pass
 test-armhf-armhf-libvirt pass
 test-amd64-i386-libvirt pass
 test-amd64-amd64-livepatch pass
 test-amd64-i386-livepatch pass
 test-amd64-amd64-migrupgrade pass
 test-amd64-i386-migrupgrade pass
 test-amd64-amd64-xl-multivcpu pass
 test-armhf-armhf-xl-multivcpu pass
 test-amd64-amd64-pair pass
 test-amd64-i386-pair pass
 test-amd64-amd64-libvirt-pair pass
 test-amd64-i386-libvirt-pair pass
 test-amd64-amd64-xl-pvshim pass
 test-amd64-i386-xl-pvshim broken
 test-amd64-amd64-pygrub pass
 test-armhf-armhf-libvirt-qcow2 pass
 test-amd64-amd64-xl-qcow2 pass
 test-arm64-arm64-libvirt-raw pass
 test-armhf-armhf-libvirt-raw pass
 test-amd64-i386-libvirt-raw pass
 test-amd64-amd64-xl-rtds pass
 test-armhf-armhf-xl-rtds pass
 test-arm64-arm64-xl-seattle pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow pass
 test-amd64-amd64-xl-shadow pass
 test-amd64-i386-xl-shadow pass
 test-arm64-arm64-xl-thunderx pass
 test-amd64-amd64-examine-uefi pass
 test-amd64-i386-examine-uefi pass
 test-amd64-amd64-libvirt-vhd broken
 test-arm64-arm64-xl-vhd pass
 test-armhf-armhf-xl-vhd pass
 test-amd64-i386-xl-vhd pass

------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
 http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
 http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-i386-xl-pvshim broken
broken-job test-amd64-amd64-xl-credit2 broken
broken-job test-amd64-amd64-libvirt-vhd broken
broken-step test-amd64-amd64-xl-credit2 host-install(5)
broken-step test-amd64-amd64-libvirt-vhd host-install(5)
broken-step test-amd64-i386-xl-pvshim host-install(5)
broken-job test-amd64-amd64-xl-pvhv2-amd broken

Not pushing.

------------------------------------------------------------
commit 345135942bf9632eba1409ba432cfcae3b7649c7
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Mon Nov 21 12:46:39 2022 +0000

    xen/flask: Wire up XEN_DOMCTL_{get,set}_paging_mempool_size

    These were overlooked in the original patch, and noticed by OSSTest which
    does run some Flask tests.

    Fixes: 22b20bd98c02 ("xen: Introduce non-broken hypercalls for the paging mempool size")
    Suggested-by: Daniel Smith <dpsmith@xxxxxxxxxxxxxxxxxxxx>
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jason Andryuk <jandryuk@xxxxxxxxx>
    Acked-by: Daniel P. Smith <dpsmith@xxxxxxxxxxxxxxxxxxxx>
    Release-acked-by: Henry Wang <Henry.Wang@xxxxxxx>

commit 8746d3e2550b142cd751ca7a041a38789a020d2b
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Fri Nov 18 16:53:45 2022 +0000

    tools/libxl: Fixes to libxl__domain_set_paging_mempool_size()

    The error message accidentally printed the bytes value as if it were kB.

    Furthermore, both b_info.shadow_memkb and shadow_mem are uint64_t, meaning
    there is a risk of overflow if the user specified a stupidly large value
    in the vm.cfg file.  Check and reject such a condition.

    Fixes: 7c3bbd940dd8 ("xen/arm, libxl: Revert XEN_DOMCTL_shadow_op; use p2m mempool hypercalls")
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
    Release-acked-by: Henry Wang <Henry.Wang@xxxxxxx>
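[The overflow condition described in the commit above, a kB value from the guest config that no longer fits in 64 bits once converted to bytes, can be checked before the multiplication; a minimal sketch in C, illustration only and not the actual libxl change, with a hypothetical helper name:]

    /* Illustrative sketch only -- not the actual libxl patch.
     * Convert a paging/shadow pool size given in kB to bytes, rejecting
     * values whose conversion would overflow a uint64_t. */
    #include <stdint.h>
    #include <errno.h>

    static int memkb_to_bytes(uint64_t memkb, uint64_t *bytes_out)
    {
        /* memkb * 1024 overflows iff memkb > UINT64_MAX / 1024. */
        if (memkb > UINT64_MAX / 1024)
            return -EOVERFLOW;      /* reject stupidly large vm.cfg values */

        *bytes_out = memkb << 10;   /* 1 kB == 1024 bytes */
        return 0;
    }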
commit 8cdfbf95b19c01fbb741c41d5ea5a94f8823964c
Author: Anthony PERARD <anthony.perard@xxxxxxxxxx>
Date:   Mon Nov 21 12:23:01 2022 +0100

    libs/light: Propagate libxl__arch_domain_create() return code

    Commit 34990446ca91 started to overwrite the `rc` value from
    libxl__arch_domain_create(), so errors are no longer propagated.
    Check the `rc` value before doing the next thing.

    Fixes: 34990446ca91 ("libxl: don't ignore the return value from xc_cpuid_apply_policy")
    Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
    Reviewed-by: Jason Andryuk <jandryuk@xxxxxxxxx>
    Release-acked-by: Henry Wang <Henry.Wang@xxxxxxx>

commit 57f07cca82521088cca0c1fc36d6ffd06cb7de80
Author: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Date:   Mon Nov 21 12:21:51 2022 +0100

    efifb: ignore frame buffer with invalid configuration

    On one of my boxes, when the HDMI cable is not plugged in, the
    FrameBufferBase of the EFI_GRAPHICS_OUTPUT_PROTOCOL_MODE structure is set
    to 0 by the firmware (while some of the other fields look plausible).

    Such a bogus address ends up mapped in vesa_init(), and since it overlaps
    with a RAM region the whole system goes down pretty badly, see:

    (XEN) vesafb: framebuffer at 0x0000000000000000, mapped to 0xffff82c000201000, using 35209k, total 35209k
    (XEN) vesafb: mode is 0x37557x32, linelength=960, font 8x16
    (XEN) vesafb: Truecolor: size=8:8:8:8, shift=24:0:8:16
    (XEN) (XEN) (XEN) (XEN) (XEN) (XEN) (XEN) (XEN)
    �ERROR: Class:0; Subclass:0; Operation: 0 ERROR: No ConOut ERROR: No ConIn

    Do like Linux and prevent using the EFI Frame Buffer if the base address
    is 0.  This is in line with the logic in Linux's fb_base_is_valid()
    function at drivers/video/fbdev/efifb.c v6.0.9.  See also Linux commit
    133bb070e94ab41d750c6f2160c8843e46f11b78 for further reference.

    Also prevent using Frame Buffers that have a 0 height or width, as those
    are also invalid.

    Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Release-acked-by: Henry Wang <Henry.Wang@xxxxxxx>
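[The validity check the efifb commit describes amounts to something like the sketch below, illustration only and not the actual Xen patch; the struct layout and helper name are simplified assumptions:]

    /* Illustrative sketch only -- not the actual Xen patch.
     * Treat an EFI GOP frame buffer as unusable if the firmware reports a
     * zero base address or a zero width/height. */
    #include <stdbool.h>
    #include <stdint.h>

    struct gop_mode_info {              /* assumed, simplified GOP mode view */
        uint64_t frame_buffer_base;
        uint32_t horizontal_resolution;
        uint32_t vertical_resolution;
    };

    static bool efi_fb_is_valid(const struct gop_mode_info *mi)
    {
        if (mi->frame_buffer_base == 0)     /* bogus base, as on the box above */
            return false;
        if (mi->horizontal_resolution == 0 || mi->vertical_resolution == 0)
            return false;                   /* zero-sized frame buffer */
        return true;
    }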
commit db8fa01c61db0317a9ee947925226234c65d48e8
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Thu Oct 20 12:14:30 2022 +0100

    xen/arm: Correct the p2m pool size calculations

    Allocating or freeing p2m pages doesn't alter the size of the mempool;
    only the split between free and used pages.

    Right now, the hypercalls operate on the free subset of the pool, meaning
    that XEN_DOMCTL_get_paging_mempool_size varies with time as the guest
    shuffles its physmap, and XEN_DOMCTL_set_paging_mempool_size ignores the
    used subset of the pool and lets the guest grow unbounded.

    This fixes test-paging-mempool on ARM so that the behaviour matches x86.

    This is part of XSA-409 / CVE-2022-33747.

    Fixes: cbea5a1149ca ("xen/arm: Allocate and free P2M pages from the P2M pool")
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Julien Grall <jgrall@xxxxxxxxxx>
    Release-acked-by: Henry Wang <Henry.Wang@xxxxxxx>

commit 7c3bbd940dd8aeb1649734e5055798cc6f3fea4e
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Tue Oct 25 15:27:05 2022 +0100

    xen/arm, libxl: Revert XEN_DOMCTL_shadow_op; use p2m mempool hypercalls

    This reverts most of commit cf2a68d2ffbc3ce95e01449d46180bddb10d24a0, and
    bits of cbea5a1149ca7fd4b7cdbfa3ec2e4f109b601ff7.

    First of all, with ARM borrowing x86's implementation, the logic to set
    the pool size should have been common, not duplicated.  Introduce
    libxl__domain_set_paging_mempool_size() as a shared implementation, and
    use it from the ARM and x86 paths.  It is left as an exercise to the
    reader to judge how libxl/xl can reasonably function without the ability
    to query the pool size...

    Remove ARM's p2m_domctl() infrastructure now that the functionality has
    been replaced with a working and unit tested interface.

    This is part of XSA-409 / CVE-2022-33747.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
    Reviewed-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
    Release-acked-by: Henry Wang <Henry.Wang@xxxxxxx>

commit bd87315a603bf25e869e6293f7db7b1024d67999
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Thu Oct 20 12:13:46 2022 +0100

    tools/tests: Unit test for paging mempool size

    Exercise some basic functionality of the new
    xc_{get,set}_paging_mempool_size() hypercalls.

    This passes on x86, but fails currently on ARM.  ARM will be fixed up in
    future patches.

    This is part of XSA-409 / CVE-2022-33747.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
    Release-acked-by: Henry Wang <Henry.Wang@xxxxxxx>

commit 22b20bd98c025e06525410e3ab3494d5e63489f7
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Fri Oct 21 14:13:00 2022 +0100

    xen: Introduce non-broken hypercalls for the paging mempool size

    The existing XEN_DOMCTL_SHADOW_OP_{GET,SET}_ALLOCATION have problems:

     * All set_allocation() flavours have an overflow-before-widen bug when
       calculating "sc->mb << (20 - PAGE_SHIFT)".
     * All flavours have a granularity of 1M.  This was tolerable when the
       size of the pool could only be set at the same granularity, but is
       broken now that ARM has a 16-page stopgap allocation in use.
     * All get_allocation() flavours round up, and in particular turn 0 into
       1, meaning the get op returns junk before a successful set op.
     * The x86 flavours reject the hypercalls before the VM has vCPUs
       allocated, despite the pool size being a domain property.
     * Even the hypercall names are long-obsolete.

    Implement a better interface, which can be first used to unit test the
    behaviour, and subsequently correct a broken implementation.  The old
    interface will be retired in due course.

    The unit of bytes (as opposed to pages) is a deliberate API/ABI
    improvement to more easily support multiple page granularities.

    This is part of XSA-409 / CVE-2022-33747.

    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
    Release-acked-by: Henry Wang <Henry.Wang@xxxxxxx>
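[For clarity, the overflow-before-widen bug named in the first bullet above is the usual pattern of shifting a 32-bit field before widening it to 64 bits; a minimal standalone sketch, assuming a uint32_t megabyte count and 4k pages:]

    /* Illustrative sketch of the overflow-before-widen pattern named in the
     * commit message.  Assumes a 32-bit MiB count and PAGE_SHIFT == 12. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    int main(void)
    {
        uint32_t mb = 1u << 24;     /* a large pool size expressed in MiB */

        /* Buggy: the shift is evaluated in 32 bits and wraps to 0 before
         * the result is widened to 64 bits. */
        uint64_t pages_buggy = mb << (20 - PAGE_SHIFT);

        /* Fixed: widen first, then shift. */
        uint64_t pages_fixed = (uint64_t)mb << (20 - PAGE_SHIFT);

        printf("buggy=%llu fixed=%llu\n",
               (unsigned long long)pages_buggy,
               (unsigned long long)pages_fixed);
        return 0;
    }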
commit e5ac68a0110cb43a3a0bc17d545ae7a0bd746ef9
Author: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Date:   Mon Nov 14 21:47:59 2022 +0000

    x86/hvm: Revert per-domain APIC acceleration support

    I was really hoping to avoid this, but it's now too late in the 4.17
    freeze and we still don't have working fixes.  The in-Xen calculations
    for assistance capabilities are buggy.

    For the avoidance of doubt, the original intention was to be able to
    control every aspect of APIC acceleration so we could comprehensively
    test Xen's support, as it has proved to be buggy time and time again.

    Even after a protracted discussion on what the new API ought to mean,
    attempts to apply it to the existing logic have been unsuccessful,
    proving that the API/ABI is too complicated for most people to reason
    about.

    This reverts most of:
      2ce11ce249a3981bac50914c6a90f681ad7a4222
      6b2b9b3405092c3ad38d7342988a584b8efa674c

    leaving in place the non-APIC-specific changes (minimal as they are).
    This takes us back to the behaviour of Xen 4.16, where APIC acceleration
    is configured on a per-system basis.

    This work will be revisited in due course.

    Fixes: 2ce11ce249a3 ("x86/HVM: allow per-domain usage of hardware virtualized APIC")
    Fixes: 6b2b9b340509 ("x86: report Interrupt Controller Virtualization capabilities")
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
    Release-acked-by: Henry Wang <Henry.Wang@xxxxxxx>

(qemu changes not included)
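[For reference, a sketch of how the new libxc wrappers exercised by the unit-test commit above might be driven; the exact prototypes are assumptions inferred from the names in the commit messages, not verified against the tree, and sizes are in bytes per the new ABI:]

    /* Sketch only -- prototypes assumed from the commit messages:
     *   int xc_get_paging_mempool_size(xc_interface *xch, uint32_t domid, uint64_t *size);
     *   int xc_set_paging_mempool_size(xc_interface *xch, uint32_t domid, uint64_t size);
     */
    #include <inttypes.h>
    #include <stdio.h>
    #include <xenctrl.h>

    int query_and_grow_mempool(uint32_t domid)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        uint64_t size = 0;
        int rc;

        if ( !xch )
            return -1;

        rc = xc_get_paging_mempool_size(xch, domid, &size);
        if ( !rc )
        {
            printf("dom%" PRIu32 " paging mempool: %" PRIu64 " bytes\n",
                   domid, size);
            /* Grow the pool by 16 pages of 4k, expressed in bytes. */
            rc = xc_set_paging_mempool_size(xch, domid, size + (16UL << 12));
        }

        xc_interface_close(xch);
        return rc;
    }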