[ovmf bisection] complete build-amd64-libvirt
branch xen-unstable
xenbranch xen-unstable
job build-amd64-libvirt
testid libvirt-build

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: ovmf https://github.com/tianocore/edk2.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  66dd1c62b2a3c707bd5c55750d10a8223fbd577f
  Bug not present: f732240fd3bac25116151db5ddeb7203b62e85ce
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/172182/

  commit 66dd1c62b2a3c707bd5c55750d10a8223fbd577f
  Author: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
  Date:   Fri Jul 15 22:20:24 2022 +0300

      libxl: Add support for Virtio disk configuration

      This patch adds basic support for configuring and assisting a
      virtio-mmio based virtio-disk backend (emulator), which is intended
      to run outside of Qemu and can be run in any domain.

      Although the Virtio block device is quite different from the
      traditional Xen PV block device (vbd) from the toolstack's point of
      view:
      - as the frontend is virtio-blk, which is not a Xenbus driver,
        nothing written to Xenstore is currently fetched by the frontend
        ("vdev" is not passed to the frontend). This might need to be
        revised in future, as frontend data might be written to Xenstore
        in order to support hotplugging virtio devices or passing the
        backend domain id on architectures where the device-tree is not
        available;
      - the ring-ref/event-channel are not used for backend<->frontend
        communication; the proposed IPC for Virtio is IOREQ/DM;
      it is still a "block device" and ought to be integrated into the
      existing "disk" handling. So, re-use (and adapt) the "disk"
      parsing/configuration logic to deal with Virtio devices as well.

      For the immediate purpose, and to be able to extend that support to
      other use-cases in future (Qemu, virtio-pci, etc.), perform the
      following actions:
      - Add a new disk backend type (LIBXL_DISK_BACKEND_STANDALONE) and
        reflect that in the configuration
      - Introduce new disk "specification" and "transport" fields in
        struct libxl_device_disk. Both are written to Xenstore. The
        transport field is only used for the "virtio" specification and
        assumes only the "mmio" value for now.
      - Introduce a new "specification" option, with the "xen"
        communication protocol being the default value.
      - Add a new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK), as the
        current one (LIBXL__DEVICE_KIND_VBD) doesn't fit the Virtio disk
        model.

      An example of domain configuration for a Virtio disk:
          disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=standalone, specification=virtio']

      Nothing has changed for the default Xen disk configuration.

      Please note, this patch is not enough for virtio-disk to work on
      Xen (Arm): for every Virtio device (including disk) we need to
      allocate Virtio MMIO params (IRQ and memory region), pass them to
      the backend, and also update the guest device-tree. The subsequent
      patch will add these missing bits. For the current patch, the
      default "irq" and "base" are just written to Xenstore. This is not
      an ideal split, but this way we avoid breaking bisectability.

      Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
      Reviewed-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
      Acked-by: George Dunlap <george.dunlap@xxxxxxxxxx>
      Tested-by: Jiamei Xie <jiamei.xie@xxxxxxx>
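For context, a minimal xl guest configuration exercising the new disk options might look like the sketch below. Only the disk line is taken from the commit message above; the domain name, kernel path and memory/vcpu sizing are hypothetical placeholders.

    # Minimal xl.cfg sketch (illustrative values, not taken from this report)
    name   = "guest-virtio-disk"          # hypothetical domain name
    kernel = "/path/to/guest/Image"       # hypothetical guest kernel path
    memory = 512
    vcpus  = 1
    # backendtype=standalone selects the new out-of-Qemu virtio-disk backend
    # (LIBXL_DISK_BACKEND_STANDALONE); specification=virtio makes the guest see
    # a virtio-blk device instead of a Xen PV vbd (the default specification is "xen").
    disk   = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=standalone, specification=virtio' ]

As the commit message notes, on Xen (Arm) this configuration alone is not sufficient until the follow-up patch allocates the virtio-mmio IRQ and memory region and updates the guest device-tree.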
For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/ovmf/build-amd64-libvirt.libvirt-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/ovmf/build-amd64-libvirt.libvirt-build --summary-out=tmp/172182.bisection-summary --basis-template=172136 --blessings=real,real-bisect,real-retry ovmf build-amd64-libvirt libvirt-build
Searching for failure / basis pass:
 172170 fail [host=himrod1] / 172136 ok.
Failure / basis pass flights: 172170 / 172136
(tree with no url: minios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: ovmf https://github.com/tianocore/edk2.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd a8f59e2eb44199040d2e1f747a6d950a25ed0984 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 01ca29f0b17a50a94b0e232ba276c32e95d80ae3
Basis pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 444260d45ec2a84e8f8c192b3539a3cd5591d009 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 f732240fd3bac25116151db5ddeb7203b62e85ce
Generating revisions with ./adhoc-revtuple-generator
 git://xenbits.xen.org/libvirt.git#2c846fa6bcc11929c9fb857a22430fb9945654ad-2c846fa6bcc11929c9fb857a22430fb9945654ad
 https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd
 https://github.com/tianocore/edk2.git#444260d45ec2a84e8f8c192b3539a3cd5591d009-a8f59e2eb44199040d2e1f747a6d950a25ed0984
 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764
 git://xenbits.xen.org/qemu-xen.git#b746458e1ce1bec85e58b458386f8b7a0bedfaa6-b746458e1ce1bec85e58b458386f8b7a0bedfaa6
 git://xenbits.xen.org/osstest/seabios.git#46de2eec93bffa0706e6229c0da2919763c8eb04-46de2eec93bffa0706e6229c0da2919763c8eb04
 git://xenbits.xen.org/xen.git#f732240fd3bac25116151db5ddeb7203b62e85ce-01ca29f0b17a50a94b0e232ba276c32e95d80ae3
Loaded 10001 nodes in revision graph
Searching for test results:
 172136 pass    2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 444260d45ec2a84e8f8c192b3539a3cd5591d009 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 f732240fd3bac25116151db5ddeb7203b62e85ce
 172151 fail    2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd a8f59e2eb44199040d2e1f747a6d950a25ed0984 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 01ca29f0b17a50a94b0e232ba276c32e95d80ae3
 172154 pass    2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 444260d45ec2a84e8f8c192b3539a3cd5591d009 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 f732240fd3bac25116151db5ddeb7203b62e85ce
 172156 fail    2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd a8f59e2eb44199040d2e1f747a6d950a25ed0984 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 01ca29f0b17a50a94b0e232ba276c32e95d80ae3
 172158 fail    2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 444260d45ec2a84e8f8c192b3539a3cd5591d009 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 04248b82f9621c511f3d5502ce74df7122ed157e
 172155 [host=himrod2]
 172159 fail    2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 444260d45ec2a84e8f8c192b3539a3cd5591d009 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 3629759626ac7201a670a8a2d4d4a536e7443575
 172162 [host=himrod2]
 172163 [host=himrod2]
 172166 [host=himrod2]
 172161 fail    2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd a8f59e2eb44199040d2e1f747a6d950a25ed0984 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 01ca29f0b17a50a94b0e232ba276c32e95d80ae3
 172167 [host=himrod2]
 172169 fail    2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 444260d45ec2a84e8f8c192b3539a3cd5591d009 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 ca45d3cb4586372909f350e54482246f994e1bc7
 172172 fail    2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 444260d45ec2a84e8f8c192b3539a3cd5591d009 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 66dd1c62b2a3c707bd5c55750d10a8223fbd577f
 172175 pass    2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 444260d45ec2a84e8f8c192b3539a3cd5591d009 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 f732240fd3bac25116151db5ddeb7203b62e85ce
 172170 fail    2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd a8f59e2eb44199040d2e1f747a6d950a25ed0984 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 01ca29f0b17a50a94b0e232ba276c32e95d80ae3
 172178 fail    2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 444260d45ec2a84e8f8c192b3539a3cd5591d009 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 66dd1c62b2a3c707bd5c55750d10a8223fbd577f
 172180 pass    2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 444260d45ec2a84e8f8c192b3539a3cd5591d009 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 f732240fd3bac25116151db5ddeb7203b62e85ce
 172182 fail    2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 444260d45ec2a84e8f8c192b3539a3cd5591d009 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 66dd1c62b2a3c707bd5c55750d10a8223fbd577f
Searching for interesting versions
Result found: flight 172136 (pass), for basis pass
For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 444260d45ec2a84e8f8c192b3539a3cd5591d009 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 f732240fd3bac25116151db5ddeb7203b62e85ce, results HASH(0x5618bf8a6a38) HASH(0x5618bf8a51b0) HASH(0x5618bf8b1088) HASH(0x5618bf8b9070)
Result found: flight 172151 (fail), for basis failure (at ancestor ~5390)
Repro found: flight 172154 (pass), for basis pass
Repro found: flight 172156 (fail), for basis failure
0 revisions at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd 444260d45ec2a84e8f8c192b3539a3cd5591d009 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b746458e1ce1bec85e58b458386f8b7a0bedfaa6 46de2eec93bffa0706e6229c0da2919763c8eb04 f732240fd3bac25116151db5ddeb7203b62e85ce
No revisions left to test, checking graph state.
Result found: flight 172136 (pass), for last pass
Result found: flight 172172 (fail), for first failure
Repro found: flight 172175 (pass), for last pass
Repro found: flight 172178 (fail), for first failure
Repro found: flight 172180 (pass), for last pass
Repro found: flight 172182 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  66dd1c62b2a3c707bd5c55750d10a8223fbd577f
  Bug not present: f732240fd3bac25116151db5ddeb7203b62e85ce
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/172182/

  commit 66dd1c62b2a3c707bd5c55750d10a8223fbd577f
  Author: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
  Date:   Fri Jul 15 22:20:24 2022 +0300

      libxl: Add support for Virtio disk configuration

      This patch adds basic support for configuring and assisting a
      virtio-mmio based virtio-disk backend (emulator), which is intended
      to run outside of Qemu and can be run in any domain.

      Although the Virtio block device is quite different from the
      traditional Xen PV block device (vbd) from the toolstack's point of
      view:
      - as the frontend is virtio-blk, which is not a Xenbus driver,
        nothing written to Xenstore is currently fetched by the frontend
        ("vdev" is not passed to the frontend). This might need to be
        revised in future, as frontend data might be written to Xenstore
        in order to support hotplugging virtio devices or passing the
        backend domain id on architectures where the device-tree is not
        available;
      - the ring-ref/event-channel are not used for backend<->frontend
        communication; the proposed IPC for Virtio is IOREQ/DM;
      it is still a "block device" and ought to be integrated into the
      existing "disk" handling. So, re-use (and adapt) the "disk"
      parsing/configuration logic to deal with Virtio devices as well.

      For the immediate purpose, and to be able to extend that support to
      other use-cases in future (Qemu, virtio-pci, etc.), perform the
      following actions:
      - Add a new disk backend type (LIBXL_DISK_BACKEND_STANDALONE) and
        reflect that in the configuration
      - Introduce new disk "specification" and "transport" fields in
        struct libxl_device_disk. Both are written to Xenstore. The
        transport field is only used for the "virtio" specification and
        assumes only the "mmio" value for now.
      - Introduce a new "specification" option, with the "xen"
        communication protocol being the default value.
      - Add a new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK), as the
        current one (LIBXL__DEVICE_KIND_VBD) doesn't fit the Virtio disk
        model.

      An example of domain configuration for a Virtio disk:
          disk = [ 'phy:/dev/mmcblk0p3, xvda1, backendtype=standalone, specification=virtio']

      Nothing has changed for the default Xen disk configuration.

      Please note, this patch is not enough for virtio-disk to work on
      Xen (Arm): for every Virtio device (including disk) we need to
      allocate Virtio MMIO params (IRQ and memory region), pass them to
      the backend, and also update the guest device-tree. The subsequent
      patch will add these missing bits. For the current patch, the
      default "irq" and "base" are just written to Xenstore. This is not
      an ideal split, but this way we avoid breaking bisectability.

      Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
      Reviewed-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
      Acked-by: George Dunlap <george.dunlap@xxxxxxxxxx>
      Tested-by: Jiamei Xie <jiamei.xie@xxxxxxx>

Revision graph left in /home/logs/results/bisect/ovmf/build-amd64-libvirt.libvirt-build.{dot,ps,png,html,svg}.

----------------------------------------
172182: tolerable ALL FAIL

flight 172182 ovmf real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/172182/

Failures :-/ but no regressions.

Tests which did not succeed, including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build    fail baseline untested

jobs:
 build-amd64-libvirt                                          fail

------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary