[xen-unstable bisection] complete test-amd64-i386-xl-qemut-debianhvm-amd64-xsm
branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemut-debianhvm-amd64-xsm
testid xen-boot

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c5b9805bc1f793177779ae342c65fcc201a15a47
  Bug not present: b199c44afa3a0d18d0e968e78a590eb9e69e20ad
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/106197/

commit c5b9805bc1f793177779ae342c65fcc201a15a47
Author: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
Date:   Wed Feb 22 14:38:06 2017 +0100

    efi: create new early memory allocator

There is a problem with place_string(), which is used as an early memory
allocator. It hands out memory chunks starting from the start symbol and
going down. Sadly, this does not work when Xen is loaded using the
multiboot2 protocol, because then start lives at the 1 MiB address and we
must not allocate memory below it. So, I tried to use the mem_lower
address calculated by GRUB2. However, this works only on some machines:
there are machines in the wild (e.g. Dell PowerEdge R820) which use the
first ~640 KiB for boot services code or data... :-(((

Hence, we need a new memory allocator for the Xen EFI boot code which is
quite simple and generic and can be used by place_string() and
efi_arch_allocate_mmap_buffer(). I considered the following solutions:

1) We could use the native EFI allocation functions (e.g. AllocatePool()
or AllocatePages()) to get a memory chunk. However, later (somewhere in
__start_xen()) we would have to copy its contents to a safe place, or
reserve it in the e820 memory map and map it into the Xen virtual address
space. This means that the code referring to the Xen command line, the
loaded modules and the EFI memory map, mostly in __start_xen(), would be
further complicated and would diverge from the legacy BIOS case.
Additionally, both of the former items have to be placed below 4 GiB,
because their addresses are stored in the multiboot_info_t structure,
whose relevant members are 32-bit (see the sketch below).

2) We could allocate a memory area statically somewhere in the Xen code
and use it as a pool for early dynamic allocations. This looks quite
simple; additionally, it would not depend on EFI at all and could be used
on legacy BIOS platforms too if needed. However, we must choose the size
of this pool carefully: we do not want to increase the Xen binary size or
waste memory unduly, yet at the very least the memory map on x86 EFI
platforms must fit. As I saw on a small machine, e.g. an IBM System x3550
M2 with 8 GiB of RAM, the memory map may contain more than 200 entries,
and every entry on the x86-64 platform is 40 bytes in size, so the EFI
memory map alone needs more than 8 KiB. Additionally, if we use this pool
for Xen and module command line storage (used when xen.efi is executed as
an EFI application) then we should add, I think, about 1 KiB. To be on
the safe side we should therefore assume a pool of at least 64 KiB, about
four times the calculation above. However, during discussion on xen-devel
Jan Beulich suggested that, just in case, we use a 1 MiB memory pool as
in the original place_string() implementation. So, let's use 1 MiB as
proposed.
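For concreteness, here is a minimal sketch of the allocation step that
option 1 above would involve, written against the standard UEFI boot
services; the efi_bs pointer and the helper name are illustrative
assumptions, not code from this patch:

    /* Option 1, illustrative only: grab a chunk below 4 GiB so that its
     * address fits the 32-bit fields of multiboot_info_t.  Assumes the
     * firmware's EFI_BOOT_SERVICES table is reachable through a
     * hypothetical efi_bs pointer and the usual UEFI headers are in
     * scope. */
    static void *efi_alloc_below_4g(UINTN bytes)
    {
        /* For AllocateMaxAddress, addr is the address ceiling on input
         * and the allocation's base on successful return. */
        EFI_PHYSICAL_ADDRESS addr = 0xffffffffULL;
        UINTN pages = (bytes + EFI_PAGE_SIZE - 1) >> EFI_PAGE_SHIFT;

        if ( EFI_ERROR(efi_bs->AllocatePages(AllocateMaxAddress,
                                             EfiLoaderData, pages, &addr)) )
            return NULL;

        /* The contents would still have to be copied to a safe place, or
         * the range reserved in the e820 map and mapped, later in
         * __start_xen(); this is the extra complication described
         * above. */
        return (void *)(unsigned long)addr;
    }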
If we think that we should not waste unallocated memory in the pool on a
running system, then we can mark this region as __initdata and move all
required data to dynamically allocated places somewhere in __start_xen().

2a) We could put the memory pool into the .bss.page_aligned section and
allocate memory chunks starting from its lowest address. After the init
phase we can free the unused portion of the pool, as is done for the
.init.text and .init.data sections. This way we do not need to allocate
any space in the image file, and freeing the unused area of the pool is
very simple.

Solution #2a is what is now implemented, because it is quite simple and
requires a limited number of changes, especially in __start_xen(). The
new allocator is quite generic and can be used on ARM platforms too,
though it is not enabled there yet because some prerequisites are still
missing; the list of them is placed just before the ebmalloc code.

Signed-off-by: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
Acked-by: Julien Grall <julien.grall@xxxxxxx>
Reviewed-by: Doug Goldstein <cardoe@xxxxxxxxxx>
Tested-by: Doug Goldstein <cardoe@xxxxxxxxxx>
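A condensed, self-contained sketch of the #2a design described in the
commit message above; the ebmalloc() the patch actually adds differs in
details (for instance it exits via blexit() on exhaustion), and the
EBMALLOC_SIZE name for the pool size constant is an assumption here:

    #include <stddef.h>

    #define EBMALLOC_SIZE (1UL << 20)  /* 1 MiB pool, as settled on above */

    /* Static pool; in Xen it sits in .bss.page_aligned, so it occupies
     * no space in the image file and its unused tail can be handed back
     * after the init phase, like .init.text/.init.data. */
    static unsigned char ebmalloc_mem[EBMALLOC_SIZE]
        __attribute__((__aligned__(4096)));
    static size_t ebmalloc_allocated;

    /* Bump allocator: chunks are handed out from the lowest address
     * upward, never freed individually. */
    static void *ebmalloc(size_t size)
    {
        void *ptr = ebmalloc_mem + ebmalloc_allocated;

        /* Round up so every allocation stays pointer-aligned. */
        size = (size + sizeof(void *) - 1) & ~(sizeof(void *) - 1);

        if ( EBMALLOC_SIZE - ebmalloc_allocated < size )
            return NULL;  /* out of static memory */

        ebmalloc_allocated += size;
        return ptr;
    }

Freeing the unused portion then amounts to releasing the whole pages
between ebmalloc_mem + ebmalloc_allocated (rounded up to a page
boundary) and the end of the pool back to the page allocator once
__start_xen() no longer needs the early data.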
For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-xl-qemut-debianhvm-amd64-xsm.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-qemut-debianhvm-amd64-xsm.xen-boot --summary-out=tmp/106197.bisection-summary --basis-template=105933 --blessings=real,real-bisect xen-unstable test-amd64-i386-xl-qemut-debianhvm-amd64-xsm xen-boot
Searching for failure / basis pass:
 106160 fail [host=chardonnay0] / 105966 [host=italia0] 105946 [host=chardonnay1] 105933 [host=pinot1] 105919 [host=italia1] 105900 [host=baroque0] 105896 [host=huxelrebe1] 105873 [host=rimava1] 105861 [host=baroque1] 105840 [host=elbling0] 105821 [host=nobling1] 105804 [host=italia0] 105790 [host=nobling0] 105784 [host=huxelrebe0] 105766 [host=nocera1] 105756 [host=fiano0] 105742 [host=elbling1] 105728 [host=nocera0] 105707 [host=merlot0] 105669 [host=chardonnay1] 105659 ok.
Failure / basis pass flights: 106160 / 105659
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8b4834ee1202852ed83a9fc61268c65fb6961ea7 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 1f24be6c945c8f8e25547aed4a56c092133df713
Basis pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 5cd2e1739763915e6b4c247eef71f948dc808bd5 9d5617cd89e7b285afa785b9a9d3495f4e2dffae
Generating revisions with ./adhoc-revtuple-generator
 git://xenbits.xen.org/linux-pvops.git#b65f2f457c49b2cfd7967c34b7a0b04c25587f13-b65f2f457c49b2cfd7967c34b7a0b04c25587f13
 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
 git://xenbits.xen.org/qemu-xen-traditional.git#b669e922b37b8957248798a5eb7aa96a666cd3fe-8b4834ee1202852ed83a9fc61268c65fb6961ea7
 git://xenbits.xen.org/qemu-xen.git#5cd2e1739763915e6b4c247eef71f948dc808bd5-57e8fbb2f702001a18bd81e9fe31b26d94247ac9
 git://xenbits.xen.org/xen.git#9d5617cd89e7b285afa785b9a9d3495f4e2dffae-1f24be6c945c8f8e25547aed4a56c092133df713
Loaded 7004 nodes in revision graph
Searching for test results:
 105659 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 5cd2e1739763915e6b4c247eef71f948dc808bd5 9d5617cd89e7b285afa785b9a9d3495f4e2dffae
 105669 [host=chardonnay1]
 105707 [host=merlot0]
 105728 [host=nocera0]
 105790 [host=nobling0]
 105756 [host=fiano0]
 105742 [host=elbling1]
 105784 [host=huxelrebe0]
 105766 [host=nocera1]
 105804 [host=italia0]
 105821 [host=nobling1]
 105840 [host=elbling0]
 105896 [host=huxelrebe1]
 105919 [host=italia1]
 105861 [host=baroque1]
 105873 [host=rimava1]
 105900 [host=baroque0]
 105933 [host=pinot1]
 105946 [host=chardonnay1]
 105966 [host=italia0]
 105994 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 80a7d04f532ddc3500acd7988917708a536ae15f
 106102 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8b4834ee1202852ed83a9fc61268c65fb6961ea7 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 1f24be6c945c8f8e25547aed4a56c092133df713
 106081 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8b4834ee1202852ed83a9fc61268c65fb6961ea7 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 cf5e1a74b9687be3d146e59ab10c26be6da9d0d4
 106122 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8b4834ee1202852ed83a9fc61268c65fb6961ea7 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 1f24be6c945c8f8e25547aed4a56c092133df713
 106138 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8b4834ee1202852ed83a9fc61268c65fb6961ea7 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 1f24be6c945c8f8e25547aed4a56c092133df713
 106170 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 728e90b41d46c1c1c210ac496204efd51936db75 89bca190824e223316bb5a6dc3024bc9aaf02ce1
 106179 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 b199c44afa3a0d18d0e968e78a590eb9e69e20ad
 106167 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8b4834ee1202852ed83a9fc61268c65fb6961ea7 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 1f24be6c945c8f8e25547aed4a56c092133df713
 106155 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 5cd2e1739763915e6b4c247eef71f948dc808bd5 9d5617cd89e7b285afa785b9a9d3495f4e2dffae
 106174 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe e88462aaa2f19e1238e77c1bcebbab7ef5380d7a d37f938821d7d9d3feb48225f88820102d90bc05
 106177 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 08c008de9c7d3ac71f71c87cc04a47819ca228dc 36b35babdf381b8e2843d37e29da2ddc01ec3282
 106178 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 435ae6afed876e47a8a6b12364ff1ec7a180b24f
 106160 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8b4834ee1202852ed83a9fc61268c65fb6961ea7 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 1f24be6c945c8f8e25547aed4a56c092133df713
 106180 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 9180f53655245328f06c5051d3298376cb5771b1
 106184 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 c5b9805bc1f793177779ae342c65fcc201a15a47
 106188 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 b199c44afa3a0d18d0e968e78a590eb9e69e20ad
 106189 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 c5b9805bc1f793177779ae342c65fcc201a15a47
 106192 pass b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 b199c44afa3a0d18d0e968e78a590eb9e69e20ad
 106197 fail b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 c5b9805bc1f793177779ae342c65fcc201a15a47
Searching for interesting versions
Result found: flight 105659 (pass), for basis pass
Result found: flight 106102 (fail), for basis failure
Repro found: flight 106155 (pass), for basis pass
Repro found: flight 106160 (fail), for basis failure
0 revisions at b65f2f457c49b2cfd7967c34b7a0b04c25587f13 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b669e922b37b8957248798a5eb7aa96a666cd3fe 57e8fbb2f702001a18bd81e9fe31b26d94247ac9 b199c44afa3a0d18d0e968e78a590eb9e69e20ad
No revisions left to test, checking graph state.
Result found: flight 106179 (pass), for last pass
Result found: flight 106184 (fail), for first failure
Repro found: flight 106188 (pass), for last pass
Repro found: flight 106189 (fail), for first failure
Repro found: flight 106192 (pass), for last pass
Repro found: flight 106197 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c5b9805bc1f793177779ae342c65fcc201a15a47
  Bug not present: b199c44afa3a0d18d0e968e78a590eb9e69e20ad
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/106197/

commit c5b9805bc1f793177779ae342c65fcc201a15a47
Author: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
Date:   Wed Feb 22 14:38:06 2017 +0100

    efi: create new early memory allocator

    (Commit message and tags as quoted in full above.)

pnmtopng: 249 colors found
Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-xl-qemut-debianhvm-amd64-xsm.xen-boot.{dot,ps,png,html,svg}.

----------------------------------------
106197: tolerable ALL FAIL

flight 106197 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/106197/

Failures :-/ but no regressions.

Tests which did not succeed, including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  6 xen-boot    fail baseline untested

jobs:
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm                 fail

------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
 http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
 http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
 http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

_______________________________________________
osstest-output mailing list
osstest-output@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/cgi-bin/mailman/listinfo/osstest-output