
[PATCH 6/9] automation: qemu-alpine-arm64: Cleanup and fixes


  • To: <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Michal Orzel <michal.orzel@xxxxxxx>
  • Date: Thu, 22 Sep 2022 15:40:55 +0200
  • Cc: Michal Orzel <michal.orzel@xxxxxxx>, Doug Goldstein <cardoe@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • Delivery-date: Thu, 22 Sep 2022 13:41:24 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Perform the following cleanup:
- rename the device tree from virt-gicv3 to virt-gicv2, as the GIC version
  used in this test is v2,
- use fdtput to perform modifications on the dtb instead of a
  dtc/sed/dtc round trip,
- set DEBIAN_FRONTEND=noninteractive to prevent apt-get from getting
  stuck on an interactive prompt waiting for an answer,
- fix the number of cpus in the device tree: currently we generate it
  with a single cpu but run QEMU with two,
- fix the memory size passed when generating the QEMU device tree, as it
  does not match the memory size QEMU is run with.
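The memory size fix can be sanity-checked with a few lines of shell (an
illustrative sketch, not part of the patch): with guest RAM starting at
0x40000000 and 2048 MiB passed to QEMU via -m 2048, the matching
MEMORY_END is 0xC0000000.

```shell
# Illustrative check, not part of the patch: compute the MEMORY_END that
# matches "-m 2048" when guest RAM starts at 0x40000000.
MEMORY_START=0x40000000
RAM_MIB=2048
MEMORY_END=$(printf '0x%X' $(( MEMORY_START + RAM_MIB * 1024 * 1024 )))
echo "$MEMORY_END"   # 0xC0000000
```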

Signed-off-by: Michal Orzel <michal.orzel@xxxxxxx>
---
 automation/scripts/qemu-alpine-arm64.sh | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/automation/scripts/qemu-alpine-arm64.sh b/automation/scripts/qemu-alpine-arm64.sh
index f4ac2d856fa0..7b52d77d3c84 100755
--- a/automation/scripts/qemu-alpine-arm64.sh
+++ b/automation/scripts/qemu-alpine-arm64.sh
@@ -2,6 +2,7 @@
 
 set -ex
 
+export DEBIAN_FRONTEND=noninteractive
 apt-get -qy update
 apt-get -qy install --no-install-recommends u-boot-qemu \
                                             u-boot-tools \
@@ -73,18 +74,17 @@ curl -fsSLO https://github.com/qemu/qemu/raw/v5.2.0/pc-bios/efi-virtio.rom
 ./binaries/qemu-system-aarch64 \
    -machine virtualization=true \
    -cpu cortex-a57 -machine type=virt \
-   -m 1024 -display none \
-   -machine dumpdtb=binaries/virt-gicv3.dtb
+   -m 2048 -smp 2 -display none \
+   -machine dumpdtb=binaries/virt-gicv2.dtb
+
 # XXX disable pl061 to avoid Linux crash
-dtc -I dtb -O dts binaries/virt-gicv3.dtb > binaries/virt-gicv3.dts
-sed 's/compatible = "arm,pl061.*/status = "disabled";/g' binaries/virt-gicv3.dts > binaries/virt-gicv3-edited.dts
-dtc -I dts -O dtb binaries/virt-gicv3-edited.dts > binaries/virt-gicv3.dtb
+fdtput binaries/virt-gicv2.dtb -p -t s /pl061@9030000 status disabled
 
 # ImageBuilder
 echo 'MEMORY_START="0x40000000"
-MEMORY_END="0x80000000"
+MEMORY_END="0xC0000000"
 
-DEVICE_TREE="virt-gicv3.dtb"
+DEVICE_TREE="virt-gicv2.dtb"
 XEN="xen"
 DOM0_KERNEL="Image"
 DOM0_RAMDISK="xen-rootfs.cpio.gz"
-- 
2.25.1