
Re: [PATCH v4 0/7] Static shared memory followup v2 - pt2


  • To: Julien Grall <julien@xxxxxxx>
  • From: Luca Fancellu <Luca.Fancellu@xxxxxxx>
  • Date: Fri, 14 Jun 2024 07:54:36 +0000
  • Accept-language: en-GB, en-US
  • Cc: Michal Orzel <michal.orzel@xxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>, Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, Oleksandr Tyshchenko <olekstysh@xxxxxxxxx>
  • Delivery-date: Fri, 14 Jun 2024 07:58:33 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v4 0/7] Static shared memory followup v2 - pt2

Hi Julien,

> On 13 Jun 2024, at 12:31, Julien Grall <julien@xxxxxxx> wrote:
> 
> Hi,
> 
> On 11/06/2024 13:42, Michal Orzel wrote:
>>> We would like this series to be in Xen 4.19. There was a misunderstanding on
>>> our side: we thought that, since the series was sent before the last posting
>>> date, it could be a candidate for merging in the new release. After speaking
>>> with Julien and Oleksii, we are now aware that we need to provide a
>>> justification for its inclusion.
>>> 
>>> Pros: with this series we are closing the circle for static shared memory,
>>> allowing it to use memory from the host or from Xen. It is also a feature
>>> that is not enabled by default, so it should not cause too much disruption
>>> in case of any bugs that escaped the review; nevertheless, we have tested
>>> many configurations with and without the feature enabled, if that can be
>>> an additional value.
>>> 
>>> Cons: we are touching some common code related to p2m, but there too the
>>> impact should be minimal, because the new code only comes into play for
>>> foreign mappings (to be confirmed, perhaps, by a p2m expert like Julien).
>>> 
>>> The comments on patch 3 of this series are addressed by this patch:
>>> https://patchwork.kernel.org/project/xen-devel/patch/20240528125603.2467640-1-luca.fancellu@xxxxxxx/
>>> And the series is fully reviewed.
>>> 
>>> So our request is to allow this series into 4.19. Oleksii, ARM maintainers,
>>> do you agree?
>> As the main reviewer of this series, I'm OK to have it in. It is nicely
>> encapsulated and the feature itself is still in an unsupported state. I
>> don't foresee any issues with it.
> 
> There are changes in the p2m code and the memory allocation for boot domains. 
> So is it really encapsulated?
> 
> For me there are two risks:
> * p2m (already mentioned by Luca): we modify the code that puts in place
> foreign mappings. The worst that can happen is that we don't release a
> foreign mapping, which would mean the page is never freed. AFAIK, we don't
> exercise this path in the CI.
> * domain allocation: this mainly looks like refactoring, and the path is
> exercised in the CI.
> 
> So I am not concerned with the domain allocation one. @Luca, would it be
> possible to detail how you tested that the foreign pages were properly
> removed?

So at first we tested the code, with and without the static shared memory
feature enabled, creating/destroying guests from Dom0 and checking that
everything was OK.

After a chat on Matrix, Julien suggested that using a virtio-mmio disk was a
better way to stress the foreign mapping code while looking for regressions.

Luckily I found this slide deck from @Oleksandr:
https://static.linaro.org/connect/lvc21/presentations/lvc21-314.pdf

So I did a setup using fvp-base, with a disk containing two partitions, one
for the Dom0 rootfs and one for the DomU rootfs; Dom0 sees this disk via
VirtIO block.

The Dom0 rootfs contains the virtio-disk backend: 
https://github.com/xen-troops/virtio-disk
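
For context, this backend exercises the foreign mapping interface that the
p2m changes touch. A minimal sketch of that map/unmap cycle, using
libxenforeignmemory (illustrative only, not the actual virtio-disk code; the
domid and gfn values are placeholders):

```c
/* Illustrative sketch of the foreign mapping cycle a virtio backend
 * performs; not the actual virtio-disk code. domid/gfn are placeholders. */
#include <sys/mman.h>
#include <xenforeignmemory.h>

int main(void)
{
    xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
    if (!fmem)
        return 1;

    xen_pfn_t gfn = 0x40000;            /* placeholder guest frame */
    void *va = xenforeignmemory_map(fmem, 2 /* domid */,
                                    PROT_READ | PROT_WRITE,
                                    1 /* nr frames */, &gfn, NULL);
    if (va) {
        /* ... service virtio requests against the mapped page ... */

        /* The mapping must be released on teardown: a leaked foreign
         * mapping means the guest page is never freed, which is exactly
         * the failure mode this test is looking for. */
        xenforeignmemory_unmap(fmem, va, 1);
    }

    xenforeignmemory_close(fmem);
    return 0;
}
```

(This only builds and runs on a Xen host with libxenforeignmemory installed.)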

And the DomU XL configuration uses these parameters:

cmdline="console=hvc0 root=/dev/vda rw"
disk = ['/dev/vda2,raw,xvda,w,specification=virtio']
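
The create/destroy cycle can be scripted along these lines (a sketch, not the
exact commands I ran; the config file name matches the run below, the guest
name "stressrootfs" and the free_memory check via `xl info` are assumptions):

```shell
#!/bin/sh
# Repeatedly create and destroy the guest; if foreign mappings leak,
# Xen's free_memory will not return to the baseline after destroy.
CFG=linux-ext-arm64-stresstests-rootfs.cfg
BASELINE=$(xl info | awk '/^free_memory/ { print $3 }')

for i in $(seq 1 10); do
    xl create "$CFG"
    sleep 30                                # let the guest boot
    xl destroy "$(xl domid stressrootfs)"   # assumed guest name
    sleep 5
    NOW=$(xl info | awk '/^free_memory/ { print $3 }')
    echo "iteration $i: free_memory ${NOW} MiB (baseline ${BASELINE} MiB)"
done
```

(Requires a Xen host with the xl toolstack.)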

Running the setup and creating/destroying the guest a couple of times shows
no regressions; here is an example of the output:

root@fvp-base:/opt/xtp/guests/linux-guests# xl create -c 
linux-ext-arm64-stresstests-rootfs.cfg
Parsing config from linux-ext-arm64-stresstests-rootfs.cfg
main: read frontend domid 2
  Info: connected to dom2

demu_seq_next: >XENSTORE_ATTACHED
demu_seq_next: domid = 2
demu_seq_next: devid = 51712
demu_seq_next: filename[0] = /dev/vda2
demu_seq_next: readonly[0] = 0
demu_seq_next: base[0]     = 0x2000000
demu_seq_next: irq[0]      = 33
demu_seq_next: >XENEVTCHN_OPEN
demu_seq_next: >XENFOREIGNMEMORY_OPEN
demu_seq_next: >XENDEVICEMODEL_OPEN
demu_seq_next: >XENGNTTAB_OPEN
demu_initialize: 1 vCPU(s)
demu_seq_next: >SERVER_REGISTERED
demu_seq_next: ioservid = 0
demu_seq_next: >RESOURCE_MAPPED
demu_seq_next: shared_iopage = 0x7f80c58000
demu_seq_next: >SERVER_ENABLED
demu_seq_next: >PORT_ARRAY_ALLOCATED
demu_seq_next: >EVTCHN_PORTS_BOUND
demu_seq_next: VCPU0: 3 -> 6
demu_register_memory_space: 2000000 - 20001ff
  Info: (virtio/mmio.c) virtio_mmio_init:165: 
virtio-mmio.devices=0x200@0x2000000:33
demu_seq_next: >DEVICE_INITIALIZED
demu_seq_next: >INITIALIZED
IO request not ready
(XEN) d2v0 Unhandled SMC/HVC: 0x84000050
(XEN) d2v0 Unhandled SMC/HVC: 0x8600ff01
(XEN) d2v0: vGICD: RAZ on reserved register offset 0x00000c
(XEN) d2v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
(XEN) d2v0: vGICR: SGI: unhandled word write 0x000000ffffffff to ICACTIVER0
[    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd0f0]
[    0.000000] Linux version 6.1.25 (lucfan01@e125770) (aarch64-poky-linux-gcc 
(GCC) 12.2.0, GNU ld (GNU Binutils) 2.40.20230119) #4 SMP PREEMPT Thu Jun 13 
21:55:06 UTC 2024
[    0.000000] Machine model: XENVM-4.19
[    0.000000] Xen 4.19 support found
[    0.000000] efi: UEFI not found.
[    0.000000] NUMA: No NUMA configuration found

[...]

[    0.737758] virtio_blk virtio0: 1/0/0 default/read/poll queues
demu_detect_mappings_model: Use foreign mapping (addr 0x5d660000)
[    0.764258] virtio_blk virtio0: [vda] 747094 512-byte logical blocks (383 
MB/365 MiB)
[    0.781866] Invalid max_queues (4), will use default max: 1.

[...]

INIT: Entering runlevel: 5
Configuring network interfaces... ip: SIOCGIFFLAGS: No such device
Starting syslogd/klogd: done

Poky (Yocto Project Reference Distro) 4.2.1 stressrootfs /dev/hvc0

stressrootfs login: [   62.593440] cfg80211: failed to load regulatory.db

Poky (Yocto Project Reference Distro) 4.2.1 stressrootfs /dev/hvc0

stressrootfs login: root
root@stressrootfs:~# ls /
bin         etc         lost+found  proc        sys         var
boot        home        media       run         tmp
dev         lib         mnt         sbin        usr
root@stressrootfs:~#

[...]

root@fvp-base:/opt/xtp/guests/linux-guests# xl destroy 2
  Error: reading frontend state failed

main: lost connection to dom2
demu_teardown: <INITIALIZED
demu_teardown: <DEVICE_INITIALIZED
demu_deregister_memory_space: 2000000
demu_teardown: <EVTCHN_PORTS_BOUND
demu_teardown: <PORT_ARRAY_ALLOCATED
demu_teardown: VCPU0: 6
demu_teardown: <SERVER_ENABLED
demu_teardown: <RESOURCE_MAPPED
demu_teardown: <SERVER_REGISTERED
demu_teardown: <XENGNTTAB_OPEN
demu_teardown: <XENDEVICEMODEL_OPEN
demu_teardown: <XENFOREIGNMEMORY_OPEN
demu_teardown: <XENEVTCHN_OPEN
demu_teardown: <XENSTORE_ATTACHED
  Info: disconnected from dom2

root@fvp-base:/opt/xtp/guests/linux-guests# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1024     2     r-----      66.6
root@fvp-base:/opt/xtp/guests/linux-guests#


Cheers,
Luca


 

