Re: [PATCH 0/9] gnttab: further work from XSA-380 / -382 context
- To: Jan Beulich <jbeulich@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
- From: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
- Date: Fri, 7 Oct 2022 18:09:38 +0000
- Cc: George Dunlap <George.Dunlap@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
- Delivery-date: Fri, 07 Oct 2022 18:10:04 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
On 07/10/2022 14:49, Jan Beulich wrote:
> On 26.08.2021 12:06, Jan Beulich wrote:
>> The first four patches can be attributed to the former, the last four
>> patches to the latter. The middle patch had been submitted standalone
>> before, has a suitable Reviewed-by tag, but also has an objection by
>> Andrew pending, which unfortunately has led to this patch now being
>> stuck. Short of Andrew being willing to settle the disagreement more
>> with Julien than with me (although I'm on Julien's side), I have no
>> idea what to do here.
>>
>> There's probably not much interrelation between the patches, so they
>> can perhaps go in about any order.
>>
>> 1: defer allocation of maptrack frames table
>> 2: drop a redundant expression from gnttab_release_mappings()
>> 3: fold recurring is_iomem_page()
>> 4: drop GNTMAP_can_fail
>> 5: defer allocation of status frame tracking array
> Just to make "official" what I've said in the course of the resource
> management discussion at the event in Cambridge: I'm withdrawing 1
> and 5, in the expectation that eager/lazy allocation of resources
> will become a property to be honored uniformly for a guest. With 2,
> 3, 4, and 6 already having gone in, it would still be nice to
> (finally) have feedback on ...
On these issues in particular, specific work was done in 4.16 to address
the impasse in a way which gets the savings in most cases, but without
increasing the risk of hitting known-buggy paths in guest drivers.
For patch 5, the ABI max version is now known to domain_create(), so the
status array allocation can safely be skipped when gnttab v2 isn't
available.
This gets the saving Julien wants on ARM, and on x86 we ought to
encourage people to restrict to v1 where possible, so they get the
savings too.
For patch 1, I agree with the observation that 1024 maptrack frames is a
silly default to have. Adjusting this appropriately will result in the
kind of savings wanted in the patch, without modifying the hypervisor
directly.
We should have a patch to xl (or is it libxl?) to make a better choice
than blindly picking 1024.
Having written this out, something does strike me as odd. These limits
are specified in frames, and for the gnttab limit, this equates into a
known number of grants. But struct maptrack is internal to Xen, so the
toolstack selecting 1024 frames can't know how many grant maps that
actually equates to in Xen. Right now, 1024 maptrack frames equates to
2^18 mappings (I think).
For traditional server-virt VMs, they can have 0 because the frontends
should never be mapping grants. We actually already do this for the
xenstored stubdom. The same is almost certainly true for qemu stubdoms.
For VMs in a bit of a more interesting configuration, e.g. with virtio,
then they need "some".
For device driver VMs, they do need dom0-like mapping capabilities.
Either way, there is definitely room to improve the status quo without
impacting runtime safety. If patches were to appear promptly, I think
there's probably wiggle room to consider them for 4.17 at this point.
~Andrew