Re: [PATCH for-4.16 v4] gnttab: allow setting max version per-domain
On 30/10/2021 08:53, Roger Pau Monné wrote:
> On Fri, Oct 29, 2021 at 05:39:52PM +0100, Andrew Cooper wrote:
>> On 29/10/2021 08:59, Roger Pau Monne wrote:
>>> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
>>> index e510395d08..f94f0f272c 100644
>>> --- a/xen/common/grant_table.c
>>> +++ b/xen/common/grant_table.c
>>> @@ -1917,11 +1918,33 @@ active_alloc_failed:
>>>  }
>>>  
>>>  int grant_table_init(struct domain *d, int max_grant_frames,
>>> -                     int max_maptrack_frames)
>>> +                     int max_maptrack_frames, unsigned int options)
>>>  {
>>>      struct grant_table *gt;
>>> +    unsigned int max_grant_version = options & XEN_DOMCTL_GRANT_version_mask;
>>>      int ret = -ENOMEM;
>>>  
>>> +    if ( max_grant_version == XEN_DOMCTL_GRANT_version_default )
>>> +        max_grant_version = opt_gnttab_max_version;
>>> +    if ( !max_grant_version )
>>> +    {
>>> +        dprintk(XENLOG_INFO, "%pd: invalid grant table version 0 requested\n",
>>> +                d);
>>> +        return -EINVAL;
>>> +    }
>>> +    if ( max_grant_version > opt_gnttab_max_version )
>>> +    {
>>> +        dprintk(XENLOG_INFO,
>>> +                "%pd: requested grant version (%u) greater than supported (%u)\n",
>>> +                d, max_grant_version, opt_gnttab_max_version);
>>> +        return -EINVAL;
>>> +    }
>>
>> I think this wants to live in sanitise_domain_config() along with all
>> the other auditing of flags and settings.
>
> The reason to place those there is that the sanity checks for the
> other grant table related parameters (max_grant_frames and
> max_maptrack_frames) are performed in this function also. I think it's
> better to keep the checks together.
>
> We should consider exporting the relevant values from grant table
> code and then moving all the checks to sanitise_domain_config, but
> that's likely follow-up work given the current point in the release.
>
>> Also, it can be simplified:
>>
>> if ( max_grant_version < 1 ||
>>      max_grant_version > opt_gnttab_max_version )
>> {
>>     dprintk(XENLOG_INFO, "Requested gnttab max version %u outside of
>> supported range [%u, %u]\n", ...);
>> }
>
> It was originally done this way so that the first check
> (!max_grant_version) could be adjusted when support for
> max_grant_version == 0 was introduced [0] in order to signal the
> disabling of grant tables altogether.
>
>>> +    if ( unlikely(max_page >= PFN_DOWN(TB(16))) && is_pv_domain(d) &&
>>> +         max_grant_version < 2 )
>>> +        dprintk(XENLOG_INFO,
>>> +                "%pd: host memory above 16Tb and grant table v2 disabled\n",
>>> +                d);
>>
>> This is rather more complicated.
>>
>> For PV, this going wrong in the first place is conditional on CONFIG_BIGMEM.
>> For HVM, it's the guest address size, not the host.
>> For ARM, I don't even know, because I've lost track of which bits of the
>> ABI are directmap in an otherwise translated domain.
>
> This was only aiming to cover the PV case, which I think is the more
> likely one. It's possible there are people attempting to create PV
> guests on a 16TB machine, but I think it's more unlikely that the
> guest itself will have 16TB of RAM.
>
>> I think it is probably useful to do something about it, but probably not
>> in this patch.
>
> I'm fine with this; we had no warning at all before, so I don't think
> it should be a hard requirement to add one now. It would be nice if
> there's consensus, but otherwise let's just skip it.
>
>> Perhaps modify domain_set_alloc_bitsize() to impose an upper limit for
>> the "host memory size matters" cases?
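(For context on the 16TB threshold above, per the public grant table ABI: a
v1 grant entry holds the granted frame in a 32-bit field, while v2 entries
carry a 64-bit frame. With 4KiB pages a v1 entry can therefore only name
frames below 2^32, i.e. machine memory below

    2^32 frames x 4 KiB/frame = 2^44 bytes = 16 TiB

PV guests grant machine frames directly, which is why the hunk above checks
max_page against PFN_DOWN(TB(16)) and only warns when v2 is unavailable.)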
>>
>> For the guest address size cases, this possibly wants to feed in to the
>> max policy calculations in the same way that shadow kinda does.
>
> So grant table version will basically clamp the amount of memory a
> guest can use?
>
> What about guests that don't use grant tables at all? Do we expect
> those to set the future max_grant_version to 0 in order to avoid the
> clamping without having to expose grant v2?
>
>>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>>> index 51017b47bc..0ec57614bd 100644
>>> --- a/xen/include/public/domctl.h
>>> +++ b/xen/include/public/domctl.h
>>> @@ -89,14 +89,20 @@ struct xen_domctl_createdomain {
>>>      /*
>>>       * Various domain limits, which impact the quantity of resources
>>>       * (global mapping space, xenheap, etc) a guest may consume. For
>>> -     * max_grant_frames and max_maptrack_frames, < 0 means "use the
>>> -     * default maximum value in the hypervisor".
>>> +     * max_grant_frames, max_maptrack_frames and max_gnttab_version < 0
>>> +     * means "use the default maximum value in the hypervisor".
>>>       */
>>>      uint32_t max_vcpus;
>>>      uint32_t max_evtchn_port;
>>>      int32_t max_grant_frames;
>>>      int32_t max_maptrack_frames;
>>>  
>>> +/* Grant version, use low 4 bits. */
>>> +#define XEN_DOMCTL_GRANT_version_mask    0xf
>>> +#define XEN_DOMCTL_GRANT_version_default 0xf
>>
>> This needs to be a toolstack decision, not something in Xen. This
>> doesn't fix the case where VMs can't cope with change underfoot.
>>
>> It is fine for the user to say "use the default", but this must be turned
>> into an explicit 1 or 2 by the toolstack, so that the version(s) visible
>> to the guest remain invariant while it is booted.
>
> Please bear with me, as I'm afraid I don't understand why this is
> relevant. Allowed max grant version can only change as a result of a
> migration.

No. Allowed max grant version is (well - needs to be) a fixed property of
the VM, even across migration.

It was a fundamental mistake to ever have gnttab v2 active by default,
without an enumeration, and with no way of turning it off. The same goes
for evtchn, but we've already taken a patch to knobble FIFO support.

The toolstack needs to explicitly select v1 or v2. It's fine to pick a
default on behalf of a user who doesn't care, but what moves in the migrate
stream must be an explicit, unambiguous value, so the destination Xen and
toolstack can reconstruct the VM exactly. "default" is ambiguous, and
cannot be recovered after the fact.

In particular, a VM with no explicit configuration, when booted on a Xen
with gnttab limited to v1 on the command line, should not have v2 become
accessible by migrating to a second Xen with no command line limit. It is
fine, if that VM subsequently reboots, to find that v2 is now available.

~Andrew
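(A minimal sketch of the toolstack-side resolution argued for above.
resolve_grant_version() is a hypothetical helper, not an existing
libxl/libxc function, and the exact createdomain field it feeds is not
shown in the quoted hunk; only the XEN_DOMCTL_GRANT_version_* constants
are taken from the patch. The point illustrated is that "default" is
collapsed to an explicit 1 or 2 before anything reaches Xen or the
migration stream:

    #define XEN_DOMCTL_GRANT_version_mask    0xf
    #define XEN_DOMCTL_GRANT_version_default 0xf

    /*
     * user_requested: 0 if the guest config said nothing, else 1 or 2.
     * The return value is what the toolstack records for the VM and what
     * it hands to Xen at domain creation, so the same explicit version
     * travels in the migrate stream.
     */
    static unsigned int resolve_grant_version(unsigned int user_requested)
    {
        unsigned int ver = user_requested;

        if ( !ver )
            ver = 2;   /* toolstack default; never forward "default" to Xen */

        return ver & XEN_DOMCTL_GRANT_version_mask;
    }
)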