Re: [PATCH v3 2/7] xen/memory: Fix acquire_resource size semantics
On Tue, Jan 12, 2021 at 3:57 PM Andrew Cooper <amc96@xxxxxxxxx> wrote:
>
> On 12/01/2021 20:15, Julien Grall wrote:
> > Hi Andrew,
> >
> > On Tue, 12 Jan 2021 at 19:49, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
> >> Calling XENMEM_acquire_resource with a NULL frame_list is a request for the
> >> size of the resource, but the returned 32 is bogus.
> >>
> >> If someone tries to follow it for XENMEM_resource_ioreq_server, the acquire
> >> call will fail as IOREQ servers currently top out at 2 frames, and it is only
> >> half the size of the default grant table limit for guests.
> >
> > I haven't yet looked at the code, just wanted to seek some clarification here.
> > Are we expecting someone else outside the tools (e.g. QEMU) to rely on
> > the new behavior? If so, what would happen if such code ran on older Xen?
> >
> > IOW, will that code require some compatibility layer to function?
>
> This is a total mess.
>
> Requesting the size of a resource was never implemented for userspace.
> The two current users of this interface (the domain builder for gnttab
> seeding, and qemu for the ioreq server) make blind mapping calls with
> a priori knowledge of the offsets and sizes.
>
> The next patch in this series adds the xenforeignmemory_resource_size()
> API for userspace, so we can reliably say that anything built against
> anything earlier than Xen 4.15 isn't making size requests.
>
> The added complication is that if you have Xen 4.15, and Linux 4.18 or
> later without the kernel fix provided by Roger (which is CC'd to stable),
> you'll get a bogus 32 back from size requests.
>
> But that is fine, because nothing is making size requests yet.
>
> The first concrete user of size requests will be Michał's Processor
> Trace series, where there is a daemon to map the PT buffer and shuffle
> the contents into a file. That will also depend on new content in 4.15.
>
> At some point in the future, qemu is going to have to change its
> approach, when we want to support more than 128 vcpus. This is the
> point at which the synchronous ioreq page needs to turn into two pages.
> In practice, qemu could make a size call, or could make the same
> calculation as Xen makes, as qemu does know d->max_vcpus via other means.
>
> Honestly, I'm expecting that Linux stable will do the rounds faster than
> Xen 4.15 gets rolled out to distros. The chances of having a bleeding
> edge Xen and not an up-to-date (or at least patched) dom0 kernel are
> minimal.

For our use case it's not great that we would need a newer kernel than
what standard distros ship, but as a workaround, compiling the newer
version of xen-privcmd as an out-of-tree LKM with the fix applied is
something we can live with while the distros catch up. Here:
https://github.com/tklengyel/xen-privcmd-lkm

Tamas