Re: [Xen-devel] [PATCH v2 REPOST 03/12] x86/mm: add HYPERVISOR_memory_op to acquire guest resources
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 29 August 2017 10:28
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; George Dunlap
> <George.Dunlap@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>;
> xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-devel] [PATCH v2 REPOST 03/12] x86/mm: add
> HYPERVISOR_memory_op to acquire guest resources
>
> >>> On 29.08.17 at 11:13, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> Sent: 29 August 2017 10:00
> >> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> >> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; George Dunlap
> >> <George.Dunlap@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>;
> >> xen-devel@xxxxxxxxxxxxxxxxxxxx
> >> Subject: RE: [Xen-devel] [PATCH v2 REPOST 03/12] x86/mm: add
> >> HYPERVISOR_memory_op to acquire guest resources
> >>
> >> >>> On 29.08.17 at 10:32, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> >> From: Wei Liu [mailto:wei.liu2@xxxxxxxxxx]
> >> >> Sent: 28 August 2017 16:01
> >> >> On Tue, Aug 22, 2017 at 03:50:57PM +0100, Paul Durrant wrote:
> >> >> > +
> >> >> > +/*
> >> >> > + * Get the pages for a particular guest resource, so that they can be
> >> >> > + * mapped directly by a tools domain.
> >> >> > + */
> >> >> > +#define XENMEM_acquire_resource 28
> >> >> > +struct xen_mem_acquire_resource {
> >> >> > +    /* IN - the domain whose resource is to be mapped */
> >> >> > +    domid_t domid;
> >> >> > +    /* IN - the type of resource (defined below) */
> >> >> > +    uint16_t type;
> >> >> > +
> >> >> > +#define XENMEM_resource_grant_table 0
> >> >> > +
> >> >> > +    /*
> >> >> > +     * IN - a type-specific resource identifier, which must be zero
> >> >> > +     * unless stated otherwise.
> >> >> > +     */
> >> >> > +    uint32_t id;
> >> >> > +    /* IN - number of (4K) frames of the resource to be mapped */
> >> >> > +    uint32_t nr_frames;
> >> >> > +    /* IN - the index of the initial frame to be mapped */
> >> >> > +    uint64_aligned_t frame;
> >> >> > +    /* IN/OUT - If the tools domain is PV then, upon return, gmfn_list
> >> >> > +     *          will be populated with the MFNs of the resource.
> >> >> > +     *          If the tools domain is HVM then it is expected that, on
> >> >> > +     *          entry, gmfn_list will be populated with a list of GFNs
> >> >> > +     *          that will be mapped to the MFNs of the resource.
> >> >> > +     */
> >> >> > +    XEN_GUEST_HANDLE(xen_pfn_t) gmfn_list;
> >> >>
> >> >> Why is it not possible to make PV do the same thing as HVM?
> >> >
> >> > Because PV guests don't use a P2M as such.
> >>
> >> They certainly do, just Xen can't rely on (and hence use) it.
> >
> > Oh I know they have one but, as you say, Xen can't use it to put resources
> > at a particular guest location.
> >
> >>
> >> > An HVM guest can pass GFNs in and
> >> > say 'I want the resource mapped here'. A PV guest can't do that since it's
> >> > using MFNs directly... it has to deal with the resource wherever it may be.
> >>
> >> Xen does, however, maintain the M2P, so it would not be impossible
> >> to return GFNs here for PV guests, requiring the caller to translate
> >> them back to MFNs if so desired.
> >
> > That's possible, but still different to an HVM caller, which will pass GFNs
> > in rather than using any values returned. So I don't really see any advantage
> > in that.
>
> What's wrong with PV passing in PFNs, and Xen installing the
> resulting translations into the M2P right away (leaving it to the
> caller to just fix up its P2M)? That would sufficiently
> parallel XENMEM_exchange, for example.

How would that work when the MFNs are assigned to a different domain? E.g.
when I acquire domU's grant table for mapping in dom0, I don't want to take
ownership of the MFNs... they still belong to the domU.

  Paul

>
> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel