Re: [Xen-devel] [PATCH v7 02/12] x86/mm: add HYPERVISOR_memory_op to acquire guest resources
> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of Jan Beulich
> Sent: 25 September 2017 14:50
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v7 02/12] x86/mm: add HYPERVISOR_memory_op to acquire guest resources
>
> >>> On 18.09.17 at 17:31, <paul.durrant@xxxxxxxxxx> wrote:
> > Certain memory resources associated with a guest are not necessarily
> > present in the guest P2M and so are not necessarily available to be
> > foreign-mapped by a tools domain unless they are inserted, which risks
> > shattering a super-page mapping.
>
> For grant tables I can see this as the primary issue, but isn't the
> goal of not exposing IOREQ server pages an even more important
> aspect, and hence more relevant to mention here?
>
> > NOTE: Whilst the new op is not intrinsically specific to the x86 architecture,
> >       I have no means to test it on an ARM platform and so cannot verify
> >       that it functions correctly. Hence it is currently only implemented
> >       for x86.
>
> Which will require things to be moved around later on. May I
> instead suggest to put it in common code and simply have a
> small #ifdef somewhere causing the operation to fail early for
> ARM?

Ok. If you prefer that approach then I'll do that.

> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -4768,6 +4768,107 @@ int xenmem_add_to_physmap_one(
> >      return rc;
> >  }
> >
> > +static int xenmem_acquire_grant_table(struct domain *d,
>
> I don't think static functions need a xenmem_ prefix.

Ok.

> > +static int xenmem_acquire_resource(xen_mem_acquire_resource_t *xmar)
>
> const?

Yes.
> > +{
> > +    struct domain *d, *currd = current->domain;
> > +    unsigned long *mfn_list;
> > +    int rc;
> > +
> > +    if ( xmar->nr_frames == 0 )
> > +        return -EINVAL;
> > +
> > +    d = rcu_lock_domain_by_any_id(xmar->domid);
> > +    if ( d == NULL )
> > +        return -ESRCH;
> > +
> > +    rc = xsm_domain_memory_map(XSM_TARGET, d);
> > +    if ( rc )
> > +        goto out;
> > +
> > +    mfn_list = xmalloc_array(unsigned long, xmar->nr_frames);
>
> Despite XSA-77 there should not be any new disaggregation related
> security risks (here: memory exhaustion and long lasting loops).
> Large requests need to be refused or broken up.

Ok, I'll put an upper limit on frames.

> > --- a/xen/common/grant_table.c
> > +++ b/xen/common/grant_table.c
> > @@ -3607,38 +3607,58 @@ int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
> >  }
> >  #endif
> >
> > -int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn,
> > -                     mfn_t *mfn)
> > -{
> > -    int rc = 0;
> >
> > -    grant_write_lock(d->grant_table);
> > +static mfn_t gnttab_get_frame_locked(struct domain *d, unsigned long idx)
> > +{
> > +    struct grant_table *gt = d->grant_table;
> > +    mfn_t mfn = INVALID_MFN;
> >
> > -    if ( d->grant_table->gt_version == 0 )
> > -        d->grant_table->gt_version = 1;
> > +    if ( gt->gt_version == 0 )
> > +        gt->gt_version = 1;
> >
> > -    if ( d->grant_table->gt_version == 2 &&
> > +    if ( gt->gt_version == 2 &&
> >           (idx & XENMAPIDX_grant_table_status) )
> >      {
> >          idx &= ~XENMAPIDX_grant_table_status;
>
> I don't see the use of this bit mentioned anywhere in the public
> header addition.

Good point, I do need to mention v2 now that Juergen has revived it.
> > +mfn_t gnttab_get_frame(struct domain *d, unsigned long idx)
> > +{
> > +    mfn_t mfn;
> > +
> > +    grant_write_lock(d->grant_table);
> > +    mfn = gnttab_get_frame_locked(d, idx);
> > +    grant_write_unlock(d->grant_table);
>
> Hmm, a read lock is insufficient here only because of the possible
> bumping of the version from 0 to 1 afaict, but I don't really see
> why what now becomes gnttab_get_frame_locked() does that in
> the first place.

It may be that, after Juergen's recent re-arrangements, the version will always be set by this point. I'll check, and drop to an ASSERT on the version and a read lock if I can.

> > --- a/xen/include/public/memory.h
> > +++ b/xen/include/public/memory.h
> > @@ -650,7 +650,43 @@ struct xen_vnuma_topology_info {
> >  typedef struct xen_vnuma_topology_info xen_vnuma_topology_info_t;
> >  DEFINE_XEN_GUEST_HANDLE(xen_vnuma_topology_info_t);
> >
> > -/* Next available subop number is 28 */
> > +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> > +
> > +/*
> > + * Get the pages for a particular guest resource, so that they can be
> > + * mapped directly by a tools domain.
> > + */
> > +#define XENMEM_acquire_resource 28
> > +struct xen_mem_acquire_resource {
> > +    /* IN - the domain whose resource is to be mapped */
> > +    domid_t domid;
> > +    /* IN - the type of resource (defined below) */
> > +    uint16_t type;
> > +
> > +#define XENMEM_resource_grant_table 0
> > +
> > +    /*
> > +     * IN - a type-specific resource identifier, which must be zero
> > +     *      unless stated otherwise.
> > +     */
> > +    uint32_t id;
> > +    /* IN - number of (4K) frames of the resource to be mapped */
> > +    uint32_t nr_frames;
> > +    /* IN - the index of the initial frame to be mapped */
> > +    uint64_aligned_t frame;
>
> There are 32 bits of unnamed padding ahead of this field - please
> name it and check it's set to zero.

Ah yes, for some reason the #define above had made me think things were aligned at this point.
> > +    /* IN/OUT - If the tools domain is PV then, upon return, gmfn_list
> > +     *          will be populated with the MFNs of the resource.
> > +     *          If the tools domain is HVM then it is expected that, on
>
> s/tools domain/calling domain/ (twice)?

Ok.

> > +     *          entry, gmfn_list will be populated with a list of GFNs
>
> s/will be/is/ to further emphasize the input vs output difference?

Ok.

> > +     *          that will be mapped to the MFNs of the resource.
> > +     */
> > +    XEN_GUEST_HANDLE(xen_pfn_t) gmfn_list;
>
> What about a 32-bit x86 tool stack domain as caller here? I
> don't think you want to limit it to just the low 16Tb? Nor are
> you adding a compat translation layer.

Ok. Good point, even though somewhat unlikely.

> > +};
> > +typedef struct xen_mem_acquire_resource xen_mem_acquire_resource_t;
> > +
> > +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>
> Also please group this with the other similar section, without
> introducing a second identical #if. After all we have ...
>
> > +/* Next available subop number is 29 */
>
> ... this to eliminate the need to have things numerically sorted in
> this file.

Ok.

  Paul

> Jan
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> https://lists.xen.org/xen-devel