Re: [Xen-devel] [PATCH 15/16] Infrastructure for manipulating 3-level event channel pages
>>> On 31.01.13 at 15:43, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
> +static long __map_l3_arrays(struct domain *d, xen_pfn_t *pending,
> +                            xen_pfn_t *mask, int nr_pages)
> +{
> +    int rc;
> +    void *mapping;
> +    struct page_info *pginfo;
> +    unsigned long gfn;
> +    int pending_count = 0, mask_count = 0;
> +
> +#define __MAP(src, dst, cnt)                                    \
> +    for ( (cnt) = 0; (cnt) < nr_pages; (cnt)++ )                \
> +    {                                                           \
> +        rc = -EINVAL;                                           \
> +        gfn = (src)[(cnt)];                                     \
> +        pginfo = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);    \
> +        if ( !pginfo )                                          \
> +            goto err;                                           \
> +        if ( !get_page_type(pginfo, PGT_writable_page) )        \
> +        {                                                       \
> +            put_page(pginfo);                                   \
> +            goto err;                                           \
> +        }                                                       \
> +        mapping = __map_domain_page_global(pginfo);             \
> +        if ( !mapping )                                         \
> +        {                                                       \
> +            put_page_and_type(pginfo);                          \
> +            rc = -ENOMEM;                                       \
> +            goto err;                                           \
> +        }                                                       \
> +        (dst)[(cnt)] = mapping;                                 \
> +    }
> +
> +    __MAP(pending, d->evtchn_pending, pending_count)
> +    __MAP(mask, d->evtchn_mask, mask_count)
> +#undef __MAP
> +
> +    rc = 0;
> +
> + err:
> +    return rc;
> +}

So this alone already amounts to up to 16 pages per guest, and hence a
theoretical maximum of 512k pages, i.e. 2G of mapped space. The global
page mapping area, however, is only 1G in size on x86-64 (I didn't check
ARM at all)... Which is why I said that you need to at least explain why
bumping that address range isn't necessary (i.e. why we think we really
don't want to support the theoretical maximum number of guests, and that
their count is always going to be low enough to also avoid resource
conflicts with other users of the interface).
> +static long evtchn_register_3level(evtchn_register_3level_t *arg)
> +{
> +    struct domain *d = current->domain;
> +    struct vcpu *v;
> +    int rc = 0;
> +    xen_pfn_t evtchn_pending[EVTCHN_MAX_L3_PAGES];
> +    xen_pfn_t evtchn_mask[EVTCHN_MAX_L3_PAGES];
> +    xen_pfn_t l2sel_mfn = 0;
> +    xen_pfn_t l2sel_offset = 0;
> +
> +    if ( d->evtchn_level == EVTCHN_3_LEVEL )
> +    {
> +        rc = -EINVAL;
> +        goto out;
> +    }
> +
> +    if ( arg->nr_vcpus > d->max_vcpus ||
> +         arg->nr_pages > EVTCHN_MAX_L3_PAGES )
> +    {
> +        rc = -EINVAL;
> +        goto out;
> +    }
> +
> +    memset(evtchn_pending, 0, sizeof(xen_pfn_t) * EVTCHN_MAX_L3_PAGES);
> +    memset(evtchn_mask, 0, sizeof(xen_pfn_t) * EVTCHN_MAX_L3_PAGES);
> +
> +#define __COPY_ARRAY(_d, _s, _nr)                               \
> +    do {                                                        \
> +        if ( copy_from_guest((_d), (_s), (_nr)) )               \
> +        {                                                       \
> +            rc = -EFAULT;                                       \
> +            goto out;                                           \
> +        }                                                       \
> +    } while (0)
> +    __COPY_ARRAY(evtchn_pending, arg->evtchn_pending, arg->nr_pages);
> +    __COPY_ARRAY(evtchn_mask, arg->evtchn_mask, arg->nr_pages);
> +#undef __COPY_ARRAY

I don't think this really benefits from using the __COPY_ARRAY() macro.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel