Re: [Xen-devel] [PATCH 8/9] vm_event: Add vm_event_ng interface
>>> On 05.06.19 at 19:01, <ppircalabu@xxxxxxxxxxxxxxx> wrote:
> On Tue, 2019-06-04 at 15:43 +0100, Andrew Cooper wrote:
>> On 30/05/2019 15:18, Petre Pircalabu wrote:
>> > +static int vm_event_channels_alloc_buffer(struct vm_event_channels_domain *impl)
>> > +{
>> > +    int i, rc = -ENOMEM;
>> > +
>> > +    for ( i = 0; i < impl->nr_frames; i++ )
>> > +    {
>> > +        struct page_info *page = alloc_domheap_page(impl->ved.d, 0);
>>
>> This creates pages which are reference-able (in principle) by the guest,
>> and are bounded by d->max_pages.

Not by an HVM one, because they can't reference pages by MFN. Or else, as
Petre implies, the ioreq approach would be wrong, too.

>> Both of these are properties of the existing interface which we'd
>> prefer to remove.
>
> The allocation mechanism is similar to the one used by ioreq (the
> main difference is the number of pages).

The question is whether you want to use the "caller owned" variant here.
I haven't thought through whether this would actually be better, so it's
merely a remark.

>> > +        if ( !page )
>> > +            goto err;
>> > +
>> > +        if ( !get_page_and_type(page, impl->ved.d, PGT_writable_page) )
>> > +        {
>> > +            rc = -ENODATA;
>> > +            goto err;
>> > +        }
>> > +
>> > +        impl->mfn[i] = page_to_mfn(page);
>> > +    }
>> > +
>> > +    impl->slots = (struct vm_event_slot *)vmap(impl->mfn, impl->nr_frames);
>>
>> You appear to have opencoded vmalloc() here. Is there any reason not
>> to use that?
>
> The problem with vmalloc is that if the pages are not assigned to a
> specific domain, the remapping fails in the monitor domain, e.g.:
> ...
> (XEN) mm.c:1015:d0v2 pg_owner d1 l1e_owner d0, but real_pg_owner d-1
> (XEN) mm.c:1091:d0v7 Error getting mfn 5fbf53 (pfn ffffffffffffffff)
> from L1 entry 80000005fbf53227 for l1e_owner d0, pg_owner d1

In which case maybe use vmalloc() and then assign_pages()?

Jan
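
For illustration only, here is a minimal sketch of what Jan's closing
suggestion might look like. It is not from the patch or the thread: the
function and field names (impl->slots, impl->mfn, impl->ved.d,
impl->nr_frames) are taken from the quoted hunk, while vzalloc(), vfree(),
vmap_to_page() and assign_pages() are existing Xen helpers whose exact
signatures (in particular assign_pages(), which changed across releases)
should be verified against the target tree.

/* Sketch, under the assumptions above: allocate the buffer with the
 * generic vmalloc machinery, then assign the backing pages to the
 * target domain so that the monitor domain's remapping no longer
 * trips over "real_pg_owner d-1" as in the log quoted above.
 */
static int vm_event_channels_alloc_buffer(struct vm_event_channels_domain *impl)
{
    int i, rc;

    /* Maps impl->nr_frames fresh, zeroed pages into Xen's vmap area. */
    impl->slots = vzalloc(impl->nr_frames * PAGE_SIZE);
    if ( !impl->slots )
        return -ENOMEM;

    for ( i = 0; i < impl->nr_frames; i++ )
    {
        /* Resolve the page backing each vmap'd frame. */
        struct page_info *page =
            vmap_to_page((char *)impl->slots + i * PAGE_SIZE);

        /* Assumed signature (d, pg, order, memflags), as in the
         * Xen trees of that era; later trees reordered it. */
        rc = assign_pages(impl->ved.d, page, 0, 0);
        if ( rc )
            goto err;

        impl->mfn[i] = page_to_mfn(page);
    }

    return 0;

 err:
    /* Unwinding of pages already assigned to the domain is elided here. */
    vfree(impl->slots);
    impl->slots = NULL;
    return rc;
}

This keeps the mapping logic in common code and leaves only the ownership
transfer in vm_event, which appears to be the point of the remark.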