Re: [Xen-devel] x86: mapping from Dom0 to DomU
>>> On 21.04.17 at 15:04, <andr2000@xxxxxxxxx> wrote:
> Hi, all!
>
> I am working on a zero-copy scenario for x86
> and for that I am mapping pages from Dom0 to DomU
> (yes, I know there are at least security concerns).
>
> Everything is just fine, e.g. I can map grefs from Dom0 in DomU
> with gnttab_map_refs, until I try to mmap those pages in DomU
> with vm_insert_page and Xen starts to complain:
>
> (XEN) mm.c:989:d1v0 pg_owner 1 l1e_owner 1, but real_pg_owner 0
> (XEN) mm.c:1061:d1v0 Error getting mfn 20675a (pfn 1ac8de) from L1 entry
> 800000020675a027 for l1e_owner=1, pg_owner=1
>
> So, the offending(?) code is at [1], which doesn't allow the mapping
> I want. When it is removed with
>
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index f35e3116bb25..aeb93be8b529 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -980,6 +980,7 @@ get_page_from_l1e(
>       * dom0, until pvfb supports granted mappings. At that time this
>       * minor hack can go away.
>       */
> +#if 0
>      if ( (real_pg_owner == NULL) || (pg_owner == l1e_owner) ||
>           xsm_priv_mapping(XSM_TARGET, pg_owner, real_pg_owner) )
>      {
> @@ -988,6 +989,7 @@ get_page_from_l1e(
>                  real_pg_owner?real_pg_owner->domain_id:-1);
>          goto could_not_pin;
>      }
> +#endif
>      pg_owner = real_pg_owner;
>  }
>
> I can successfully mmap and use pages from Dom0 in DomU.
>
> Can anybody please explain why the use-case I am trying to implement
> is treated as an error from Xen's POV, and what would be the right way
> to do it?

Granted pages can be mapped only through the grant-table hypercall; see
public/grant_table.h's explanation of GNTMAP_host_map used with or
without GNTMAP_contains_pte. Other mapping attempts have to be refused,
or else the accounting done by the grant table code would be undermined.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel