
Re: [Xen-devel] Creating a magic page for PV mem_access



> >> At 19:11 +0000 on 03 Jun (1370286719), Aravindh Puthiyaparambil
> >> (aravindp) wrote:
> >>>>> I am trying to create a magic / special page for PV
> >>>>> mem_access. I am mimicking what is being done for the console page
> >>>>> (alloc_magic_pages()) on the tools side. On the hypervisor side, I
> >>>>> am planning to stash the address of this page in the pv_domain
> >>>>> structure, akin to how the special pages are stored in params[] of
> >>>>> the hvm_domain structure.
> >>>>
> >>>> OK, can you back up a bit and describe what you're going to use
> >>>> this page for?  A PV domain's 'magic' pages may not be quite what
> >>>> you want. First, they're owned by the guest, so the guest can write
> >>>> to them (and so they can't be trusted for doing hypervisor->dom0
> >>>> communications).
> >>>
> >>> You are right. I didn't realize magic pages are writable by the guest.
> >>> So this is not a good option.
> >
> > BTW, are the HVM special pages (access, paging, sharing) accessible by
> > the guest kernel?
> 
> Aravindh,
> I am responsible for this mess, so I'll add some information here.
> 
> Yes, the guest kernel can see these pages. Because of the way the e820 is
> laid out, the guest kernel should never venture in there, but nothing
> prevents a cunning/mischievous guest from doing so.
> 
> Importantly, the pages are not populated by the builder or BIOS. If you look
> at the in-tree tools (xenpaging, xen-access), the pages are populated,
> mapped, and removed from the physmap in a compact yet non-atomic
> sequence. In this manner, the pages are no longer available to the guest by
> the end of that sequence, and will be automatically garbage-collected once
> the ring-consuming dom0 tool dies. There is a window of opportunity for the
> guest to screw things up, so the recommendation is to carry out the above
> sequence with the domain paused, making it atomic from the guest's
> perspective.

This is what I am doing too. I was worried about the case where the special
page is pre-populated and a malicious guest has mapped this page in before a
mem_event listener has been attached to the domain. But I guess removing the
page from the physmap should cause the earlier mapping to become invalid.
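
For reference, here is a minimal sketch of that sequence as I read it out of
the in-tree tools (libxc calls only, error handling dropped; on the PV side
something other than an HVM param would have to advertise the pfn):

#include <xenctrl.h>
#include <sys/mman.h>

/* Sketch of the populate/map/remove-from-physmap sequence, done with
 * the domain paused so it is atomic from the guest's point of view.
 * Error handling is omitted for brevity. */
static void *map_access_ring(xc_interface *xch, uint32_t domid)
{
    unsigned long param;
    xen_pfn_t ring_pfn;
    void *ring_page;

    xc_domain_pause(xch, domid);

    /* HVM advertises the pfn via a param; PV would need its own way. */
    xc_get_hvm_param(xch, domid, HVM_PARAM_ACCESS_RING_PFN, &param);
    ring_pfn = param;

    /* The builder/BIOS does not populate the page, so back it with RAM, */
    xc_domain_populate_physmap_exact(xch, domid, 1, 0, 0, &ring_pfn);

    /* map it into this (dom0) process, */
    ring_page = xc_map_foreign_range(xch, domid, XC_PAGE_SIZE,
                                     PROT_READ | PROT_WRITE, ring_pfn);

    /* and drop it from the guest physmap so the guest can no longer
     * reach it; any earlier guest mapping goes stale with the p2m entry. */
    xc_domain_decrease_reservation_exact(xch, domid, 1, 0, &ring_pfn);

    xc_domain_unpause(xch, domid);
    return ring_page;
}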
 
> I discussed this with Stefano at a Hackathon about a year ago, briefly. The
> consensus was that the "true" solution is to create a new (set of)
> XENMAPSPACE_* variants for the Xen add-to-physmap calls
> (XENMEM_add_to_physmap).
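
That would mean extending the XENMAPSPACE_* list in
xen/include/public/memory.h. Just to illustrate the shape of it (the existing
values are real; the new name below is hypothetical, not an existing
interface):

/* Existing mapspaces in xen/include/public/memory.h: */
#define XENMAPSPACE_shared_info  0 /* shared info page */
#define XENMAPSPACE_grant_table  1 /* grant table page */
#define XENMAPSPACE_gmfn         2 /* GMFN */
#define XENMAPSPACE_gmfn_range   3 /* GMFN range */
#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another domain */

/* Hypothetical addition for mem_event ring pages (name is illustrative): */
#define XENMAPSPACE_mem_event_ring 5

A dom0 listener could then manage the ring via xc_domain_add_to_physmap()
with that space, instead of going through HVM params.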
> 
> >
> >>>> And second, I'm not sure that mem_access pages really need to be
> >>>> saved/restored with the rest of the VM -- I'd have thought that you
> >>>> could just set up a new, empty ring on the far side.
> >>>
> >>> I am trying to mimic what is being done on the HVM side for
> >>> mem_event pages. In setup_guest() (xc_hvm_build_x86.c), I see
> >>> "special pages" being created for the console, paging, access and
> >>> sharing ring pages. Then xc_set_hvm_param() is used to inform the
> >>> hypervisor. When a mem_event / mem_access client comes up, it uses
> >>> xc_get_hvm_param() to get the pfn and maps it in. I want to do
> >>> something similar for PV.
> >>
> >> Yep.  I think it might be better to invent a new interface for
> >> those pages, rather than using domain memory for them.  We can then
> >> deprecate the old HVM-specific params interface.
> >
> > Are you saying that these pages (access, sharing, paging) should live in
> > dom0 instead of the domain's memory and only be created when a
> > mem_event listener is active? Or am I completely off track?
> 
> And here is why the mess exists in the first place. Originally, these pages
> were allocated by the dom0 tool itself, and passed down to Xen, which
> would map them by resolving the user-space vaddr of the dom0 tool to the
> mfn. This had the horrible property of letting the hypervisor corrupt random
> dom0 memory if the tool crashed and the page got reused, or even if the
> page got migrated by the Linux kernel.

Yes, that does indeed look to be a risky approach to take. For now, I will 
stick with the HVM approach for PV guests too.
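
Concretely, the pv_domain stash I mentioned at the top of the thread would
look something like this (the field below is hypothetical and purely
illustrative; it mirrors how HVM keeps the pfn in
d->arch.hvm_domain.params[HVM_PARAM_ACCESS_RING_PFN]):

/* xen/include/asm-x86/domain.h (sketch): */
struct pv_domain
{
    /* ... existing fields ... */

    /* gpfn of the mem_access ring page -- hypothetical field. */
    unsigned long mem_access_ring_gpfn;
};

A small get/set pair (a PV analogue of xc_get_hvm_param() /
xc_set_hvm_param()) would then let the listener retrieve the pfn before
running the map-and-remove sequence above.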

> A kernel driver in dom0 would also have solved the problem, but then you
> take on the burden of supporting the entire set of dom0 versions out there.
> 
> Hope this helps

That was immensely helpful. Thank you.

Aravindh

