
Re: [Xen-devel] [PATCH v1 2/8]: PVH mmu changes



On Mon, 24 Sep 2012 13:24:12 +0100
Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx> wrote:

> On Mon, 24 Sep 2012, Ian Campbell wrote:
> > On Mon, 2012-09-24 at 12:55 +0100, Stefano Stabellini wrote:
> > > There are a few code style issues in this patch; I suggest you run
> > > it through scripts/checkpatch.pl, which should be able to catch
> > > all of them.
> > 
> > It would also be nice to start having some changelog entries for
> > these patches for the next posting. There's a lot of complex stuff
> > > > +       return count;
> > > 
> > > Who is going to remove the corresponding mapping from the vma?
> > > Also we might be able to replace the flush_tlb_all with a
> > > flush_tlb_range.
> > 
> > I'm not convinced that a guest-level TLB flush is either necessary
> > or sufficient here. What we are doing is removing entries from the
> > P2M, which means we need the appropriate HAP flush in the
> > hypervisor, and that flush must necessarily invalidate any stage-1
> > mappings this flush might also touch (i.e. the HAP flush must be a
> > superset of this flush).
> > 
> > Without the HAP flush in the hypervisor you risk guests being able
> > to see old p2m mappings via stale TLB entries, which is a security
> > issue AFAICT.
> 
> Yes, you are right: we need a flush in the hypervisor to flush the
> EPT. It could probably live in the implementation of
> XENMEM_add_to_physmap.
> 
> This one should be just for the vma mappings, so in the case of
> xen_unmap_domain_mfn_range it is unnecessary (given that it is
> not removing the vma mappings).
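
To make the distinction above concrete, here is a minimal guest-side sketch
of the shape being discussed (the helper name and body are illustrative
assumptions, not the posted patch): the guest flush covers only the vma
range being torn down, while the P2M/EPT flush stays in the hypervisor.

    /* Illustrative sketch only -- not the actual PVH patch.
     * Needs <linux/mm.h> and <asm/tlbflush.h>. */
    static void pvh_unmap_gfn_range(struct vm_area_struct *vma,
                                    unsigned long addr, int nr)
    {
        /* ... hypercalls that drop the p2m entries would go here; the
         * EPT/HAP flush for those entries must be done by Xen ... */

        /* Guest side: flush only the affected virtual range, rather
         * than the flush_tlb_all() questioned above. */
        flush_tlb_range(vma, addr,
                        addr + ((unsigned long)nr << PAGE_SHIFT));
    }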


My head spins looking at the INVEPT and INVVPID docs, but doesn't that
flush already happen in ept_set_entry():

    if ( needs_sync )
        ept_sync_domain(p2m->domain);
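
On VMX that sync boils down to a single-context INVEPT keyed on the
domain's EPT pointer, run on each pCPU the domain has used. The sketch
below is a paraphrase under that assumption, not verbatim Xen code (exact
names and signatures differ between versions):

    /* Paraphrase/assumption: the per-CPU worker behind ept_sync_domain(),
     * invoked roughly as on_each_cpu(ept_sync_one, &eptp, 1). */
    static void ept_sync_one(void *data)
    {
        u64 eptp = *(u64 *)data;   /* this domain's EPT pointer */

        /* A single-context INVEPT invalidates both guest-physical and
         * combined (guest-virtual) translations tagged with this EPTP,
         * so it subsumes the stage-1 flush discussed above. */
        __invept(INVEPT_SINGLE_CONTEXT, eptp, 0);
    }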


