
Re: [Xen-devel] [PATCH v4 06/12] xen/arm: compile and initialize vmap



On Sat, 2013-04-27 at 15:55 +0100, Stefano Stabellini wrote:
> On Fri, 26 Apr 2013, Ian Campbell wrote:
> > On Fri, 2013-04-26 at 16:28 +0100, Stefano Stabellini wrote:
> > > Rename EARLY_VMAP_VIRT_END and EARLY_VMAP_VIRT_START to
> > > VMAP_VIRT_END and VMAP_VIRT_START.
> > > 
> > > Defining VMAP_VIRT_START triggers the compilation of common/vmap.c.
> > > 
> > > Define PAGE_HYPERVISOR and MAP_SMALL_PAGES (a no-op on ARM: we only
> > > support 4K pages, so in effect it is always set).
> > > 
> > > Implement map_pages_to_xen and destroy_xen_mappings.
> > > 
> > > Call vm_init from start_xen.
> > > 
> > > Changes in v4:
> > > - remove flush_tlb_local() from create_xen_entries;
> > 
> > Do you think the related one in create_p2m can go too?
> 
> No, because the flush in create_p2m is the only flush present in that
> function, and because we change p2m mappings there, we need to make sure
> that the guest doesn't keep accessing the old ones.

Isn't that flush in the wrong place then? A guest VCPU running on
another PCPU could easily repopulate the TLB entry between that flush
and the time we actually update the PTE.

At the point of the current flush, should we not instead remember that
a flush is required and do it once at the end? Or clear+flush here and
reinstate the mapping later.
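
Something along these lines is what I have in mind; a sketch only, the
loop shape and the mfn_to_p2m_entry()/flush_all_guests_tlb() names are
illustrative rather than the actual code, while lpae_t and write_pte()
are as in xen/arch/arm:

    static int create_p2m_entries_sketch(lpae_t *table, unsigned long first,
                                         unsigned long last, paddr_t maddr)
    {
        unsigned long i;
        bool_t flush = 0;

        for ( i = first; i <= last; i++, maddr += PAGE_SIZE )
        {
            /* Replacing a valid entry means a stale translation may
             * be cached somewhere until we flush. */
            if ( table[i].p2m.valid )
                flush = 1;

            write_pte(&table[i], mfn_to_p2m_entry(paddr_to_pfn(maddr)));
        }

        /* A single flush after the last PTE write: no window remains
         * in which a VCPU on another PCPU can refill its TLB from an
         * entry we are still about to overwrite. */
        if ( flush )
            flush_all_guests_tlb();

        return 0;
    }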

BTW, this flushes all VMIDs; in principle we only need to flush the
guest's one, which requires this CPU to switch to that VMID, issue the
flush (TLBIALL) and switch back to the caller's VMID. An optimisation
to keep in mind for later, I think.

It appears not to be possible to flush individual IPAs, so given a lack
of insight into the guest VA->IPA mapping we cannot optimise beyond a
full flush. A shame!

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel