
Re: [Xen-ia64-devel] PATCH: cleanup of tlbflush



On Wed, May 10, 2006 at 07:38:12PM +0800, Tian, Kevin wrote:
> >From: Tristan Gingold [mailto:Tristan.Gingold@xxxxxxxx]
> >Sent: May 10, 2006 18:47
> >>
> >> I see your concern about flush efficiency. However, we still need to
> >> set the necessary mask bits for correctness, right?
> >Not yet, because pages are not transferred.
> 
> It's not specific to page flipping. Simple page sharing has the same 
> problem.
> 
> >
> >> It would be difficult to track
> >> exactly which processors have a footprint of the different ungranted
> >pages.
> >> Tracking that list may instead hurt performance elsewhere.
> >> So setting domain_dirty_cpumask to the processors the domain is
> >> currently running on can be a simple/safe approach at the current
> >> stage, though performance may be affected.
> >Unfortunately, performance is so badly affected that using SMP-g is
> >useless!
> 
> If correctness becomes an issue, e.g. a shared va has a footprint on 
> several vcpus, you have to flush the TLB on multiple processors or else 
> SMP-g is broken.
> 
> After more thought, I think there's no need for flush_tlb_mask to flush 
> both the whole TLB and the whole VHPT. flush_tlb_mask should just do what 
> the name stands for: flush all related TLBs indicated in 
> domain_dirty_cpumask. The affected software structures can instead 
> always be flushed in destroy_grant_host_mapping().
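As a rough illustration of the point above, here is a minimal sketch of a flush restricted to the dirty mask. All names are invented for the sketch (the mask is a plain bitmask, flush_local_tlb() stands in for the real per-CPU purge/IPI); this is not Xen's actual implementation.

```c
#include <assert.h>

#define NR_CPUS 8

static int flushes[NR_CPUS];            /* count flush requests per CPU */

/* Stand-in for the real per-CPU TLB purge (ptc.e via IPI on ia64). */
static void flush_local_tlb(int cpu)
{
    flushes[cpu]++;
}

/* Flush exactly the CPUs whose bit is set in the dirty mask --
 * what flush_tlb_mask's name promises, and nothing more. */
static void flush_tlb_mask(unsigned long dirty_mask)
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if (dirty_mask & (1UL << cpu))
            flush_local_tlb(cpu);
}
```

CPUs outside the mask are never touched, so a domain that only ever ran on two processors only costs two flushes.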
> 
> For xen/x86, destroy_grant_host_mapping clears the affected pte entry in 
> the writable page table, or the pte entry in the shadow page table, based 
> on host_addr.
> 
> For xen/ia64, the vhpt table can likewise be flushed by host_addr in 
> destroy_grant_host_mapping. For each page requested for unmap, only the 
> affected vhpt entry is flushed, and there's no need for a full purge.
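To make the per-entry flush concrete, here is a tiny software model of the idea. The real ia64 VHPT is a hardware-walked hash table; the hash function, entry layout, and all names below are invented for the sketch, not the real structures.

```c
#include <assert.h>
#include <stdint.h>

#define VHPT_ENTRIES 256
#define PAGE_SHIFT   14                 /* assume 16KB pages */
#define TAG_INVALID  ((uint64_t)-1)

struct vhpt_entry { uint64_t tag; };    /* toy model of a VHPT entry */

static struct vhpt_entry vhpt[VHPT_ENTRIES];

/* Toy hash from a guest virtual address to a VHPT slot. */
static unsigned vhpt_hash(uint64_t va)
{
    return (va >> PAGE_SHIFT) % VHPT_ENTRIES;
}

/* Invalidate only the entry covering host_addr (one page),
 * instead of purging the whole table. */
static void vhpt_flush_address(uint64_t host_addr)
{
    vhpt[vhpt_hash(host_addr)].tag = TAG_INVALID;
}
```

Since destroy_grant_host_mapping already receives host_addr, it has exactly the address it needs to compute the slot; every other entry in the table stays valid.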
> 
> The key point is to pass in the gva address (host_addr) which was 
> previously mapped to the granted frame. It's the guest's responsibility 
> to record those mapped addresses and pass them in with the unmap request. 
> For example, xen/x86 uses a pre-allocated virtual address range while 
> xen/ia64 uses an identity-mapped one. This is the current para-driver 
> style, and we can trust the domain here, since the guest needs to be 
> cooperative or else the domain itself gets messed up, not xen.
> 
> Isaku, how about your thought on it?

I don't think that tracking virtual addresses causes much performance loss,
at least for vbd.
The reason is that an underlying block device doesn't need to
read its data, so unmapping such a granted page doesn't require
any flush. (I'm just guessing; the md driver or lvm may read the
contents to calculate a checksum, though.)
We could enhance the grant table to allow no-read/no-write (DMA-only)
mappings.
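A minimal sketch of that suggested extension: GNTMAP_dma_only below is an invented flag (only names like GNTMAP_readonly resemble Xen's real grant flags), standing for a mapping the CPU never reads or writes, so tearing it down would need no TLB flush at all.

```c
#include <assert.h>
#include <stdbool.h>

#define GNTMAP_readonly  (1 << 0)      /* modeled on the real flag */
#define GNTMAP_dma_only  (1 << 1)      /* invented for this sketch */

/* A mapping the CPU never touched leaves no TLB footprint,
 * so unmapping it can skip the remote flush entirely. */
static bool unmap_needs_tlb_flush(unsigned flags)
{
    return !(flags & GNTMAP_dma_only);
}
```

For the vbd case above, the block device only DMAs into the page, so its unmap path could take the flush-free branch.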

-- 
yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

