
Re: [Xen-ia64-devel] PATCH: cleanup of tlbflush



On Wednesday 10 May 2006 07:39, Tian, Kevin wrote:
> From: Tristan Gingold
>
> >Sent: 9 May 2006 21:23
> >
> >Hi,
> >
> >this patch tries to make the vtlb/vhpt management interface cleaner.
> >No new features added.
> >It also allows flushing region 7 (which fixes a bug).
> >
> >We really have to think about domain_dirty_cpumask and tlb_flush_mask.
> >Currently, domain_dirty_cpumask is never set, and setting it kills
> >performance.
> >
> >Tested by boot+halt of dom0+domU SMP.
> >
> >Tristan.
>
> Hi, Tristan,
>       Some comments here:
>
> - How about renaming include/asm-ia64/flushtlb.h to tlbflush.h, and thus
> avoiding several #ifndef XEN in Linux C files?
flushtlb.h is a standard Xen header.

> - I didn't find a definition for flush_local_tlb_vhpt. Is it a typo?
Where is flush_local_tlb_vhpt declared?
If you mean local_flush_tlb_all, it is declared in linux-xen/tlb.c.

> - remote_flush_tlb_vhpt has mismatched logic inside. You call
> vhpt_flush_address_remote together with a ptc.l, which makes the exact
> purpose of that function confusing.
Maybe I should write a comment.  remote_flush_tlb_vhpt should be renamed
cpu_flush_vhpt and can work on the local cpu.
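
For illustration only, here is a minimal sketch of what such a renamed helper
could look like; the body is an assumption (not the actual patch), and the
vhpt_flush_address_remote signature is assumed:

/* Sketch only -- not the actual patch.  Flush a virtual address range from
 * one cpu's VHPT and, when that cpu is the current one, also purge the
 * stale translations from the local TLB with ptc.l. */
static void cpu_flush_vhpt(int cpu, u64 vadr, u64 range)
{
    /* Purge the per-cpu VHPT entries covering [vadr, vadr + range). */
    vhpt_flush_address_remote(cpu, vadr, range);    /* assumed signature */

    /* Only our own mTLB can be purged with ptc.l; remote cpus would need
     * an IPI or ptc.g. */
    if (cpu == smp_processor_id()) {
        u64 addr;
        for (addr = vadr; addr < vadr + range; addr += PAGE_SIZE)
            ia64_ptcl(addr, PAGE_SHIFT << 2);
        ia64_srlz_d();                              /* serialize the purges */
    }
}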

> - It's better to add vcpu_flush_vtlb_all and vcpu_flush_vtlb_range too.
vcpu_flush_vtlb_all is in fact vcpu_flush_vtlb.
vcpu_flush_vtlb_range can be added for emulation of ptc.l (which is not 
currently emulated).
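
If ptc.l emulation is added later, the hook could look roughly like this
(a sketch only; the exact vcpu_ptc_l prototype and vcpu_flush_vtlb_range are
assumptions at this point):

/* Sketch: possible ptc.l emulation once vcpu_flush_vtlb_range exists.
 * vadr/log_range come straight from the guest's ptc.l operands. */
IA64FAULT vcpu_ptc_l(VCPU *vcpu, u64 vadr, u64 log_range)
{
    /* A local ptc only needs the current vcpu's vtlb/vhpt purged. */
    vcpu_flush_vtlb_range(vcpu, vadr, 1UL << log_range);
    return IA64_NO_FAULT;
}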

> Currently, due to the fact that there is only one vtlb entry for a
> para-domain, you always explicitly purge the one-entry itlb/dtlb directly
> in all places. However, in some places you know that a range is to be
> purged, while others mean a full vtlb purge, like vcpu_ptc_e. Since you're
> refactoring the vtlb purge interfaces, I prefer to differentiate those
> places, even though you can implement *_range the same as *_all at the
> current stage. Then later the interfaces can stay untouched even if the
> vtlb implementation is changed.
I thought I did it this way: *_range functions only flush a range, while
*_flush_vtlb functions flush the whole tlb.  Maybe I should rename the latter
to *_flush_vtlb_all?

> - vcpu_flush_vtlb is misleading; it really seems to mean
> vcpu_local_flush_vtlb, since you call vhpt_flush() and
> local_flush_tlb_all() inside, which are local stubs. Or you may add a
> check on the current pointer and branch to a remote version if you keep
> the vcpu pointer as a parameter.
Using my naming scheme, vcpu_* means local, while domain_* means global.
*_range means a range, while no suffix means the whole tlb.
This is at least consistent.
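
To illustrate the scheme (these prototypes are only an illustration, not the
final interface):

/* Illustration only -- vcpu_* operates on the current vcpu (local), while
 * domain_* covers every processor the domain runs on (global). */
void vcpu_flush_vtlb(struct vcpu *v);        /* whole vtlb, local  */
void vcpu_flush_vtlb_range(struct vcpu *v, u64 vadr, u64 range);
void domain_flush_vtlb(struct domain *d);    /* whole vtlb, global */
void domain_flush_vtlb_range(struct domain *d, u64 vadr, u64 range);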

> - Similarly, remote_flush_tlb_vhpt is not "remote", since it is still used
> as a local version.
Sure, but it is static.

> So, I suggest making a full interface list first, with clear names defined.
> Following are some of my thoughts:
> - Domain_ prefix: it doesn't need special local or remote versions.
> - Vcpu_ prefix: may need a local version. Vcpu_local_function(...) means
> a local vtlb operation on the current vcpu. Vcpu_function(struct vcpu*, ...)
> means that the operation can happen on any vcpu, specified by the first
> parameter. The vcpu pointer already covers the remote possibility here.
Many of the vcpu_ functions only work on the current vcpu.  I keep this
restriction.

> - local_ prefix: just means a local version, simply related to mTLB/VHPT.
>
> By following the above criteria, a possible list is below. (The local
> versions can also be removed, since they are included by the generic ones.)
>
> /* Affects all vcpus/all vtlbs/all vhpts/all tlbs. Here all vhpts/tlbs
> should mean the processors the domain is running on. */
> void domain_flush_vtlb_all(struct domain *d)   /* with IPI */
> void domain_flush_vtlb_range(struct domain *d, u64 vadr, u64 range)
> void domain_flush_destroy(struct domain *d)    /* with IPI */ (a better name?)
Ok for these.
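
As an illustration of the intended behaviour (assumed helpers, not the actual
implementation), domain_flush_vtlb_range could walk the domain's vcpus
roughly like this:

/* Sketch only.  Purge a range from every vtlb/vhpt the domain may use. */
void domain_flush_vtlb_range(struct domain *d, u64 vadr, u64 range)
{
    struct vcpu *v;

    for_each_vcpu (d, v) {
        /* Purge the one-entry itlb/dtlb kept for the para-virtual vcpu. */
        vcpu_purge_vtlb_entry(v, vadr, range);       /* assumed helper */
        /* Purge the VHPT of the cpu this vcpu runs on; purging the mTLB of
         * remote cpus would need an IPI or ptc.g, elided here. */
        cpu_flush_vhpt(v->processor, vadr, range);   /* see sketch above */
    }
}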

> /* Affects target vcpu/target vtlb/all vhpts/all tlbs. Here all vhpts/tlbs
> should mean the processors the target vcpu has ever run on. At the current
> stage, this can be considered the same as the domain-wise case above. */
> void vcpu_flush_vtlb_all(struct vcpu *v)
> void vcpu_local_flush_vtlb_all(void)
> void vcpu_flush_vtlb_range(struct vcpu *v, u64 vadr, u64 range)
> void vcpu_local_flush_vtlb_range(void)
We don't need both versions (i.e. with and without the vcpu parameter).
vcpu_local_flush_vtlb_range without parameters is strange too :-)
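
A sketch of the single-version shape this implies (assumed code, just to
show the convention):

/* Sketch: one entry point instead of a local/non-local pair.  The vcpu
 * pointer is kept for symmetry, but per the naming scheme above it must
 * be the current vcpu. */
void vcpu_flush_vtlb_range(struct vcpu *v, u64 vadr, u64 range)
{
    ASSERT(v == current);

    /* Purge the vcpu's one-entry itlb/dtlb and the local VHPT/mTLB. */
    vcpu_purge_vtlb_entry(v, vadr, range);              /* assumed helper */
    cpu_flush_vhpt(smp_processor_id(), vadr, range);    /* see sketch above */
}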

> /* Affects the target vhpt only and gives the caller the choice of how to
> purge the target tlb. For example, the caller may issue ptc.g after the
> vhpt purge on all processors, as in ptc.g emulation. */
> void vhpt_flush_range(int cpu, u64 vadr, u64 range)
Shouldn't be exported.

> /* Affect target vhpt/target tlb. */
> void vhpt_local_flush_range(u64 vadr, u64 range)
> void vhpt_flush_all(int cpu) /* with IPI */
> void vhpt_local_flush_all(void)
Shouldn't be exported.
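
In other words, the raw vhpt helpers would stay file-local, e.g.
(illustrative only):

/* Keep the raw VHPT helpers private so callers go through the vcpu_ and
 * domain_ interfaces instead of poking the VHPT directly. */
static void vhpt_local_flush_range(u64 vadr, u64 range);
static void vhpt_local_flush_all(void);
static void vhpt_flush_all(int cpu);            /* with IPI */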

> /* Can be based on vhpt_flush_all */
> void flush_tlb_mask(cpumask_t mask)
>
> Based on the above base interfaces, people can wrap more or simply add a
> few lines of code before/after the invocation.
>
> What do you think? :-)
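
For reference, one possible shape of flush_tlb_mask on top of the per-cpu
flush (a sketch; the IPI helper is hypothetical):

/* Sketch: flush the VHPT/mTLB of every cpu in the mask. */
void flush_tlb_mask(cpumask_t mask)
{
    int cpu;

    for_each_cpu_mask (cpu, mask) {
        if (cpu == smp_processor_id())
            vhpt_local_flush_all();         /* purge our own VHPT and TLB */
        else
            smp_send_flush_ipi(cpu);        /* hypothetical IPI wrapper
                                               around vhpt_local_flush_all */
    }
}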

> BTW, I agree that domain_dirty_cpumask is required. So is
> vcpu_dirty_cpumask. I'm not sure how much performance impact there is if
> we update them at context switch.
vcpu_dirty_cpumask is not used by Xen.
Updating domain_dirty_cpumask means killing performance.

> At least one simple enhancement
> we can do is to change the semantics of domain_dirty_cpumask. We can
> change it to indicate the processors the domain has ever run on. Then the
> update only happens at creation/destroy/migration, or even
> pause/unpause. Though this simple strategy is not fine-grained, we can
> still get a benefit, especially when the domain is bound to cpus.
One use of domain_dirty_cpumask is to flush the vtlb when a page is ungranted.
If this mask is ever set, doing I/O trashes the machine.
IMHO, this is the next major Xen/ia64 challenge: dealing correctly with
granted pages.
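
To make the cost concrete, this is roughly the flush the ungrant path needs
once domain_dirty_cpumask is maintained (assumed code, not the grant-table
implementation):

/* Sketch: when a granted page is unmapped, every cpu that may still cache
 * a translation for it has to be purged.  With domain_dirty_cpumask kept
 * up to date the flush is limited to those cpus; if the mask covers all
 * cpus, every I/O completion turns into a machine-wide flush. */
static void flush_after_ungrant(struct domain *d, u64 vadr, u64 range)
{
    domain_flush_vtlb_range(d, vadr, range);     /* vtlb/VHPT entries */
    flush_tlb_mask(d->domain_dirty_cpumask);     /* stale mTLB entries */
}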

Tristan.

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

