
Re: [Xen-ia64-devel] [PATCH] [Resend]Enable hash vtlb



On Monday 10 April 2006 at 14:24, Xu, Anthony wrote:
> Hi Tristan
[...]
> Because it is a per-VP VHPT, it seems easier to support SMP-g.
>
> In my mind, it's more natural to use IPI to emulate ptc.g.
In my experience, this is very slow.  I will publish figures later.
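
To illustrate where the time goes (a rough standalone sketch, not the Xen
code; all the names below are made up):

/* Standalone sketch, NOT Xen source: IPI-style emulation of ptc.g.
 * The issuing vcpu asks every online vcpu of the domain to purge the
 * same virtual range from its per-vcpu VHPT and waits for completion.
 * All names here (struct vcpu, vhpt_purge_range, ...) are made up. */
#include <stdint.h>
#include <stdio.h>

#define MAX_VCPUS 4

struct vcpu { int id; int online; /* per-vcpu VHPT would live here */ };
struct domain { struct vcpu vcpu[MAX_VCPUS]; };

/* Purge [va, va + size) from one vcpu's VHPT (stubbed). */
static void vhpt_purge_range(struct vcpu *v, uint64_t va, uint64_t size)
{
    printf("vcpu%d: purge va=%#llx size=%#llx\n", v->id,
           (unsigned long long)va, (unsigned long long)size);
}

/* Emulate a guest ptc.ga.  On real SMP hardware the loop body would be
 * an IPI to the physical CPU running each remote vcpu, and the caller
 * would spin until every target acknowledges -- that round trip is
 * where the cost comes from. */
static void emulate_ptc_ga(struct domain *d, uint64_t va, uint64_t size)
{
    for (int i = 0; i < MAX_VCPUS; i++)
        if (d->vcpu[i].online)
            vhpt_purge_range(&d->vcpu[i], va, size);   /* "IPI" stand-in */
}

int main(void)
{
    struct domain d = { .vcpu = { {0, 1}, {1, 1}, {2, 0}, {3, 1} } };
    emulate_ptc_ga(&d, 0x4000000000ULL, 1 << 14);
    return 0;
}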

> I know your method of emulating "ptc.g" is efficient, and works well so
> far.
>
> I saw below in SDM3 "Only one global purge transaction may be issued at a
> time by all processors, the operation is undefined otherwise. Software is
> responsible for enforcing this restriction"

> It seems you need to add a global lock to serialize "ptc.g" across all processors.
I may not have correctly understood what you mean.
The Linux kernel must have (and has) a lock around ptc.g.
Xen also has a lock for ptc.g (see ia64_global_tlb_purge).
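
For reference, the pattern is roughly the following (a much simplified
stand-in, not the real ia64_global_tlb_purge; a pthread mutex plays the
role of the Xen spinlock):

/* Simplified stand-in for the locking around a global purge -- NOT the
 * real ia64_global_tlb_purge.  A single hypervisor-wide lock keeps at
 * most one global purge transaction in flight, as the SDM requires. */
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t ptcg_lock = PTHREAD_MUTEX_INITIALIZER;

/* One hardware ptc.ga for a single page (stubbed out here). */
static void do_one_ptc_ga(uint64_t va, uint64_t log2_ps)
{
    (void)va; (void)log2_ps;
    /* asm volatile ("ptc.ga %0,%1" :: "r" (va), "r" (log2_ps << 2) : "memory"); */
}

void global_tlb_purge(uint64_t start, uint64_t end, uint64_t log2_ps)
{
    pthread_mutex_lock(&ptcg_lock);      /* serialise all global purges */
    for (uint64_t va = start; va < end; va += 1ULL << log2_ps)
        do_one_ptc_ga(va, log2_ps);
    pthread_mutex_unlock(&ptcg_lock);
}

int main(void)
{
    global_tlb_purge(0x2000000000ULL, 0x2000000000ULL + (4ULL << 14), 14);
    return 0;
}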

> If the ptc.g instruction is blocked by the lock, may other VCPUs in the same
> domain use the old TLB entries?
I don't really follow.  After vcpu_ptc_ga, all old TLB entries covered by 
the address range must be invalidated.
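
Concretely, it looks roughly like this (again a made-up sketch of a
per-vcpu hash VHPT, not the actual vcpu_ptc_ga):

/* Hypothetical sketch of range invalidation in a per-vcpu hash VHPT --
 * not the real vcpu_ptc_ga.  Every page of [va, va + size) is hashed
 * and any matching translation is marked invalid. */
#include <stdint.h>
#include <stddef.h>

#define VHPT_ENTRIES 1024                /* illustrative, power of two */

struct vhpt_entry { uint64_t tag; uint64_t pte; };   /* tag 0 == empty */

static struct vhpt_entry vhpt[VHPT_ENTRIES];

static size_t vhpt_hash(uint64_t va, unsigned ps)
{
    return (va >> ps) & (VHPT_ENTRIES - 1);
}

/* Invalidate every entry that could map a page in [va, va + size). */
void vhpt_ptc_range(uint64_t va, uint64_t size, unsigned ps)
{
    uint64_t end = va + size;

    for (uint64_t a = va & ~((1ULL << ps) - 1); a < end; a += 1ULL << ps) {
        struct vhpt_entry *e = &vhpt[vhpt_hash(a, ps)];
        if (e->tag == (a >> ps))
            e->tag = 0;                  /* drop the stale translation */
    }
}

int main(void)
{
    vhpt[vhpt_hash(0x8000000000ULL, 14)].tag = 0x8000000000ULL >> 14;
    vhpt_ptc_range(0x8000000000ULL, 1 << 14, 14);
    return 0;
}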

> I will add an option so that the collision chain can be configured. Then there
> are two methods to support SMP-g, coexisting in the VMM:
> 1. VHPT without collision chain + your approach of emulating ptc.g
> 2. VHPT with collision chain + IPI approach of emulating ptc.g.
After the comments here, it doesn't seem that easy!

> Let the performance data choose the better one.
Sure.

Maybe we could also keep the collision chain with a direct flush.
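
To make the trade-off concrete, the difference between the two variants
is roughly the following (hypothetical layouts, not the actual structures):

/* Illustrative entry layouts for the two VHPT variants discussed above
 * -- hypothetical, not the actual Xen/ia64 structures. */
#include <stdint.h>

/* No collision chain: one translation per hash bucket.  A conflicting
 * insert overwrites, and a purge can clear the bucket directly. */
struct vhpt_entry_flat {
    uint64_t tag;
    uint64_t pte;
    uint64_t itir;
};

/* With collision chain: entries hashing to the same bucket are linked,
 * so lookups miss less often, but a purge (and hence ptc.g emulation)
 * has to walk and unlink every entry covering the purged range. */
struct vhpt_entry_chained {
    uint64_t tag;
    uint64_t pte;
    uint64_t itir;
    struct vhpt_entry_chained *next;     /* collision chain */
};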

> >(BTW, I'd prefer a command line option such as vtlb=vp-vhpt or
> > vtlb=lp-vhpt rather than a compile-time option).
>
> Yes, we can do that. But I don't think it's necessary, because if one LP
> runs only one VCPU, VP-VHPT is equivalent to LP-VHPT.
>
>
> Thanks,
> Anthony
>
> >Tristan.


_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

