
Re: [Xen-ia64-devel] [PATCH][RFC] per vcpu VHPT



On Monday 24 July 2006 at 16:22, Isaku Yamahata wrote:
> I sent out the old patches; sorry for that.
> Attached is the newest one. Please discard the old ones.
>
> On Mon, Jul 24, 2006 at 09:54:28PM +0900, Isaku Yamahata wrote:
> > Hi.
> >
> > I implemented per vcpu VHPT for non-VTi domain.
> > The motivation is to alleviate vcpu migration cost between physical cpus
> > with credit scheduler.
> > If more than one vcpu of the same domain runs on a physical cpu, the
> > VHPT needs to be flushed on every vcpu switch. I'd like to avoid that.
> > The patch is for discussion and performance evaluation. Not for commit.
> >
> >
> > I checked the mailing list archives and found the thread
> > Xen/ia64 - global or per VP VHPT
> > http://lists.xensource.com/archives/html/xen-devel/2005-04/msg01002.html
> >
> > That thread never reached a firm conclusion.
> > (At least that's my understanding; the thread was too long to follow
> > easily, so I might be wrong. Please correct me if so.)
> > With this patch we can measure the performance and decide whether to
> > include it or discard the idea.
> >
> >
> > This patch introduces a compile-time option, xen_ia64_pervcpu_vhpt=y,
> > to enable this feature, and a Xen boot-time option, pervcpu_vhpt=0,
> > to disable per-vcpu VHPT allocation.
> > The patch depends on the tlb tracking patch which I sent before.
> > I attached these patches for convenience.
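To make the two toggles concrete, here is a minimal sketch of how a boot-time pervcpu_vhpt switch could select between a per-vcpu VHPT and a shared per-domain one. All names here (opt_pervcpu_vhpt, vcpu_vhpt_alloc, the struct layouts) are illustrative assumptions, not the actual Xen/ia64 code from the patch:

```c
/* Hypothetical sketch of a boot-time pervcpu_vhpt toggle.
 * Names and structures are illustrative, not Xen's real ones. */
#include <stdlib.h>

#define VHPT_SIZE (1 << 16)          /* illustrative table size */

/* pervcpu_vhpt=0 on the Xen command line would clear this flag */
static int opt_pervcpu_vhpt = 1;

struct vhpt { unsigned char entries[VHPT_SIZE]; };

struct vcpu {
    struct vhpt *vhpt;               /* NULL => use the domain-wide table */
};

struct domain {
    struct vhpt shared_vhpt;         /* fallback when per-vcpu is disabled */
};

/* Choose the VHPT a vcpu should use: its own table when the per-vcpu
 * feature is enabled, the domain-wide one otherwise. */
static struct vhpt *vcpu_vhpt_alloc(struct domain *d, struct vcpu *v)
{
    if (opt_pervcpu_vhpt) {
        v->vhpt = calloc(1, sizeof(*v->vhpt));
        return v->vhpt;
    }
    v->vhpt = NULL;
    return &d->shared_vhpt;
}
```

With per-vcpu tables, each vcpu keeps its own translations across migrations between physical cpus, so no flush is needed on a vcpu switch; with the shared table, a flush on every switch between vcpus of the same domain remains necessary.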
Good work.

I don't understand why the per-vcpu patch relies on tlb-tracking.  Is it for
convenience?

Because I like flexibility, I'd vote for integrating this patch.  However, I'd
vote for removing #if CONFIG_XEN_IA64_PERVCPU_VHPT.  The command-line option
is good, and the overhead should be very small.

Tristan.

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
