
RE: [Xen-ia64-devel] paravirt_ops and its alternatives



Isaku Yamahata wrote:
> On Tue, Feb 05, 2008 at 10:17:10PM +0800, Dong, Eddie wrote:
>>      1: The coding style is not as good as the original IVT code.
> 
> I have to agree with you here.
> 
> 
>>              For example:
>> #ifdef CONFIG_XEN
>>         mov r24=r8
>>         mov r8=r18
>>         ;;
>> (p10)   XEN_HYPER_ITC_I
>>         ;;
>> (p11)   XEN_HYPER_ITC_D
>>         ;;
>>         mov r8=r24
>>         ;;
>> #else
>>              This kind of save/restore of R8 in each replacement
>>      (MACRO) is not well tuned. We probably need a big IVT code change
>>      to avoid the frequent save/restore in each MACRO.
>> 
>>              This needs a lot of effort. Of course we could take a
>>      shortcut before going into upstream.
> 
> Yes, such register value save/restore is suboptimal.

Another issue from me: why do we use R8/R9 as the in/out parameters of
the Xen static hypercall? That forces us to save/restore R8/R9 through
bank 0 registers. The static PAL call doesn't use R8/R9; should we?
Especially since pv_ops itself is Xen-neutral.
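
To make the concern concrete, here is roughly the convention I am
assuming behind those macros (the numbers below are made up for
illustration, they are not the real Xen definitions): each static
hyperprivop is just a break with an immediate number, and the
hypervisor takes the operand in r8 and returns its status in r8.

/* illustration only; the numbers are assumptions, not the real headers */
#define XEN_HYPERPRIVOP_ITC_D   0x05    /* assumed hyperprivop number */
#define XEN_HYPERPRIVOP_ITC_I   0x06    /* assumed hyperprivop number */

#define XEN_HYPER_ITC_D         break XEN_HYPERPRIVOP_ITC_D
#define XEN_HYPER_ITC_I         break XEN_HYPERPRIVOP_ITC_I

With a convention like that, the pte in r18 has to be moved into r8
before the break, and the original r8 parked in r24 and restored
afterwards, which is exactly the shuffling quoted above. If the static
call took its operand in a register that is already free at that point,
the wrapper would shrink to the predicated break itself.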


> I'm guessing such overhead is relatively small compared to the
> hyperprivop overhead, which issues a break instruction.

Yes, the overhead is mostly unobservable; this is mainly a coding-style
or code-quality concern. I assume the Linux folks are much more
paranoid about pursuing the "best".

> So presumably, for reducing such overhead, it is necessary to replace
> those break instructions with fast hyperprivops using the gate page.
> Such an optimization would be the next step after the upstream merge,
> though.

Yes, this could be a future effort; actually it is not pv_ops work but
Xen wrapper work.
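
To sketch that direction (this is only the idea, not a defined ABI;
the stub symbol and the assumption that b6/b7 are scratch at the call
site are mine), the call site would branch into a stub that Xen maps
into the guest on the gate page instead of trapping with a break:

        // today: (p10) XEN_HYPER_ITC_I is a break, i.e. a full trap into Xen
        // possible fast path through a Xen-provided gate-page stub (sketch only):
        movl    r24=xen_fast_itc_i_stub // assumed stub address on the gate page
        ;;
        mov     b7=r24
        ;;
(p10)   br.call.sptk.many b6=b7         // return link in b6; assumes b6/b7 free here
        ;;

Whether that is actually faster, and which registers such a stub is
allowed to clobber, is exactly the kind of thing the Xen wrapper work
would have to define.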

Let me create another thread for the compile-time dual IVT table vs.
single IVT table discussion; a rough sketch of the dual-table idea is
below.
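
As a teaser for that thread: the dual-table variant I have in mind
simply builds the native ivt.S and a paravirtualized xenivt.S side by
side and picks one at boot by pointing cr.iva at it. A rough sketch
(symbol and flag names are illustrative only):

        movl    r2=ia64_native_ivt      // table built from the native ivt.S
        movl    r3=xen_ivt              // table built from the pv xenivt.S
        movl    r14=is_running_on_xen   // assumed boot-time flag
        ;;
        ld4     r14=[r14]
        ;;
        cmp.ne  p6,p7=r14,r0            // p6 <- running on Xen
        ;;
(p6)    mov     r2=r3
        ;;
        mov     cr.iva=r2               // install the chosen IVT
        ;;
        srlz.i                          // serialize after writing the control register

The single-table alternative keeps one ivt.S and patches or #ifdefs the
Xen cases into it, which is where the style problem quoted at the top
comes from.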
thx, eddie

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

