
[Xen-devel] Re: Performance overhead of paravirt_ops on native identified



Xin, Xiaohui wrote:
> What I mean is that if the binary of _spin_lock is like this:
> (gdb) disassemble _spin_lock
> Dump of assembler code for function _spin_lock:
> 0xffffffff80497c0f <_spin_lock+0>:      mov    1252634(%rip),%r11        # 0xffffffff805c9930 <test_lock_ops+16>
> 0xffffffff80497c16 <_spin_lock+7>:      jmpq   *%r11
> End of assembler dump.
> 
> In this situation the binary contains an indirect jump, so the overhead is 
> greater than that of a call.
> 

That's an indirect jump, though.  I don't think anyone was suggesting
using an indirect jump; the final patched version should be a direct
jump (instead of a direct call.)

I can see how indirect jumps might be slower, since they are probably
not optimized as aggressively in hardware as indirect calls -- indirect
jumps are generally used for switch tables, which often have low
predictability, whereas indirect calls are generally used for method
calls, which are (a) incredibly important for OOP languages, and (b)
generally highly predictable on the dynamic scale.

However, direct jumps and calls don't need prediction at all (although
of course rets do.)

        -hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

