
Re: [Xen-devel] [PATCH 1/4] VMX: streamline entry.S code



On 26/08/2013 12:01, Jan Beulich wrote:
>>>          push %r8
>>>          push %r9
>>>          push %r10
>>>          push %r11
>>>          push %rbx
>>> +        GET_CURRENT(%rbx)
>> This seems a little less obvious.  I presume you are just breaking a true
>> read-after-write data hazard on %rbx?
> No, this is to hide the latency between loading %rbx and use of
> it in the address of a memory access.

Right - so we are talking about the same thing.
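
In other words, roughly (schematic only; the intervening instructions are
abridged, and GET_CURRENT is taken to expand to a memory load of the
per-CPU current pointer into %rbx):

        push %rbx
        GET_CURRENT(%rbx)               /* load of 'current' issued early... */
        ...                             /* ...plenty of unrelated work while */
                                        /* the load is still in flight...    */
        cmpb $0,VCPU_vmx_launched(%rbx) /* ...so this %rbx-based memory      */
                                        /* operand no longer stalls on the   */
                                        /* load-to-use latency               */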

>
>>> -.globl vmx_asm_do_vmentry
>>> -vmx_asm_do_vmentry:
>> If you move the ENTRY(vmx_asm_do_vmentry) up from below, you should be
>> able to completely drop the jmp in it.
> That would be possible, at the expense of added padding. I prefer
> it the way it is now, as vmx_asm_do_vmentry is not performance
> critical (it is used exactly once per HVM vCPU).

There are a number of places where we have ENTRY()-like constructs but
don't want the alignment padding that comes with them.

Would an __ENTRY() macro go down well?  I can spin a patch for it.
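
Something along these lines is what I have in mind (a sketch only - it
assumes ENTRY() follows the usual .globl + ALIGN + label pattern in the
asm headers, which may not match the actual definition exactly):

/* Existing pattern (assumed): globally visible, alignment-padded entry. */
#define ENTRY(name)   \
        .globl name;  \
        ALIGN;        \
        name:

/* Proposed: globally visible entry point, but without the padding. */
#define __ENTRY(name) \
        .globl name;  \
        name: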

>>  However...
>>
>>> +.Lvmx_do_vmentry:
>>>          call vmx_intr_assist
>>>          call nvmx_switch_guest
>>>          ASSERT_NOT_IN_ATOMIC
>>>  
>>> -        GET_CURRENT(%rbx)
>>> -        cli
>> The movement of this cli indicates a possible issue.
>>
>> If we have softirqs pending, we jump to .Lvmx_process_softirqs, which
>> calls do_softirq, and then jumps back to the top of .Lvmx_do_vmentry,
>> which reruns the top of do_vmentry with interrupts now enabled.
> That was already the case before this patch. The "cli" only got moved
> past some address calculation (which clearly doesn't need to be done
> with interrupts disabled).

Sorry - I wasn't clear.  It was simply the movement of the cli that caused
me to notice, rather than the behaviour actually changing.

>
>> First of all, I can't see anything in vmx_intr_assist or
>> nvmx_switch_guest which should require calling them multiple times on a
>> vmentry.  They are also expecting to be called with interrupts disabled
>> (although I can't spot anything obvious in the callpath which would be
>> affected).
> And both of these functions had been called before disabling
> interrupts.

I need more coffee - I had mentally swapped cli and sti.

My point about re-executing them does still apply.  Looking at the code, I
do not believe it is correct to execute vmx_intr_assist or
nvmx_switch_guest multiple times on a context switch to an HVM VCPU.
vmx_intr_assist, at the very least, has a large amount of work to do before
it even considers exiting.

There does appear to be a possible interaction between do_softirq()
and vmx_intr_assist(), in which case vmx_intr_assist() should be run
after do_softirq(); that would also remove the apparently redundant run
with interrupts enabled.
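
For reference, this is the shape of the path as I read it (schematic only;
the softirq check is paraphrased, not copied from the patched entry.S):

.Lvmx_do_vmentry:
        call vmx_intr_assist            /* re-run on every softirq iteration */
        call nvmx_switch_guest          /* likewise re-run                   */
        ...
        cli
        cmpl $0,(%rdx)                  /* pending softirqs? (assumed check) */
        jnz  .Lvmx_process_softirqs
        ...                             /* restore guest state, VMRESUME     */

.Lvmx_process_softirqs:
        sti
        call do_softirq
        jmp  .Lvmx_do_vmentry           /* loops back above vmx_intr_assist, */
                                        /* hence the redundant run with      */
                                        /* interrupts enabled                */

Moving the softirq check (with interrupts disabled) ahead of the
vmx_intr_assist/nvmx_switch_guest calls would mean each of them runs
exactly once per entry, after any pending softirq work has completed.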

>
>>> -        cmpb $0,VCPU_vmx_launched(%rbx)
>>>          pop  %r15
>>>          pop  %r14
>>>          pop  %r13
>>>          pop  %r12
>>>          pop  %rbp
>>> +        mov  %rax,%cr2
>>> +        cmpb $0,VCPU_vmx_launched(%rbx)
>> Again, I presume the move of "mov %rax,%cr2" is about the %rax data hazard?
> The %cr2 write's move is indeed debatable - I tried to get it farther
> away from the producer of the data in %rax, but it's not clear
> whether that's very useful. The second purpose was to get
> something interleaved with the many "pop"s, so that the CPU can
> keep more than just its memory load ports busy. If controversial,
> I'm fine with undoing that change.
>
> Jan

My understanding of a serialising instruction is that it forces the
completion of all previous instructions before it starts, and prevents
the issue of any subsequent instructions until it has itself completed.

Therefore, I doubt the interleaving has the intended effect.
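
For illustration (and assuming that mov to %cr2, like the other
mov-to-control-register forms, is indeed architecturally serialising),
the interleaved sequence would behave roughly like this:

        pop  %r15
        pop  %r14
        pop  %r13
        pop  %r12
        pop  %rbp
        mov  %rax,%cr2                  /* serialising: waits for all of the */
                                        /* pops above to retire, and holds   */
                                        /* back everything below until it    */
                                        /* completes, so it cannot overlap   */
                                        /* with the surrounding loads        */
        cmpb $0,VCPU_vmx_launched(%rbx)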

~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

