
Re: [Xen-devel] [PATCH] Remove a set operation for VCPU_KICK_SOFTIRQ when post interrupt to vm.



Zhang, Yang Z wrote on 2015-09-08:
> Liuqiming (John) wrote on 2015-09-08:
>> Ok, I will try to explain, correct me if I got anything wrong:
>> 
>> The problem here is not that interrupts are lost, but that they are not
>> delivered in time.
>> 
>> There are basically two paths to inject an interrupt into a VM (or from
>> one vCPU to another vCPU):
>> 
>> Path 1, the traditional way:
>>        1) set the bit in the vlapic IRR field that represents the
>>           interrupt, then kick the vcpu
>>        2) a VCPU_KICK_SOFTIRQ softirq is raised
>>        3) if the VCPU_KICK_SOFTIRQ bit is not set, set it; otherwise
>>           return and do nothing
>>        4) send an EVENT_CHECK_VECTOR IPI to the target vcpu
>>        5) the target vcpu will VMEXIT due to
>>           EXIT_REASON_EXTERNAL_INTERRUPT
>>        6) the interrupt handler basically does nothing
>>        7) the interrupt in the IRR will be evaluated
>>        8) VCPU_KICK_SOFTIRQ will be cleared in do_softirq
>>        9) the interrupt will be injected into the vcpu on VMENTRY
>> 
>> Path 2, the Posted-interrupt way (current logic):
>>        1) set the bit in the posted-interrupt descriptor that represents
>>           the interrupt
>>        2) if the VCPU_KICK_SOFTIRQ bit is not set, set it; otherwise
>>           return and do nothing
>>        3) send a POSTED_INTR_NOTIFICATION_VECTOR IPI to the target vcpu
>>        4) if the target vcpu is in non-root mode it will receive the
>>           interrupt immediately, otherwise the interrupt will be injected
>>           on VMENTRY
>> 
>> Since the first operation in both paths is setting a bit that represents
>> the interrupt, no interrupts are lost.
>> 
>> The issue is:
>> In path 2, the first interrupt causes VCPU_KICK_SOFTIRQ to be set to 1,
>> and unless a VMEXIT occurs or something calls do_softirq directly,
>> VCPU_KICK_SOFTIRQ will not be cleared.  That makes later interrupt
>> injections get ignored at step 2), which delays irq handling in the VM.
>> 
>> And because path 2 sets VCPU_KICK_SOFTIRQ to 1, the kick-vcpu logic in
>> path 1 will also return at step 3), so this vcpu can only handle the
>> interrupt when something else causes a VMEXIT.
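
To make that gate concrete, here is a small stand-alone C model of it. This
is not Xen code: every name below is a stand-in for the per-cpu
VCPU_KICK_SOFTIRQ bit, the posted-interrupt descriptor's ON bit and
test_and_set_bit(). It only shows why a later interrupt is still recorded
but its notification IPI is suppressed until the next VMEXIT clears the flag:

#include <stdbool.h>
#include <stdio.h>

/* Toy model of the VCPU_KICK_SOFTIRQ gate described above.  Not Xen code. */

static bool kick_softirq_pending;   /* stand-in for softirq_pending(cpu) bit */
static bool pi_on;                  /* stand-in for the pi_desc ON bit       */

static bool test_and_set(bool *bit) /* stand-in for test_and_set_bit()       */
{
    bool old = *bit;
    *bit = true;
    return old;
}

/* Path 2, steps 1)-3): record the interrupt, then conditionally notify. */
static void deliver_posted_interrupt(void)
{
    pi_on = true;                               /* step 1): never lost       */
    if ( !test_and_set(&kick_softirq_pending) )
        puts("notification IPI sent");          /* steps 2)-3)               */
    else
        puts("IPI suppressed: kick flag already set, delivery delayed");
}

int main(void)
{
    deliver_posted_interrupt();  /* first interrupt: IPI sent, flag now set  */
    deliver_posted_interrupt();  /* later interrupt: suppressed, because the */
                                 /* flag is only cleared by do_softirq on    */
                                 /* the next VMEXIT                          */
    /* Path 1's kick gates on the same flag, so it is suppressed as well.    */
    return 0;
}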
> 
> Nice catch! But without setting the softirq, the interrupt delivery will
> also be delayed.
> Look at the code in vmx_do_vmentry:
> 
> cli
>         <--------------- a virtual interrupt that arrives here will be
> delayed: there is no softirq pending, and the posted-interrupt
> notification that follows may be consumed by another running VCPU, so
> the target VCPU will not be aware of the pending virtual interrupt
> before the next vmexit.
> cmp  %ecx,(%rdx,%rax,1)
> jnz  .Lvmx_process_softirqs
> 
> I need more time to think about it.

Hi Qiming,

Can you help take a look at this patch? Also, if possible, please help test
it, since I don't have a machine to test it on. Thanks very much.
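
To recap the window from my earlier reply before the patch itself, here is a
rough stand-alone C sketch of it. Again this is not Xen code: the names are
stand-ins, and the "recheck" flag models the extra bt/jc check the patch
adds; the control flow is only my reading of it, not the real entry path.

#include <stdbool.h>
#include <stdio.h>

/* Rough model of the window after "cli": the softirq check has already been
 * passed, so an interrupt posted now is only noticed at the next vmexit
 * unless the descriptor's ON bit is re-checked right before entry. */

static bool kick_softirq_pending;   /* per-cpu VCPU_KICK_SOFTIRQ stand-in    */
static bool pi_on;                  /* pi_desc ON bit stand-in               */
static bool recheck_on = true;      /* models the extra bt/jc the patch adds */

static void remote_post_interrupt(void)
{
    pi_on = true;   /* remote CPU records the interrupt, raises no softirq   */
}

int main(void)
{
    /* tail of vmx_do_vmentry, interrupts already disabled */
    if ( kick_softirq_pending )
    {
        puts("jnz .Lvmx_process_softirqs");
        return 0;
    }

    remote_post_interrupt();         /* lands exactly in the window above    */

    if ( recheck_on && pi_on )
        puts("pending posted interrupt noticed: redo the vmentry path");
    else
        puts("enter guest: the interrupt waits for the next vmexit");
    return 0;
}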

diff --git a/xen/arch/x86/hvm/vmx/entry.S b/xen/arch/x86/hvm/vmx/entry.S
index 664ed83..1ebb5d0 100644
--- a/xen/arch/x86/hvm/vmx/entry.S
+++ b/xen/arch/x86/hvm/vmx/entry.S
@@ -77,6 +77,9 @@ UNLIKELY_START(ne, realmode)
         call vmx_enter_realmode
 UNLIKELY_END(realmode)
 
+        bt   $0,VCPU_vmx_pi_ctrl(%rbx)
+        jc   .Lvmx_do_vmentry
+
         mov  %rsp,%rdi
         call vmx_vmenter_helper
         mov  VCPU_hvm_guest_cr2(%rbx),%rax
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index e0e9e75..315ce65 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1678,8 +1678,9 @@ static void __vmx_deliver_posted_interrupt(struct vcpu *v)
     {
         unsigned int cpu = v->processor;
 
-        if ( !test_and_set_bit(VCPU_KICK_SOFTIRQ, &softirq_pending(cpu))
-             && (cpu != smp_processor_id()) )
+        if ( !test_bit(VCPU_KICK_SOFTIRQ, &softirq_pending(cpu))
+             && pi_test_on(&v->arch.hvm_vmx.pi_desc)
+             && (cpu != smp_processor_id()) )
             send_IPI_mask(cpumask_of(cpu), posted_intr_vector);
     }
 }
diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index 447c650..985725f 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -108,6 +108,7 @@ void __dummy__(void)
     OFFSET(VCPU_vmx_emulate, struct vcpu, arch.hvm_vmx.vmx_emulate);
     OFFSET(VCPU_vm86_seg_mask, struct vcpu, arch.hvm_vmx.vm86_segment_mask);
     OFFSET(VCPU_hvm_guest_cr2, struct vcpu, arch.hvm_vcpu.guest_cr[2]);
+    OFFSET(VCPU_vmx_pi_ctrl, struct vcpu, arch.hvm_vmx.pi_desc.control);
     BLANK();
 
     OFFSET(VCPU_nhvm_guestmode, struct vcpu, arch.hvm_vcpu.nvcpu.nv_guestmode);
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 35f804a..157a4fe 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -116,6 +116,11 @@ static inline void pi_set_on(struct pi_desc *pi_desc)
     set_bit(POSTED_INTR_ON, &pi_desc->control);
 }
 
+static inline int pi_test_on(struct pi_desc *pi_desc)
+{
+    return test_bit(POSTED_INTR_ON, &pi_desc->control);
+}
+
 static inline int pi_test_and_clear_on(struct pi_desc *pi_desc)
 {
     return test_and_clear_bit(POSTED_INTR_ON, &pi_desc->control);
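
One note on the entry.S hunk: "bt $0,VCPU_vmx_pi_ctrl(%rbx)" hard-codes bit 0
of pi_desc.control, so it matches pi_test_on() only while POSTED_INTR_ON is
bit 0 (which it currently is, as far as I can tell). A throwaway sketch of
that assumption, with stand-in types rather than the real pi_desc:

#include <assert.h>
#include <stdint.h>

/* Stand-alone sanity sketch (not Xen code): the asm tests bit 0 of the
 * control word, while pi_test_on() tests bit POSTED_INTR_ON; the two only
 * agree while POSTED_INTR_ON == 0. */

#define POSTED_INTR_ON 0                /* current value in vmx.h            */

struct toy_pi_desc {
    uint32_t control;                   /* stand-in for pi_desc.control      */
};

static int toy_pi_test_on(const struct toy_pi_desc *d)
{
    return (d->control >> POSTED_INTR_ON) & 1;
}

int main(void)
{
    struct toy_pi_desc d = { .control = 1u << POSTED_INTR_ON };

    /* "bt $0,VCPU_vmx_pi_ctrl" looks at control & 1: */
    assert(toy_pi_test_on(&d) == (int)(d.control & 1));
    return 0;
}

If POSTED_INTR_ON ever moves, the asm check would have to move with it.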


Best regards,
Yang



 

