Re: [Xen-devel] [PATCH v6 15/18] vmx: Properly handle notification event when vCPU is running
Jan Beulich wrote on 2015-09-07:
>> + * jnz .Lvmx_process_softirqs
>> + *
>> + * ......
>> + *
>> + * je .Lvmx_launch
>> + *
>> + * ......
>> + *
>> + * .Lvmx_process_softirqs:
>> + * sti
>> + * call do_softirq
>> + * jmp .Lvmx_do_vmentry
>> + *
>> + * If VT-d engine issues a notification event at point 1 above, it
>> + * cannot be delivered to the guest during this VM-entry without
>> + * raising the softirq in this notification handler.
>> + */
>> + raise_softirq(VCPU_KICK_SOFTIRQ);
>> +
>> + this_cpu(irq_count)++;
>> +}
>> +
>> const struct hvm_function_table * __init start_vmx(void)
>> {
>> set_in_cr4(X86_CR4_VMXE);
>> @@ -2073,7 +2119,7 @@ const struct hvm_function_table * __init start_vmx(void)
>>
>> if ( cpu_has_vmx_posted_intr_processing )
>> {
>> -        alloc_direct_apic_vector(&posted_intr_vector, event_check_interrupt);
>> +        alloc_direct_apic_vector(&posted_intr_vector, pi_notification_interrupt);
>>
>> if ( iommu_intpost )
>>             alloc_direct_apic_vector(&pi_wakeup_vector, pi_wakeup_interrupt);
>
> Considering that you do this setup independent of iommu_intpost, won't
> this (namely, but not only) for the !iommu_intpost case result in a whole
> lot of useless softirqs to be raised? IOW - shouldn't this setup be
> conditional, and shouldn't the handler also only conditionally raise the
> softirq (to as much as possible limit their amount)?
>
> Yang, in this context: Why does __vmx_deliver_posted_interrupt()
> not use cpu_raise_softirq(), instead kind of open coding it (see your
> d7dafa375b ["VMX: Add posted interrupt supporting"])?
Sorry, I am missing the context here. What do you mean by using
cpu_raise_softirq() in __vmx_deliver_posted_interrupt()?
Best regards,
Yang
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel