Re: [Xen-devel] [xen-unstable test] 104131: regressions - FAIL
On February 13, 2017 4:24 PM, Tian, Kevin wrote:
>> From: Tian, Kevin
>> Sent: Monday, February 13, 2017 4:21 PM
>>
>> > From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> > Sent: Wednesday, February 08, 2017 4:52 PM
>> >
>> > >>> On 08.02.17 at 09:27, <xuquan8@xxxxxxxxxx> wrote:
>> > > Assume the vCPU is in guest mode.
>> > > When apicv is enabled, the hypervisor calls vmx_deliver_posted_intr()
>> > > and then __vmx_deliver_posted_interrupt() to deliver the interrupt,
>> > > with no vmexit (and also no vcpu_kick()).
>> > > In __vmx_deliver_posted_interrupt(), delivery of the posted interrupt
>> > > is __conditional__. If the posted interrupt is not delivered, it stays
>> > > pending until the next VM entry -- by PIR to vIRR.
>> > >
>> > > One such condition is in __vmx_deliver_posted_interrupt():
>> > > 'if ( !test_and_set_bit(VCPU_KICK_SOFTIRQ, &softirq_pending(cpu)) ...'
>> > >
>> > > Specifically, we verified this with the RES interrupt, which is used
>> > > for smp_reschedule_interrupt. On average, delivering the RES interrupt
>> > > even takes longer than without apicv.
>> > >
>> > > If RES interrupt no. 1 is delivered the posted way (the vCPU is still
>> > > in guest mode), then when we try to deliver the next RES interrupt
>> > > (no. 2) the posted way, it is not delivered, because we set the
>> > > VCPU_KICK_SOFTIRQ bit when we delivered RES interrupt no. 1.
>> > >
>> > > The next RES interrupt (no. 2) is then pending until the next VM
>> > > entry -- by PIR to vIRR.
>> > >
>> > > We could fix it as below (I don't think this is the best fix; it would
>> > > be better to set the VCPU_KICK_SOFTIRQ bit, but not test it):
>> > >
>> > > --- a/xen/arch/x86/hvm/vmx/vmx.c
>> > > +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> > > @@ -1846,7 +1846,7 @@ static void __vmx_deliver_posted_interrupt(struct vcpu *v)
>> > >  {
>> > >      unsigned int cpu = v->processor;
>> > >
>> > > -    if ( !test_and_set_bit(VCPU_KICK_SOFTIRQ, &softirq_pending(cpu))
>> > > +    if ( !test_bit(VCPU_KICK_SOFTIRQ, &softirq_pending(cpu))
>> > >           && (cpu != smp_processor_id()) )
>> > >          send_IPI_mask(cpumask_of(cpu), posted_intr_vector);
>> > >  }
>> >
>> > While I don't think I fully understand your description, the line you
>> > change here has always puzzled me: if we were to raise a softirq here,
>> > we ought to call cpu_raise_softirq() instead of partly open-coding what
>> > it does.
>>
>> We require posted_intr_vector for the target CPU to ack/deliver the
>> virtual interrupt in non-root mode. cpu_raise_softirq() uses a different
>> vector, which cannot trigger that effect.
>>
>> > So I think not marking that softirq pending (but doing this
>> > incompletely) is a valid change in any case. But I'll have to defer to
>> > Kevin in the hope that he fully understands what you explain above, as
>> > well as why this was a test-and-set here in the first place.
>> >
>>
>> I agree we have a misuse of the softirq mechanism here. If the guest is
>> already in non-root mode, the first posted interrupt is delivered
>> directly to the guest, leaving the softirq bit set without actually
>> incurring a VM exit - breaking the desired softirq behavior. Further
>> posted interrupts then skip the IPI, stay in the PIR, and are not
>> noticed until another VM exit happens. It looks like Quan observes
>> exactly such a delivery delay in his experiments.
>>
>> I'm OK with removing the set here. Actually, since this is an
>> optimization for fewer IPIs, we had better check softirq_pending(cpu)
>> directly instead of sticking to one bit only.
>>
>
> sent too fast... Quan, can you work out a patch following this suggestion
> and see whether your slow-delivery issue is solved?
> (hope I understand your issue correctly here)
>

Cool, exactly correct! Sure, I will send out a patch in the coming days
(a rough sketch of the idea is appended below).

Quan
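For illustration, a minimal sketch of what Kevin's suggestion might look
like - skipping the notification IPI whenever any softirq is already
pending on the target CPU, rather than setting and testing
VCPU_KICK_SOFTIRQ. This is only a sketch, not the actual patch that was
later posted; the surrounding function body (vcpu_unblock(), the
is_running/in_irq() guard) is assumed from the existing
__vmx_deliver_posted_interrupt() in xen/arch/x86/hvm/vmx/vmx.c:

    static void __vmx_deliver_posted_interrupt(struct vcpu *v)
    {
        bool_t running = v->is_running;

        vcpu_unblock(v);
        if ( running && (in_irq() || (v != current)) )
        {
            unsigned int cpu = v->processor;

            /*
             * Send the notification IPI only when no softirq is pending on
             * the target CPU.  If any softirq is pending, that CPU will
             * VM-exit shortly anyway, and the bits accumulated in the PIR
             * will be synced to the vIRR on the next VM entry, so the IPI
             * can be skipped.  Unlike test_and_set_bit(VCPU_KICK_SOFTIRQ,
             * ...), this neither sets nor depends on a bit that can be
             * left stale while the vCPU stays in guest mode.
             */
            if ( !softirq_pending(cpu) && (cpu != smp_processor_id()) )
                send_IPI_mask(cpumask_of(cpu), posted_intr_vector);
        }
    }

This keeps the IPI-suppression optimization Kevin refers to, but bases it
on the full pending mask; whether a kick softirq still needs to be raised
for other code paths is left to the actual patch.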