
Re: [Xen-devel] [PATCH v6][RFC]xen: sched: convert RTDS from time to event driven model



On Fri, 2016-02-26 at 12:28 -0500, Tianyang Chen wrote:
> > So, have you made other changes wrt v6 when trying this?
> The v6 doesn't have the if statement commented out when I submitted
> it. 
> But when I tried commenting it out, the assertion failed.
> 
Ok, thanks for these tests. Can you send (just quick-&-dirtily, as an
attachment to a reply to this email, no need for a proper re-submission
of a new version) the patch that does this:

> rt_vcpu_sleep(): removing the replenishment event if the vcpu is on
> the runq/depletedq
> rt_context_saved(): removing the replenishment event if the vcpu is
> not runnable
> rt_vcpu_wake(): not checking whether the event is already queued.
> 
> I added debug prints in all these functions and noticed that it could
> be caused by a race between spurious wakeups and context switching.
>
And the code that produces this debug output as well?

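Also, just so I am sure we are reading the same diff, this is roughly
how I picture the three changes you list above (paraphrased, not
compiled; the __replq_remove()/__replq_insert() helper names are what I
assume your patch uses):

    /* rt_vcpu_sleep(): the replenishment event goes away with the vcpu */
    if ( __vcpu_on_q(svc) )
    {
        __q_remove(svc);
        __replq_remove(ops, svc);
    }

    /* rt_context_saved(): vcpu not runnable --> no replenishment needed */
    if ( !vcpu_runnable(vc) )
        __replq_remove(ops, svc);

    /* rt_vcpu_wake(): insert unconditionally; this is where the ASSERT
     * at sched_rt.c:526 lives */
    ASSERT( !__vcpu_on_replq(svc) );
    __replq_insert(ops, svc);

If that is not what your current code actually does, the patch you
attach will clear it up.
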
> (XEN) cpu1 picked idle
> (XEN) d0 attempted to change d0v1's CR4 flags 00000620 -> 00040660
> (XEN) cpu2 picked idle
> (XEN) vcpu1 sleeps on cpu
> (XEN) cpu0 picked idle
> (XEN) vcpu1 context saved not runnable
> (XEN) vcpu1 wakes up nowhere
> (XEN) cpu0 picked vcpu1
> (XEN) vcpu1 sleeps on cpu
> (XEN) cpu0 picked idle
> (XEN) vcpu1 context saved not runnable
> (XEN) vcpu1 wakes up nowhere
> (XEN) cpu0 picked vcpu1
> (XEN) cpu0 picked idle
> (XEN) vcpu1 context saved not runnable
> (XEN) cpu0 picked vcpu0
> (XEN) vcpu1 wakes up nowhere
> (XEN) cpu1 picked vcpu1     *** vcpu1 is on a cpu
> (XEN) cpu1 picked idle      *** vcpu1 is waiting to be context switched
> (XEN) vcpu2 wakes up nowhere
> (XEN) cpu0 picked vcpu0
> (XEN) cpu2 picked vcpu2
> (XEN) cpu0 picked vcpu0
> (XEN) cpu0 picked vcpu0
> (XEN) d0 attempted to change d0v2's CR4 flags 00000620 -> 00040660
> (XEN) cpu0 picked vcpu0
> (XEN) vcpu2 sleeps on cpu
> (XEN) vcpu1 wakes up nowhere      *** vcpu1 wakes up without sleep?
> 
> (XEN) Assertion '!__vcpu_on_replq(svc)' failed at sched_rt.c:526
> (XEN) ----[ Xen-4.7-unstable  x86_64  debug=y  Tainted:    C ]----
> (XEN) CPU:    0
> (XEN) RIP:    e008:[<ffff82d08012a151>]
> sched_rt.c#rt_vcpu_wake+0x11f/0x17b
> ...
> (XEN) Xen call trace:
> (XEN)    [<ffff82d08012a151>] sched_rt.c#rt_vcpu_wake+0x11f/0x17b
> (XEN)    [<ffff82d08012bf2c>] vcpu_wake+0x213/0x3d4
> (XEN)    [<ffff82d08012c447>] vcpu_unblock+0x4b/0x4d
> (XEN)    [<ffff82d080169cea>] vcpu_kick+0x20/0x6f
> (XEN)    [<ffff82d080169d65>] vcpu_mark_events_pending+0x2c/0x2f
> (XEN)    [<ffff82d08010762a>]
> event_2l.c#evtchn_2l_set_pending+0xa9/0xb9
> (XEN)    [<ffff82d080108312>] send_guest_vcpu_virq+0x9d/0xba
> (XEN)    [<ffff82d080196cbe>] send_timer_event+0xe/0x10
> (XEN)    [<ffff82d08012a7b5>]
> schedule.c#vcpu_singleshot_timer_fn+0x9/0xb
> (XEN)    [<ffff82d080131978>] timer.c#execute_timer+0x4e/0x6c
> (XEN)    [<ffff82d080131ab9>] timer.c#timer_softirq_action+0xdd/0x213
> (XEN)    [<ffff82d08012df32>] softirq.c#__do_softirq+0x82/0x8d
> (XEN)    [<ffff82d08012df8a>] do_softirq+0x13/0x15
> (XEN)    [<ffff82d080243ad1>] cpufreq.c#process_softirqs+0x21/0x30
> 
> 
> So, it looks like a spurious wakeup for vcpu1 happens before it is
> completely context-switched off a cpu. But rt_vcpu_wake() didn't see
> it on the cpu with curr_on_cpu(), so it fell through the first two
> RETURNs.
> 
> I guess the replenishment queue check is necessary for this
> situation?
> 
Perhaps, but I first want to make sure we understand what is really
happening.
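
In the meantime, just to make sure we mean the same thing by "the
replenishment queue check", here is a minimal userspace model of the
window you describe (plain C, toy names, obviously not the real
sched_rt.c code):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* A vcpu being descheduled is neither "current" nor on the runq,
     * so a spurious wake slips past both early returns in the wake
     * hook and would insert a second replenishment event. */
    struct toy_vcpu {
        bool is_current;  /* what curr_on_cpu() == vc would report */
        bool on_runq;     /* on runq/depletedq */
        bool on_replq;    /* replenishment event queued */
    };

    /* Shape of rt_vcpu_wake(): two early returns, then the insert. */
    static void toy_wake(struct toy_vcpu *v, bool check_replq)
    {
        if ( v->is_current )              /* first RETURN */
            return;
        if ( v->on_runq )                 /* second RETURN */
            return;
        if ( check_replq && v->on_replq ) /* the check you suggest */
            return;
        assert( !v->on_replq );           /* the ASSERT at sched_rt.c:526 */
        v->on_replq = true;
    }

    int main(void)
    {
        /* vcpu1 is running, its replenishment event is queued. */
        struct toy_vcpu v = { .is_current = true, .on_replq = true };

        /* "cpu1 picked idle": vcpu1 is no longer current, but not back
         * on the runq yet, because the context switch is still pending. */
        v.is_current = false;

        /* "vcpu1 wakes up nowhere" lands in that window: with the check
         * it is a no-op, without it the assert trips. */
        toy_wake(&v, true);

        printf("on_replq=%d, no double insert\n", v.on_replq);
        return 0;
    }

With the check, the spurious wake in that window becomes a no-op, which
is, I think, what you are proposing; without it we hit exactly the
ASSERT in your trace. What I'd like to figure out first, though, is why
that wakeup fires at all while vcpu1 is still being context switched
out.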

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
