
Re: [Xen-devel] Hypercall continuation and wait_event


  • To: "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • From: Ruslan Nikolaev <nruslan_devel@xxxxxxxxx>
  • Date: Mon, 9 Apr 2012 14:19:46 -0700 (PDT)
  • Delivery-date: Mon, 09 Apr 2012 21:20:23 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

Keir,

Thanks again! When I use the scheme I described, I periodically get kernel 
errors like the one shown below. Note that this is an HVM domain and that I 
pass 'isolcpus' on the Linux kernel command line so the dedicated VCPU is not 
used for normal scheduling. The hypercall is made from a special kernel thread, 
which is bound to the dedicated VCPU before the call (see the sketch below).

What could be the reason for these messages? It looks like something related 
to a timer.
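
For reference, the dedicated thread is created and bound roughly as in the
sketch below. This is only an illustration of the setup described above;
DEDICATED_CPU and my_wait_hypercall() are placeholders, not real interfaces.

/*
 * Sketch: create a kernel thread, bind it to the CPU reserved via isolcpus=
 * before it first runs, and issue the (hypothetical) hypercall from that
 * thread only.
 */
#include <linux/kthread.h>
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/err.h>

#define DEDICATED_CPU 2                 /* the CPU isolated via isolcpus= */

extern long my_wait_hypercall(void);    /* placeholder for the custom hypercall */

static int hypercall_worker(void *unused)
{
	/* Runs only on DEDICATED_CPU; blocks while the hypervisor sleeps
	 * in wait_event() on our behalf. */
	long rc = my_wait_hypercall();

	pr_info("dedicated hypercall returned %ld\n", rc);
	return 0;
}

static int __init start_worker(void)
{
	struct task_struct *t;

	t = kthread_create(hypercall_worker, NULL, "xen-waiter");
	if (IS_ERR(t))
		return PTR_ERR(t);
	kthread_bind(t, DEDICATED_CPU);	/* bind before the thread first runs */
	wake_up_process(t);
	return 0;
}
module_init(start_worker);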


[ 1039.319957] RIP: 0010:[<ffffffff8101ba09>]  [<ffffffff8101ba09>] default_send_IPI_mask_sequence_phys+0x95/0xce
[ 1039.319957] RSP: 0018:ffff88007f043c28  EFLAGS: 00000046
[ 1039.319957] RAX: 0000000000000400 RBX: 0000000000000096 RCX: 0000000000000020
[ 1039.319957] RDX: 0000000000000002 RSI: 0000000000000020 RDI: 0000000000000300
[ 1039.319957] RBP: ffff88007f043c68 R08: 0000000000000000 R09: ffffffff8163eb20
[ 1039.319957] R10: ffff8800ff043bad R11: 0000000000000000 R12: 000000000000d602
[ 1039.319957] R13: 0000000000000002 R14: 0000000000000400 R15: ffffffff8163eb20
[ 1039.319957] FS:  0000000000000000(0000) GS:ffff88007f040000(0000) knlGS:0000000000000000
[ 1039.319957] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 1039.319957] CR2: 00007f74195d29be CR3: 000000007af4d000 CR4: 00000000000006a0
[ 1039.319957] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1039.319957] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 1039.319957] Process swapper/2 (pid: 0, threadinfo ffff88007c4ec000, task ffff88007c4f1650)
[ 1039.319957] Stack:
[ 1039.319957]  0000000000000002 0000000400000008 ffff88007f043c88 0000000000002710
[ 1039.319957]  ffffffff8161a280 ffffffff8161a340 0000000000000001 ffffffff8161a4c0
[ 1039.319957]  ffff88007f043c78 ffffffff8101ecc6 ffff88007f043c98 ffffffff8101bb81
[ 1039.319957] Call Trace:
[ 1039.319957]  <IRQ>
[ 1039.319957]  [<ffffffff8101ecc6>] physflat_send_IPI_all+0x12/0x14
[ 1039.319957]  [<ffffffff8101bb81>] arch_trigger_all_cpu_backtrace+0x4b/0x6e
[ 1039.319957]  [<ffffffff8107a25a>] __rcu_pending+0x224/0x347
[ 1039.319957]  [<ffffffff8107aa13>] rcu_check_callbacks+0xa2/0xb4
[ 1039.319957]  [<ffffffff810469fd>] update_process_times+0x3a/0x70
[ 1039.319957]  [<ffffffff8105f815>] tick_sched_timer+0x70/0x9a
[ 1039.319957]  [<ffffffff810557c0>] __run_hrtimer.isra.26+0x75/0xce
[ 1039.319957]  [<ffffffff81055ded>] hrtimer_interrupt+0xd7/0x193
[ 1039.319957]  [<ffffffff81005f0a>] xen_timer_interrupt+0x2f/0x155
[ 1039.319957]  [<ffffffff81021945>] ? pvclock_clocksource_read+0x48/0xb4
[ 1039.319957]  [<ffffffff81021945>] ? pvclock_clocksource_read+0x48/0xb4
[ 1039.319957]  [<ffffffff81021945>] ? pvclock_clocksource_read+0x48/0xb4
[ 1039.319957]  [<ffffffff8107542d>] handle_irq_event_percpu+0x29/0x126
[ 1039.319957]  [<ffffffff8119064a>] ? info_for_irq+0x9/0x19
[ 1039.319957]  [<ffffffff81077b70>] handle_percpu_irq+0x39/0x4d
[ 1039.319957]  [<ffffffff81190510>] __xen_evtchn_do_upcall+0x147/0x1df
[ 1039.319957]  [<ffffffff81191eae>] xen_evtchn_do_upcall+0x27/0x39
[ 1039.319957]  [<ffffffff812987ee>] xen_hvm_callback_vector+0x6e/0x80
[ 1039.319957]  <EOI>
[ 1039.319957]  [<ffffffff8107ab83>] ? rcu_needs_cpu+0x110/0x1c1
[ 1039.319957]  [<ffffffff81020ff0>] ? native_safe_halt+0x6/0x8
[ 1039.319957]  [<ffffffff8100e8bf>] default_idle+0x27/0x44
[ 1039.319957]  [<ffffffff81007704>] cpu_idle+0x66/0xa4
[ 1039.319957]  [<ffffffff81286605>] start_secondary+0x1ac/0x1b1



Thanks,
Ruslan


----- Original Message -----
From: Keir Fraser <keir.xen@xxxxxxxxx>
To: Ruslan Nikolaev <nruslan_devel@xxxxxxxxx>; "xen-devel@xxxxxxxxxxxxx" 
<xen-devel@xxxxxxxxxxxxx>
Cc: 
Sent: Monday, April 9, 2012 8:58 PM
Subject: Re: [Xen-devel] Hypercall continuation and wait_event

It means the vcpu has an interrupt pending (in the pv case, that means an
event channel has a pending event).
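
In the PV case that condition amounts to the upcall flags in the vcpu's shared
vcpu_info, roughly as in the following sketch. The field names come from the
public interface; the helper itself is illustrative only, not Xen's actual
implementation.

#include <linux/types.h>
#include <xen/interface/xen.h>   /* struct vcpu_info, upcall flags */

/* Sketch: a PV vcpu has local events needing delivery when an event-channel
 * upcall is pending and upcalls are not masked. */
static inline bool events_need_delivery(const struct vcpu_info *vi)
{
	return vi->evtchn_upcall_pending && !vi->evtchn_upcall_mask;
}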


On 09/04/2012 21:16, "Ruslan Nikolaev" <nruslan_devel@xxxxxxxxx> wrote:

> Keir,
> 
> Thanks for your replies! Just one more question about
> local_events_need_delivery(). Under what (common) conditions would I expect
> local events to need delivery?
> 
> Ruslan
> 
> 
> 
> ----- Original Message -----
> From: Keir Fraser <keir.xen@xxxxxxxxx>
> To: Ruslan Nikolaev <nruslan_devel@xxxxxxxxx>; "xen-devel@xxxxxxxxxxxxx"
> <xen-devel@xxxxxxxxxxxxx>
> Cc: 
> Sent: Monday, April 9, 2012 8:09 PM
> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
> 
> On 09/04/2012 20:18, "Ruslan Nikolaev" <nruslan_devel@xxxxxxxxx> wrote:
> 
>> Thanks for the reply.
>> 
>> Since it can take arbitrarily long for an event to arrive (e.g., it may come
>> from a different guest at a user's request), how do I need to handle this
>> case? Does it mean that I only need to make sure that nothing else gets
>> scheduled on this VCPU in the guest?
> 
> Nothing else *can* get scheduled on this VCPU in the guest. The VCPU will
> sleep within wait_event within the hypercall context. Hence you must not
> hold any hypervisor spinlocks either, for example.
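
The hypervisor-side pattern described here looks roughly like the sketch below,
using Xen's waitqueue API from xen/wait.h. The hypercall handler, the queue and
the completion flag are placeholders for illustration only.

#include <xen/wait.h>

static DEFINE_WAITQUEUE_HEAD(my_wq);
static int my_event_arrived;     /* set when the awaited event is delivered */

long do_my_wait_op(void)
{
    /*
     * The vcpu sleeps here, inside the hypercall, until the condition
     * becomes true; no hypercall_create_continuation() is involved and
     * no spinlocks may be held across this call.
     */
    wait_event(my_wq, my_event_arrived);
    return 0;
}

/* Called by the event producer (e.g. another domain's hypercall path): */
void my_event_deliver(void)
{
    my_event_arrived = 1;
    wake_up_all(&my_wq);
}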
> 
>> Also, it is not exactly clear to me how wait_event avoids the need for
>> hypercall continuation. What about local_events_need_delivery() or
>> softirq_pending()? Are they going to be handled by wait_event internally?
> 
> Your VCPU gets descheduled. Hence softirq_pending() is not your concern for
> the duration that you're descheduled. And if local_events_need_delivery(),
> that's too bad; those events have to wait for the vcpu to wake up on the
> awaited event.
> 
> -- Keir
> 
>> Ruslan
>> 
>> 
>> 
>> 
>> 
>> 
>> ----- Original Message -----
>> From: Keir Fraser <keir.xen@xxxxxxxxx>
>> To: Ruslan Nikolaev <nruslan_devel@xxxxxxxxx>; "xen-devel@xxxxxxxxxxxxx"
>> <xen-devel@xxxxxxxxxxxxx>
>> Cc: 
>> Sent: Monday, April 9, 2012 6:54 PM
>> Subject: Re: [Xen-devel] Hypercall continuation and wait_event
>> 
>> On 09/04/2012 18:51, "Ruslan Nikolaev" <nruslan_devel@xxxxxxxxx> wrote:
>> 
>>> Hi
>>> 
>>> I am curious how I can properly support hypercall continuation and
>>> wait_event. I have a dedicated VCPU in a domain which makes a special
>>> hypercall, and the hypercall waits for a certain event to arrive. I am using
>>> the wait queues available in Xen, so wait_event will be invoked in the
>>> hypercall once it is ready to accept events. However, my understanding is
>>> that even though I have a dedicated VCPU for this hypercall, I may still
>>> need to support hypercall continuation properly. (Is this the case?) So, my
>>> question is how exactly the need for hypercall
>> 
>> No it's not the case, the old hypercall_create_continuation() mechanism does
>> not need to be used with wait_event().
>> 
>> -- Keir
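
For contrast, the classic preemptible-hypercall pattern that is not needed here
looks roughly like this; do_my_long_op(), __HYPERVISOR_my_op and the
work_remains()/do_one_chunk() helpers are placeholders for illustration only.

#include <xen/sched.h>       /* hypercall_preempt_check() */
#include <xen/hypercall.h>   /* hypercall_create_continuation() */

#define __HYPERVISOR_my_op 40                          /* placeholder op number */

extern int work_remains(unsigned long arg);            /* placeholder */
extern unsigned long do_one_chunk(unsigned long arg);  /* placeholder */

long do_my_long_op(unsigned long arg)
{
    while ( work_remains(arg) )
    {
        arg = do_one_chunk(arg);

        /* If more work remains and preemption is due, have the guest
         * re-issue the hypercall with the updated argument. */
        if ( work_remains(arg) && hypercall_preempt_check() )
            return hypercall_create_continuation(__HYPERVISOR_my_op, "l", arg);
    }
    return 0;
}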
>> 
>>> preemption may affect wait_event() and wait() operations, and where would I
>>> need to do hypercall_preempt_check()?
>>> 
>>> Thank you!
>>> Ruslan
>>> 
>>> 
> 
> 
> 





_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

