
Re: [Xen-devel] [PROPOSAL] Event channel for SMP-VMs: per-vCPU or per-OS?

On Tue, Oct 29, 2013 at 5:34 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>> On 29.10.13 at 10:02, Luwei Cheng <chengluwei@xxxxxxxxx> wrote:
> On Tue, Oct 29, 2013 at 4:19 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>> >>> On 29.10.13 at 03:56, Luwei Cheng <chengluwei@xxxxxxxxx> wrote:
>> > Hmm.. though all vCPUs can serve the events, the hypervisor
>> > delivers the event to only "one" vCPU at a time, so only that vCPU
>> > can see the event. In theory, then, no race condition is
>> > introduced.
>>
>> No - an event is globally pending (at least in the old model; the
>> situation is better with the new FIFO model), i.e. if more than one
>> of the guest's vCPUs were allowed to service it, they could all be
>> looking at it simultaneously and would still need to arbitrate which
>> one ought to handle it.
>>
>> So your proposed extension might need to be limited to the
>> FIFO model.
>
> Thanks for your reply. Yes, you are right. My prior description was
> incorrect.
> When more than one vCPU picks up the event, even without arbitration,
> will it cause a "correctness" problem? After the event is served by
> the first vCPU to enter the handler, the remaining vCPUs simply find
> nothing to do there (not much harm).

> That really depends on the handler. Plus it might be a performance
> and/or latency issue to run handlers that don't need to be run.
>
> Jan
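
To make the arbitration point concrete: under the 2-level ABI an event
is a single pending bit in the shared info page, visible to every vCPU,
so any vCPUs that take the upcall have to race through an atomic
test-and-clear, and only the winner runs the handler. A minimal sketch
of that idea, assuming plain C11 atomics in place of the guest's sync
bitops (handle_event() is just a stub here, and the bitmap only loosely
mirrors shared_info's evtchn_pending):

#include <stdatomic.h>

#define BITS_PER_WORD (8 * sizeof(unsigned long))
#define MAX_PORTS     1024

/* One pending bit per event channel, shared by all vCPUs
 * (loosely modeled on shared_info.evtchn_pending). */
static atomic_ulong evtchn_pending[MAX_PORTS / BITS_PER_WORD];

/* Stub standing in for the real event handler. */
static void handle_event(unsigned int port) { (void)port; }

/* Atomically test-and-clear the pending bit for one port.
 * Exactly one caller observes the bit as previously set. */
static int claim_event(unsigned int port)
{
    unsigned long mask = 1UL << (port % BITS_PER_WORD);
    unsigned long old =
        atomic_fetch_and(&evtchn_pending[port / BITS_PER_WORD], ~mask);
    return (old & mask) != 0;
}

/* Every vCPU that takes the upcall may call this; only the winner
 * of the test-and-clear services the event, the rest back off. */
void event_upcall(unsigned int port)
{
    if (claim_event(port))
        handle_event(port);
}

With the FIFO ABI each event is instead linked onto a single per-vCPU
queue, so only one vCPU dequeues it in the first place; that seems to
be why limiting the extension to the FIFO model sidesteps this
arbitration.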

I think the situation is much like IO-APIC routing in physical SMP
systems: in logical destination mode, any processor can service an I/O
interrupt, and the existing IRQ handlers seem to cope with that
gracefully. Compared with the potential latency cost of running
handlers that have nothing to do, I think the gain of this approach is
bigger: it avoids vCPU scheduling delays, which can reach tens of
milliseconds.
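
In Linux, for example, a handler registered on a shared line checks its
own device first and returns IRQ_NONE when nothing is pending, so an
invocation that turns out to be spurious costs only the status check. A
rough sketch of that convention (my_dev, MY_STATUS_REG and
MY_STATUS_PENDING are hypothetical; the irqreturn_t protocol is the
standard one):

#include <linux/interrupt.h>
#include <linux/io.h>

/* Hypothetical device with one memory-mapped status register. */
#define MY_STATUS_REG     0x00
#define MY_STATUS_PENDING 0x01

struct my_dev {
    void __iomem *regs;
};

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    struct my_dev *dev = dev_id;
    u32 status = readl(dev->regs + MY_STATUS_REG);

    /* Nothing pending for us: someone else already handled it,
     * or the interrupt belongs to another device on the line. */
    if (!(status & MY_STATUS_PENDING))
        return IRQ_NONE;

    /* Acknowledge, then do the real work. */
    writel(MY_STATUS_PENDING, dev->regs + MY_STATUS_REG);
    return IRQ_HANDLED;
}

Such a handler would typically be registered with
request_irq(irq, my_irq_handler, IRQF_SHARED, "my_dev", dev).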

Thanks,
Luwei