
Re: [Xen-devel] [PROPOSAL] Event channel for SMP-VMs: per-vCPU or per-OS?

On 10/29/2013 09:57 AM, Jan Beulich wrote:
On 29.10.13 at 10:49, Luwei Cheng <chengluwei@xxxxxxxxx> wrote:
On Tue, Oct 29, 2013 at 5:34 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
On 29.10.13 at 10:02, Luwei Cheng <chengluwei@xxxxxxxxx> wrote:
On Tue, Oct 29, 2013 at 4:19 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
On 29.10.13 at 03:56, Luwei Cheng <chengluwei@xxxxxxxxx> wrote:
Hmm.. though all vCPUs can serve the events, the hypervisor delivers the
event to only "one" vCPU at a time, so only that vCPU can see the event.
In principle, no race condition will be introduced.

No - an event is globally pending (at least in the old model, the
situation is better with the new FIFO model), i.e. if more than
one of the guest's vCPU-s allowed to service it would be looking
at it simultaneously, they'd still need to arbitrate which one
ought to handle it.

So your proposed extension might need to be limited to the
FIFO model.

Thanks for your reply. Yes, you are right; my prior description was
inaccurate. If more than one vCPU picks up the event, even without
arbitration, will it cause a "correctness" problem? After the event is
served by the first vCPU to enter the handler, the remaining vCPUs
simply have nothing to do in the event handler (not much harm).

That really depends on the handler. Plus it might be a performance
and/or latency issue to run handlers that don't need to be run.

I think the situation is much like IO-APIC routing in physical SMP

Indeed, yet you draw the wrong conclusion.

in logical destination mode, all processors can serve I/O interrupts.

But only one gets delivered any individual instance - there is
arbitration being done in hardware.

Xen should be able to arbitrate which vCPU gets the actual event delivery, right? So the only risk would be that another vCPU notices the pending interrupt and handles it itself.


Xen-devel mailing list
