
Re: [Xen-devel] Ideas Re: [PATCH v14 1/2] vmx: VT-d posted-interrupt core logic handling



On Fri, Mar 4, 2016 at 10:00 PM, Konrad Rzeszutek Wilk
<konrad.wilk@xxxxxxxxxx> wrote:
>> +/* Handle VT-d posted-interrupt when VCPU is blocked. */
>> +static void pi_wakeup_interrupt(struct cpu_user_regs *regs)
>> +{
>> +    struct arch_vmx_struct *vmx, *tmp;
>> +    spinlock_t *lock = &per_cpu(vmx_pi_blocking, smp_processor_id()).lock;
>> +    struct list_head *blocked_vcpus =
>> +             &per_cpu(vmx_pi_blocking, smp_processor_id()).list;
>> +
>> +    ack_APIC_irq();
>> +    this_cpu(irq_count)++;
>> +
>> +    spin_lock(lock);
>> +
>> +    /*
>> +     * XXX: The length of the list depends on how many vCPUs are currently
>> +     * blocked on this specific pCPU. This may hurt the interrupt latency
>> +     * if the list grows to too many entries.
>> +     */
>> +    list_for_each_entry_safe(vmx, tmp, blocked_vcpus, pi_blocking.list)
>> +    {
>
>
> My recollection of the 'most-horrible' case of this being really bad is when
> the scheduler puts vCPU0 and vCPU1 of the guest on the same pCPU (as an
> example) and they round-robin all the time.
>
> <handwaving>
> Would it perhaps be possible to have an anti-affinity flag to deter the
> scheduler from this? That is, whichever 'struct vcpu' has the 'anti-affinity'
> flag set, the scheduler will try as much as it can _not_ to schedule that
> 'struct vcpu' on this pCPU if the previous 'struct vcpu' scheduled there had
> the flag set as well?

Well having vcpus from the same guest on the same pcpu is problematic
for a number of reasons -- spinlocks first and foremost.  So in
general trying to avoid that would be useful for most guests.

The thing with scheduling is that it's a bit like economics: it seems
simple but it's actually not at all obvious what the emergent behavior
will be from adding a simple rule. :-)
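
For concreteness, the rule being proposed would be roughly something like
this -- a sketch only, with made-up names rather than real Xen scheduler
interfaces:

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-vCPU placement hint -- not an existing Xen structure. */
struct placement_hint {
    bool anti_affinity;  /* "please don't stack me with another flagged vCPU" */
};

/*
 * Hypothetical check a scheduler could make before putting @candidate on a
 * pCPU whose currently queued vCPU is described by @prev: reject the pCPU
 * (i.e. keep looking) only when both vCPUs carry the flag.
 */
static bool placement_allowed(const struct placement_hint *candidate,
                              const struct placement_hint *prev)
{
    if ( prev == NULL )
        return true;     /* nothing on that pCPU yet, no conflict possible */

    return !(candidate->anti_affinity && prev->anti_affinity);
}

Simple enough in isolation; the hard part is predicting what the system does
once every placement decision starts consulting a check like that.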

On the whole it seems unlikely that having two vcpus on a single pcpu
is a "stable" situation -- it's likely to be pretty transient, and
thus not have a major impact on performance.

That said, the load balancing code from credit2 *should*, in theory,
make it easier to implement this sort of thing; it has the concept of
a "cost" that it's trying to minimize, so you could in theory add a
"cost" to configurations where vcpus from the same guest share the
same pcpu -- something like the sketch below.  Then it's not a
hard-and-fast rule: if you have more vcpus than pcpus, the scheduler
will just deal. :-)
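
As a strawman -- the names and the weight below are made up, credit2 has no
such hook today -- the extra cost term could look something like:

/* Hypothetical penalty weight; picking a sane value is what profiling is for. */
#define SAME_GUEST_SHARE_COST  100

/* Hypothetical, minimal view of a queued vCPU, just for this sketch. */
struct toy_vcpu {
    int domain_id;       /* which guest the vCPU belongs to */
};

/*
 * Extra "cost" of placing @v on a runqueue that already holds @nr_queued
 * vCPUs described by @queued: one penalty per same-guest vCPU already
 * there, so the balancer prefers to spread a guest's vCPUs out but will
 * still stack them when there's nowhere else to put them.
 */
static int same_guest_cost(const struct toy_vcpu *v,
                           const struct toy_vcpu *queued, int nr_queued)
{
    int i, cost = 0;

    for ( i = 0; i < nr_queued; i++ )
        if ( queued[i].domain_id == v->domain_id )
            cost += SAME_GUEST_SHARE_COST;

    return cost;
}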

But I think some profiling is in order before anyone does serious work on this.

 -George
