Re: [Xen-devel] Ideas Re: [PATCH v14 1/2] vmx: VT-d posted-interrupt core logic handling



>>> On 08.03.16 at 18:05, <george.dunlap@xxxxxxxxxx> wrote:
> On 08/03/16 15:42, Jan Beulich wrote:
>>>>> On 08.03.16 at 15:42, <George.Dunlap@xxxxxxxxxxxxx> wrote:
>>> On Tue, Mar 8, 2016 at 1:10 PM, Wu, Feng <feng.wu@xxxxxxxxx> wrote:
>>>>> -----Original Message-----
>>>>> From: George Dunlap [mailto:george.dunlap@xxxxxxxxxx]
>>>>>
>>>>> 2. Try to test engineered situations where we expect this to be a
>>>>> problem, to see how big of a problem it is (proving the theory to be
>>>>> accurate or inaccurate in this case)
>>>>
>>>> Maybe we can run an SMP guest with all the vcpus pinned to a dedicated
>>>> pCPU; we can run some benchmark in the guest with VT-d PI and without
>>>> VT-d PI, then see the performance difference between these two scenarios.
>>>
>>> This would give us an idea of what the worst-case scenario would be.
>> 
>> How would a single VM ever give us an idea about the worst
>> case? Something getting close to worst case is a ton of single
>> vCPU guests all temporarily pinned to one and the same pCPU
>> (could be multi-vCPU ones, but the more vCPU-s the more
>> artificial this pinning would become) right before they go into
>> blocked state (i.e. through one of the two callers of
>> arch_vcpu_block()), the pinning removed while blocked, and
>> then all getting woken at once.
> 
> Why would removing the pinning be important?

It's not important by itself, other than to avoid all vCPU-s then
waking up on the one pCPU.

> And I guess it's actually the case that it doesn't need all VMs to
> actually be *receiving* interrupts; it just requires them to be
> *capable* of receiving interrupts, for there to be a long chain all
> blocked on the same physical cpu.

Yes.
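
To make the concern concrete: with these patches each pCPU keeps a
list of the vCPU-s which blocked on it, and the PI wakeup interrupt
handler walks that whole list to find the ones to unblock, so the
time spent in the handler grows linearly with the list length. Below
is a heavily simplified, self-contained model of that (illustrative
only; the names and numbers are made up, and this is not the code
from the patch):

    #include <stdio.h>
    #include <stdlib.h>

    struct vcpu {
        unsigned int id;
        int          irq_pending;  /* stand-in for the PI "ON" bit */
        struct vcpu *next;         /* stand-in for the per-pCPU list link */
    };

    /* Stand-in for pi_wakeup_interrupt(): has to look at every vCPU
     * blocked on this pCPU, however many there are. */
    static void wakeup_interrupt(struct vcpu *blocked_list)
    {
        for ( struct vcpu *v = blocked_list; v; v = v->next )  /* O(n) */
            if ( v->irq_pending )
                printf("kick vCPU %u\n", v->id);  /* ~ vcpu_unblock() */
    }

    int main(void)
    {
        struct vcpu *list = NULL;

        /* Park 1000 blocked vCPU-s on one pCPU's list. */
        for ( unsigned int i = 0; i < 1000; i++ )
        {
            struct vcpu *v = calloc(1, sizeof(*v));

            if ( !v )
                return 1;
            v->id = i;
            v->irq_pending = !(i % 100);  /* only a few have work pending */
            v->next = list;
            list = v;
        }

        /* A single wakeup interrupt still scans all 1000 entries. */
        wakeup_interrupt(list);
        return 0;
    }

Nothing bounds how long that list can get, which is the whole point
of the scenario sketched above.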

>>>  But
>>> pinning all vcpus to a single pcpu isn't really a sensible use case we
>>> want to support -- if you have to do something stupid to get a
>>> performance regression, then as far as I'm concerned it's not a
>>> problem.
>>>
>>> Or to put it a different way: If we pin 10 vcpus to a single pcpu and
>>> then pound them all with posted interrupts, and there is *no*
>>> significant performance regression, then that will conclusively prove
>>> that the theoretical performance regression is of no concern, and we
>>> can enable PI by default.
>> 
>> The point isn't the pinning. The point is what pCPU they're on when
>> going to sleep. And that could involve quite a few more than just
>> 10 vCPU-s, provided they all sleep long enough.
>> 
>> And the "theoretical performance regression is of no concern" is
>> also not a proper way of looking at it, I would say: Even if such
>> a situation would happen extremely rarely, if it can happen at all,
>> it would still be a security issue.
> 
> What I'm trying to get at is -- exactly what situation?  What actually
> constitutes a problematic interrupt latency / interrupt processing
> workload, how many vcpus must be sleeping on the same pcpu to actually
> risk triggering that latency / workload, and how feasible is it that
> such a situation would arise in a reasonable scenario?
> 
> If 200us is too long, and it only takes 3 sleeping vcpus to get there,
> then yes, there is a genuine problem we need to try to address before we
> turn it on by default.  If we say that up to 500us is tolerable, and it
> takes 100 sleeping vcpus to reach that latency, then this is something I
> don't really think we need to worry about.
> 
> "I think something bad may happen" is a really difficult to work with.

I understand that, but coming up with proper numbers here isn't
easy. The fact is that on a system with hundreds of pCPU-s and
thousands of vCPU-s, it cannot be excluded that all vCPU-s would
at some point pile up on one pCPU's list.
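
Just to attach a (made-up) number to that: if handling one entry on
the list costs somewhere around 100ns, then

    2,000 blocked vCPU-s on one list  x  100ns  =  200us

spent in the wakeup interrupt handler, i.e. already at the lower of
the two latency figures you mention - and the per-entry cost is only
a guess which would need measuring.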

How many would be tolerable on a single list depends upon host
characteristics, so a fixed number won't do anyway. Hence I
think the better approach, instead of improving lookup, is to
distribute vCPU-s evenly across lists. That in turn would likely
require those lists to no longer be tied to pCPU-s, something I
had already suggested during review. As soon as the distribution
is reasonably even, the security concern would vanish:
Someone placing more vCPU-s on a host than that host can
handle is responsible for the consequences. Quite contrary to
someone placing more vCPU-s on a host than a single pCPU can
reasonably handle in an interrupt handler.
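
A minimal sketch of the kind of distribution I'm thinking of (again
purely illustrative - the list count and the "shortest list" policy
here are made up for the example, not taken from the patch):

    #include <stdio.h>

    #define NR_PI_LISTS 8   /* made up; would want to scale with the host */

    /* Stand-in for a set of blocking lists no longer tied to pCPU-s. */
    static unsigned int list_len[NR_PI_LISTS];

    /* Pick the list a newly blocking vCPU should be put on. */
    static unsigned int pi_pick_list(void)
    {
        unsigned int i, best = 0;

        for ( i = 1; i < NR_PI_LISTS; i++ )
            if ( list_len[i] < list_len[best] )
                best = i;

        return best;
    }

    int main(void)
    {
        unsigned int i;

        /* 1000 vCPU-s blocking one after another ... */
        for ( i = 0; i < 1000; i++ )
            list_len[pi_pick_list()]++;

        /* ... end up spread evenly instead of piled up on one list. */
        for ( i = 0; i < NR_PI_LISTS; i++ )
            printf("list %u: %u vCPU-s\n", i, list_len[i]);

        return 0;
    }

With something along those lines the length of any individual list
stays bounded by roughly the total vCPU count divided by the number
of lists, no matter which pCPU the vCPU-s happen to block on.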

Jan
