
Re: [Xen-devel] [PATCH v4 3/3] VT-d: Fix vt-d Device-TLB flush timeout issue.

>>> On 27.01.16 at 12:09, <quan.xu@xxxxxxxxx> wrote:
>>  On January 27, 2016 at 6:48am, <Tian, Kevin> wrote:
>> > From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> > Sent: Tuesday, January 26, 2016 11:53 PM
>> > Once again: Before getting started, please assess which route is going
>> > to be the better one. Remember that we had already discussed and put
>> > aside some form of deferring the hiding of devices, so if you come
>> > back with a patch doing that again, you'll have to be able to explain
>> > why the alternative(s) are worse.
>> >
>> Quan, could you list the pros/cons of those alternatives based on the
>> discussion so far? Then we can decide which way should be taken before you
>> go to actual coding. The earlier suggestion of hiding the device
>> immediately was made under the assumption that all locks have been held.
>> If this part becomes too complex, and you can explain clearly that
>> deferring the hiding action doesn't lead to any race condition, then
>> people can see why you are proposing deferral again.
> The following are the pros/cons of those alternatives, and also why I
> propose deferral again.
> -- --
> 1. Hiding the device immediately
> Pros:
>      * Whatever changes are needed happen as soon as possible after the
> Device-TLB flush error.
> Cons:
>      * It may break the code path.
>      * It may lead to race conditions.
>      * Hiding the device immediately assumes that all locks have already
> been held. Different locking states are possible for different call trees,
> so this part becomes too complex.

So you just repeat what you've already said before. You say "this part
becomes too complex" without any kind of proof, yet that's what we need in
order to understand whether the alternative of doing the locking correctly
really is this bad (and I continue not to see why it would be).


Xen-devel mailing list


