
Re: [Xen-devel] [PATCH v5 1/4] VT-d PI: track the number of vcpus on pi blocking list

>>> On 31.08.17 at 00:57, <chao.gao@xxxxxxxxx> wrote:
> On Wed, Aug 30, 2017 at 10:00:49AM -0600, Jan Beulich wrote:
>>>>> On 16.08.17 at 07:14, <chao.gao@xxxxxxxxx> wrote:
>>> @@ -100,6 +101,24 @@ void vmx_pi_per_cpu_init(unsigned int cpu)
>>>      spin_lock_init(&per_cpu(vmx_pi_blocking, cpu).lock);
>>>  }
>>> +static void vmx_pi_add_vcpu(struct pi_blocking_vcpu *pbv,
>>> +                            struct vmx_pi_blocking_vcpu *vpbv)
>>> +{
>>> +    ASSERT(spin_is_locked(&vpbv->lock));
>>You realize this is only a very weak check for a non-recursive lock?
> I just thought the lock should be held when adding one entry to the
> blocking list. Do you think we should remove this check or make it
> stricter?

Well, the primary purpose of my comment was to make you aware
of the fact. If the weak check is good enough for you, then fine.
Removing the check would be a bad idea imo (but see also below);
tightening might be worthwhile, but might also go too far (depending
mainly on how clearly provable it is that all callers actually hold the lock).

>>> +    add_sized(&vpbv->counter, 1);
>>> +    ASSERT(read_atomic(&vpbv->counter));
>>Why add_sized() and read_atomic() when you hold the lock?
> In patch 3, frequent reading the counter is used to find a suitable
> vcpu and we can use add_sized() and read_atomic() to avoid acquiring the
> lock. In one word, the lock doesn't protect the counter.

In that case it would be more natural to switch to the atomic
accesses there. Plus you still wouldn't need read_atomic()
here, with the lock held. Furthermore I would then wonder
whether it wasn't better to use atomic_t for the counter at
that point. Also with lock-less readers the requirement to
hold a lock here (rather than using suitable LOCKed accesses)
becomes questionable too.
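[Editorial note: the alternative Jan sketches above — keep the lock for the list itself, but make the counter fully atomic so lock-free readers and LOCKed updates need no extra read-back check — can be illustrated with standard C11 atomics. This is a standalone sketch, not Xen's actual atomic_t/add_sized API; the struct and function names are hypothetical.]

```c
#include <pthread.h>
#include <stdatomic.h>

/* Hypothetical stand-in for vmx_pi_blocking_vcpu: the lock protects
 * the list membership only; the counter is updated and read with
 * atomic accesses, so readers never take the lock. */
struct pi_blocking_list {
    pthread_mutex_t lock;   /* serializes list add/remove, not the counter */
    atomic_int counter;     /* number of vcpus on this pcpu's blocking list */
};

static void blocking_list_add(struct pi_blocking_list *l)
{
    pthread_mutex_lock(&l->lock);
    /* ... link the vcpu into the per-pcpu list here ... */
    atomic_fetch_add(&l->counter, 1);  /* LOCKed RMW; no read_atomic() check needed */
    pthread_mutex_unlock(&l->lock);
}

static void blocking_list_del(struct pi_blocking_list *l)
{
    pthread_mutex_lock(&l->lock);
    /* ... unlink the vcpu from the per-pcpu list here ... */
    atomic_fetch_sub(&l->counter, 1);
    pthread_mutex_unlock(&l->lock);
}

/* Lock-free reader, e.g. when scanning pcpus for the shortest
 * blocking list (the patch-3 use case described above). */
static int blocking_list_weight(struct pi_blocking_list *l)
{
    return atomic_load(&l->counter);
}
```

With the counter declared atomic, both halves of the review comment are addressed: the writer side no longer needs a separate `read_atomic()` sanity read under the lock, and the reader side is well-defined without holding the lock at all.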

