
Re: [Xen-devel] Problems with spin_lock_irqsave() and for_each_online_cpu()



On 18/10/13 09:19, Jan Beulich wrote:
>>>> On 17.10.13 at 21:34, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>> I was triaging a Coverity issue (1055455) which was complaining about
>> spinlock imbalance in common/trace.c:424.
>>
>> First of all, there is a latent bug here using "int flags" in an
>> irqsave/irqrestore pair.  It will be safe until bit 31 of RFLAGS is
>> defined, but will introduce subtle corruption thereafter.
>>
>> This bug was not caught by the compiler because of the
>> spin_lock_irqsave() macro which has slightly non-function-like
>> semantics.  Would it be acceptable to change spin_lock_irqsave() into a
>> static inline so it can be properly typed?  (This would come with a huge
>> amount of code churn, as the function would have to take flags by pointer.)
> Why not simply add
>
> BUILD_BUG_ON(sizeof(f) != sizeof(_spin_lock_irqsave(l)))
>
> (or equivalent in case of header dependency issues) to the
> macro?
>
> But then again I don't see how the corruption would occur if RFLAGS
> bits beyond bit 31 became defined: the flags get passed to
> local_irq_restore() only, and that one's effect is "defined" to set
> IF to the intended state - what it does with the other flags isn't
> really defined (and in fact I wonder whether it is really correct
> to use a plain POPF there - imagine code fiddling with e.g. DF
> at the same time as using the proper abstractions to control IF).
>
> Jan
>
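
For reference, the pattern Coverity is unhappy about looks roughly like
this (an illustrative sketch, not the exact code from common/trace.c;
the lock and function names here are made up):

#include <xen/spinlock.h>

static DEFINE_SPINLOCK(example_lock);

static void example(void)
{
    int flags;  /* latent bug: _spin_lock_irqsave() returns unsigned long */

    spin_lock_irqsave(&example_lock, flags);
    /* ... critical section ... */
    spin_unlock_irqrestore(&example_lock, flags);
}

Declaring flags as unsigned long is the local fix; the interesting part
is how to stop the pattern from recurring.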

The BUILD_BUG_ON() is a rather simpler solution to the problem.
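
Something along these lines, I would guess (a sketch only - the real
macro lives in xen/include/xen/spinlock.h and the final form may
differ):

#define spin_lock_irqsave(l, f)                                    \
    ({                                                             \
        BUILD_BUG_ON(sizeof(f) != sizeof(_spin_lock_irqsave(l)));  \
        (f) = _spin_lock_irqsave(l);                               \
    })

The sizeof() operand is never evaluated, and the statement expression
keeps the macro usable in expression context, while any flags variable
of the wrong width is rejected at build time.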

Changing local_irq_restore() to a conditional sti would also be a good
improvement.  I shall see about spinning a patch for these issues.
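
Roughly what I have in mind for the local_irq_restore() side (again a
sketch only, assuming the usual X86_EFLAGS_IF definition - the eventual
patch may keep more of the current popf-based code):

/*
 * Act only on the saved IF bit, leaving DF, AC and friends alone rather
 * than blindly restoring the whole of RFLAGS with a popf.
 */
static inline void local_irq_restore(unsigned long flags)
{
    if ( flags & X86_EFLAGS_IF )
        asm volatile ( "sti" : : : "memory" );
    else
        asm volatile ( "cli" : : : "memory" );
}

That also sidesteps the concern above about trampling other code's
changes to unrelated flag bits.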

~Andrew
