
Re: [RFC PATCH 2/3] xen/spinlock: split recursive spinlocks from normal ones


  • To: Juergen Gross <jgross@xxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 14 Dec 2022 12:29:59 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 14 Dec 2022 11:30:09 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 14.12.2022 12:10, Juergen Gross wrote:
> On 14.12.22 11:39, Jan Beulich wrote:
>> On 10.09.2022 17:49, Juergen Gross wrote:
>>> --- a/xen/arch/x86/mm/p2m-pod.c
>>> +++ b/xen/arch/x86/mm/p2m-pod.c
>>> @@ -397,7 +397,7 @@ int p2m_pod_empty_cache(struct domain *d)
>>>   
>>>       /* After this barrier no new PoD activities can happen. */
>>>       BUG_ON(!d->is_dying);
>>> -    spin_barrier(&p2m->pod.lock.lock);
>>> +    spin_barrier(&p2m->pod.lock.lock.lock);
>>
>> This is getting unwieldy as well, and ...
>>
>>> @@ -160,21 +165,30 @@ typedef union {
>>>   
>>>   typedef struct spinlock {
>>>       spinlock_tickets_t tickets;
>>> -    u16 recurse_cpu:SPINLOCK_CPU_BITS;
>>> -#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
>>> -#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
>>> -    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
>>> -#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>>>       union lock_debug debug;
>>>   #ifdef CONFIG_DEBUG_LOCK_PROFILE
>>>       struct lock_profile *profile;
>>>   #endif
>>>   } spinlock_t;
>>>   
>>> +struct spinlock_recursive {
>>> +    struct spinlock lock;
>>> +    u16 recurse_cpu:SPINLOCK_CPU_BITS;
>>> +#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
>>> +#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
>>> +    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
>>> +#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>>> +};
>>
>> ... I'm not sure anyway it's a good idea to embed spinlock_t inside the
>> new struct. I'd prefer if non-optional fields were always at the same
>> position, and there's not going to be that much duplication if we went
>> with
>>
>> typedef struct spinlock {
>>      spinlock_tickets_t tickets;
>>      union lock_debug debug;
>> #ifdef CONFIG_DEBUG_LOCK_PROFILE
>>      struct lock_profile *profile;
>> #endif
>> } spinlock_t;
>>
>> typedef struct rspinlock {
>>      spinlock_tickets_t tickets;
>>      u16 recurse_cpu:SPINLOCK_CPU_BITS;
>> #define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
>> #define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
>>      u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
>> #define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
>>      union lock_debug debug;
>> #ifdef CONFIG_DEBUG_LOCK_PROFILE
>>      struct lock_profile *profile;
>> #endif
>> } rspinlock_t;
>>
>> This would also eliminate the size increase of recursive locks in
>> debug builds. And functions like spin_barrier() then also would
>> (have to) properly indicate what kind of lock they act on.
> 
> You are aware that this would require duplicating all the spinlock
> functions for the recursive spinlocks?

Well, to be honest I didn't really consider this aspect, but I think
that's a reasonable price to pay (with some de-duplication potential,
if we wanted to pursue it), provided we want to go this route in the
first place. With this aspect in mind, though, I wonder whether we
aren't better off with the current state (all the more so since, iirc,
Andrew thinks we should get rid of recursive locking altogether).
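
Should we go the two-type route anyway, one way to keep the duplication
down - purely a sketch, with invented names - would be to let the common
logic act on just the fields the two lock types share, and make the
public functions thin, type-safe wrappers:

    /* Common core operating only on the members both types have. */
    static void spin_barrier_common(spinlock_tickets_t *t,
                                    union lock_debug *debug)
    {
        /* ... existing spin_barrier() logic, using *t and *debug ... */
    }

    void spin_barrier(spinlock_t *lock)
    {
        spin_barrier_common(&lock->tickets, &lock->debug);
    }

    void rspin_barrier(rspinlock_t *lock)
    {
        spin_barrier_common(&lock->tickets, &lock->debug);
    }

The same pattern (or a small generator macro) could cover the other
lock operations, so the amount of genuinely duplicated code ought to
stay fairly small.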

Jan

> I'm not strictly against this, but before going this route I think I
> should mention the implications.
> 
> 
> Juergen
