Re: [PATCH] CPU: abstract read-mostly-ness for per-CPU cpumask_var_t variables


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 2 Feb 2026 17:05:00 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>
  • Delivery-date: Mon, 02 Feb 2026 16:05:14 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Feb 02, 2026 at 04:59:07PM +0100, Jan Beulich wrote:
> On 02.02.2026 16:50, Roger Pau Monné wrote:
> > On Wed, Nov 12, 2025 at 04:53:27PM +0100, Jan Beulich wrote:
> >> cpumask_var_t can resolve to a pointer or to an array. While the pointer
> >> typically is allocated once for a CPU and then only read (i.e. wants to be
> >> marked read-mostly), the same isn't necessarily true for the array case.
> >> There things depend on how the variable is actually used. cpu_core_mask
> >> and cpu_sibling_mask (which all architectures have inherited from x86,
> >> which in turn is possibly wrong) are altered only as CPUs are brought up
> >> or down, so may remain uniformly read-mostly. Other (x86-only) instances
> >> want to change, to avoid disturbing adjacent read-mostly data.
> >>
> >> While doing the x86 adjustment, also do one in the opposite direction,
> >> i.e. where there was no read-mostly annotation when it is applicable in
> >> the "pointer" case.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> > 
> > Acked-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> 
> Thanks.
> 
> >> ---
> >> Really in the pointer case it would be nice if the allocations could then
> >> also come from "read-mostly" space.
> > 
> > Hm, I guess for some of them yes, it would make sense to come from
> > __read_mostly space, but would require passing an extra parameter to
> > the DEFINE_ helper? Or introduce another variant.
> 
> Whether this could be sorted purely at the macro wrapper layer I'm not
> sure. It's the actual allocation (which alloc_cpumask_var() et al do)
> which would need to be more sophisticated than a simple _x[mz]alloc().

For the array case it could be sorted out in the macro wrapper; for
the pointer case it would need to be handled at allocation time, which
makes this quite awkward to deal with.  Anyway, this is better than
nothing, I guess.

Thanks, Roger.
