
Re: [PATCH] move __read_mostly to xen/cache.h



On Fri, 2023-12-22 at 12:09 +0100, Jan Beulich wrote:
> On 22.12.2023 10:39, Oleksii wrote:
> > On Tue, 2023-08-08 at 12:32 +0200, Jan Beulich wrote:
> > > On 08.08.2023 12:18, Andrew Cooper wrote:
> > > > On 08/08/2023 10:46 am, Jan Beulich wrote:
> > > > > There's no need for every arch to define its own identical
> > > > > copy. If down the road an arch needs to customize it, we can
> > > > > add #ifndef around the common #define.
> > > > > 
> > > > > To be on the safe side build-breakage-wise, change a couple of
> > > > > #include <asm/cache.h> to the xen/ equivalent.
> > > > > 
> > > > > Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> > > > 
> > > > Could we find a better place to put this?
> > > > 
> > > > __read_mostly is just a section.  Its relationship to the cache
> > > > is only microarchitectural, and is not the same kind of
> > > > information as the rest of cache.h.
> > > > 
> > > > __ro_after_init is only here because __read_mostly is here, but
> > > > has absolutely nothing to do with caches whatsoever.
> > > > 
> > > > If we're cleaning them up, they ought to live elsewhere.
> > > 
> > > I would consider init.h (it has most other __section() uses and
> > > also needs __read_mostly), but that's not a great place to put
> > > these either. In fact I see less of a connection there than with
> > > cache.h. So the primary need is a good suggestion (I'm hesitant to
> > > suggest introducing section.h just for this).
> > Andrew sent some suggestions here:
> > https://lore.kernel.org/xen-devel/3df1dad8-3476-458f-9022-160e0af57d39@xxxxxxxxxx/
> > 
> > Will that work for you?
> 
> I still need to properly look at the suggested options.
Have you had a chance to review the suggested options?

~ Oleksii
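
For reference, the macros under discussion are just section-placement
attributes, and the consolidation described in the patch amounts to a
guarded common definition.  A minimal sketch, assuming Xen's
__section() helper and the usual per-arch section names (illustrative
only, not the actual patch contents):

    /* xen/include/xen/cache.h (sketch) */
    #ifndef __read_mostly
    /* Group rarely-written data to limit false sharing of cache lines. */
    #define __read_mostly   __section(".data.read_mostly")
    #endif

    #ifndef __ro_after_init
    /* Written during init only; meant to be read-only afterwards. */
    #define __ro_after_init __section(".data.ro_after_init")
    #endif

Usage stays unchanged, e.g. a hypothetical declaration like
"static unsigned int __read_mostly opt_example;".  The #ifndef guards
are what would let an architecture's asm/cache.h override the common
definitions if that ever became necessary.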
