
Re: [PATCH 01/11] xen/memory: Mask XENMEMF_node() to 8 bits



For the record, the rest of the series doesn't require this patch. I just
thought it was a strictly net-positive improvement on the current behaviour.

On Mon Mar 17, 2025 at 4:33 PM GMT, Jan Beulich wrote:
> On 14.03.2025 18:24, Alejandro Vallejo wrote:
> > As it is, it's incredibly easy for a buggy call to XENMEMF_node() to
> > unintentionally overflow into bit 17 and beyond. Prevent it by masking,
> > just like MEMF_* does.
>
> Yet then ...
>
> > --- a/xen/include/public/memory.h
> > +++ b/xen/include/public/memory.h
> > @@ -32,8 +32,9 @@
> >  #define XENMEMF_address_bits(x)     (x)
> >  #define XENMEMF_get_address_bits(x) ((x) & 0xffu)
> >  /* NUMA node to allocate from. */
> > -#define XENMEMF_node(x)     (((x) + 1) << 8)
> > -#define XENMEMF_get_node(x) ((((x) >> 8) - 1) & 0xffu)
> > +#define XENMEMF_node_mask   (0xffu)
> > +#define XENMEMF_node(n)     ((((n) + 1) & XENMEMF_node_mask) << 8)
>
> ... this still won't have the intended effect: Rather than spilling into
> higher bits (with a certain chance of causing an error) you now truncate
> the node number, thus making the misbehavior almost certainly silent.

It has the intended effect of confining the effects of XENMEMF_node(n) to the
bits covered by the mask.

There's an error either way, and either way you'll notice it quite late. One
of them has fully undefined consequences (possibly worth an XSA on systems
with separate xenstore or driver domains); this one contains the effects of
the invalid data. A later patch in the series makes xc_claim_pages() return
EINVAL if node >= 0xff to catch problematic inputs early, but that's a
toolstack matter; the API should be self-defending.

Note that this exact code is already present in MEMF_node(n) in xen/mm.h,
likely to avoid the same sort of problem inside the hypervisor.

> Further, if you do this for the node, why not also for the address bits?
> (Rhetorical question; I don't really want you to do that.)
>
> Jan

Mostly because the series deals with memory management rather than anything
else. I do think address_bits should get the same treatment, for identical
reasons.

>
> > +#define XENMEMF_get_node(f) ((((f) >> 8) - 1) & XENMEMF_node_mask)
> >  /* Flag to populate physmap with populate-on-demand entries */
> >  #define XENMEMF_populate_on_demand (1<<16)
> >  /* Flag to request allocation only from the node specified */

Cheers,
Alejandro
