
Re: [PATCH 6/8] pdx: introduce a new compression algorithm based on offsets between regions


  • To: Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 18 Jun 2025 15:02:34 +0200
  • Cc: Anthony PERARD <anthony.perard@xxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 18 Jun 2025 13:02:54 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 11.06.2025 19:16, Roger Pau Monne wrote:
> With the appearance of Intel Sierra Forest and Granite Rapids it's now
> possible to get a production x86 host with the following memory map:
> 
> SRAT: Node 0 PXM 0 [0000000000000000, 000000007fffffff]
> SRAT: Node 0 PXM 0 [0000000100000000, 000000407fffffff]
> SRAT: Node 1 PXM 1 [0000061e80000000, 0000065e7fffffff]
> SRAT: Node 2 PXM 2 [00000c3e80000000, 00000c7e7fffffff]
> SRAT: Node 3 PXM 3 [0000125e80000000, 0000129e7fffffff]

Looks like these aren't the numbers that the test harness uses. The main
properties (relevant for the algorithms) look to be the same, though.

> ---
> We can discuss whether we want both the fast and the slow variants.  The
> slow (brute force) was added as a result of me playing with weird region
> layouts where the fast one didn't manage to compress, or the resulting
> coefficients had a poor compression ratio.  However at this point the
> slow variant has only proven helpful in synthetic cases, I haven't (yet?)
> seen a real host memory layout that would benefit from it.

I'm not convinced we need the slow variant right away. What exactly (if
anything) is going to be wanted/needed on future hardware we'll only really
know when such arrives. However, see also my comment on xen/pdx.h.

> @@ -297,7 +309,223 @@ void __init pfn_pdx_compression_reset(void)
>      pfn_pdx_hole_shift = 0;
>  }
>  
> -#endif /* CONFIG_PDX_COMPRESSION */
> +#elif defined(CONFIG_PDX_OFFSET_COMPRESSION) /* CONFIG_PDX_MASK_COMPRESSION */
> +
> +unsigned long __ro_after_init pdx_offset = ~0UL;
> +unsigned long __ro_after_init pdx_size = ~0UL;
> +
> +static unsigned long __initdata top_pfn;
> +
> +bool pdx_is_region_compressible(paddr_t base, unsigned long npages)
> +{
> +    return !pdx_size ? true
> +                     : (PFN_DOWN(base) % pdx_offset) + npages <= pdx_size;
> +}
> +
> +STATIC bool __init pfn_offset_calculate_fast(unsigned long base)
> +{
> +    unsigned long size = max((1UL << MAX_ORDER), base);

Since pfn_pdx_compression_setup(), where "base" originally comes from, has no
caller (yet), it's hard to follow what "base" is and why it would affect "size".

> +    unsigned long offset = ~0UL;
> +    unsigned int i;
> +
> +    if ( nr <= 1 )
> +        return false;
> +
> +    /* Calculate minimal offset between regions. */
> +    for ( i = 1; i < nr; i++ )
> +        offset = min(offset, ranges[i].base - ranges[i - 1].base);
> +
> +    /* Return early if offset is smaller than the minimum size. */
> +    if ( size >= offset )
> +        return false;

The comment and the code are off by one relative to one another. I actually
wonder whether, for the scheme to be beneficial, there shouldn't also be some
threshold below which we would go without doing any compression.

> --- a/xen/include/xen/pdx.h
> +++ b/xen/include/xen/pdx.h
> @@ -65,6 +65,31 @@
>   * This scheme also holds for multiple regions, where HHHHHHH acts as
>   * the region identifier and LLLLLL fully contains the span of every
>   * region involved.
> + *
> + * ## PDX offset compression
> + *
> + * Alternative compression mechanism that relies on RAM ranges having a
> + * similar size and offset between them:
> + *
> + * ┌────────┬──────────┬────────┬──────────┐   ┌────────┬──────────┐
> + * │ RAM 0  │          │ RAM 1  │          │...│ RAM N  │          │
> + * ├────────┼──────────┼────────┴──────────┘   └────────┴──────────┘
> + * │<------>│          │
> + * │  size             │
> + * │<----------------->│
> + *         offset
> + *
> + * The compression removes the holes between RAM regions:
> + *
> + * ┌────────┬────────┐   ┌────────┐
> + * │ RAM 0  │ RAM 1  │...│ RAM N  │
> + * ├────────┼────────┘   └────────┘
> + * │<------>│
> + *    size
> + *
> + * The compressed index is calculated as:
> + *
> + * index = (pfn % offset) + ((pfn / offset) * size)
>   */

Would be nice imo if the presentation here wouldn't give the impression that
the offsets are all identical, and hence the compressed map ends up being
entirely without holes. In fact I can't help the impression that with enough
nodes (but otherwise the same properties, i.e. only node 0 being special) at
least the "fast" calculation will fail. Which in turn would be an argument
to keep the "slow" one.

In fact, the alternative approach we discussed - avoiding division, but
using a table of offsets - would seem to avoid such a weakness, because the
offsets can then vary (to some degree; it still needs to be possible to
easily determine the table indexes).

Jan


