
Re: [PATCH v7 04/19] xen: introduce generic non-atomic test_*bit()


  • To: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 4 Apr 2024 15:22:42 +0200
  • Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, Ross Lagerwall <ross.lagerwall@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Shawn Anastasio <sanastasio@xxxxxxxxxxxxxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 04 Apr 2024 13:22:49 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 03.04.2024 12:19, Oleksii Kurochko wrote:
> The patch introduces the following generic functions:
> * test_bit
> * generic__test_and_set_bit
> * generic__test_and_clear_bit
> * generic__test_and_change_bit
> 
> Also, the patch introduces the following generics which are
> used by the functions mentioned above:
> * BITOP_BITS_PER_WORD
> * BITOP_MASK
> * BITOP_WORD
> * BITOP_TYPE
> 
> These functions and macros can be useful for architectures
> that don't have corresponding arch-specific instructions.
> 
> Because of that x86 has the following check in the macros test_bit(),
> __test_and_set_bit(), __test_and_clear_bit(), __test_and_change_bit():
>     if ( bitop_bad_size(addr) ) __bitop_bad_size();
> It was necessary to move the check to generic code and define as 0
> for other architectures as they do not require this check.

Hmm, yes, the checks need to be in the outermost wrapper macros. While
you're abstracting other stuff to arch_*(), wouldn't it make sense to
also abstract this to e.g. arch_check_bitop_size(), with the expansion
simply being (effectively) empty in the generic fallback case?
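
Something along these lines is what I'd envision (only a rough sketch,
name of course subject to bikeshedding):

    /* Generic fallback in xen/bitops.h: */
    #ifndef arch_check_bitop_size
    #define arch_check_bitop_size(addr) ((void)0)
    #endif

    /* The outermost wrapper then merely does e.g.: */
    #define test_bit(nr, addr) ({       \
        arch_check_bitop_size(addr);    \
        test_bit(nr, addr);             \
    })

with x86 supplying an arch_check_bitop_size() wrapping its existing
bitop_bad_size() / __bitop_bad_size() pair.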

> --- a/xen/include/xen/bitops.h
> +++ b/xen/include/xen/bitops.h
> @@ -65,10 +65,164 @@ static inline int generic_flsl(unsigned long x)
>   * scope
>   */
>  
> +#define BITOP_BITS_PER_WORD 32
> +/* typedef uint32_t bitop_uint_t; */
> +#define bitop_uint_t uint32_t

So no arch overrides permitted anymore at all?
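
If overriding is meant to remain possible, guarding the fallbacks would
presumably suffice (sketch only, assuming an arch can supply its own
values ahead of these):

    #ifndef BITOP_BITS_PER_WORD
    #define BITOP_BITS_PER_WORD 32
    #endif

    #ifndef bitop_uint_t
    #define bitop_uint_t uint32_t
    #endif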

> +#define BITOP_MASK(nr)  ((bitop_uint_t)1 << ((nr) % BITOP_BITS_PER_WORD))
> +
> +#define BITOP_WORD(nr)  ((nr) / BITOP_BITS_PER_WORD)
> +
>  /* --------------------- Please tidy above here --------------------- */
>  
>  #include <asm/bitops.h>
>  
> +#ifndef bitop_bad_size
> +extern void __bitop_bad_size(void);

If not switching to arch_check_bitop_size() or the like as suggested above,
why exactly does this need duplicating here and in x86? Can't the decl
simply move ahead of the #include right above? (Sure, this will then
require that nothing needing any of the functions you move here would
still include asm/bitops.h; it would need to be xen/bitops.h everywhere.)
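
I.e. (sketch only) simply:

    extern void __bitop_bad_size(void);

    /* --------------------- Please tidy above here --------------------- */

    #include <asm/bitops.h>

    #ifndef bitop_bad_size
    #define bitop_bad_size(addr) 0
    #endif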

> +#define bitop_bad_size(addr) 0
> +#endif
> +
> +/**
> + * generic__test_and_set_bit - Set a bit and return its old value
> + * @nr: Bit to set
> + * @addr: Address to count from
> + *
> + * This operation is non-atomic and can be reordered.
> + * If two examples of this operation race, one can appear to succeed
> + * but actually fail.  You must protect multiple accesses with a lock.
> + */
> +static always_inline __pure bool
> +generic__test_and_set_bit(unsigned long nr, volatile void *addr)

Does __pure actually fit with the use of volatile? The former says multiple
accesses may be folded; the latter says they must not be.
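
To illustrate with a (purely hypothetical) caller:

    /* Under __pure the compiler may assume both calls yield the same
     * value and fold them into one, despite the volatile access (and
     * despite the function actually writing memory). */
    bool a = generic__test_and_set_bit(nr, addr);
    bool b = generic__test_and_set_bit(nr, addr);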

> +{
> +    bitop_uint_t mask = BITOP_MASK(nr);
> +    volatile bitop_uint_t *p = ((volatile bitop_uint_t *)addr) + BITOP_WORD(nr);

Nit: Slightly shorter line possible:

    volatile bitop_uint_t *p = (volatile bitop_uint_t *)addr + BITOP_WORD(nr);

> +    bitop_uint_t old = *p;
> +
> +    *p = old | mask;
> +    return (old & mask);
> +}
> +
> +/**
> + * generic__test_and_clear_bit - Clear a bit and return its old value
> + * @nr: Bit to clear
> + * @addr: Address to count from
> + *
> + * This operation is non-atomic and can be reordered.
> + * If two examples of this operation race, one can appear to succeed
> + * but actually fail.  You must protect multiple accesses with a lock.
> + */
> +static always_inline __pure bool
> +generic__test_and_clear_bit(bitop_uint_t nr, volatile void *addr)
> +{
> +    bitop_uint_t mask = BITOP_MASK(nr);
> +    volatile bitop_uint_t *p = ((volatile bitop_uint_t *)addr) + BITOP_WORD(nr);
> +    bitop_uint_t old = *p;
> +
> +    *p = old & ~mask;
> +    return (old & mask);
> +}
> +
> +/* WARNING: non atomic and it can be reordered! */
> +static always_inline __pure bool
> +generic__test_and_change_bit(unsigned long nr, volatile void *addr)
> +{
> +    bitop_uint_t mask = BITOP_MASK(nr);
> +    volatile bitop_uint_t *p = ((volatile bitop_uint_t *)addr) + BITOP_WORD(nr);
> +    bitop_uint_t old = *p;
> +
> +    *p = old ^ mask;
> +    return (old & mask);
> +}
> +/**
> + * generic_test_bit - Determine whether a bit is set
> + * @nr: bit number to test
> + * @addr: Address to start counting from
> + */
> +static always_inline __pure int generic_test_bit(int nr, const volatile void *addr)

Further up you use bool; why int here?

> +{
> +    const volatile bitop_uint_t *p = addr;
> +    return 1 & (p[BITOP_WORD(nr)] >> (nr & (BITOP_BITS_PER_WORD - 1)));

Any reason not to use BITOP_MASK() here as well (once having switched to
bool return type)?
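
E.g. (sketch only, folding in both remarks):

    static always_inline bool generic_test_bit(int nr,
                                               const volatile void *addr)
    {
        const volatile bitop_uint_t *p = addr;

        return p[BITOP_WORD(nr)] & BITOP_MASK(nr);
    }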

> +}
> +
> +static always_inline __pure bool
> +__test_and_set_bit(unsigned long nr, volatile void *addr)
> +{
> +#ifndef arch__test_and_set_bit
> +#define arch__test_and_set_bit generic__test_and_set_bit
> +#endif
> +
> +    return arch__test_and_set_bit(nr, addr);
> +}
> +#define __test_and_set_bit(nr, addr) ({             \
> +    if ( bitop_bad_size(addr) ) __bitop_bad_size(); \
> +    __test_and_set_bit(nr, addr);                   \
> +})
> +
> +static always_inline __pure bool
> +__test_and_clear_bit(bitop_uint_t nr, volatile void *addr)
> +{
> +#ifndef arch__test_and_clear_bit
> +#define arch__test_and_clear_bit generic__test_and_clear_bit
> +#endif
> +
> +    return arch__test_and_clear_bit(nr, addr);
> +}
> +#define __test_and_clear_bit(nr, addr) ({           \
> +    if ( bitop_bad_size(addr) ) __bitop_bad_size(); \
> +    __test_and_clear_bit(nr, addr);                 \
> +})
> +
> +static always_inline __pure bool
> +__test_and_change_bit(unsigned long nr, volatile void *addr)
> +{
> +#ifndef arch__test_and_change_bit
> +#define arch__test_and_change_bit generic__test_and_change_bit
> +#endif
> +
> +    return arch__test_and_change_bit(nr, addr);
> +}
> +#define __test_and_change_bit(nr, addr) ({              \
> +    if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
> +    __test_and_change_bit(nr, addr);                    \
> +})
> +
> +static always_inline __pure int test_bit(int nr, const volatile void *addr)

Further up you use bool; why int here?

> +{
> +#ifndef arch_test_bit
> +#define arch_test_bit generic_test_bit
> +#endif
> +
> +    return arch_test_bit(nr, addr);
> +}
> +#define test_bit(nr, addr) ({                           \
> +    if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
> +    test_bit(nr, addr);                                 \
> +})

From here onwards, ...

> +static always_inline __pure int fls(unsigned int x)
> +{
> +    if (__builtin_constant_p(x))
> +        return generic_fls(x);
> +
> +#ifndef arch_fls
> +#define arch_fls generic_fls
> +#endif
> +
> +    return arch_fls(x);
> +}
> +
> +static always_inline __pure int flsl(unsigned long x)
> +{
> +    if (__builtin_constant_p(x))
> +        return generic_flsl(x);
> +
> +#ifndef arch_flsl
> +#define arch_flsl generic_flsl
> +#endif
> +
> +    return arch_flsl(x);
> +}

... does all of this really belong here? Neither title nor description has
any hint towards this.

>  /*
>   * Find First Set bit.  Bits are labelled from 1.
>   */

This context suggests there's a dependency on an uncommitted patch. Nothing
above says so. I guess you have a remark in the cover letter, yet imo that's
only partly helpful.

Jan



 

