Re: [PATCH] RISCV/bitops: Use Zbb to provide arch-optimised bitops
On 25.03.2025 17:35, Oleksii Kurochko wrote:
>
> On 3/7/25 1:12 PM, Andrew Cooper wrote:
>> On 07/03/2025 12:01 pm, Jan Beulich wrote:
>>> On 07.03.2025 12:50, Oleksii Kurochko wrote:
>>>> On 3/6/25 9:19 PM, Andrew Cooper wrote:
>>>>> On 05/03/2025 7:34 am, Jan Beulich wrote:
>>>>>> I was actually hoping to eliminate BITS_PER_LONG at some point, in favor
>>>>>> of using sizeof(long) * BITS_PER_BYTE. (Surely in common code we could
>>>>>> retain a shorthand of that name, if so desired, but I see no reason why
>>>>>> each arch would need to provide all three BITS_PER_{BYTE,INT,LONG}.)
>>>>> The concern is legibility and clarity.
>>>>>
>>>>> This:
>>>>>
>>>>> ((x) ? 32 - __builtin_clz(x) : 0)
>>>>>
>>>>> is a clear expression in a way that this:
>>>>>
>>>>> ((x) ? (sizeof(int) * BITS_PER_BYTE) - __builtin_clz(x) : 0)
>>>>>
>>>>> is not. The problem is the extra binary expression, and this:
>>>>>
>>>>> ((x) ? BITS_PER_INT - __builtin_clz(x) : 0)
>>>>>
>>>>> is still clear, because the reader doesn't have to perform a multiply
>>>>> just to figure out what's going on.
>>>>>
>>>>>
>>>>> It is definitely stupid to have each architecture provide their own
>>>>> BITS_PER_*. The compiler is in a superior position to provide those
>>>>> details, and it should be in a common location.
>>>>>
>>>>> I don't particularly mind how those constants are derived, but one key
>>>>> thing that BITS_PER_* can do which sizeof() can't is be used in
>>>>> #ifdef/etc.
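
To illustrate that point, a minimal sketch (FOO is just a placeholder):

#if BITS_PER_LONG == 64          /* fine - a plain integer constant */
# define FOO 1
#endif

#if (sizeof(long) * 8) == 64     /* error - the preprocessor cannot
                                    evaluate sizeof() */
# define FOO 1
#endif
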
>>>> What about moving them to xen/config.h? (If that isn't the best place,
>>>> any suggestions for a better one?)
>>>>
>>>> #define BYTES_PER_INT (1 << INT_BYTEORDER)
>>>> #define BITS_PER_INT (BYTES_PER_INT << 3)
>>>>
>>>> #define BYTES_PER_LONG (1 << LONG_BYTEORDER)
>>>> #define BITS_PER_LONG (BYTES_PER_LONG << 3)
>>>> #define BITS_PER_BYTE 8
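
For concreteness, a sketch of how that indirection expands on a 64-bit
arch, where LONG_BYTEORDER is 3:

#define BYTES_PER_LONG (1 << LONG_BYTEORDER) /* 1 << 3 == 8 */
#define BITS_PER_LONG (BYTES_PER_LONG << 3)  /* 8 << 3 == 64 */
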
>> The *_BYTEORDER's are useless indirection. BITS_PER_* should be defined
>> straight up, and this will simplify quite a lot of preprocessing.
>
> Could we really drop *_BYTEORDER?
>
> For example, LONG_BYTEORDER for Arm could be either 2 or 3, depending on
> whether Arm32 or Arm64 is used.
They can still #define BITS_PER_LONG to 32 or 64 respectively, can't they?
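
A minimal sketch of what I mean, assuming Arm's existing CONFIG_ARM_64
Kconfig symbol as the discriminator:

#ifdef CONFIG_ARM_64
# define BITS_PER_LONG 64
#else
# define BITS_PER_LONG 32
#endif
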
>>>> Also, it seems like the following could be moved there too:
>>>>
>>>> #define POINTER_ALIGN BYTES_PER_LONG
>>> This one is likely fine to move.
>>>
>>>> #define BITS_PER_LLONG 64
>>> This one is only fine to move imo when converted to
>>>
>>> #define BITS_PER_LONG (BYTES_PER_LLONG << 3)
>> Erm? That's mixing long and long long types. Presuming that's an
>> errant L, then sure.
>>
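(For clarity, with the errant L fixed that line would read

#define BITS_PER_LLONG (BYTES_PER_LLONG << 3)

with BYTES_PER_LLONG defined analogously to BYTES_PER_LONG.)
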
>>>> #define BITS_PER_BYTE 8
>>> Personally I'd rather leave this per-arch. The others can truly be derived;
>>> this one can't be. If we centralize, imo we should also convert the " << 3"
>>> to " * BITS_PER_BYTE".
>> It is highly unlikely that Xen will ever run on a system where CHAR_BIT
>> isn't 8.
>>
>> So I suggest it stays central to reduce complexity. If an arch ever
>> needs to change it, the complexity can be added then.
>
> Does it make sense to ifdef that? Or, at least, before the definition of
> BITS_PER_BYTE add something like:
> #if CHAR_BIT != 8
> #error "CHAR_BIT isn't 8"
> #endif
Where would CHAR_BIT come from? Oh, perhaps you mean __CHAR_BIT__? If all
compilers we can build with supply that value, we could indeed centrally
use either
#define BITS_PER_BYTE __CHAR_BIT__
or
#define BITS_PER_BYTE 8
#if BITS_PER_BYTE != __CHAR_BIT__
# error "..."
#endif
Jan