
Re: [Xen-devel] [PATCH v10 2/6] xen: introduce DEFINE_SYMBOL



Stefano Stabellini writes ("[PATCH v10 2/6] xen: introduce DEFINE_SYMBOL"):
> Introduce a MACRO to be used to declare array variables corresponding to
> linker symbols, plus two static inline functions to be used for
> comparing and subtracting pointers with the linker symbols.
> 
> Note that the start and end symbols are declared of different types to
> help avoid errors and misusing those variables.

Firstly, sorry, but a formatting grumble:

The trailing \ characters cause wrap damage on my 80-column terminal
even before I quote the email, as I am doing now.  Right now it looks
like this:

> +extern const type start_name[];                                        \
  \
> +extern const struct abstract_ ## name end_name[];                      \
  \

which is just awful.

I will remove some spaces from the quoted text so I can review it.
I'm not a hypervisor maintainer but I would appreciate it if you could
make the whole thing fit in 75 columns or so.

> +
> +/*
> + * Declare start and end array variables in C corresponding to existing
> + * linker symbols.
> + *
> + * Two static inline functions are declared to do comparisons and
> + * subtractions between these variables.
> + *
> + * The end variable is declared with a different type to make sure that
> + * the static inline functions cannot be misused.
> + */
> +#define DEFINE_SYMBOL(type, name, start_name, end_name)                   \
> +                                                                          \
> +struct abstract_ ## name {                                                \
> +    type _;                                                               \
> +};                                                                        \
> +                                                                          \
> +extern const type start_name[];                                           \
> +extern const struct abstract_ ## name end_name[];                         \
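
(For context, here is roughly how I understand the macro would be
used; the region and symbol names below are only illustrative, not
taken from the series:)

  /* Declares _stext[]/_etext[] plus text_lt() and text_diff(). */
  DEFINE_SYMBOL(char, text, _stext, _etext);

  static inline size_t text_size(void)
  {
      /* Number of chars between the two linker symbols. */
      return text_diff(_stext, _etext);
  }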

I have thought of a problem with this approach.

This goes wrong unless `type' is itself a struct type, because the
compiler is allowed to assume that end_name has the correct alignment
for its declared type, and in some ABIs the alignment of a struct
containing (say) a char is bigger than that of a char.  AIUI in some
of the actual use cases the linker-generated symbols may not be
struct aligned.

I am not aware of a standard C type which could be used instead of
this struct.  But I think you can use the `packed' attribute to get
the right behaviour.  The GCC manual says:

 | Alignment can be decreased by specifying the 'packed' attribute.
 | See below.
   
Bizarrely, this seems only to be stated, slightly elliptically like
this, in the section on the `aligned' attribute; it's not mentioned
under `packed'.  I suggest we couple this with a compile-time
assertion that the alignof of the struct is the same as the alignof
of the type.

(FTR I think this is a largely theoretical concern because most
current ABIs including all of Xen's specify that structs inherit the
alignment of the coarsest member.)
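
Concretely, something of roughly this shape (a sketch only, shown for
a char payload with made-up names; I'm using C11 _Static_assert here,
but a Xen build-time assertion macro would do just as well):

  struct abstract_foo {
      char _;
  } __attribute__((__packed__));

  extern const char _start_foo[];
  extern const struct abstract_foo _end_foo[];

  /* Check that 'packed' really brought the wrapper's alignment back
   * down to that of the underlying type. */
  _Static_assert(__alignof__(struct abstract_foo) == __alignof__(char),
                 "abstract_foo must not be over-aligned");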

> +static inline bool name ## _lt(const type s1[],                           \
> +                               const struct abstract_ ## name  s2[])       \
> +{                                                                         \
> +    return (uintptr_t)s1 < (uintptr_t)s2;                                 \
> +}                                                                         \

This seems right to me.

> +static inline ptrdiff_t name ## _diff(const type s1[],                    \
> +                                      const struct abstract_ ## name s2[])\
> +{                                                                         \
> +    return ((uintptr_t)s2 - (uintptr_t)s1) / sizeof(*s1);                 \

This is wrong.  The conversion to ptrdiff_t (currently done implicitly
by return) must come before the division.  Otherwise it will give the
wrong answer when s1 > s2.

Suppose 32-bit, s1=0x00000040, s2=0x00000020, sizeof=0x10.  Then
s2-s1=0xffffffe0, and unsigned division gives
(s2-s1)/sizeof=0x0ffffffe.  Conversion to ptrdiff_t does not change
the bit pattern.  But we wanted 0xfffffffe, i.e. -2.
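
To make the arithmetic concrete, a tiny standalone demo (nothing to do
with the patch itself, just the 32-bit case above redone with
fixed-width types):

  #include <stdio.h>
  #include <inttypes.h>

  int main(void)
  {
      uint32_t s1 = 0x00000040, s2 = 0x00000020, size = 0x10;

      /* Division done while still unsigned: bogus large result. */
      uint32_t wrong = (s2 - s1) / size;
      /* Convert to signed first, then divide: the expected -2. */
      int32_t right = (int32_t)(s2 - s1) / (int32_t)size;

      printf("unsigned division: 0x%08" PRIx32 "\n", wrong); /* 0x0ffffffe */
      printf("signed division:   %" PRId32 "\n", right);     /* -2 */
      return 0;
  }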

Signed integer division by a positive divisor is always defined (and
always fits) so doing the conversion first is fine.
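
So the helper would end up looking roughly like this (unrolled from
the macro, same illustrative char payload and abstract_foo wrapper as
my sketch above; note the divisor needs casting too, or the usual
arithmetic conversions drag the dividend straight back to unsigned):

  static inline ptrdiff_t foo_diff(const char s1[],
                                   const struct abstract_foo s2[])
  {
      /* Go signed before dividing, and divide by a signed divisor. */
      return (ptrdiff_t)((uintptr_t)s2 - (uintptr_t)s1)
             / (ptrdiff_t)sizeof(*s1);
  }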

Regards,
Ian.
