
Re: [Xen-devel] [PATCH RFC 15/31] x86: Generate deep dependencies of x86 features

On 05/01/16 16:54, Ian Campbell wrote:
> On Tue, 2016-01-05 at 16:42 +0000, Andrew Cooper wrote:
>>> IIRC void * arithmetic is a gcc (and presumably clang) extension, which
>>> we
>>> can specify for Xen itself, but I'm not sure about libxc (cf: recent
>>> discussions about building stuff for Windows).
>> It isn't void* arithmetic.  Simply taking a void * to facilitate
>> different underlying types for a bitmap.
>>> What actually breaks with the existing definitions?
>> Xen uses unsigned int[] for bitmaps, while libxc uses unsigned long[].
>> The compiler objects to the mix of pointers to different-sized types.
> Did you also change Xen to void somewhere in this series, or if not why not
> change libxc to use an unsigned int[] if we want them to be consistent?

Xen already uses void *, so the code using __set_bit() can use its
preferred underlying types for bitmaps.

libxc bitmaps are unsigned long *, but the featureset ABI specifies
uint32_t *.  IIRC several bits of libxc code use deliberate typecasting
to use xc_bitops.h.  void * is definitely the correct solution all around.
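To illustrate the void * approach: the sketch below shows how a bit-set helper taking void * lets callers back the bitmap with either underlying type.  set_bit_void is a hypothetical name, not Xen's actual __set_bit implementation, and real implementations use word-sized (often atomic) accesses rather than the byte-wise access shown here.

```c
#include <limits.h>

/* Hedged sketch: taking void * means both unsigned int[] (Xen's
 * choice) and unsigned long[] (libxc's choice) callers compile
 * without casts or pointer-type clashes.  Byte-wise addressing gives
 * the same bit layout as word-wise access on little-endian targets
 * such as x86; a real implementation would use word accesses. */
static inline void set_bit_void(unsigned int nr, void *addr)
{
    unsigned char *bytes = addr;

    bytes[nr / CHAR_BIT] |= 1u << (nr % CHAR_BIT);
}
```

With a typed parameter (unsigned long *, say), the unsigned int[] caller would need a cast, which is exactly the mix the compiler objects to above.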

>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>> ---
>>>> CC: Jan Beulich <JBeulich@xxxxxxxx>
>>>> CC: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
>>>> CC: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
>>>> CC: Wei Liu <wei.liu2@xxxxxxxxxx>
>>>> TODO: The generation of autogen.c now means that libxc needs to be
>>>> compiled
>>>> after the hypervisor, as the vpath doesn't convey the generation
>>>> information.
>>>> I need to find a way to fix this.
>>> I'd be inclined to do the autogeneration twice with the output being
>>> outside the shared directory or gaining a (tools|xen)_ prefix.
>>> It's a bit wasteful to do it twice but anything else would add an
>>> undesirable linkage between tools and hypervisor build I think. I'm
>>> certain
>>> we don't want to commit the generated files.
>> I will see how much of this I can get into the automatically generated
>> part.  The problem is that there is a header file and secondary .c file
>> which are not automatically generated, but needed in both places.  I
>> definitely do not want to duplicate those.
> The secondary .c seems ok, it'll get vpath'd.
> The header, I suppose it depends which way the #include goes from the
> generated things, at worst it'd be an #ifdef or a flag to the generator I
> guess?

I will see how v2 ends up looking.

>>>> +/* Sparse feature matrix identifying all features which depend on a
>>>> feature. */
>>>> +extern const struct tagged_featureset deep_deps[];
>>> This'll be XEN_NR_FEATURESET_ENTRIES^2 entries? How big is that getting
>>> OOI?
>> It is a sparse matrix, not an n^2 matrix.  It only has entries for
>> features which sit in a dependency tree, which is vastly fewer than all
>> known features.  Currently, it is an array of 9 entries.
> I'd probably just call this a (sorted) list of dependencies, but ok then ;-)

In which case I don't think I have conveyed its meaning properly.  It
might be better understood in the context of patches 17 and 19.
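For concreteness, the sparse layout might look something like the sketch below.  The field names and sizes are illustrative guesses, not the patch's actual definitions; the point is that only features which have dependants get an entry at all, which is why the array has ~9 entries rather than N^2.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative constant; the real value comes from the generated header. */
#define FEATURESET_NR_ENTRIES 9

/* Hypothetical shape of struct tagged_featureset: one entry per
 * feature that other features depend on, pairing that feature with a
 * bitmap of its (transitive) dependants. */
struct tagged_featureset {
    uint32_t feature;                     /* feature others depend on */
    uint32_t deps[FEATURESET_NR_ENTRIES]; /* dependent-feature bitmap */
};

/* Scan the sorted list; NULL means the feature has no dependants.
 * With only a handful of entries, a linear scan is plenty. */
static const struct tagged_featureset *
lookup_deps(const struct tagged_featureset *table, size_t nr,
            uint32_t feature)
{
    for ( size_t i = 0; i < nr && table[i].feature <= feature; ++i )
        if ( table[i].feature == feature )
            return &table[i];
    return NULL;
}
```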

>>>> +
>>>> +        # Expected duplicate symbols.  Discard
>>>> +        if name in ('3DNOW_ALT', ):
>>>> +            continue
>>> OOI how does this arise?
>> Bug in older AMD processors.  See the first and final hunks of c/s
>> d8ba3a9
> Lovely!
> I think I'd say "Expected duplicate values" since I thought the name
> 3DNOW_ALT was what was duplicated, which made me wonder how it compiled...
> If you dropped the appearance of being generic the comment could usefully
> give a hint about PBE, after all this should be a pretty exceptional
> occurrence and another if with another comment for the next one doesn't
> seem too hateful.

Now that I think about it, it is probably better to drop 3DNOW_ALT and
just use PBE in the AMD bugfix, beside the comment.

That way, we take a "strictly no duplicate" policy.


Xen-devel mailing list