
Re: [Xen-devel] [PATCH v2 1/7] x86: Add support for STAC/CLAC instructions



On 23/04/14 13:42, Jan Beulich wrote:
>>>> On 23.04.14 at 14:11, <feng.wu@xxxxxxxxx> wrote:
>>>> +#define ASM_AC(op)                                       \
>>>> +        pushq %rax;                                      \
>>>> +        leaq boot_cpu_data(%rip), %rax;                  \
>>>> +        btl $X86_FEATURE_SMAP-7*32, CPUINFO86_leaf7_features(%rax); \
>>>> +        jnc 881f;                                        \
>>>> +        op;                                              \
>>>> +881:    popq %rax
>>> While it is unlikely to change going forwards, $(X86_FEATURE_SMAP & 31)
>>> looks more obviously correct.
>> Okay.
>>> Also, given that boot_cpu_data(%rip) is an assembler-known constant
>>> value, and CPUINFO86_leaf7_features is a constant offset from that, is
>>> there any way of persuading the assembler to conjoin the two and forgo
>>> the push, lea and pop?  (I have had a go, but lack sufficient caffeine
>>> this early in the day)
>>>
>>> ~Andrew
>> I thought about this idea before posting this version, but I didn't
>> come up with a good solution. I will keep looking for a better way to
>> handle this.
> btl $X86_FEATURE_SMAP & 31, ((X86_FEATURE_SMAP >> 3) & ~3)+CPUINFO86_features+boot_cpu_data(%rip)
>
> (at once replacing CPUINFO86_ext_features with CPUINFO86_features,
> perhaps also changing the [inconsistent] name prefix; a suitable macro
> could likely be created that is usable both here and wherever
> CPUINFO86_ext_features is currently used)
>
> Jan
>

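Folding the offsets as Jan suggests, the whole thing should collapse to
something like this (untested sketch, using the symbol names from the
thread, with CPUINFO86_features taken as the base of the capability
array):

#define ASM_AC(op)                                                  \
        btl $(X86_FEATURE_SMAP & 31),                               \
            ((X86_FEATURE_SMAP >> 3) & ~3) + CPUINFO86_features +   \
            boot_cpu_data(%rip);                                    \
        jnc 881f;                                                   \
        op;                                                         \
881:

With the memory operand addressed directly there is no scratch register
to preserve, so the pushq/popq pair disappears along with the leaq.
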
This needs careful synchronising with the "no-smap" command line
parameter, which can shoot that bit out of the boot features some time
after it has been set.
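
The shape of that interaction is roughly this (hypothetical sketch
only; the parameter name comes from this thread, while the helper and
its placement are made up):

static bool_t __initdata disable_smap;
boolean_param("no-smap", disable_smap);

/* Hypothetical helper: runs after boot_cpu_data has been populated,
 * at which point the feature bit tested by ASM_AC above silently
 * disappears. */
static void __init smap_param_check(void)
{
    if ( disable_smap )
        setup_clear_cpu_cap(X86_FEATURE_SMAP);
}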

It might be easier just to have a bool_t __read_mostly smap_in_use;
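
i.e. an untested sketch (only the smap_in_use flag itself is being
proposed here; the macro shape just mirrors the one above):

bool_t __read_mostly smap_in_use;

#define ASM_AC(op)                              \
        cmpb $0, smap_in_use(%rip);             \
        je 881f;                                \
        op;                                     \
881:

The flag would be assigned exactly once, after command line parsing,
so the "no-smap" ordering problem goes away, and the test becomes a
single byte compare needing no scratch register.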

~Andrew



 

