
Re: [Xen-devel] [PATCH v2] xc_cpuid_x86.c: Simplify masking conditions and remove redundant work





On Wed, Sep 10, 2014 at 10:09 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:

On Wed, 2014-09-10 at 21:02 +0800, z wrote:

> I am sorry for the confusion. The latest one is my all-in-one version:
>
> Subject:     [PATCH v2] xc_cpuid_x86.c: Simplify masking conditions
> and remove redundant work
> Date:        Wed, 10 Sep 2014 18:29:00 +0800 (10/09/14 11:29:00)

Thanks, since Jan has already indicated he's happy with it I went to
apply it, however a build test resulted in:
xc_cpuid_x86.c: In function 'intel_xc_cpuid_policy':
xc_cpuid_x86.c:192:32: error: expected ')' before ';' token
xc_cpuid_x86.c:198:5: error: expected ';' before '}' token

I've folded in this change to fix it:
    @@ -189,7 +189,7 @@ static void intel_xc_cpuid_policy(
         /* Only a few features are advertised in Intel's 0x80000001. */
         regs[2] &= (bitmaskof(X86_FEATURE_LAHF_LM) |
                bitmaskof(X86_FEATURE_3DNOWPREFETCH) |
    -          bitmaskof(X86_FEATURE_ABM);
    +          bitmaskof(X86_FEATURE_ABM));
         regs[3] &= (bitmaskof(X86_FEATURE_NX) |
                bitmaskof(X86_FEATURE_LM) |
                bitmaskof(X86_FEATURE_SYSCALL) |

Please try and remember to compile test your incremental changes next
time ;-)

Oh, I am sorry. Thank you for your work, and thanks for Jan's suggestions as well.
I made a couple of mistakes in this submission; thanks for your patience.

It will not happen again.

Zhuo

> Thank you for your suggestions and the rules. I left the community for
> a couple of years to work on some more closed projects.
>
> And I am glad to be here again. I will follow the advice you
> mentioned.

Welcome back!

Ian.





_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

