Re: [Xen-devel] [PATCH v4 04/15] x86: implement data structure and CPU init flow for MBA
>>> On 05.10.17 at 06:42, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
> On 17-10-03 23:52:09, Jan Beulich wrote:
>> >>> Yi Sun <yi.y.sun@xxxxxxxxxxxxxxx> 09/29/17 3:55 AM >>>
>> >On 17-09-28 05:00:09, Jan Beulich wrote:
>> >> >>> On 23.09.17 at 11:48, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
>> >> > @@ -1410,6 +1496,7 @@ static void psr_cpu_init(void)
>> >> >      unsigned int socket, cpu = smp_processor_id();
>> >> >      struct feat_node *feat;
>> >> >      struct cpuid_leaf regs;
>> >> > +    uint32_t ebx;
>> >>
>> >> Is this local variable really a big help? To me it looks like it only
>> >> makes the patch larger without actually improving anything,
>> >> and without being related to the subject of the patch.
>> >>
>> >IMHO, it avoids 'cpuid_count_leaf()' being called repeatedly. Without
>> >it, we would have to call 'cpuid_count_leaf()' two more times.
>>
>> Hmm, didn't you simply replace regs.b uses with ebx? Or did I overlook
>> a place where regs is being overwritten before the last of these regs.b
>> uses (in which case I think your change is fine)?
>>
> regs is overwritten when a feature is present. The old code is below:
>
> cpuid_count_leaf(PSR_CPUID_LEVEL_CAT, 0, &regs);
> if ( regs.b & PSR_RESOURCE_TYPE_L3 )
> {
>     cpuid_count_leaf(PSR_CPUID_LEVEL_CAT, 1, &regs); // It is overwritten here.
>     ......
> }
>
> cpuid_count_leaf(PSR_CPUID_LEVEL_CAT, 0, &regs); // So, we have to call cpuid to get regs again.
> if ( regs.b & PSR_RESOURCE_TYPE_L2 )
> {
>     cpuid_count_leaf(PSR_CPUID_LEVEL_CAT, 2, &regs);
>     ......
>
> For the above reason, I defined the ebx local variable to avoid calling
> cpuid again for the next feature.
I see. But then please give the variable a better name, reflecting
the data it holds.
Jan
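
For reference, a minimal sketch of the flow this exchange converges on,
assuming a hypothetical name such as feat_mask for the renamed local (the
thread only asks for "a better name") and eliding the per-feature setup that
the real psr_cpu_init() performs. Only identifiers quoted in the thread are
used; everything else is an assumption.

/* Sketch only: read subleaf 0 of CPUID leaf 0x10 once and cache its
 * EBX (the resource-type bitmap), so the later subleaf reads that
 * clobber 'regs' do not force a second read of subleaf 0. */
static void psr_cpu_init(void)
{
    struct cpuid_leaf regs;
    uint32_t feat_mask;    /* hypothetical name; holds subleaf 0's EBX */

    cpuid_count_leaf(PSR_CPUID_LEVEL_CAT, 0, &regs);
    feat_mask = regs.b;    /* cache before 'regs' is overwritten below */

    if ( feat_mask & PSR_RESOURCE_TYPE_L3 )
        /* Clobbers 'regs', but 'feat_mask' stays valid. */
        cpuid_count_leaf(PSR_CPUID_LEVEL_CAT, 1, &regs);

    if ( feat_mask & PSR_RESOURCE_TYPE_L2 )
        cpuid_count_leaf(PSR_CPUID_LEVEL_CAT, 2, &regs);

    /* The MBA feature this series adds would be probed the same way,
     * from another bit of the cached mask. */
}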
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel