
Re: [Xen-devel] [PATCH v9 10/10] xen: add new Xen cpuid node for max address width info



>>> On 22.09.17 at 18:27, <jgross@xxxxxxxx> wrote:
> On 22/09/17 16:47, Jan Beulich wrote:
>>>>> On 22.09.17 at 13:41, <jgross@xxxxxxxx> wrote:
>>> --- a/xen/arch/x86/traps.c
>>> +++ b/xen/arch/x86/traps.c
>>> @@ -929,6 +929,13 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
>>>          res->b = v->vcpu_id;
>>>          break;
>>>  
>>> +    case 5: /* PV-specific parameters */
>>> +        if ( is_hvm_domain(d) || subleaf != 0 )
>>> +            break;
>>> +
>>> +        res->a = generic_flsl(get_upper_mfn_bound()) + PAGE_SHIFT;
>>> +        break;
>> 
>> The subleaf check here should be mirrored ...
>> 
>>> --- a/xen/include/public/arch-x86/cpuid.h
>>> +++ b/xen/include/public/arch-x86/cpuid.h
>>> @@ -85,6 +85,15 @@
>>>  #define XEN_HVM_CPUID_IOMMU_MAPPINGS   (1u << 2)
>>>  #define XEN_HVM_CPUID_VCPU_ID_PRESENT  (1u << 3) /* vcpu id is present in EBX */
>>>  
>>> -#define XEN_CPUID_MAX_NUM_LEAVES 4
>>> +/*
>>> + * Leaf 6 (0x40000x05)
>>> + * PV-specific parameters
>>> + * EAX: bits 0-7: max machine address width
>>> + */
>> 
>> ... in the comment here. This is easily doable while committing.
> 
> Up to now there is no example for this: the time leaf isn't documented
> and the HVM leaf from which I copied the subleaf check doesn't have
> anything related to it in the comments.

As (almost) always, omissions in the past shouldn't be a reason to
spread the badness.

> Do you have any special format recommendations?

 * Sub-leaf 0: EAX: bits 0-7: max machine address width
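
Folding that into the new block, the whole comment might then read
(just a sketch of the intended shape, not necessarily the final wording):

/*
 * Leaf 6 (0x40000x05)
 * PV-specific parameters
 * Sub-leaf 0: EAX: bits 0-7: max machine address width
 */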

> And should I add related comments to the HVM leaf section?

Well, if you did so in a separate patch, this would certainly be
nice.
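
As an aside, a guest would consume the new leaf roughly like this
(a minimal sketch only: it assumes the Xen leaves sit at their default
base 0x40000000 and skips the hypervisor signature scan and the
max-leaf check a real guest performs first):

#include <stdint.h>
#include <cpuid.h>   /* GCC/clang __cpuid_count() helper */

static unsigned int xen_max_maddr_width(void)
{
    uint32_t eax, ebx, ecx, edx;

    /* "Leaf 6" in the header's numbering is base + 5; sub-leaf 0. */
    __cpuid_count(0x40000000 + 5, 0, eax, ebx, ecx, edx);

    return eax & 0xff;   /* EAX bits 0-7: max machine address width */
}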

Jan

