
Re: [Xen-devel] [RFC][PATCH 02/13] introduce XENMEM_reserved_device_memory_map



>>> On 16.04.15 at 16:59, <tim@xxxxxxx> wrote:
> At 17:21 +0800 on 10 Apr (1428686513), Tiejun Chen wrote:
>> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
>> index 2b5206b..36e5f54 100644
>> --- a/xen/include/public/memory.h
>> +++ b/xen/include/public/memory.h
>> @@ -574,7 +574,37 @@ struct xen_vnuma_topology_info {
>>  typedef struct xen_vnuma_topology_info xen_vnuma_topology_info_t;
>>  DEFINE_XEN_GUEST_HANDLE(xen_vnuma_topology_info_t);
>>  
>> -/* Next available subop number is 27 */
>> +/*
>> + * For legacy reasons, some devices must be configured with special memory
>> + * regions to function correctly.  The guest would take these regions
>> + * according to different user policies.
>> + */
> 
> I don't understand what this means.  Can you try to write a comment
> that would tell an OS developer:
>  - what the reserved device memory map actually means; and
>  - what this hypercall does.

For one, this is meant to be a tools-only interface, so the OS
developer shouldn't need to care much. And I don't think we should
be explaining the RMRR concept here. Which would leave adding a
sentence saying "This hypercall allows retrieving ...".
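
To illustrate, a comment along the lines Jan suggests might look like
the following in the public header (hypothetical wording, not taken
from the thread):

```c
/*
 * XENMEM_reserved_device_memory_map
 *
 * This hypercall allows the toolstack to retrieve the list of reserved
 * device memory regions (e.g. ranges a passed-through device requires),
 * so that guest memory layout decisions can avoid conflicting with them.
 */
```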

>> @@ -121,6 +121,8 @@ void iommu_dt_domain_destroy(struct domain *d);
>>  
>>  struct page_info;
>>  
>> +typedef int iommu_grdm_t(xen_pfn_t start, xen_ulong_t nr, u32 id, void *ctxt);
> 
> This needs a comment describing what the return values are.

Will do.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel