
Re: [Xen-devel] [PATCH 2/2] x86/HVM: cache attribute pinning adjustments



>>> On 03.03.16 at 12:03, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 03/03/16 10:31, Jan Beulich wrote:
>> - call hvm_get_mem_pinned_cacheattr() for RAM ranges only (requires
>>   some re-ordering in epte_get_entry_emt(), to fully handle all MMIO
>>   aspects first) - it's documented to be intended for RAM only
>> - remove unnecessary indirection for hvm_get_mem_pinned_cacheattr()'s
>>   return of the type
>> - make hvm_set_mem_pinned_cacheattr() return an error on bad domain
>>   kind or obviously bad GFN range
>> - also avoid cache flush on EPT when removing a UC- range
>> - other code structure adjustments without intended functional change
> 
> Reviewing would be far easier if these code structure changes were in a
> different patch.  In particular, half the changes to
> epte_get_entry_emt() appear to just be reordering the direct_mmio check
> in order to drop an ASSERT(), but I am not quite sure, given the
> complexity of the change.

I'll see about splitting them, albeit that's not going to help the
hard-to-read change to epte_get_entry_emt() (in which removal
of the ASSERT() isn't the goal, but a nice side effect).

>> @@ -587,20 +578,21 @@ static void free_pinned_cacheattr_entry(
>>      xfree(container_of(rcu, struct hvm_mem_pinned_cacheattr_range, rcu));
>>  }
>>  
>> -int32_t hvm_set_mem_pinned_cacheattr(
>> -    struct domain *d,
>> -    uint64_t gfn_start,
>> -    uint64_t gfn_end,
>> -    uint32_t  type)
>> +int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
>> +                                 uint64_t gfn_end, uint32_t type)
>>  {
>>      struct hvm_mem_pinned_cacheattr_range *range;
>>      int rc = 1;
>>  
>> -    if ( !is_hvm_domain(d) || gfn_end < gfn_start )
>> -        return 0;
>> +    if ( !is_hvm_domain(d) )
>> +        return -EOPNOTSUPP;
> 
> You introduce an asymmetry between set and get here, both in terms of
> the checks (hvm vs hvm_container), and assert vs plain failure.  Why is
> this?
> 
> I would suggest ASSERT(is_hvm_domain(d)) in both cases.

No, in the "set" case we mustn't assert, or else the caller would
need to change too. And switching to the container variant in "get"
while converting to an assert at the same time is intended as well: no
caller invokes it for a PV domain, and at least epte_get_entry_emt()
may call it for a PVH one (it's just that currently "set" won't allow
such ranges to be introduced for them - since I don't know the
reason for this, I didn't want to change the behavior in this regard).

>> --- a/xen/include/asm-x86/hvm/cacheattr.h
>> +++ b/xen/include/asm-x86/hvm/cacheattr.h
>> @@ -1,29 +1,23 @@
>>  #ifndef __HVM_CACHEATTR_H__
>>  #define __HVM_CACHEATTR_H__
>>  
>> -void hvm_init_cacheattr_region_list(
>> -    struct domain *d);
>> -void hvm_destroy_cacheattr_region_list(
>> -    struct domain *d);
>> +#include <xen/types.h>
>> +
>> +struct domain;
>> +void hvm_init_cacheattr_region_list(struct domain *d);
>> +void hvm_destroy_cacheattr_region_list(struct domain *d);
>>  
>>  /*
>>   * To see guest_fn is in the pinned range or not,
>> - * if yes, return 1, and set type to value in this range
>> - * if no,  return 0, setting type to ~0
>> - * if ambiguous, return -1, setting type to ~0 (possible only for order > 0)
>> + * if yes, return the (non-negative) type
>> + * if no or ambiguous, return a negative error code
>>   */
>> -int hvm_get_mem_pinned_cacheattr(
>> -    struct domain *d,
>> -    uint64_t guest_fn,
>> -    unsigned int order,
>> -    uint32_t *type);
>> +int hvm_get_mem_pinned_cacheattr(struct domain *d, uint64_t guest_fn,
>> +                                 unsigned int order);
> 
> fn, being the usual abbreviation for function, makes this confusing to
> read.  As it is already changing, could we change the parameter name to
> gfn or guest_frame to be more consistent?  (Perhaps even start using
> gfn_t if you are feeling keen).

Well, yes, but that will make the patch yet more difficult to review
(and to be honest, with things like this I don't really feel like
splitting things up into too fine-grained pieces).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel