
Re: [Xen-devel] [PATCH RFC v2 04/12] xen/mem_event: Abstract architecture specific sanity checks



>>> On 28.08.14 at 10:40, <tamas.lengyel@xxxxxxxxxxxx> wrote:
> On Thu, Aug 28, 2014 at 8:38 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> 
>> >>> On 27.08.14 at 23:54, <tamas.lengyel@xxxxxxxxxxxx> wrote:
>> >>  On Wed, Aug 27, 2014 at 5:19 PM, Jan Beulich <JBeulich@xxxxxxxx>
>> wrote:
>> >>
>> >>> >>> On 27.08.14 at 16:06, <tklengyel@xxxxxxxxxxxxx> wrote:
>> >>> > --- a/xen/common/mem_event.c
>> >>> > +++ b/xen/common/mem_event.c
>> >>> > @@ -424,6 +424,19 @@ int __mem_event_claim_slot(struct domain *d,
>> >>> struct mem_event_domain *med,
>> >>> >          return mem_event_grab_slot(med, (current->domain != d));
>> >>> >  }
>> >>> >
>> >>> > +static inline bool_t mem_event_sanity_check(struct domain *d)
>> >>> > +{
>> >>> > +    /* Only HAP is supported */
>> >>> > +    if ( !hap_enabled(d) )
>> >>> > +        return 0;
>> >>> > +
>> >>> > +    /* Currently only EPT is supported */
>> >>> > +    if ( !cpu_has_vmx )
>> >>> > +        return 0;
>> >>> > +
>> >>> > +    return 1;
>> >>> > +}
>> >>>
>> >>> So what does it buy us to have this in a separate function, but
>> >>> still in the same common file?
>> >>>
>> >>
>> >> This patch really just lays the groundwork for ARM, where these checks
>> >> are not required and the function will simply return 1.
>> >>
>> >
>> > In the next series I'll actually relocate this function into the
>> > architecture-specific p2m.h and rename it p2m_mem_event_sanity_check.
>> > Same for the mem_access sanity check function.
>>
>> Sounds like suboptimal patch splitting then...
> 
> Suboptimal in what sense?

Putting the function in one place first, and then immediately moving
it elsewhere. Why not put it in the final place right away?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

