[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH v5 05/15] x86/altp2m: basic data structures and support routines.



>From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>Sent: Tuesday, July 14, 2015 6:13 AM
>
>>>> On 14.07.15 at 02:14, <edmund.h.white@xxxxxxxxx> wrote:
>> +void
>> +altp2m_vcpu_initialise(struct vcpu *v)
>> +{
>> +    if ( v != current )
>> +        vcpu_pause(v);
>> +
>> +    altp2m_vcpu_reset(v);
>> +    vcpu_altp2m(v).p2midx = 0;
>> +    atomic_inc(&p2m_get_altp2m(v)->active_vcpus);
>> +
>> +    altp2m_vcpu_update_eptp(v);
>> +
>> +    if ( v != current )
>> +        vcpu_unpause(v);
>> +}
>> +
>> +void
>> +altp2m_vcpu_destroy(struct vcpu *v)
>> +{
>> +    struct p2m_domain *p2m;
>> +
>> +    if ( v != current )
>> +        vcpu_pause(v);
>> +
>> +    if ( (p2m = p2m_get_altp2m(v)) )
>> +        atomic_dec(&p2m->active_vcpus);
>> +
>> +    altp2m_vcpu_reset(v);
>> +
>> +    altp2m_vcpu_update_eptp(v);
>> +    altp2m_vcpu_update_vmfunc_ve(v);
>> +
>> +    if ( v != current )
>> +        vcpu_unpause(v);
>> +}
>
>There not being any caller of altp2m_vcpu_initialise() I can't judge about its
>pausing requirements, but for the destroy case I can't see what the pausing is
>good for. Considering its sole user it's also not really clear why the two update
>operations need to be done while destroying.
>

initialise doesn't get called until the HVMOP implementation arrives in a later
patch. Both initialise and destroy operate on a running domain and modify VMCS
state, so the vcpu has to be paused.

>> @@ -6498,6 +6500,25 @@ enum hvm_intblk nhvm_interrupt_blocked(struct vcpu *v)
>>      return hvm_funcs.nhvm_intr_blocked(v);
>> }
>>
>> +void altp2m_vcpu_update_eptp(struct vcpu *v)
>> +{
>> +    if ( hvm_funcs.altp2m_vcpu_update_eptp )
>> +        hvm_funcs.altp2m_vcpu_update_eptp(v);
>> +}
>> +
>> +void altp2m_vcpu_update_vmfunc_ve(struct vcpu *v)
>> +{
>> +    if ( hvm_funcs.altp2m_vcpu_update_vmfunc_ve )
>> +        hvm_funcs.altp2m_vcpu_update_vmfunc_ve(v);
>> +}
>> +
>> +bool_t altp2m_vcpu_emulate_ve(struct vcpu *v)
>> +{
>> +    if ( hvm_funcs.altp2m_vcpu_emulate_ve )
>> +        return hvm_funcs.altp2m_vcpu_emulate_ve(v);
>> +    return 0;
>> +}
>
>The patch context here suggests that you're using pretty outdated a tree -
>nhvm_interrupt_blocked() went away a week ago. In line with the commit
>doing so, all of the above should become inline functions.
>

We are rebasing on staging and testing now.

>> +uint16_t p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
>> +{
>> +    struct p2m_domain *p2m;
>> +    struct ept_data *ept;
>> +    uint16_t i;
>> +
>> +    altp2m_list_lock(d);
>> +
>> +    for ( i = 0; i < MAX_ALTP2M; i++ )
>> +    {
>> +        if ( d->arch.altp2m_eptp[i] == INVALID_MFN )
>> +            continue;
>> +
>> +        p2m = d->arch.altp2m_p2m[i];
>> +        ept = &p2m->ept;
>> +
>> +        if ( eptp == ept_get_eptp(ept) )
>> +            goto out;
>> +    }
>> +
>> +    i = INVALID_ALTP2M;
>> +
>> +out:
>
>Labels should be indented by at least one space.

Ok

>
>Pending the rename, the function may also live in the wrong file (the use of
>ept_get_eptp() suggests so even if the function itself got renamed).
>

We will look at this one to see whether it's just a rename.

>> +bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, uint16_t idx)
>> +{
>> +    struct domain *d = v->domain;
>> +    bool_t rc = 0;
>> +
>> +    if ( idx > MAX_ALTP2M )
>> +        return rc;
>> +
>> +    altp2m_list_lock(d);
>> +
>> +    if ( d->arch.altp2m_eptp[idx] != INVALID_MFN )
>> +    {
>> +        if ( idx != vcpu_altp2m(v).p2midx )
>> +        {
>> +            atomic_dec(&p2m_get_altp2m(v)->active_vcpus);
>> +            vcpu_altp2m(v).p2midx = idx;
>> +            atomic_inc(&p2m_get_altp2m(v)->active_vcpus);
>
>Are the two results of p2m_get_altp2m(v) here guaranteed to be distinct? If
>they aren't, is it safe to decrement first (potentially dropping the count to
>zero)?
>

Yes on both: the dec/inc pair only runs when idx != vcpu_altp2m(v).p2midx, so
the two p2m_get_altp2m() results are guaranteed distinct, and the decrement
never touches the view being switched to.

>> @@ -722,6 +731,27 @@ void nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
>>      l1_pgentry_t *p, l1_pgentry_t new, unsigned int level);
>>
>>  /*
>> + * Alternate p2m: shadow p2m tables used for alternate memory views
>> + */
>> +
>> +/* get current alternate p2m table */
>> +static inline struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
>> +{
>> +    struct domain *d = v->domain;
>> +    uint16_t index = vcpu_altp2m(v).p2midx;
>> +
>> +    ASSERT(index < MAX_ALTP2M);
>> +
>> +    return (index == INVALID_ALTP2M) ? NULL : d->arch.altp2m_p2m[index];
>> +}
>
>Looking at this again, I'm afraid I'd still prefer index < MAX_ALTP2M in the
>return statement (and the ASSERT() dropped): The ASSERT() does nothing in a
>debug=n build, and hence wouldn't shield us from possibly having to issue an
>XSA if somehow an access outside the array's bounds turned out possible.
>

The ASSERT() was breaking v5 anyway. BUG_ON() (with the right check) is
probably the right thing to do, as we do in the exit handling that checks for
a VMFUNC having changed the index. So we will make that change.

>I've also avoided to repeat any of the un-addressed points that I raised
>against earlier versions.
>

I went back and looked at the comments on the earlier versions of this patch,
and AFAICT we have either addressed (accepted) those points or explained our
reasoning for why they shouldn't result in changes. If I missed something,
please let me know.

Ravi

>Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
