
Re: [Xen-devel] [PATCH v2] x86: generic MSRs save/restore



>>> On 13.12.13 at 18:44, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
> On 13/12/2013 14:01, Jan Beulich wrote:
>> +static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
>> +{
>> +    struct vcpu *v;
>> +
>> +    for_each_vcpu ( d, v )
>> +    {
>> +        struct hvm_msr *ctxt;
>> +        unsigned int i;
>> +
>> +        if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
>> +                             HVM_CPU_MSR_SIZE(msr_count_max)) )
>> +            return 1;
>> +        ctxt = (struct hvm_msr *)&h->data[h->cur];
>> +        ctxt->count = 0;
>> +
>> +        if ( hvm_funcs.save_msr )
>> +            hvm_funcs.save_msr(v, ctxt);
>> +
>> +        for ( i = 0; i < ctxt->count; ++i )
>> +            ctxt->msr[i]._rsvd = 0;
>> +
>> +        if ( ctxt->count )
>> +            h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
>> +        else
>> +            h->cur -= sizeof(struct hvm_save_descriptor);
> 
> On the last iteration of the loop, this will leave a stale CPU_MSR_CODE
> header in the area between the end of the hvm record and the maximum
> possible size of the record, which then gets copied into the toolstack-
> provided buffer.
> 
> Luckily, it does appear that xc_domain_save() does deal with this
> correctly and only send the valid subset of the entire record.  I
> presume we don't care about breaking any other toolstacks which might get
> this wrong?

Wouldn't that similarly be a problem for the variable-size XSAVE
record we already have? I.e. I think tool stacks are _required_
to cope with this.
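
To make concrete what "coping with this" means on the consumer side:
the size reported up front (and hence the buffer a caller allocates) is
only an upper bound, while the per-record descriptors and the length
actually returned describe what is valid. A hypothetical consumer-side
sketch, assuming the struct hvm_save_descriptor layout from the public
headers (illustration only, not xc_domain_save()'s actual code):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Layout as in xen/include/public/hvm/save.h (redeclared here so the
 * sketch is self-contained). */
struct hvm_save_descriptor {
    uint16_t typecode;
    uint16_t instance;
    uint32_t length;        /* record length, excluding this descriptor */
};

/* Hypothetical consumer: walk an HVM context blob using only the length
 * actually returned by the hypervisor plus the per-record descriptors.
 * Anything in the buffer beyond valid_len (e.g. a stale descriptor left
 * behind by a rewound record) is simply never looked at. */
static int walk_hvm_records(const uint8_t *buf, size_t valid_len)
{
    size_t off = 0;

    while ( off + sizeof(struct hvm_save_descriptor) <= valid_len )
    {
        struct hvm_save_descriptor desc;

        memcpy(&desc, buf + off, sizeof(desc));
        off += sizeof(desc);

        if ( desc.length > valid_len - off )
            return -1;      /* truncated or corrupt record */

        printf("type %u instance %u length %u\n",
               desc.typecode, desc.instance, desc.length);

        if ( desc.typecode == 0 )   /* HVM_SAVE_CODE(END) */
            break;

        off += desc.length;
    }

    return 0;
}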

>> @@ -1139,9 +1246,22 @@ static int __init __hvm_register_CPU_XSA
>>                          HVM_CPU_XSAVE_SIZE(xfeature_mask) +
>>                              sizeof(struct hvm_save_descriptor),
>>                          HVMSR_PER_VCPU);
>> +
>> +    if ( hvm_funcs.init_msr )
>> +        msr_count_max += hvm_funcs.init_msr();
> 
> Why += as opposed to direct assignment?  Changing this value anywhere
> other than here looks as if it will lead to problems.

The intention is for the variable to get initialized to a non-zero
value as soon as an architectural MSR appears that might always
need saving, and for the order of the increments not to matter
when further conditionals get added here.
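
As a sketch of what that might look like once such an MSR exists (the
baseline name below is purely hypothetical, the hook is the one the
patch adds):

/* Sketch only: each contribution to the per-vCPU MSR count simply
 * accumulates, so using += keeps the individual additions
 * order-independent. */
static unsigned int __read_mostly msr_count_max;

static void __init init_msr_count_max(void)
{
    /* Hypothetical baseline of always-saved architectural MSRs. */
    msr_count_max = NR_ALWAYS_SAVED_ARCH_MSRS;

    /* Vendor-specific MSRs reported by the VMX/SVM hook (as in the patch). */
    if ( hvm_funcs.init_msr )
        msr_count_max += hvm_funcs.init_msr();

    /* Further conditional contributions would be added the same way. */
}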

>> --- a/xen/include/public/arch-x86/hvm/save.h
>> +++ b/xen/include/public/arch-x86/hvm/save.h
>> @@ -592,9 +592,21 @@ struct hvm_tsc_adjust {
>>  
>>  DECLARE_HVM_SAVE_TYPE(TSC_ADJUST, 19, struct hvm_tsc_adjust);
>>  
>> +
>> +struct hvm_msr {
>> +    uint32_t count;
>> +    struct hvm_one_msr {
>> +        uint32_t index;
>> +        uint32_t _rsvd;
>> +        uint64_t val;
>> +    } msr[1 /* variable size */];
> 
> Coverity is going to complain about using this with more than 1 record,
> but I can't think of a better way of doing this without introducing a
> VLA to the public header files.

If that's a concern, I'll switch to __STDC_VERSION__/__GNUC__-
dependent code here, just like we do elsewhere in the public
headers:

struct hvm_msr {
    uint32_t count;
    struct hvm_one_msr {
        uint32_t index;
        uint32_t _rsvd;
        uint64_t val;
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
    } msr[];
#elif defined(__GNUC__)
    } msr[0];
#else
    } msr[1 /* variable size */];
#endif
};
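
Whichever variant ends up being used, the size calculation has to be
written so it doesn't depend on the declared array length (in
particular it must not add sizeof(struct hvm_msr) plus count elements,
which would count the [1] placeholder twice). A sketch of how
HVM_CPU_MSR_SIZE() can be made correct for all three declarations (the
macro's actual definition isn't quoted in this hunk, so take this as an
assumption):

#include <stddef.h>   /* offsetof() */

/* Bytes needed for a struct hvm_msr carrying cnt entries: the header up
 * to the msr[] array plus cnt elements, regardless of whether msr[] is
 * declared as a flexible array member, as msr[0] or as msr[1]. */
#define HVM_CPU_MSR_SIZE(cnt) \
    (offsetof(struct hvm_msr, msr) + (cnt) * sizeof(struct hvm_one_msr))

With that, the save handler reserves HVM_CPU_MSR_SIZE(msr_count_max) up
front and afterwards advances h->cur by only HVM_CPU_MSR_SIZE(ctxt->count)
for the entries actually written.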

Jan

