
Re: [Xen-devel] [PATCH v1] x86/hvm: Add MSR old value

On Fri, Oct 13, 2017 at 6:17 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>> On 13.10.17 at 12:36, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>> On 13.10.2017 13:29, Jan Beulich wrote:
>>>> +        __set_bit(index + sizeof(struct monitor_msr_bitmap), bitmap);
>>>
>>> I think you're missing a "* 8" here - adding a byte count from
>>> sizeof() to a bit position doesn't produce a useful value.
>>>
>>> But what's worse - having read to the end of the patch, I don't
>>> see you changing any allocation, yet you clearly need to double
>>> the space now that you need two bits per MSR.
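
To make the byte-vs-bit point concrete: __set_bit() takes a bit
position, so stepping over a whole second copy of the structure
needs sizeof() * 8, not plain sizeof(). A self-contained sketch
(the struct layout and __set_bit() below are illustrative
stand-ins, not the actual Xen definitions):

    #include <limits.h>

    /* Stand-in layout; the real struct monitor_msr_bitmap differs. */
    struct monitor_msr_bitmap {
        unsigned long bits[8192 / (sizeof(unsigned long) * CHAR_BIT)];
    };

    /* Stand-in for Xen's __set_bit(): 'nr' is a bit position into
     * an array of unsigned long. */
    static void __set_bit(unsigned int nr, unsigned long *addr)
    {
        const unsigned int bpl = sizeof(unsigned long) * CHAR_BIT;

        addr[nr / bpl] |= 1UL << (nr % bpl);
    }

    /* With two copies of the struct back to back, the matching bit
     * in the second copy is one whole struct further on - measured
     * in bits, hence the "* 8". */
    static void set_msr_bits(unsigned long *bitmap, unsigned int index)
    {
        __set_bit(index, bitmap);
        __set_bit(index + sizeof(struct monitor_msr_bitmap) * 8, bitmap);
    }
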
>>
>> We did this:
>>
>> diff --git a/xen/arch/x86/monitor.c b/xen/arch/x86/monitor.c
>> index e59f1f5..a3046c6 100644
>> --- a/xen/arch/x86/monitor.c
>> +++ b/xen/arch/x86/monitor.c
>> @@ -25,7 +25,7 @@
>>   int arch_monitor_init_domain(struct domain *d)
>>   {
>>       if ( !d->arch.monitor.msr_bitmap )
>> -        d->arch.monitor.msr_bitmap = xzalloc(struct monitor_msr_bitmap);
>> +        d->arch.monitor.msr_bitmap = xzalloc_array(struct monitor_msr_bitmap, 2);
>>
>>       if ( !d->arch.monitor.msr_bitmap )
>>           return -ENOMEM;
>> @@ -67,7 +67,7 @@ static unsigned long *monitor_bitmap_for_msr(const struct domain *d, u32 *msr)
>>       }
>>   }
>>
>> I.e., we are now allocating an array of two struct
>> monitor_msr_bitmap instances with xzalloc_array().
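
A rough userspace equivalent of that hunk, for illustration only -
calloc() standing in for xzalloc_array() (both zero the memory),
and the struct body a placeholder rather than the real Xen layout:

    #include <limits.h>
    #include <stdlib.h>

    struct monitor_msr_bitmap {
        unsigned long bits[8192 / (sizeof(unsigned long) * CHAR_BIT)];
    };

    /* Hypothetical helper mirroring arch_monitor_init_domain() above:
     * allocate two zeroed, contiguous copies of the bitmap struct. */
    static int init_msr_bitmap(struct monitor_msr_bitmap **bitmap)
    {
        if ( !*bitmap )
            *bitmap = calloc(2, sizeof(**bitmap));

        return *bitmap ? 0 : -1; /* the real code returns -ENOMEM */
    }
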
>
> Oh, I'm not sure how I could overlook this, considering that I
> specifically looked up the allocation point and searched through
> the patch for a corresponding change. I'm sorry for the noise in
> this regard. I do think, though, that the chosen model is a little
> odd and fragile, but that's something you and Tamas, as the
> maintainers of the code, will have to judge.
>

It looks fine to me.

Thanks,
Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
