
Re: [Xen-devel] [BUGFIX][PATCH 3/4] hvm_save_one: return correct data.



>>> On 15.12.13 at 17:51, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
> On 15/12/2013 00:29, Don Slutz wrote:
>>
>> I think I have corrected all the coding errors (please check again)
>> and made all the requested changes.  I did add the Reviewed-by tag
>> (not sure if I should, since this changes a large part of the patch,
>> but the changes are all ones Jan asked for).
>>
>> I have unit tested it and it appears to work the same as the previous
>> version (as expected).
>>
>> Here is the new version, also attached.
>>
>> From e0e8f5246ba492b153884cea93bfe753f1b0782e Mon Sep 17 00:00:00 2001
>> From: Don Slutz <dslutz@xxxxxxxxxxx>
>> Date: Tue, 12 Nov 2013 08:22:53 -0500
>> Subject: [PATCH v2 3/4] hvm_save_one: return correct data.
>>
>> It is possible that hvm_sr_handlers[typecode].save does not use all
>> the provided room.  In that case, using:
>>
>>    instance * hvm_sr_handlers[typecode].size
>>
>> does not select the correct instance.  Add code to search for the
>> correct instance.
>>
>> Signed-off-by: Don Slutz <dslutz@xxxxxxxxxxx>
>> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> but this fares no better at selecting the correct subset in the case
> that less data than hvm_sr_handlers[typecode].size is written by
> hvm_sr_handlers[typecode].save.

Oh, yes, indeed.
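
To make the problem concrete: the search the patch adds walks the
whole-domain buffer in fixed steps, roughly along these lines (an
illustrative sketch, not the actual hunk - "ctxt", "typecode", and
"instance" stand in for the locals in hvm_save_one(), and the record
header is the public struct hvm_save_descriptor):

    /* The buffer holds back-to-back records, each a
     * struct hvm_save_descriptor header followed by desc->length
     * payload bytes - possibly fewer than the handler's .size. */
    for ( off = 0; off < ctxt.cur;
          off += hvm_sr_handlers[typecode].size )
    {
        struct hvm_save_descriptor *desc =
            (struct hvm_save_descriptor *)(ctxt.data + off);

        /* Once any earlier record is shorter than .size, off no
         * longer lands on a descriptor boundary, so this compares
         * against arbitrary payload bytes. */
        if ( desc->instance == instance )
            break;
    }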

> It always increments by 'size' bytes, and will only copy the data back
> if the bytes under desc->instance happen to match the instance we are
> looking for.
> 
> The only solution I can see is to refactor the per-vcpu records'
> save functions to take an instance ID and save only that specific
> instance.

I don't see why you shouldn't be able to look at the descriptor
instead - that one does have the correct size, doesn't it?
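
Something like the following untested sketch, i.e. stepping by the
length each descriptor records rather than by the handler's (maximum)
per-instance size ("ctxt" and "buf" again stand in for hvm_save_one()'s
locals and copy-out destination):

    uint32_t off = 0;

    while ( off + sizeof(struct hvm_save_descriptor) <= ctxt.cur )
    {
        struct hvm_save_descriptor *desc =
            (struct hvm_save_descriptor *)(ctxt.data + off);

        if ( desc->typecode == typecode && desc->instance == instance )
        {
            /* Found the requested instance; copy just this record's
             * payload, whose true size is desc->length. */
            memcpy(buf, desc + 1, desc->length);
            break;
        }
        /* Advance by the record's real size, so short records can't
         * throw the walk off. */
        off += sizeof(*desc) + desc->length;
    }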

Jan

