Re: [Xen-devel] [BUGFIX][PATCH 3/4] hvm_save_one: return correct data.
On 12/15/13 11:51, Andrew Cooper wrote:
On 15/12/2013 00:29, Don Slutz wrote:
I think I have corrected all the coding errors (please check again), and made all the requested changes. I did add the Reviewed-by (not sure if I should, since a large part of the patch has changed, but the changes are all what Jan asked for).

I have unit tested it and it appears to work the same as the previous version (as expected).

Here is the new version, also attached.
From e0e8f5246ba492b153884cea93bfe753f1b0782e Mon Sep 17 00:00:00 2001
From: Don Slutz <dslutz@xxxxxxxxxxx>
Date: Tue, 12 Nov 2013 08:22:53 -0500
Subject: [PATCH v2 3/4] hvm_save_one: return correct data.
It is possible that hvm_sr_handlers[typecode].save does not use all
the provided room.  In that case, using:

    instance * hvm_sr_handlers[typecode].size

does not select the correct instance.  Add code to search for the
correct instance.
Signed-off-by: Don Slutz <dslutz@xxxxxxxxxxx>
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
but this fares no better at selecting the correct subset in the
case that less data than hvm_sr_handlers[typecode].size is written
by hvm_sr_handlers[typecode].save.
True, but the inverse is the case here; .save writes 'n' 'size'
blocks.  From the loop above:

    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
        for_each_vcpu(d, v)
            sz += hvm_sr_handlers[typecode].size;
    else
        sz = hvm_sr_handlers[typecode].size;

so sz is in multiples of 'size'.  Normally sz == ctxt.cur.  With
some offline vcpus, .save writes fewer 'size' blocks.
It always increments by 'size' bytes, and will only copy the data
back if the bytes under desc->instance happen to match the instance
we are looking for.
The only time it does not find one is for an offline vcpu.  Try out
the unit test code in patch #1 on an unchanged Xen; it should not
display anything.  Then offline a vcpu in a domU (echo 0 >
/sys/devices/system/cpu/cpu1/online).  With 3 vcpus, it will
report an error.
-Don Slutz
The only solution I can see is that for the per-vcpu records, the
save functions get refactored to take an instance ID, and only save
their specific instance.

I shall have a go at this and see how invasive it is.
~Andrew
---
xen/common/hvm/save.c | 28 +++++++++++++++++++++-------
1 file changed, 21 insertions(+), 7 deletions(-)
diff --git a/xen/common/hvm/save.c b/xen/common/hvm/save.c
index de76ada..6aaea6f 100644
--- a/xen/common/hvm/save.c
+++ b/xen/common/hvm/save.c
@@ -112,13 +112,27 @@ int hvm_save_one(struct domain *d, uint16_t typecode, uint16_t instance,
                   d->domain_id, typecode);
         rv = -EFAULT;
     }
-    else if ( copy_to_guest(handle,
-                            ctxt.data
-                            + (instance * hvm_sr_handlers[typecode].size)
-                            + sizeof (struct hvm_save_descriptor),
-                            hvm_sr_handlers[typecode].size
-                            - sizeof (struct hvm_save_descriptor)) )
-        rv = -EFAULT;
+    else
+    {
+        uint32_t off;
+
+        rv = -EBADSLT;
+        for ( off = 0; off < ctxt.cur; off += hvm_sr_handlers[typecode].size )
+        {
+            const struct hvm_save_descriptor *desc = (void *)&ctxt.data[off];
+
+            if ( instance == desc->instance )
+            {
+                rv = 0;
+                if ( copy_to_guest(handle,
+                                   ctxt.data + off + sizeof(*desc),
+                                   hvm_sr_handlers[typecode].size
+                                   - sizeof(*desc)) )
+                    rv = -EFAULT;
+                break;
+            }
+        }
+    }
     xfree(ctxt.data);
     return rv;
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel