[Xen-devel] [PATCH v2] xen: fix initialization of wallclock time for PVHVM on migration
Call update_domain_wallclock_time in hvm_latch_shinfo_size even if
the bitness of the guest has already been set. This fixes the
wallclock not being set for PVHVM guests on resume from migration.
Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Cc: Jan Beulich <JBeulich@xxxxxxxx>
Cc: Keir Fraser <keir@xxxxxxx>
Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
---
Since this is a bug fix, I think it is suitable for inclusion in the
4.3 release, and backported to older releases.
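For reference, the wallclock is published to the guest through the
wc_version/wc_sec/wc_nsec fields of the shared info page, which
update_domain_wallclock_time refreshes. Below is a minimal sketch of
the guest-side read, only to illustrate why a stale wallclock survives
migration if the hypervisor never refreshes it again; the field names
come from the public headers, but the helper name and the rmb()
barrier usage here are illustrative and not part of this patch:

/*
 * Illustrative sketch only: how a guest typically samples the
 * wallclock published in the shared info page.  wc_version is made
 * odd while the hypervisor updates wc_sec/wc_nsec, so the reader
 * retries until it observes a stable, even version.  If the
 * hypervisor never calls update_domain_wallclock_time after a
 * migration, this loop keeps returning the pre-migration wallclock.
 */
static void read_wallclock(const struct shared_info *s,
                           uint32_t *sec, uint32_t *nsec)
{
    uint32_t version;

    do {
        version = s->wc_version;
        rmb();              /* read version before the payload */
        *sec  = s->wc_sec;
        *nsec = s->wc_nsec;
        rmb();              /* read payload before re-checking version */
    } while ( (version & 1) || (version != s->wc_version) );
}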
---
xen/arch/x86/hvm/hvm.c | 20 ++++++--------------
1 files changed, 6 insertions(+), 14 deletions(-)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a962ce2..0dcfd81 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3404,21 +3404,13 @@ static void hvm_latch_shinfo_size(struct domain *d)
*/
if ( current->domain == d ) {
new_has_32bit = (hvm_guest_x86_mode(current) != 8);
- if (new_has_32bit != d->arch.has_32bit_shinfo) {
+ if (new_has_32bit != d->arch.has_32bit_shinfo)
d->arch.has_32bit_shinfo = new_has_32bit;
- /*
- * Make sure that the timebase in the shared info
- * structure is correct for its new bit-ness. We should
- * arguably try to convert the other fields as well, but
- * that's much more problematic (e.g. what do you do if
- * you're going from 64 bit to 32 bit and there's an event
- * channel pending which doesn't exist in the 32 bit
- * version?). Just setting the wallclock time seems to be
- * sufficient for everything we do, even if it is a bit of
- * a hack.
- */
- update_domain_wallclock_time(d);
- }
+ /*
+ * Make sure that the timebase in the shared info
+ * structure is correct.
+ */
+ update_domain_wallclock_time(d);
}
}
--
1.7.7.5 (Apple Git-26)
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel