Re: [Xen-devel] [PATCH] x86/hvm: fix corrupt ACPI PM-Timer during live migration
>>> On 05.02.13 at 07:12, Kouya Shimura <kouya@xxxxxxxxxxxxxx> wrote:
>--- a/xen/arch/x86/hvm/pmtimer.c Tue Jan 22 09:33:10 2013 +0100
>+++ b/xen/arch/x86/hvm/pmtimer.c Tue Feb 05 10:26:13 2013 +0900
>@@ -84,28 +84,31 @@ void hvm_acpi_sleep_button(struct domain
> }
>
> /* Set the correct value in the timer, accounting for time elapsed
>- * since the last time we did that. */
>-static void pmt_update_time(PMTState *s)
>+ * since the last time we did that.
>+ * return true if the counter's MSB has changed. */
>+static bool_t pmt_count_up_and_test_msb(PMTState *s, uint64_t gtime)
> {
>-    uint64_t curr_gtime, tmp;
>     uint32_t tmr_val = s->pm.tmr_val, msb = tmr_val & TMR_VAL_MSB;
>-
>-    ASSERT(spin_is_locked(&s->lock));
>+    uint64_t tmp = ((gtime - s->last_gtime) * s->scale) + s->not_accounted;
>
>-    /* Update the timer */
>-    curr_gtime = hvm_get_guest_time(s->vcpu);
>-    tmp = ((curr_gtime - s->last_gtime) * s->scale) + s->not_accounted;
>     s->not_accounted = (uint32_t)tmp;
>     tmr_val += tmp >> 32;
>     tmr_val &= TMR_VAL_MASK;
>-    s->last_gtime = curr_gtime;
>+    s->last_gtime = gtime;
>
>     /* Update timer value atomically wrt lock-free reads in handle_pmt_io().
>      */
>     *(volatile uint32_t *)&s->pm.tmr_val = tmr_val;
>
>-    /* If the counter's MSB has changed, set the status bit */
>-    if ( (tmr_val & TMR_VAL_MSB) != msb )
>+    return ((tmr_val & TMR_VAL_MSB) != msb);

Single space only please.

>+}
>+
>+static void pmt_update_time(PMTState *s)
>+{
>+    ASSERT(spin_is_locked(&s->lock));
>+
>+    if ( pmt_count_up_and_test_msb(s, hvm_get_guest_time(s->vcpu)) )
>     {
>+        /* MSB has changed, set the status bit */
>         s->pm.pm1a_sts |= TMR_STS;
>         pmt_update_sci(s);
>     }
>@@ -244,21 +247,34 @@ static int handle_pmt_io(
> static int pmtimer_save(struct domain *d, hvm_domain_context_t *h)
> {
>     PMTState *s = &d->arch.hvm_domain.pl_time.vpmt;
>-    uint32_t x, msb = s->pm.tmr_val & TMR_VAL_MSB;
>+    uint64_t guest_time;
>     int rc;
>
>     spin_lock(&s->lock);
>
>-    /* Update the counter to the guest's current time.  We always save
>-     * with the domain paused, so the saved time should be after the
>-     * last_gtime, but just in case, make sure we only go forwards */
>-    x = ((s->vcpu->arch.hvm_vcpu.guest_time - s->last_gtime) * s->scale) >> 32;
>-    if ( x < 1UL<<31 )
>-        s->pm.tmr_val += x;
>-    if ( (s->pm.tmr_val & TMR_VAL_MSB) != msb )
>-        s->pm.pm1a_sts |= TMR_STS;
>-    /* No point in setting the SCI here because we'll already have saved the
>-     * IRQ and *PIC state; we'll fix it up when we restore the domain */
>+    guest_time = s->vcpu->arch.hvm_vcpu.guest_time;
>+    /* Update the counter to the guest's current time only if the
>+     * timer mode is delay_for_missed_ticks */
>+    if ( guest_time != 0 )

How is guest_time being (non-)zero an indication of the mode being
delay_for_missed_ticks? I think you should check for the mode explicitly,
as the value here being zero can just be an effect of a large enough
negative v->arch.hvm_vcpu.stime_offset. And besides that, you don't
explain why the update being done here is unnecessary in the other
modes - all you explain is that the way it's being done in those modes
is wrong.

>+    {
>+        /* We always save with the domain paused, so the saved time
>+         * should be after the last_gtime, but just in case, make sure
>+         * we only go forwards */
>+        if ( (s64)guest_time - (s64)s->last_gtime < 0 )
>+        {
>+            dprintk(XENLOG_WARNING,
>+                    "Last update of PMT is ahead of guest's time by %ld\n",

While probably fine this way for -unstable, please nevertheless use the
correct PRId64 here (both to be formally correct and to ease eventual
backporting).

Jan

>+                    s->last_gtime - guest_time);
>+        }
>+        else
>+        {
>+            if ( pmt_count_up_and_test_msb(s, guest_time) )
>+                s->pm.pm1a_sts |= TMR_STS;
>+            /* No point in setting the SCI here because we'll already
>+             * have saved the IRQ and *PIC state;
>+             * we'll fix it up when we restore the domain */
>+        }
>+    }
>
>     rc = hvm_save_entry(PMTIMER, 0, h, &s->pm);