On 2014/1/16 20:17, Stefano Stabellini wrote:
>> For a PV-on-HVM guest OS, the "shared_info->vcpu_info->vcpu_time_info"
>> is already visible. Does the guest OS still need to take any action to
>> ask the hypervisor to update this piece of memory periodically?
> I don't think you need to ask the hypervisor to update vcpu_time_info
> periodically, what gave you that idea?
Hi Stefano,
Now I see it's the hypervisor that updates vcpu_time_info; the guest
only has to map the shared info page once (sketch below).
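(The mapping step, as I understand it, is a single XENMEM_add_to_physmap
call. A minimal sketch modeled on what Linux's xen_hvm_init_shared_info
does; "pfn" here is a placeholder for a guest frame the OS reserves:)
==================
/* Sketch: ask Xen to place the shared info page at a guest pfn. */
struct xen_add_to_physmap xatp = {
        .domid = DOMID_SELF,
        .space = XENMAPSPACE_shared_info,
        .idx   = 0,
        .gpfn  = pfn,           /* placeholder: frame reserved by the OS */
};
if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp) != 0)
        panic("cannot map shared_info page");
/* From here on, Xen keeps shared_info->vcpu_info[cpu].time updated. */
==================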
But another thing still confuses me: an HVM guest has a time drift
issue because the TSCs on different vCPUs can be out of sync,
especially after domain suspend/resume. How does pvclock actually fix
this? Let's look at how the FreeBSD port calculates the system time:
==================
static uint64_t
get_nsec_offset(struct vcpu_time_info *tinfo)
{

        return (scale_delta(rdtsc() - tinfo->tsc_timestamp,
            tinfo->tsc_to_system_mul, tinfo->tsc_shift));
}

/**
 * \brief Get the current time, in nanoseconds, since the hypervisor booted.
 *
 * \note This function returns the current CPU's idea of this value, unless
 *       it happens to be less than another CPU's previously determined value.
 */
static uint64_t
xen_fetch_vcpu_time(void)
{
        struct vcpu_time_info dst;
        struct vcpu_time_info *src;
        uint32_t pre_version;
        uint64_t now;
        volatile uint64_t last;
        struct vcpu_info *vcpu = DPCPU_GET(vcpu_info);

        src = &vcpu->time;

        critical_enter();
        do {
                /* Snapshot the record; retry if the hypervisor
                 * updated it while we were computing 'now'. */
                pre_version = xen_fetch_vcpu_tinfo(&dst, src);
                barrier();
                now = dst.system_time + get_nsec_offset(&dst);
                barrier();
        } while (pre_version != src->version);

        /*
         * Enforce a monotonically increasing clock time across all
         * VCPUs.  If our time is too old, use the last time and return.
         * Otherwise, try to update the last time.
         */
        do {
                last = last_time;
                if (last > now) {
                        now = last;
                        break;
                }
        } while (!atomic_cmpset_64(&last_time, last, now));
        critical_exit();

        return (now);
}
==================================
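For completeness, since the snippet doesn't show them: per the pvclock
ABI, scale_delta() turns a TSC delta into nanoseconds via a 32.32
fixed-point multiplier, and xen_fetch_vcpu_tinfo() takes a consistent
snapshot under the version protocol. Rough sketches of both (not the
exact FreeBSD source):
==================
/* Sketch: ((delta << shift) * mul_frac) >> 32, keeping the upper
 * 64 bits of the 96-bit product. */
static inline uint64_t
scale_delta(uint64_t delta, uint32_t mul_frac, int shift)
{

        if (shift < 0)
                delta >>= -shift;
        else
                delta <<= shift;
        return (((unsigned __int128)delta * mul_frac) >> 32);
}

/* Sketch: seqlock-style read; retry while the hypervisor has the
 * record mid-update (odd version) or changed it under us. */
static uint32_t
xen_fetch_vcpu_tinfo(struct vcpu_time_info *dst, struct vcpu_time_info *src)
{
        uint32_t version;

        do {
                version = src->version;
                barrier();      /* compiler barrier, as above */
                *dst = *src;
                barrier();
        } while ((version & 1) != 0 || version != src->version);
        return (version);
}
==================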
I guess the Linux guest does the same thing: rdtsc() fetches the
current timestamp from the currently running vCPU, so the TSC
out-of-sync issue is still there.
It seems to me that pvclock ultimately fixes the time drift issue only
because of the monotonicity workaround enforced above, right?
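For what it's worth, IIRC Linux's pvclock_clocksource_read() applies
the same clamp, an atomic cmpxchg against a global last value. The
pattern in isolation (a portable C11 sketch; the names are mine):
==================
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t last_time;

/* Sketch: never let a reading go backwards relative to what any
 * vCPU has already returned. */
static uint64_t
clamp_monotonic(uint64_t now)
{
        uint64_t last = atomic_load(&last_time);

        do {
                if (last >= now)
                        return (last);  /* our TSC view is behind; reuse */
                /* On failure, 'last' is reloaded with the current value. */
        } while (!atomic_compare_exchange_weak(&last_time, &last, now));
        return (now);
}
==================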
Thanks,
Michael