[Xen-devel] Time drift between values returned by TSC and gettimeofday
Hey guys, I have made an interesting (and unexpected) observation about the drift between the (system) time returned by the gettimeofday() system call and the time obtained from the time stamp counter (defined as the TSC reading divided by the CPU frequency). I was wondering if anyone could help me explain this.

Let me first describe my system: I have 3 hardware-based virtual machines (HVMs) running on a Linux 2.6.24-26-generic (tickless) kernel. The timer mode is 1 (the default; virtual time is always wallclock time). I ran an experiment in which I compared the difference between the values returned by rdtsc() and gettimeofday() on the three domains against real time (from a third, independent source). There is no NTP sync on either the control domain or any of the user domains. The scheduler weight and cap values for all domains (domain-0 and the user domains) are the defaults, 256 and 0 respectively.

I observe that the difference between TSC time and system time grows by a different amount on each of the three domains.
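To make the setup concrete, a minimal sketch of the kind of measurement I am describing might look like the C program below. Note that TSC_HZ is a hypothetical placeholder for the CPU frequency; on a real machine it would be measured or read from the kernel, and this is not the exact harness I ran:

/* drift.c - minimal sketch of the TSC vs. gettimeofday comparison.
 * TSC_HZ is a hypothetical fixed value; a real run would calibrate
 * it or read it from the kernel. Build: gcc -O2 drift.c -o drift
 */
#include <stdio.h>
#include <stdint.h>
#include <sys/time.h>
#include <unistd.h>

#define TSC_HZ 2400000000ULL  /* assumed 2.4 GHz; adjust per machine */

/* Read the time stamp counter (x86). */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

/* System time from gettimeofday(), in seconds. */
static double now_gtod(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    uint64_t tsc0 = rdtsc();
    double   sys0 = now_gtod();

    for (;;) {
        sleep(10);
        /* TSC time = TSC delta divided by CPU frequency. */
        double tsc_time = (double)(rdtsc() - tsc0) / TSC_HZ;
        double sys_time = now_gtod() - sys0;
        printf("elapsed=%.3f s  drift=%+.6f s\n",
               sys_time, tsc_time - sys_time);
    }
    return 0;
}

Running this in each domain and comparing the drift columns against the independent time source is essentially the experiment I describe above.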
If I understand the virtualization architecture correctly, read shadows are created for each user domain and are updated by domain-0, and a read of the TSC returns a value from these shadow tables. Can somebody give me a pointer to what I am missing here? Has anyone else observed this behavior?

Also, since I am using timer mode = 1, I expect that system time will be the same on all domains. That means the time difference between TSC time and system time should increase by the same amount on all domains, which is not what I observe.

Thanks!
--pr

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel