Re: (progress on hpet accuracy) and Re: [Xen-devel] [PATCH] Add a timer mode that disables pending missed ticks
Hi Keir -

Keir Fraser wrote:

> If this is an integration of hpet into the vpt.c infrastructure then
> that would be very welcome.

So far it is not, however I may head in that direction. Last night's
test had an error of .065% on one guest, so I still have some work to do.

-Dave

> -- Keir
>
> On 25/2/08 20:01, "Dave Winchell" <dwinchell@xxxxxxxxxxxxxxx> wrote:
>
>> Hi Dan,
>>
>> I've added a few comments below ...
>>
>> Lately I've been busy getting the hpet emulation to be accurate. Right
>> now I have the hpet-based clock error down to under .02% with a
>> limited amount of testing. (The error was 1% or more before my
>> changes.) I hope to submit a patch soon so others can try this out. I
>> need to do more testing and reduce my changes to the minimal set
>> required.
>>
>> One thing I really like about the hpet is that a test designed to see
>> if the clock will go backwards passes on the hpet, where it fails on
>> the pit/tsc clock. I tried for quite a while to keep pit/tsc from
>> going backwards, but was unsuccessful. It ended up being easier to fix
>> the hpet emulation for accuracy.
>>
>> The clock-going-backwards test is simply a call to gettimeofday() in a
>> loop, with no delay between calls except to check for
>> new_time < prev_time. I run this from a driver calling
>> do_gettimeofday() to make it more stressful. Then I run an instance of
>> this on all (8) vcpus of each of 2 guests on an 8-processor host, for
>> a total of 16 vcpus running the test. The test does a yield now and
>> then so you can actually log into the system, etc.
>>
>> Regards,
>> Dave
>>
>> Dan Magenheimer wrote:
>>
>>> Hi Dave --
>>>
>>> I've looked into RHEL5-64 a bit more and would appreciate any
>>> thoughts you might have:
>>>
>>>> So, some open questions:
>>>>
>>>> [2] In RHEL5, I *think* it is the WALL source that we care about?
>>>>
>>>> I'll have to check on this too.
>>>
>>> On second look, I may be wrong. The GTOD clock seems to be the one
>>> associated with vxtime.mode, and vxtime.mode is used in
>>> linux-2.6.18.8/arch/x86_64/kernel/time.c:main_timer_handler() to
>>> determine whether a tick is lost or not. Is this the code that we
>>> want timer_mode=2 to influence?
>>
>> Yes, it is.
>>
>>>> [1] Is notsc necessary for proper ticks for RHEL4-32/RHEL5-64?
>>>> (I *think* not, as it has never come up in any email.)
>>>>
>>>> I have not investigated this yet.
>>>
>>> Well, for RHEL5-64, looking at
>>> linux-2.6.18.8/arch/x86_64/kernel/time.c, it appears that whether
>>> notsc is required or not depends in part on the underlying virtual
>>> AND physical system. notsc definitely is involved in the selection of
>>> GTOD time, but notsc can get set not only by a kernel command-line
>>> parameter but also by the result of unsynchronized_tsc():
>>
>> According to the comment in time.c, unsynchronized_tsc() is guessing
>> whether the tsc is synchronized across processors. On most physical
>> platforms it is, and on those platforms the virtualization layer will
>> present the same error as the hardware exhibits. Right now we don't
>> have code in xen to make an unsynchronized host look synchronized to
>> the guest, though that wouldn't be very hard to do as long as the
>> error between physical processors remained constant.
>>
>>>     if (apic_is_clustered_box)
>>>         notsc = 1
>>>     if (box_is_Intel) {
>>>         /* C3 delay is worst case HW latency to enter/exit C3 state */
>>>         if (C3 delay < 1000)
>>>             notsc = 1
>>>     } else {            /* e.g. AMD */
>>>         if (num_present_cpus() > 1)
>>>             notsc = 1
>>>     }
>>>
>>> I'm not sure what constitutes a clustered HVM guest or how that C3
>>> state latency is determined under Xen, but it's clear that the
>>> clocksource can be influenced not only by what clock hardware is
>>> present and command-line parameters, but also by the physical CPU and
>>> the number of guest vcpus.
>>
>> I haven't looked into
>>
>>> Yuck!
>>>
>>> Thanks,
>>> Dan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
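[Editor's note] A minimal userspace sketch of the clock-going-backwards
test Dave describes above, assuming only POSIX gettimeofday(). The
in-guest driver variant he mentions would call do_gettimeofday() from
kernel context instead, but the loop is the same idea; the program name
and iteration count are made up for the example.

    /* backwards.c - spin on gettimeofday() and report any sample that is
     * earlier than the previous one.  Build: cc -O2 -o backwards backwards.c
     */
    #include <stdio.h>
    #include <sched.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval prev, now;
        unsigned long iter = 0;

        gettimeofday(&prev, NULL);
        for (;;) {
            gettimeofday(&now, NULL);
            if (now.tv_sec < prev.tv_sec ||
                (now.tv_sec == prev.tv_sec && now.tv_usec < prev.tv_usec))
                printf("time went backwards: %ld.%06ld -> %ld.%06ld\n",
                       (long)prev.tv_sec, (long)prev.tv_usec,
                       (long)now.tv_sec, (long)now.tv_usec);
            prev = now;
            if ((++iter & 0xfffff) == 0)
                sched_yield();   /* yield now and then, as in the test above */
        }
        return 0;
    }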
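[Editor's note] For the main_timer_handler() discussion above, here is a
schematic, hypothetical sketch of the lost-tick accounting idea: on each
timer interrupt the guest reads its selected time source and credits any
ticks the interrupts missed, which is presumably what a "no pending
missed ticks" timer mode interacts with. None of the names below come
from time.c; they are invented for the illustration, and the real
RHEL5-64 code is more involved.

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define TICK_NS 1000000ULL          /* 1 ms tick period (HZ=1000) */

    static uint64_t last_tick_ns;

    /* Stand-in for the guest's GTOD time source (hpet/pit/tsc in time.c). */
    static uint64_t read_time_source_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
    }

    /* Called on each (virtual) timer interrupt; returns ticks to account. */
    static unsigned int timer_interrupt_ticks(void)
    {
        uint64_t delta = read_time_source_ns() - last_tick_ns;
        unsigned int ticks = (unsigned int)(delta / TICK_NS);

        /* ticks > 1 means interrupts were late or lost: the guest credits
         * the missed ticks itself from the time source.  If the hypervisor
         * also queues and replays those missed ticks, the same interval is
         * counted twice and the guest clock runs fast - presumably what a
         * "no pending missed ticks" timer mode is meant to avoid. */
        last_tick_ns += (uint64_t)ticks * TICK_NS;
        return ticks;
    }

    int main(void)
    {
        last_tick_ns = read_time_source_ns();
        /* Simulate a late timer interrupt: sleep ~5 ticks, then account. */
        struct timespec nap = { 0, 5 * (long)TICK_NS };
        nanosleep(&nap, NULL);
        printf("ticks accounted: %u\n", timer_interrupt_ticks());
        return 0;
    }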