Re: [Xen-devel] Debian stable kernel got timer issue when running as PV guest
>>> On 13.04.12 at 12:37, David Vrabel <dvrabel@xxxxxxxxxx> wrote:
> On 13/04/12 08:56, Jan Beulich wrote:
>>>>> On 12.04.12 at 21:22, Sheng Yang <sheng@xxxxxxxxxx> wrote:
>>> I've compiled a kernel, forced sched_clock_stable=0, and that solved
>>> the timestamp jump issue as expected. Luckily, it seems it also solved
>>> the call trace and guest hang issues as well.
>>
>> And this is also how it should be fixed.
>
> Something like this? I've not tested it yet as I need to track down
> some of the problem hardware and get it set up.

Yeah, except that I'm not sure you really need to clear the feature
flags. Just making sure sched_clock_stable never gets set should be
enough; playing with the feature flags always implies that users will
see bigger differences in /proc/cpuinfo between native and Xen
kernels...

Jan

> 8<---------------
> xen: always set the sched clock as unstable
>
> It's not clear to me if the Xen clock source can be used as a stable
> sched clock. Also, even if the guest is started on a system whose
> underlying TSC is stable, it may be migrated to one where it's not. So
> never mark the sched clock as stable.
>
> Signed-off-by: David Vrabel <david.vrabel@xxxxxxxxxx>
> ---
>  arch/x86/xen/time.c |    3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
> index 0296a95..b22cd9c 100644
> --- a/arch/x86/xen/time.c
> +++ b/arch/x86/xen/time.c
> @@ -473,6 +473,9 @@ static void __init xen_time_init(void)
>  	do_settimeofday(&tp);
>  
>  	setup_force_cpu_cap(X86_FEATURE_TSC);
> +	setup_clear_cpu_cap(X86_FEATURE_CONSTANT_TSC);
> +	setup_clear_cpu_cap(X86_FEATURE_NONSTOP_TSC);
> +	sched_clock_stable = 0;
>  
>  	xen_setup_runstate_info(cpu);
>  	xen_setup_timer(cpu);

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
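For reference, a minimal sketch of the variant Jan suggests: leave the
feature flags alone and only keep sched_clock_stable from ever being set.
The hunk context is copied from David's patch above; like that patch it
is untested, and the comment wording is an editorial addition rather than
text from the thread:

diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -473,6 +473,8 @@ static void __init xen_time_init(void)
 	do_settimeofday(&tp);
 
 	setup_force_cpu_cap(X86_FEATURE_TSC);
+	/* Xen's clock can jump across save/restore or migration;
+	   never let the scheduler treat it as stable. */
+	sched_clock_stable = 0;
 
 	xen_setup_runstate_info(cpu);
 	xen_setup_timer(cpu);

This keeps /proc/cpuinfo identical between native and Xen kernels, since
no CPU feature flags are cleared, while still preventing the scheduler
from ever relying on the clock being stable.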