Re: [Xen-devel] [Xen-users] Xen timekeeping best practices
On Tue, 2012-04-17 at 08:41 +0100, Lloyd Dizon wrote:
> On Mon, Apr 16, 2012 at 4:57 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> > On Mon, 2012-04-16 at 15:45 +0100, Lloyd Dizon wrote:
> >> On Mon, Apr 16, 2012 at 11:57 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx>
> >> wrote:
> >> >
> >> > On Fri, 2012-04-13 at 18:22 +0100, Todd Deshane wrote:
> >> > > Adding Xen-devel.
> >> > >
> >> > > What are the current timekeeping best practices now?
> >> >
> >> > pvops kernels behave as if independent_wallclock == 1, so the existing
> >> > advice:
> >> >         set /proc/sys/xen/independent_wallclock to 1 and run ntp on
> >> >         domU.
> >> > still applies.
> >>
> >> This also applies to Xen 4.0 (4.0.0), if I understood correctly?
> >
> > This is entirely a function of the guest kernel version, not of the
> > hypervisor.
> >
> >> If so, and based on previous posts, can we assume that this is what is
> >> recommended:
> >
> > If you s/Xen3/Classic XenoLinux Kernels/ and s/Xen4/Paravirt Ops Linux
> > Kernels/g then I think it is somewhat accurate, at least for the PV
> > line.
> >
> > I must confess I don't really know for the HVM line. I suspect that it
> > is actually:
> >
> > Classic XenoLinux -- no PV time source, independent_wallclock means
> > nothing, run NTP in the guest.
> >
> > Paravirt ops kernels _without_ PV time PVHVM extensions -- no PV time
> > source, independent_wallclock means nothing, run NTP in the guest.
> >
> > Paravirt ops kernels _with_ PV time PVHVM extensions -- PV time source,
> > independent_wallclock always == 1, run NTP.
> >
> > So in short: always run NTP in an HVM guest, regardless of the presence
> > or absence of the PV time extensions.
>
> I do not fully understand the difference yet between Classic
> XenoLinux and Paravirt Ops kernels (will dig up the info later).

Roughly: classic == old out-of-tree Xen patches, static compile-time support
for Xen.
This is the old "2.6.18-xen.hg" tree and includes the "SuSE forward port
kernels", which are the classic patches kept up to date with recent kernels.

pvops == patches in upstream Linux, dynamic support for Xen enabled at kernel
boot time. pvops supported domU from ~2.6.24 and dom0 from 3.0 onwards.

> but here is the summary:
>
> ----------------------------------------------------------------------------------------------------------
>      | Xen3: Classic XenoLinux Kern             | Xen4: Paravirt Ops Linux Kern                           |
> ----------------------------------------------------------------------------------------------------------
>  PV  | independent_wallclock = 0 or 1           | independent_wallclock = 1                               |
> ----------------------------------------------------------------------------------------------------------
>  HVM | -> no PV time source, no ind. wallclock, | -> w/o PV time PVHVM extensions, no ind. wallclock,     |
>      |    run NTP                               |    run NTP                                              |
>      |                                          | -> PV time source, ind. wallclock always 1, run NTP     |
> ----------------------------------------------------------------------------------------------------------

It got a bit mangled in transit, but it looks reasonable. You should remove
"Xen3" and "Xen4" from the titles, since the hypervisor version has no bearing
on any of this. You can just as easily run a pvops kernel on Xen 3.x or a
classic kernel on Xen 4.x (and of course you can mix and match those on a
single host).

The 3->4 major number transition was mostly just a version number bump; it
didn't indicate any major change in the API or anything like that -- any PV
kernel should run on any Xen from 3.0 onwards, right through to 4.2 (and
beyond).

> Should we put this somewhere (wiki) until somebody else proves this is
> inaccurate?

Please do put it on the wiki; I think there was a reference to a page (a FAQ?)
further up the thread, which seems like as good a place as any to add it.

Ian.
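For PV guests, the advice in the thread boils down to one boot-time check.
A minimal sketch (only the /proc path comes from the thread; the variable and
mode names here are illustrative, not any official Xen tooling):

```shell
# Sketch, assuming the classic-kernel tunable path from the thread.
WALLCLOCK=/proc/sys/xen/independent_wallclock

if [ -w "$WALLCLOCK" ]; then
    # Classic XenoLinux kernel: stop syncing the wallclock from dom0 so
    # that ntpd inside the guest can discipline the clock itself.
    echo 1 > "$WALLCLOCK"
    MODE=classic
else
    # pvops kernel (or HVM guest): there is no tunable to flip -- the
    # kernel already behaves as if independent_wallclock == 1.
    MODE=pvops-or-hvm
fi

# Either way, the recommendation is the same: run NTP inside the guest.
echo "guest mode: $MODE -- run ntpd in the domU"
```

Something like this could run once from an init script before ntpd starts;
on a pvops guest it is simply a no-op.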
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel