RE: [Xen-devel] rdtscP and xen (and maybe the app-tsc answer I've been looking for)
Hi Jan --
Thanks for the feedback!
> >>> Dan Magenheimer <dan.magenheimer@xxxxxxxxxx> 18.09.09 22:27 >>>
> >Guest app must have some capability of getting 64-bit
> >pvclock parameters directly from Xen without OS changes,
> >e.g. emulated userland wrmsr, userland hypercall,
> >or userland mapped shared page. (This will be done
> >rarely so need not be fast! But it does create
> >a new userland<->Xen ABI that must be kept compatible.)
> Are you sure this will indeed be infrequent enough? On my supposedly
> constant-TSC AMD box, I see Xen quite frequently apply small error
> correction factors to keep TSC from running ahead of HPET/PMTIMER.
I'd like to hear from Keir on this, but I'd
guess that this is either a bug or a remnant
of (or inaccuracy in) an old algorithm.
If you could provide more information, I'd
like to see whether I can reproduce it on my
Intel box.
> >I think this will work even for a NUMA machine
> >provided Xen always schedules all the vcpus
> >for one guest on pcpus in the same NUMA node,
> >and increments the version number when
> >the guest is rescheduled from one NUMA node to
> >another (assuming TSC on each node is reliable).
> I think this is an improper assumption: Any guest with more
> vCPU-s than
> there are pCPU-s on a single node will likely benefit from
> being run on
> two (or more) nodes (compared to its vCPU-s competing amongst
> themselves for pCPU-s on a single node).
Any guest that has some vcpus running on pcpus
in one "TSC domain" and other vcpus running on
pcpus in another "TSC domain" would have to be
handled the same as a guest running on a machine
where TSC is NOT constant. This does raise a
challenge for multi-socket machines: Xen has to
be able to determine and record which pcpus fall
within a TSC domain boundary, which may or may
not coincide with a NUMA boundary.