Re: [Xen-devel] [PATCH 0/2] Time-related fixes for migration
- To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
- From: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
- Date: Mon, 31 Mar 2014 11:30:25 -0400
- Cc: "ian.campbell@xxxxxxxxxx" <ian.campbell@xxxxxxxxxx>, "stefano.stabellini@xxxxxxxxxxxxx" <stefano.stabellini@xxxxxxxxxxxxx>, "Nakajima, Jun" <jun.nakajima@xxxxxxxxx>, "Dong, Eddie" <eddie.dong@xxxxxxxxx>, "ian.jackson@xxxxxxxxxxxxx" <ian.jackson@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>, "jbeulich@xxxxxxxx" <jbeulich@xxxxxxxx>, "suravee.suthikulpanit@xxxxxxx" <suravee.suthikulpanit@xxxxxxx>
- Delivery-date: Mon, 31 Mar 2014 15:28:31 +0000
- List-id: Xen developer discussion <xen-devel.lists.xen.org>
On 03/31/2014 10:41 AM, Tian, Kevin wrote:
* The second patch keeps TSCs synchronized across VCPUs after save/restore.
Currently TSC values diverge after migration because during both save and
restore we calculate them separately for each VCPU and base each calculation
on a newly read host TSC.

The problem can be easily demonstrated with this program for a 2-VCPU guest
(I am assuming here invariant TSC so, for example, tsc_mode="always_emulate" (*)):
/* Bounce between two VCPUs and report whenever the TSC appears to go backwards. */
#define _GNU_SOURCE             /* for CPU_ZERO/CPU_SET/sched_setaffinity */
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <x86intrin.h>

/* __native_read_tsc() is a kernel-internal helper; in user space read the
 * TSC directly with the RDTSC intrinsic instead. */
static inline unsigned long long __native_read_tsc(void)
{
    return __rdtsc();
}

int
main(int argc, char *argv[])
{
    unsigned long long h = 0LL;
    int proc = 0;
    cpu_set_t set;

    for (;;) {
        unsigned long long n = __native_read_tsc();

        /* TSC read on this VCPU is behind the previous read on the other VCPU. */
        if (h && n < h)
            printf("prev 0x%llx cur 0x%llx\n", h, n);

        /* Alternate between VCPU 0 and VCPU 1. */
        CPU_ZERO(&set);
        proc = (proc + 1) & 1;
        CPU_SET(proc, &set);
        if (sched_setaffinity(0, sizeof(cpu_set_t), &set)) {
            perror("sched_setaffinity");
            exit(1);
        }
        h = n;
    }
}
What's the backward drift range from the above program? Dozens of cycles?
Hundreds of cycles?
For "raw" difference (i.e. TSC registers themselves) it's usually tens
of thousands, sometimes more (and sometimes *much* more).
For example, here are outputs of 'xl debug-key v' before and after a
migrate for a 2p guest:
root@haswell> xl dmesg |grep Offset
(XEN) TSC Offset = ffffff54e63f1cab
(XEN) TSC Offset = ffffff54e63f1cab
(XEN) TSC Offset = ffffff6cb0a59ea9
(XEN) TSC Offset = ffffff6cb0a566ae
root@haswell>
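The two post-migration offsets differ by 0x37fb, i.e. roughly 14,000 cycles,
which is where the "tens of thousands" comes from. A trivial check of the
arithmetic on the values above:

#include <stdio.h>

int main(void)
{
    /* Post-migration TSC offsets of the two VCPUs from the dump above. */
    unsigned long long v0 = 0xffffff6cb0a59ea9ULL;
    unsigned long long v1 = 0xffffff6cb0a566aeULL;

    /* Prints: delta = 0x37fb (14331 cycles) */
    printf("delta = 0x%llx (%llu cycles)\n", v0 - v1, v0 - v1);
    return 0;
}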
From the guest's point of view it's a few hundred cycles, once you account for
the fact that sched_setaffinity() itself takes quite some time. And obviously,
as you increase the number of VCPUs (and, I think, guest memory as well), the
largest difference grows further.
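To make that concrete, here is a rough userspace sketch of the per-VCPU
calculation described at the top; the struct and function names are made up
for illustration and this is not the actual hypervisor code. Each VCPU's
offset is the saved guest TSC minus a host TSC read at that moment, so two
VCPUs restored some cycles apart end up with offsets that differ by exactly
that gap:

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

/* Illustrative only: on restore, each VCPU's TSC offset is computed as
 * (guest TSC recorded at save time) minus (host TSC read at that moment).
 * Because the host TSC is read separately for each VCPU, the offsets differ
 * by however many cycles elapsed between the two reads, and the guest TSCs
 * stay skewed by that amount afterwards. */
struct vcpu_tsc {
    uint64_t saved_guest_tsc;   /* recorded during save */
    int64_t  tsc_offset;        /* what ends up in the VMCS/VMCB */
};

static void restore_vcpu_tsc(struct vcpu_tsc *v)
{
    v->tsc_offset = (int64_t)(v->saved_guest_tsc - __rdtsc());
}

int main(void)
{
    struct vcpu_tsc v0 = { .saved_guest_tsc = 0 };
    struct vcpu_tsc v1 = { .saved_guest_tsc = 0 };

    restore_vcpu_tsc(&v0);
    /* ... the rest of the per-VCPU restore work happens here ... */
    restore_vcpu_tsc(&v1);

    printf("offset skew between VCPUs: %lld cycles\n",
           (long long)(v0.tsc_offset - v1.tsc_offset));
    return 0;
}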
-boris
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel