Re: [Xen-devel] (no subject)
On 02/13/14 11:10, Andrew Cooper wrote:
> On 13/02/14 16:01, Simon Martin wrote:
>> Hi all,
>>
>> I am now successfully running my little operating system inside Xen.
>
> Congratulations!
>
>> It is fully preemptive and working a treat, but I have just noticed
>> something I wasn't expecting, and it will really be a problem for me
>> if I can't work around it. My configuration is as follows:
>>
>> 1.- Hardware: Intel i3, 4GB RAM, 64GB SSD.
>
> Can you be more specific? This covers 4 generations of Intel CPUs.

I think most i3s have Intel hyper-threading. If this is a 2-core/4-thread
chip, I would expect this kind of result. I also know that it happens on
the "Sandy Bridge" part I am using: an Intel® Xeon® E3-1260L ("Sandy
Bridge" microarchitecture, 2.4/2.5/3.3 GHz, 4 cores/8 threads).

How many instructions per second a thread gets depends on the "idleness"
of the other threads (no longer just the hyper-thread's partner). For
example, my test code runs this loop:

  0x0000000000400ee0 <workerMain+432>:  inc    %eax
  0x0000000000400ee2 <workerMain+434>:  cmp    $0xffffffffffffffff,%eax
  0x0000000000400ee5 <workerMain+437>:  jne    0x400ee0 <workerMain+432>

for almost 4 GiI in this loop.
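In C the loop is presumably something like the following sketch,
reconstructed from the disassembly above (this is not Don's actual
source; the function name is illustrative):

  #include <stdint.h>

  /* Sketch of the measured loop: spin a 32-bit counter through ~2^32
   * iterations, 3 instructions per iteration (inc/cmp/jne).  The
   * volatile qualifier is only there to stop an optimising compiler
   * from deleting the side-effect-free loop; the build shown above
   * clearly kept the counter in %eax. */
  static void worker_spin(void)
  {
      volatile uint32_t count = 0;

      do {
          count++;                      /* inc %eax  */
      } while (count != UINT32_MAX);    /* cmp + jne */
  }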
On this setup:

  [root@dcs-xen-53 ~]# xl cpupool-list
  Name               CPUs   Sched    Active   Domain count
  Pool-0                8    credit       y              9

  [root@dcs-xen-53 ~]# xl vcpu-list
  Name        ID  VCPU   CPU State   Time(s) CPU Affinity
  Domain-0     0     0     7   r--    1033.2 any cpu
  Domain-0     0     1     3   -b-     255.9 any cpu
  Domain-0     0     2     2   -b-     451.7 any cpu
  Domain-0     0     3     6   -b-     231.7 any cpu
  Domain-0     0     4     3   -b-     197.0 any cpu
  Domain-0     0     5     0   -b-     115.1 any cpu
  Domain-0     0     6     0   -b-      69.9 any cpu
  Domain-0     0     7     5   -b-     214.9 any cpu
  P-1-0        2     0     0   -b-      73.6 0
  P-1-2        4     0     2   -b-      46.5 2
  P-1-3        5     0     3   -b-      44.6 3
  P-1-4        6     0     4   -b-      38.1 4
  P-1-5        7     0     5   -b-      41.3 5
  P-1-6        8     0     6   -b-      38.6 6
  P-1-7        9     0     7   -b-      40.6 7
  P-1-1       10     0     1   -b-      35.3 1

(the domUs here are HVM, not PV), running the loop in one guest gives:

  xentop - 17:20:57   Xen 4.3.2-rc1
  9 domains: 2 running, 7 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
  Mem: 33544044k total, 30265044k used, 3279000k free    CPUs: 8 @ 2400MHz
  NAME      STATE  CPU(sec) CPU(%)  MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
  Domain-0  -----r     2629    9.4 4194304  12.5  no limit      n/a       8    0        0        0    0      0      0      0         0         0    0
  P-1-0     --b---      140    6.3 3145868   9.4   3146752      9.4       1    2       61        8    0      0      0      0         0         0    0
  P-1-1     --b---      101    6.1 3145868   9.4   3146752      9.4       1    2        2        0    0      0      0      0         0         0    0
  P-1-2     --b---      113    6.3 3145868   9.4   3146752      9.4       1    2       96       10    0      0      0      0         0         0    0
  P-1-3     --b---      111    6.3 3145868   9.4   3146752      9.4       1    2      100       12    0      0      0      0         0         0    0
  P-1-4     --b---       61    2.1 3145868   9.4   3146752      9.4       1    2        8        0    0      0      0      0         0         0    0
  P-1-5     --b---       90    4.5 3145868   9.4   3146752      9.4       1    2        5        0    0      0      0      0         0         0    0
  P-1-6     --b---      162    2.7 3145868   9.4   3146752      9.4       1    2       55       20    0      0      0      0         0         0    0
  P-1-7     -----r      519  100.0 3145868   9.4   3146752      9.4       1    2       46       21    0      0      0      0         0         0    0

  start                       done
  thr 0: 13 Feb 14 12:20:54.201596   13 Feb 14 12:21:22.847245   +28.645649 ~= 28.65 s and 4.19 GiI/Sec

And running it on CPUs 6 & 7 at the same time:

  xentop - 17:21:58   Xen 4.3.2-rc1
  9 domains: 3 running, 6 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
  Mem: 33544044k total, 30265044k used, 3279000k free    CPUs: 8 @ 2400MHz
  NAME      STATE  CPU(sec) CPU(%)  MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
  Domain-0  -----r     2633    6.1 4194304  12.5  no limit      n/a       8    0        0        0    0      0      0      0         0         0    0
  P-1-0     --b---      144    6.2 3145868   9.4   3146752      9.4       1    2       63        8    0      0      0      0         0         0    0
  P-1-1     --b---      105    6.2 3145868   9.4   3146752      9.4       1    2        2        0    0      0      0      0         0         0    0
  P-1-2     --b---      117    6.6 3145868   9.4   3146752      9.4       1    2       98       10    0      0      0      0         0         0    0
  P-1-3     --b---      115    6.5 3145868   9.4   3146752      9.4       1    2      103       12    0      0      0      0         0         0    0
  P-1-4     --b---       62    2.1 3145868   9.4   3146752      9.4       1    2        8        0    0      0      0      0         0         0    0
  P-1-5     --b---       93    4.5 3145868   9.4   3146752      9.4       1    2        5        0    0      0      0      0         0         0    0
  P-1-6     -----r      168  100.0 3145868   9.4   3146752      9.4       1    2       58       20    0      0      0      0         0         0    0
  P-1-7     -----r      550  100.0 3145868   9.4   3146752      9.4       1    2       49       22    0      0      0      0         0         0    0

  start                       done
  thr 0: 13 Feb 14 12:21:55.073588   13 Feb 14 12:22:50.905476   +55.831888 ~= 55.83 s and 2.15 GiI/Sec

  start                       done
  thr 0: 13 Feb 14 12:21:54.847626   13 Feb 14 12:22:49.206362   +54.358736 ~= 54.36 s and 2.21 GiI/Sec

That is, with both CPUs 6 and 7 busy, each thread's rate drops from 4.19
to about 2.2 GiI/Sec. (The domUs are all CentOS 5.3, which is why Dom0 is
spending so much time running QEMU.) I would expect the same from PV
guests.

-Don Slutz

>> 2.- Xen: 4.4 (just pulled from the repository)
>>
>> 3.- Dom0: Debian Wheezy (kernel 3.2)
>>
>> 4.- 2 cpu pools:
>>
>>   # xl cpupool-list
>>   Name               CPUs   Sched    Active   Domain count
>>   Pool-0                3    credit       y              2
>>   pv499                 1  arinc653       y              1
>>
>> 5.- 2 domUs:
>>
>>   # xl list
>>   Name         ID   Mem VCPUs      State   Time(s)
>>   Domain-0      0   984     3     r-----      39.7
>>   win7x64       1  2046     3     -b----     143.0
>>   pv499         3   128     1     -b----      61.2
>>
>> 6.- All VCPUs are pinned:
>>
>>   # xl vcpu-list
>>   Name       ID  VCPU   CPU State   Time(s) CPU Affinity
>>   Domain-0    0     0     0   -b-      27.5 0
>>   Domain-0    0     1     1   -b-       7.2 1
>>   Domain-0    0     2     2   r--       5.1 2
>>   win7x64     1     0     0   -b-      71.6 0
>>   win7x64     1     1     1   -b-      37.7 1
>>   win7x64     1     2     2   -b-      34.5 2
>>   pv499       3     0     3   -b-      62.1 3
>>
>> 7.- pv499 is the domU that I am testing. It has no disk or vif
>> devices (yet).
>>
>> I am running a little test program in pv499, and the timing I see
>> varies depending on disk activity. My test program prints the time
>> taken in milliseconds for a million cycles. With no disk activity I
>> see 940 ms; with disk activity I see 1200 ms.
>>
>> I can't understand this, as disk activity should be handled on cores
>> 0, 1 and 2, never on core 3. The only things running on core 3 should
>> be my paravirtual machine and the hypervisor stub. Any idea what's
>> going on?
>
> Curious. Let's try ruling some things out.
>
> How are you measuring time in pv499?
>
> What are your C-states and P-states looking like? If you can, try
> disabling turbo?
>
> ~Andrew
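For reference, a minimal sketch of the kind of TSC-based timing loop a
small PV kernel like pv499 might use (an assumption, not Simon's actual
code; the loop body, iteration count and 2.4 GHz TSC rate are all
illustrative):

  #include <stdint.h>
  #include <stdio.h>

  /* Read the time-stamp counter. */
  static inline uint64_t rdtsc(void)
  {
      uint32_t lo, hi;
      __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
      return ((uint64_t)hi << 32) | lo;
  }

  int main(void)
  {
      const uint64_t iterations = 1000000;  /* "a million cycles" */
      const uint64_t tsc_khz = 2400000;     /* assumed TSC rate: 2.4 GHz */
      volatile uint64_t sink = 0;
      uint64_t start, end, i;

      start = rdtsc();
      for (i = 0; i < iterations; i++)
          sink += i;                        /* stand-in for the real work */
      end = rdtsc();

      /* tsc_khz is numerically ticks-per-millisecond, so
       * ticks / tsc_khz = elapsed ms. */
      printf("%llu ms\n", (unsigned long long)((end - start) / tsc_khz));
      return 0;
  }

Two things could move a reading like this without anything else being
scheduled on core 3: on older parts the TSC rate itself varies with the
core's frequency, so a fixed conversion constant is wrong; and even with
an invariant TSC, the million iterations take more wall time whenever
the core drops out of turbo. Turbo headroom is shared across the
package, so disk activity on cores 0-2 can plausibly slow core 3 down,
which is presumably why Andrew asks about C-states, P-states and turbo.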
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel