Re: [Xen-devel] [PATCH 00 of 10 v2] NUMA aware credit scheduling
On Wed, 2012-12-19 at 20:07 +0100, Dario Faggioli wrote:
> Which, reasoning in terms of %-performances increase/decrease, means NUMA
> aware scheduling does as follows, as compared to no affinity at all and to
> pinning:
>
>  ----------------------------------
>  | SpecJBB2005 (throughput)       |
>  ----------------------------------
>  | #VMs | No affinity |  Pinning  |
>  |   2  |   +14.36%   |   -0.36%  |
>  |   6  |   +14.72%   |   -0.26%  |
>  |  10  |   +11.88%   |   -2.44%  |
>  ----------------------------------
>  | Sysbench memory (throughput)   |
>  ----------------------------------
>  | #VMs | No affinity |  Pinning  |
>  |   2  |   +14.12%   |   +0.09%  |
>  |   6  |   +11.12%   |   +2.14%  |
>  |  10  |   +11.81%   |   +5.06%  |
>  ----------------------------------
>  | LMBench proc (latency)         |
>  ----------------------------------
>  | #VMs | No affinity |  Pinning  |
>  ----------------------------------
>  |   2  |   +10.02%   |   +1.07%  |
>  |   6  |    +3.45%   |   +1.02%  |
>  |  10  |    +2.94%   |   +4.53%  |
>  ----------------------------------
>

Just to be sure, as I may not have picked the perfect wording, in the
tables above a +xx.yy% means NUMA aware scheduling (i.e., with this patch
series fully applied) performs xx.yy% _better_ than either 'No affinity'
or 'Pinning'. Conversely, a -zz.ww% means it performs zz.ww% worse.

Sorry, but the different combinations, and the presence of both throughput
values (which are better if high) and latency values (which are better if
low), made things a little tricky to present effectively. :-)

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
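To make the sign convention concrete, here is a minimal sketch (Python, with
made-up raw scores; the improvement() helper is hypothetical and not part of
the series) of one plausible way a positive entry can be read for both the
throughput and the latency benchmarks:

    # Minimal sketch: how a "+xx.yy%" entry in the tables above can be
    # derived from raw benchmark scores.  All numbers are made up purely
    # for illustration; they are not taken from the actual runs.

    def improvement(numa_aware, baseline, higher_is_better=True):
        """Percentage by which NUMA-aware scheduling beats the baseline.

        A positive result means NUMA-aware scheduling is better, a
        negative one means it is worse, matching the tables' convention.
        """
        if higher_is_better:          # throughput: more is better
            return (numa_aware - baseline) / baseline * 100.0
        else:                         # latency: less is better
            return (baseline - numa_aware) / baseline * 100.0

    # Hypothetical raw scores for one configuration:
    print(improvement(1143.6, 1000.0))                        # 14.36 (throughput)
    print(improvement(89.98, 100.0, higher_is_better=False))  # 10.02 (latency)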