[Xen-devel] credit scheduler
While doing some performance analysis of Xen I have noticed some interesting behavior of the credit scheduler that I would like to see discussed.

In my most basic test I have a 4-socket, dual-core Intel box (Paxville) with a uniprocessor dom0 and 7 uniprocessor domUs. Each domU is pinned to its own core, with the first core of the system left for dom0. When running the credit scheduler, the dom0 VCPU bounces around the system, sometimes landing on the same thread as one of the domUs and sometimes on one of the sibling hyperthreads (the latter appears to happen the majority of the times it moves). This is less than ideal for cache warmth and for the sharing of CPU resources, given that the first core of the system is always available in this configuration. Does the credit scheduler have any awareness of cache warmth or CPU siblings when balancing?

I have also seen similar behavior when running tests in the domUs such that each has its VCPU running at 100% utilization, so I believe this behavior to be fairly uniform. In my testing I am looking for uniform behavior, so I have just been setting sched=sedf, but I would like to move to the credit scheduler since it is the new default. However, until I sort out this behavior I cannot.

Attached is a snapshot of "xm vcpu-list" taken every 5 seconds for 5 measurements, showing the dom0 VCPU migration pattern on a system with idle domains.
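For reference, the pinning and the measurement loop were along the following lines. The domain names (dom1 .. dom7), the assumption that sibling hyperthreads are enumerated as consecutive logical CPUs, and the output file name are illustrative rather than my exact commands; the real CPU numbering should be taken from "xm info" and "xm vcpu-list" on the box in question.

  # Pin each uniprocessor domU to its own core (one logical CPU per domU,
  # assuming siblings are numbered (0,1), (2,3), ...), leaving the first
  # core free.  dom0 is deliberately left unpinned.
  for i in $(seq 1 7); do
      xm vcpu-pin dom${i} 0 $((2 * i))
  done

  # Take an "xm vcpu-list" snapshot every 5 seconds, 5 times, to record
  # where the dom0 VCPU is currently running.
  for n in $(seq 1 5); do
      xm vcpu-list >> xm.vcpu-list
      sleep 5
  done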
--
Karl Rister
IBM Linux Performance Team
kmr@xxxxxxxxxx

Attachment: xm.vcpu-list

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel