
Re: [Xen-users] Default CPU allocation



On Tue, 2013-05-28 at 12:21 +0100, Paul Stimpson wrote:
> Hi,
> 
> We have a machine with a 4-core Hyperthreading-capable CPU that
> appears as 8 logical CPUs (0-7) to Xen. 
> 
> The Dom0 is Ubuntu 12.04 with Xen 4.2.0. There are two guests: A
> low-load Ubuntu 12.04 Linux guest that is used for maintenance
> purposes. An HVM Windows Server guest that is doing all the heavy work
> but isn't currently heavily loaded.
> 
> Allocation is:
>         CPU0 - Dom0 shared with low-load maintenance guest.
>         CPU1 - Low-load maintenance guest shared with Windows guest
>         CPU2-7 - Windows guest exclusive
>         
Ok. I'm not sure why you do this "sharing thing", but you probably have
your reasons. :-)

> The following interesting questions just surfaced during a discussion:
> 
>         How does the default Xen scheduler allocate cores? Is it
>         intelligent enough to use the exclusive cores for the Windows
>         guest before it resorts to the core it shares with the
>         maintenance guest or does it go in core order? If it's going
>         in core order, it looks like the sharing of its lowest
>         numbered core would cause needless context switching even when
>         Windows was lightly loaded. Would we be better to make core 7
>         the shared core instead of core 1?
>         
Well, that depends on how and when the load occurs in the various
domains. The scheduler certainly does not just go in 'core order';
otherwise, I wouldn't even call it a scheduler! :-O

So, what happens is: any time vcpuX wants to run, the scheduler tries to
find an idle pcpu where it can run (within the limits of how the
vcpu-to-pcpu pinning is configured, which is what you said you've done
above). If there are more busy vcpus than pcpus in the system, of course
there will be some timesharing; otherwise, every vcpu should just be
able to run in parallel, unless someone has really screwed up when
setting up the vcpu pinning.
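
For reference, a pinning like the one you describe is usually set up
either with the "cpus" option in each guest's config file, or at runtime
with xl vcpu-pin. A minimal sketch (the domain name "win" here is made
up, adapt it to yours):

    # in the maintenance guest's config: allow its vcpus on CPU0-CPU1
    cpus = "0-1"

    # in the Windows guest's config: allow its vcpus on CPU1-CPU7
    cpus = "1-7"

    # or, equivalently, at runtime: pin all vcpus of domain "win"
    # to pcpus 1-7
    xl vcpu-pin win all 1-7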

One thing that is missing from your description is how many vcpus each
domain has, which is quite fundamental, even just for picturing a simple
example situation... Supposing that Dom0 and the two DomUs each have
1 vcpu (quite unlikely, I guess), when _all_ 3 vcpus are busy you'll
see something like this:
 - Dom0's vcpu running on CPU0 (or CPU1)
 - Ubuntu's vcpu running on CPU1 (or CPU0, if it is Dom0's vcpu that is
   running on CPU1)
 - Win's vcpu running on one core among CPU2-CPU7

The same applies to the case where the domains have multiple vcpus... If
you tell us what these numbers are, I can try to be more specific.
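
By the way, you can always double check where each vcpu is actually
running, and what its pinning looks like, with:

    # one line per vcpu: which pcpu it is on right now ("CPU" column)
    # and where it is allowed to run ("CPU Affinity" column)
    xl vcpu-list

If the Windows guest's vcpus spend most of their time on CPU2-CPU7 and
only fall back to CPU1 under load, then the scheduler is already doing
what you were hoping for.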

Anyway, as already said, the order is not important, and it doesn't
matter at all whether the domains share CPU1 or CPU7: both cases will
give you exactly the same result.

One thing that you might want to pay attention to is the fact that
you're not really dealing with proper cores but, as you say yourself,
with hyperthreads. This matters because, if, for instance, CPU0 and
CPU1 are the two threads of core0, and CPU2 and CPU3 are the two threads
of core1, and you have two busy vcpus, then running them on CPU0 and
CPU1 versus on CPU0 and CPU2 does actually matter (the latter is usually
better, since the two threads of a core share its execution resources).
The scheduler is pretty smart in dealing with this too, and it tries to
balance things automatically, but of course, if you pin the wrong vcpus
to the wrong cores, you may well make it impossible for it to help you!
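
If you want to check which logical CPUs are actually siblings of the
same core before deciding what to pin where (on many Intel boxes CPU0
and CPU1 are indeed the two threads of core0, but better to verify than
to assume), the toolstack can print the topology:

    # core/socket/node for each logical cpu
    xl info -n

    # or, alternatively, via xenpm
    xenpm get-cpu-topology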

Looks like your workload could benefit from some fine-tuning of this
nature too but, again, you need to tell us how many vcpus you're
using. :-)

Regards,
Dario
