
Re: [Xen-devel] Credit scheduler


  • To: <pak333@xxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxxxxxxxx>
  • Date: Wed, 27 Jun 2007 12:04:10 +0100
  • Delivery-date: Wed, 27 Jun 2007 04:02:12 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Ace4quNBIemlVSSeEdyz1QAX8io7RQ==
  • Thread-topic: [Xen-devel] Credit scheduler

You’ll probably do better by pinning dom0 to cpu0 and all other domains to all other cpus.
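
A minimal sketch of what that pinning could look like, driving "xm vcpu-pin" from a small script; the guest names, the CPU ranges and the assumption of one vCPU per guest are illustrative only, not taken from this thread:

    # Illustrative sketch: pin dom0's vCPU 0 to physical CPU 0 and let the
    # HVM guests share CPUs 1-3.  Guest names here are hypothetical.
    import subprocess

    def pin(domain, vcpu, cpus):
        # Equivalent to running: xm vcpu-pin <domain> <vcpu> <cpu-list>
        subprocess.check_call(["xm", "vcpu-pin", domain, str(vcpu), cpus])

    pin("Domain-0", 0, "0")                  # dom0 stays on CPU 0
    for guest in ("hvm1", "hvm2", "hvm3", "hvm4"):
        pin(guest, 0, "1-3")                 # guests share the other CPUs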

 -- Keir

On 27/6/07 02:55, "pak333@xxxxxxxxxxx" <pak333@xxxxxxxxxxx> wrote:



Yes, all of these are HVM guests.
Yes, there is dom0 plus 4 other VMs.

I added some trace statements in schedule.c and then post-processed that data to come up with the 3 us figure.

For example, I have 16400 lines printed with xentrace during a span of 52 ms, which means the scheduler was entered 16400 times in that 52 ms interval, i.e. roughly every 3.2 us.
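
As a quick check of that arithmetic (a short Python snippet that just restates the numbers above):

    # 16400 scheduler entries observed over a 52 ms trace window
    print(52000.0 / 16400)   # microseconds per scheduler entry, ~3.17

Note that this divides wall-clock time by entries summed over all CPUs; if the trace covers all 4 physical CPUs, the average interval per CPU would be roughly 4x larger.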

I have not yet tried any other combination.

My understanding of the scheduler is that it kicks in on a timer interrupt or when any VM blocks.
So with all-CPU-intensive VMs, each VM would run uninterrupted for 30 ms (the credit scheduler time slice).
Since we have IO, whenever a VM needs IO service it blocks, and domain 0 kicks in and services the IO.
Is this correct?
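
A rough back-of-the-envelope check, assuming the commonly cited credit scheduler defaults of a 30 ms slice and a 10 ms per-CPU tick (an assumption about the defaults, not something measured here):

    # If the scheduler only ran on the periodic tick, a 52 ms window on
    # 4 CPUs would show on the order of tens of entries, not 16400:
    print(4 * 52 / 10.0)     # ~21 tick-driven entries expected
    # So nearly all of the 16400 entries would have to come from VMs
    # blocking on IO and being woken, which would explain intervals in
    # the microsecond range rather than anywhere near 30 ms.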

What I also see is that dom0 CPU utilization is only 28% across all 4 CPUs. Assuming that the scheduler was entered because of an IO request, dom0 kicks in but spends very little time servicing the request. Dom0 kicks in on 6800 of the 16400 times that the scheduler function was called. Does this make sense?
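
Putting rough numbers on that (here the 28% figure is taken to mean 28% of one CPU's worth of time over the 52 ms window, which is an assumption about how it was measured):

    print(6800.0 / 16400)           # ~0.41: dom0 chosen on ~41% of entries
    print(0.28 * 52000.0 / 6800)    # ~2.1 us average per dom0 run, under
                                    # the assumption above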

Thanks
-Prabha

 -------------- Original message ----------------------
From: Keir Fraser <keir@xxxxxxxxxxxxx>
> Are these all Windows HVM guests? And do you exclude domain0 when you count
> the number of VMs in your system (so it’s actually 4 user VMs + domain0 VM)?
> How did you come up with the 3us figure — some kind of in-guest tracing, or
> using xentrace, or by some other means?
>
> Does the problem still happen if you run 2 cpu-intensive VMs instead of 3?
>
>  -- Keir
>
> On 27/6/07 02:09, "pak333@xxxxxxxxxxx" <pak333@xxxxxxxxxxx> wrote:
>
> >
> > I have 4 physical cpus (2 dual core sockets) and the IO workload is loadsim
> > with 500 users
> >
> > Thanks
> > -Prabha
> >
> >
> >  -------------- Original message ----------------------
> > From: Keir Fraser <keir@xxxxxxxxxxxxx>
> > > How many physical CPUs do you have, and what is the IO workload that one of
> > > the VMs is running?
> > >
> > >  -- Keir
> > >
> > > On 26/6/07 22:33, "pak333@xxxxxxxxxxx" <pak333@xxxxxxxxxxx> wrote:
> > >
> > > > Hi
> > > >
> > > > I have noticed that VMs are typically spending 3 microseconds or less
> > > > before they are being preempted. I thought that the credit scheduler
> > > > time slice was 30 ms. I have 4 VMs running and they are all CPU
> > > > intensive except for 1 (which is IO intensive), but having a VM spend
> > > > at most 3 microseconds before being kicked out seems strange.
> > > >
> > > > Is there something else going on that I am not aware of? Is the time
> > > > slice really 30 milliseconds? I am using default parameters of the
> > > > credit scheduler.
> > > >
> > > > Thanks
> > > > -Prabha
> > > >



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
