
Re: [Xen-devel] Load increase after memory upgrade (part2)



Hi,

 

Let me try to explain a bit more. Here you see the output of my xentop munin graph for a week. Look only at the bluish bump, and notice the small step in front of it. The graph shows the CPU per mille used by the DomU that owns the cards. The small bump is from when I had only one PCI card plugged in; afterwards the load is constantly, noticeably higher. Note that Dom0 (green) is not affected. I am back on the Xenified kernel, as you can see.

[image: xentop munin graph]

In the next picture you see the output of xenpm visualized, so this might be an indication that something is really happening. It is only the core that I dedicated to that DomU. I have a three-core AMD CPU, by the way:

[image: xenpm output]

In the CPU usage of Dom0, there is nothing to see:

[image: Dom0 CPU usage]

In the CPU usage of the DomU, there is also not much to see, possibly a very slight change of mix:

[image: DomU CPU usage]

There is a slight increase in sleeping jobs in the time slot in question, but I guess that is nothing we can directly map to the issue:

[image: sleeping jobs]

If you need other charts, I can try to produce them.

 

BR,
Carsten.

 

-----Original Message-----
To: Carsten Schiers <carsten@xxxxxxxxxx>; zhenzhong.duan@xxxxxxxxxx; lersek@xxxxxxxxxx
Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>; konrad.wilk <konrad.wilk@xxxxxxxxxx>
From: Konrad Rzeszutek Wilk <konrad@xxxxxxxxxx>
Sent: Mon 28.11.2011 16:33
Subject: Re: [Xen-devel] Load increase after memory upgrade (part2)
On Fri, Nov 25, 2011 at 11:11:55PM +0100, Carsten Schiers wrote:
> I got the values in DomU. I will have
>
>   - aprox. 5% load in DomU with 2.6.34 Xenified Kernel
>   - aprox. 15% load in DomU with 2.6.32.46 Jeremy or 3.1.2 Kernel with one card attached
>   - aprox. 30% load in DomU with 2.6.32.46 Jeremy or 3.1.2 Kernel with two cards attached

HA!

I just wonder if the issue is that the reporting of CPU time spent is wrong.
Laszlo Ersek and Zhenzhong Duan have both reported a bug in the pvops
code when it came to accounting of CPU time.

>
> I looked through my old mails from you, and you already explained the necessity of double
> bounce buffering (PCI -> below 4GB -> above 4GB). What I don't understand is: why does the
> Xenified kernel not have this kind of issue?

That is a puzzle. It should not. The code is very much the same - both
use the generic SWIOTLB which has not changed for years.
>
> The driver in question is nearly identical between the two kernel versions. It lives in
> drivers/media/dvb/ttpci, by the way, and if I understood the code right, the allocation in
> question is:
>
>         /* allocate and init buffers */
>         av7110->debi_virt = pci_alloc_consistent(pdev, 8192, &av7110->debi_bus);

Good. So it allocates it during init and uses it.
>         if (!av7110->debi_virt)
>                 goto err_saa71466_vfree_4;
>
> isn't it? I think the cards are constantly transferring the stream received through DMA.

Yeah, and that memory is set aside for the life of the driver. So there
should be no bounce buffering happening (as it allocated the memory
below the 4GB mark).
>
> I have set dom0_mem=512M by the way, shall I change that in some way?

Does the reporting (CPU usage of DomU) change in any way with that?
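
For reference, such a cap would look roughly like this in the grub entry (a sketch; file names and version numbers are illustrative):

```sh
# Xen hypervisor line: cap dom0 at 512MB
multiboot /boot/xen.gz dom0_mem=max:512M
# Dom0 kernels before 3.1 ignored dom0_mem, so the cap had to be
# repeated on the Linux command line as well:
module /boot/vmlinuz-2.6.32.46 mem=512M root=/dev/xvda1 ro
```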
>
> I can try out some things, if you want me to. But I have no idea what to do and where to
> start, so I rely on your help...
>
> Carsten.
>
> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On behalf of Konrad Rzeszutek Wilk
> Sent: Friday, 25 November 2011 19:43
> To: Carsten Schiers
> Cc: xen-devel; konrad.wilk
> Subject: Re: [Xen-devel] Load increase after memory upgrade (part2)
>
> On Thu, Nov 24, 2011 at 01:28:44PM +0100, Carsten Schiers wrote:
> > Hello again, I would like to come back to that thing... sorry that I did not have the time up to now.
> >
> > We (now) speak about:
> >
> > * Xen 4.1.2
> > * Dom0 is Jeremy's 2.6.32.46 64 bit
> > * DomU in question is now 3.1.2 64 bit
> > * Same thing if DomU is also 2.6.32.46
> > * DomU owns two PCI cards (DVB-C) that do DMA
> > * Machine has 8GB, Dom0 pinned at 512MB
> >
> > As compared to the 2.6.34 kernel with backported patches, the load on the DomU is at least
> > twice as high. It will be "close to normal" if I reduce the memory used to 4GB.
>
> That is in the dom0 or just in general on the machine?
> >
> >
> > As you can see from the attachment, you once had an idea. So should we try to find something...?
>
> I think that was to instrument swiotlb to give an idea of how
> often it is called and basically have a matrix of its load. And
> from there figure out if the issue is that:
>
>  1). The drivers allocate/bounce/deallocate buffers on every interrupt
>     (bad; the driver should be using some form of DMA pool, and most of the
>     ivtv drivers do that)
>
>  2). The buffers allocated to the drivers are above 4GB and we end
>     up bouncing them needlessly. That can happen if dom0 has most of
>     the precious memory under 4GB. However, that is usually not the case,
>     as the domain is usually allocated from the top of the memory. The
>     fix for that was to set dom0_mem=max:XX... but with Dom0 kernels
>     before 3.1, the parameter would be ignored, so you had to use
>     'mem=XX' on the Linux command line as well.
>
>  3). Where did you get the load values? Was it dom0? or domU?
>
>
>
> >
> > ??
> > Carsten.
> >
> > -----Original Message-----
> > To: konrad.wilk <konrad.wilk@xxxxxxxxxx>
> > Cc: linux <linux@xxxxxxxxxxxxxx>; xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
> > From: Carsten Schiers <carsten@xxxxxxxxxx>
> > Sent: Wed 29.06.2011 23:17
> > Subject: Re: Re: Re: Re: Re: Re: [Xen-devel] Re: Load increase after memory upgrade?
> > > Let's first do the c) experiment, as that will likely explain your load average increase.
> > ...
> > > >c). If you want to see if the fault here lies in the bounce buffer
> > > being used more
> > > >often in the DomU b/c you have 8GB of memory now and you end up using
> > > more pages
> > > >past 4GB (in DomU), I can cook up a patch to figure this out. But an
> > > easier way is
> > > >to just do (on the Xen hypervisor line): mem=4G and that will make
> > > think you only have
> > > >4GB of physical RAM. If the load comes back to the normal "amount"
> > > then the likely
> > > >culprit is that and we can think on how to fix this.
> >
> > You are on the right track. Load was going down to "normal" 10% when reducing
> > Xen to 4GB by the parameter. Load seems to be still a little, little bit lower
> > with Xenified Kernel (8-9%), but this is drastically lower than the 20% we had
> > before.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

