
Re: [Xen-devel] Request for help: passing network statistics from netback driver to Xen scheduler.

On Thu, Jul 31, 2014 at 10:37 AM, Dlugajczyk, Marcin
<j.j.dlugajczyk@xxxxxxxxxxxxxxx> wrote:
> Hi everyone!
> This is my first time on this mailing list. I'm working with Xen for my master's 
> thesis, and I'd like to ask you for advice. I don't have any previous 
> experience with Xen or Linux development, so please bear with me if I'm doing 
> something silly!
> What I'm trying to accomplish is to build a Xen scheduler which 
> schedules domains based on their network intensity (I'm trying to implement 
> one of the co-scheduling algorithms). So far, I've implemented a simple 
> round-robin scheduler which seems to work. Now I want to extend it to take 
> into account the network traffic of each domain (the higher the traffic, the 
> higher the priority).

Just as a comment: I think a potential problem with this approach is
that you'll run into a positive feedback loop.  Processing network
traffic takes a lot of CPU time; and in particular, it needs the
ability to process packets in a *timely* manner.  Because most
connections are TCP, the ability to do work now *creates* work later
(in the form of more network packets).  Not having enough CPU, or
being delayed in when it can process packets, even by 50ms, can
significantly reduce the amount of traffic a VM gets.  So it's likely
that a domain that currently has a high priority will be able to
generate more traffic for itself, maintaining its high priority; and a
domain that currently has a low priority will not be able to send acks
fast enough, and will continue to receive low network traffic, thus
maintaining its low priority.

Something to watch out for, anyway. :-)
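To make the concern concrete, here is a toy numeric model of the loop (all numbers are invented for illustration; nothing here is Xen-specific): whichever domain currently has more traffic gets priority, and priority multiplies its future traffic.

```python
# Toy model of the feedback loop: each step, the domain with more
# traffic gets priority; the prioritized domain can ack promptly, so
# its TCP throughput grows, while the starved domain's throughput decays.
def simulate(a=110.0, b=100.0, steps=20, grow=1.10, decay=0.90):
    for _ in range(steps):
        if a >= b:
            a, b = a * grow, b * decay
        else:
            a, b = a * decay, b * grow
    return a, b

a, b = simulate()
print(f"A={a:.0f}  B={b:.0f}")  # a small initial edge becomes a large gap
```

Even a 10% initial difference is enough that the ranking never flips: the leader keeps compounding and the laggard keeps shrinking, which is exactly the self-reinforcing behaviour described above.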

> I've found a patch for an older version of Xen [1], which implemented something 
> similar. The author of this patch added some statistics to the shared_info 
> structure and updated them in the netback driver. I'd like to do something similar.
> I've modified Ubuntu's kernel code to add some statistics to shared_info:
> - I've modified the shared_info structure in include/xen/interface/xen.h and 
> added an array for storing statistics (up to 10 domains):
>         unsigned long network_intensity[10];
> - I've used the EXPORT_SYMBOL macro to make HYPERVISOR_shared_info 
> available in the netback driver
> - in the drivers/net/xen-netback/netback.c file, I've modified the function 
> xenvif_rx_action so that it'd increment the network intensity counter. I've 
> also added a printk statement to verify that my code is working
> - I've updated the shared_info structure in the Xen source code as well.
> After compiling the kernel and booting it, I've verified that the output from my 
> printk statements was visible in dmesg.

Does the printk print the new value (i.e., "Intensity now [nnn] for
domain M"), or just print that it's trying to do something (i.e.,
"Incrementing network intensity")?

> However, when I try to access the statistics from shared_info in the Xen 
> scheduler, they're always 0.
> Could you please tell me what I am doing wrong? Do I have to somehow request 
> synchronisation of shared_info between the kernel and Xen?
> Or maybe there is some better way to get data on network intensity without 
> modifying the kernel?

Do you really need this information to be "live" at millisecond granularity?
If not, you could have a process in dom0 wake up every few hundred
ms, read the information from the netback thread, and then make calls
to the scheduler to adjust priorities.  It would be somewhat less
responsive, but much easier to change (you could simply recompile the
dom0 process and restart it instead of having to reboot your host).
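
A minimal sketch of such a dom0 poller (an illustration, not tested code: it assumes the credit scheduler, the conventional vif<domid>.<idx> naming for backend interfaces in dom0, and a made-up rate_to_weight mapping) could read the vif byte counters from sysfs and adjust per-domain weights with `xl sched-credit`:

```python
import subprocess
import time

def tx_bytes(domid, vifidx=0):
    # dom0-side backend interfaces are conventionally named vif<domid>.<idx>;
    # tx on the dom0 vif is traffic towards the guest -- pick tx/rx to taste.
    path = f"/sys/class/net/vif{domid}.{vifidx}/statistics/tx_bytes"
    with open(path) as f:
        return int(f.read())

def rate_to_weight(bytes_per_sec, base=256, cap=1024):
    # crude, made-up mapping: busier domains get a higher credit weight
    return min(cap, base + int(bytes_per_sec) // 100_000)

def poll_loop(domids, interval=0.5):
    prev = {d: tx_bytes(d) for d in domids}
    while True:
        time.sleep(interval)
        for d in domids:
            cur = tx_bytes(d)
            rate = (cur - prev[d]) / interval
            prev[d] = cur
            subprocess.run(
                ["xl", "sched-credit", "-d", str(d),
                 "-w", str(rate_to_weight(rate))],
                check=False)
```

The whole feedback path then lives in one small user-space script: tune the mapping, restart the process, and the hypervisor and kernel stay untouched.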

If your goal is just to hack together something for your project, then
using the shared_info page is probably fine.  But if you want to
upstream anything, you'll probably have to take a different approach.

