
Re: [Xen-users] Measure disk activity in full-virtualization.


  • To: deshantm@xxxxxxxxx
  • From: "Alejandro Paredes" <aleparedes@xxxxxxxxx>
  • Date: Mon, 28 Jul 2008 01:02:14 -0300
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Sun, 27 Jul 2008 21:02:51 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hi all, I'm going to briefly explain what I've done. First of all,
the tools used:
-       Stress: imposes a configurable amount of CPU, memory, I/O, and
disk load on the system.
-       Memtester: an effective userspace tool for stress-testing the
memory subsystem. It is very good at finding intermittent and
non-deterministic faults. Memtester mallocs the amount of memory
specified.
-       Bonnie++: a program to test hard drives and file systems for
performance, or the lack thereof. There are many different types of
file system operations that different applications use to different
degrees. Bonnie++ tests some of them and for each test reports the
amount of work done per second and the percentage of CPU time it
took. The program operates in two sections: the first tests I/O
throughput in a fashion designed to simulate some types of database
applications; the second tests the creation, reading, and deletion of
files.
-       Sysstat: a package containing utilities to monitor system
performance and usage activity.
-       Xenmon: makes use of the existing Xen tracing feature to provide
fine-grained reporting of various domain-related metrics, such as
execution count, CPU usage, waiting time, etc.
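
For reference, here is a minimal sketch (in Python, untested) of how
one might drive the three load generators from inside a guest. The
flags are the standard ones of each tool; the sizes and durations are
purely illustrative placeholders:

#!/usr/bin/env python
# Minimal sketch: launch one stress workload inside a guest.
# Flags are the stock ones of stress, memtester and bonnie++;
# the sizes and durations below are illustrative placeholders.
import subprocess

WORKLOADS = {
    # 8 workers spinning on sqrt() for 60 seconds
    "cpu":    ["stress", "--cpu", "8", "--timeout", "60s"],
    # allocate and exercise 512 MB of RAM, 5 passes
    "memory": ["memtester", "512M", "5"],
    # test files under /tmp, 2048 MB total, run as user 'nobody'
    "disk":   ["bonnie++", "-d", "/tmp", "-s", "2048", "-u", "nobody"],
}

def run_workload(kind):
    """Run one workload to completion and return its exit status."""
    return subprocess.call(WORKLOADS[kind])

if __name__ == "__main__":
    run_workload("cpu")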


The measurement procedure was as follows:
1-      Two cloned guests were created with the following characteristics:
1024 MB RAM, 8 GB disk, 8 virtual CPUs. Let's call them VM1 and VM2.
2-      The Xen credit scheduler was used, assigning the same weight to
each virtual machine.
3-      One particular stress test was run on VM1 (for example, CPU stress)
with VM2 in an idle state. Here all the metrics were taken from
inside VM1 (using sysstat) and from Dom0 (using xenmon and xentop).
4-      After that, the same test was re-run on VM1 with a different
resource stressed in VM2 (for example, memory stress).
5-      A normalized value is calculated by dividing the performance value
obtained from VM1 in step 3 by the value from VM1 in step 4 (see the
sketch after this list). This shows the performance drop in VM1
caused by VM2 (here, the drop in CPU performance due to memory load).
6-      Then steps 3 and 4 are repeated with the loads of the two virtual
machines exchanged (VM1 memory-stressed and VM2 CPU-stressed). Here
we find the drop in memory performance due to CPU load.
7-      By summing the values of steps 5 and 6 we obtain the overall
performance interference between memory and CPU.
8-      The same test can be run for:
CPU-disk
memory-disk
CPU-CPU
disk-disk
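
To make steps 5 to 7 concrete, here is a small sketch of the
normalization; all numbers in it are made up for illustration. (For
step 2, equal weights can be set from Dom0 with something like
"xm sched-credit -d <domain> -w 256".)

# Sketch of the normalization in steps 5-7. All numbers below are
# illustrative placeholders, not measured values.

def normalized_drop(baseline, contended):
    # baseline:  metric for VM1 while VM2 is idle        (step 3)
    # contended: the same metric while VM2 is stressed   (step 4)
    # A ratio above 1.0 means VM1 lost performance to VM2's load.
    return baseline / contended

# Step 5: CPU score of VM1, with VM2 idle vs. VM2 stressing memory.
cpu_due_to_mem = normalized_drop(100.0, 80.0)

# Step 6: loads exchanged; memory score of VM1 with VM2 stressing CPU.
mem_due_to_cpu = normalized_drop(200.0, 150.0)

# Step 7: overall memory-CPU interference.
print("memory-CPU interference:", cpu_due_to_mem + mem_due_to_cpu)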

Before adopting this procedure, all the values of the stress tests
were taken from inside each VM, without any measurement from Dom0.
Everything seemed to be fine, but when I ran memory against CPU the
results were disconcerting. The problem was that each VM had its own
virtual time, so every result was relative to the VM that ran the
test, and there was no way to compare the results. The scheduler
plays an important role here, so it was very important to observe
this environment from outside each VM, let's say from Dom0. Xenmon
and xentop helped a lot in solving this problem.
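
To give an idea of how the Dom0-side numbers can be put on a common
clock, here is a small sketch (untested) that samples xentop in batch
mode; the column set and order vary between Xen versions, so it looks
fields up by header name rather than by position:

#!/usr/bin/env python
# Sketch: sample per-domain metrics from Dom0 with 'xentop -b', so
# all guests are measured against the same (Dom0) clock.
import subprocess

def xentop_sample(iterations=2, delay=1):
    """Return {domain: {column: value}} from one 'xentop -b' run."""
    out = subprocess.check_output(
        ["xentop", "-b", "-i", str(iterations), "-d", str(delay)])
    lines = out.decode().splitlines()
    header = lines[0].split()          # e.g. NAME STATE CPU(sec) ...
    stats = {}
    for line in lines[1:]:
        fields = line.split()
        if not fields or fields[0] == header[0]:
            continue                   # skip repeated header rows
        stats[fields[0]] = dict(zip(header[1:], fields[1:]))
    return stats

if __name__ == "__main__":
    for dom, cols in xentop_sample().items():
        print(dom, "CPU(%):", cols.get("CPU(%)"),
              "VBD_RD:", cols.get("VBD_RD"),
              "VBD_WR:", cols.get("VBD_WR"))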

For measuring disk usage, I understand that xenmon measures the
transfer of memory pages between DomU and Dom0, and virt-top can show
VBD_RD and VBD_WR for each VM. The problem arrived when measuring
fully-virtualized guests: I can't measure the disk usage of a
particular VM. I know that the split device driver model used in
para-virtualization is replaced by complete emulation of the hardware.

I found that the process that handles the VM image is called qemu-dm.
When stressing disk activity in that VM (let's say VM1), no disk
usage by the qemu-dm process associated with VM1 is detected (using,
for example, pidstat on qemu-dm). Virt-top and xenmon didn't give me
any value reflecting disk usage when fully-virtualized guests are
stressed.
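
In case it helps someone reproduce the check, this sketch (untested)
is the kind of thing I'm doing from Dom0: find the qemu-dm process
that backs a given domain and watch its I/O counters in
/proc/<pid>/io. It assumes the Dom0 kernel has per-task I/O
accounting and that xend puts the domain id on the qemu-dm command
line; note that pidstat -d reads these same counters, so if they stay
flat the I/O must be happening through some other path.

#!/usr/bin/env python
# Sketch: find the qemu-dm process backing a given HVM domain and
# sample its per-process I/O counters from /proc/<pid>/io (needs
# task I/O accounting in the Dom0 kernel). Matching the process to
# the domain by the domain id on the qemu-dm command line is an
# assumption about how xend starts qemu-dm; check with 'ps' first.
import os
import time

def qemu_dm_pid(domid):
    """Return the PID of the qemu-dm instance serving 'domid'."""
    for pid in (p for p in os.listdir("/proc") if p.isdigit()):
        try:
            with open("/proc/%s/cmdline" % pid) as f:
                argv = f.read().split("\0")
        except EnvironmentError:
            continue                # process went away, or no access
        if argv and argv[0].endswith("qemu-dm") and str(domid) in argv:
            return int(pid)
    return None

def io_counters(pid):
    """Parse read_bytes/write_bytes out of /proc/<pid>/io."""
    with open("/proc/%d/io" % pid) as f:
        data = dict(line.split(": ") for line in f.read().splitlines())
    return int(data["read_bytes"]), int(data["write_bytes"])

if __name__ == "__main__":
    pid = qemu_dm_pid(3)            # domain id 3 is illustrative
    r0, w0 = io_counters(pid)
    time.sleep(10)                  # in-guest disk stress runs here
    r1, w1 = io_counters(pid)
    print("qemu-dm read B/s:", (r1 - r0) / 10.0,
          "write B/s:", (w1 - w0) / 10.0)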

Is there any way to measure the disk activity of a particular VM
under full virtualization?
Why don't virt-top and xenmon give me any values?
Has anybody tried something like this?

Thanks!

Alejandro.


2008/7/19 Todd Deshane <deshantm@xxxxxxxxx>:
> Hi Alejandro,
>
> On Fri, Jul 18, 2008 at 3:30 PM, Alejandro Paredes <aleparedes@xxxxxxxxx>
> wrote:
>>
>> I've been doing tests to compare the performance isolation between
>> para-virtualized and fully-virtualized guests using Intel VT. First of
>> all, we performed tests on two VMs (para-virtualized guests). Some
>> benchmarks stressing memory and disk were run, measured with xenmon
>> and virt-top. These two measurement tools helped determine CPU
>> allocations, blocked and waiting times, and everything worked fine.
>
> I think this is really interesting research. Can you give us some more
> insight into
> your testing methodology?
>
> Are you familiar with the work at Clarkson University regarding performance
> isolation?
>
> http://clarkson.edu/~jnm/publications/isolation_ExpCS_FINALSUBMISSION.pdf
> http://todddeshane.net/research/Xen_versus_KVM_20080623.pdf
>
>
> Cheers,
> Todd
>
>
> --
> Todd Deshane
> http://todddeshane.net
> check out our book: http://runningxen.com

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

