I'm using iozone to run the tests, configured with an 8KB block size on files
ranging from 64KB to 2GB. The test suite comprises six cases: sequential read,
sequential re-read, random read, sequential write, sequential re-write and
random write.
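For reference, an invocation along these lines should reproduce that test
matrix (the target path and output file names are placeholders, not the exact
ones I used):

  # -a auto mode, -r record size, -n/-g min/max file size,
  # -i 0/1/2 = write/re-write, read/re-read, random read/write
  iozone -a -r 8k -n 64k -g 2g -i 0 -i 1 -i 2 \
         -f /mnt/test/iozone.tmp -R -b results.xls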
I first ran the tests on a vanilla Linux kernel (2.6.26-2-686), configured to
use 1GB of RAM; then I ran the same tests with Xen 3.4, on a domU with this
configuration:
Dom0: 2.6.26-2-xen-686, dom0_mem=1024MB
DomU:
name = "vm"
kernel = "/root/vm/xen-kernel/vmlinuz-2.6.24-19-xen"
ramdisk = "/root/vm/xen-kernel/initrd.img-2.6.24-19-xen"
disk = [ 'file:/root/vm/vm.img,sda1,w', 'phy:/dev/VolGroup00/Test,sda2,w' ]
In both cases, the tests were performed on an LVM logical volume sitting on
top of a SCSI disk. I ran the tests against different LV configurations (a
plain LV, a snapshotted LV, etc.), always using the ext3 filesystem.
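For completeness, the LV setups were created along these lines (names and
sizes here are illustrative, not necessarily the exact ones I used):

  # plain LV, formatted with ext3
  lvcreate -L 4G -n Test VolGroup00
  mkfs.ext3 /dev/VolGroup00/Test

  # snapshotted LV case: a copy-on-write snapshot of the same LV,
  # so every write to the origin first copies the old block away
  lvcreate -s -L 1G -n TestSnap /dev/VolGroup00/Test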
Attached to this mail is a file with three graphs summarizing the results for
the sequential write case.
The first and second graphs have the write speed in KB/s on the Y axis and the
file size in KB on the X axis; each color represents a different LV
configuration. The third graph shows the difference in performance between the
previous two graphs, taking the vanilla Linux performance as 1. So the Y axis
is the fraction of domU performance with respect to the vanilla Linux
performance (0.5 means 50% of the vanilla Linux performance, 2.1 means 210%,
and so on).
I'm trying to explain these results. Can you help me?
Looking at the first graph (excluding the 64KB file case, which is for some
reason a biased test), we can easily see three performance levels:

~250MB/s: the effect of the processor cache (for the 128KB, 256KB and 512KB
files);
~220MB/s: the effect of the RAM buffer, up to the 64MB file test;
~60MB/s: finally, "degraded" performance, once accesses to the physical disk
are actually performed (i.e. we have to wait for the RAM buffer to be flushed
to disk).
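One way to check this interpretation (a sketch; I haven't re-run every case
this way) would be to repeat a test with direct I/O, bypassing the page cache,
so that only the ~60MB/s disk-bound level should remain:

  # -I requests O_DIRECT for all file operations; if the two upper
  # plateaus disappear, they were indeed caching/buffering effects
  iozone -a -I -r 8k -n 64k -g 2g -i 0 -f /mnt/test/iozone.tmp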
As you can see, as the file size grows, the performance of the snapshotted LVs
drops, because every write to a snapshotted LV requires multiple I/O accesses
(the copy-on-write mechanism must copy the original block to the snapshot
before overwriting it).
Now, looking at what happens with the domU (just for the plain LV case for
now), I'm not sure how to interpret the results; maybe I need more knowledge
of how Xen and the OS handle I/O requests (e.g. how the processors are used).
From the second graph (again, excluding the 64KB file case), you can see just
two performance levels:

~300MB/s: up to the 32MB file test;
~58MB/s: once accesses to the physical disk are performed.
The first strange thing is that the domU seems to perform better than vanilla
Linux and, more interestingly, domU performance is not affected by the
processor cache limit (!).
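It might also help to watch the disk from dom0 while the domU test runs, to
see whether dom0 itself is buffering the guest's writes (just a guess on my
part, not something the graphs prove):

  # on dom0, while the domU benchmark is running:
  iostat -k 1   # throughput actually reaching the physical disk
  vmstat 1      # watch dom0 buffer/cache growth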
The second thing is the result for the 64MB and 128MB tests, which are the
only cases where the domU performs worse. Even though the domU's RAM is the
same as in the vanilla Linux configuration, it seems the domU is not able to
use all of its RAM to buffer writes to disk.
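To rule out a misconfiguration, one check would be to verify how much memory
the domU actually sees and how much of it the kernel is using as cache (domain
name as in the config above):

  # on dom0: memory actually assigned to the domain
  xm list vm

  # inside the domU: total RAM and how much is used as buffers/cache
  free -m
  grep -i -e memtotal -e '^cached' /proc/meminfo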
At the moment I'm not able to figure out what is going on; can you help me?