
[Xen-devel] a strange issue on disk scheduler in xen



Hi all,

In Xen 4.0 with Linux 2.6.18.8, noop is the default I/O scheduler in domU. In Xen
4.0 with Linux 2.6.31.8, CFQ is the default scheduler in domU. In both
setups, CFQ is the default scheduler in dom0. I compare the execution time of two
sequential read applications running in a domU, once with noop and once with CFQ
as the domU scheduler, on both kernels.
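
One way to check and switch the active scheduler inside the guest is via sysfs
(assuming the PV disk appears as /dev/xvda; adjust the device name for your setup):

    # show the available schedulers; the active one is shown in brackets
    cat /sys/block/xvda/queue/scheduler
    # switch to noop (or cfq) at runtime, no reboot needed
    echo noop > /sys/block/xvda/queue/scheduler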
The sequential read workload is generated by sysbench, reading 1 GB in total, like
this: sysbench --num-threads=16 --test=fileio --file-total-size=1G
--file-test-mode=seqrd prepare/run/cleanup
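
Written out, the three steps are roughly the following (a sketch of the invocation;
the execution times reported below come from the run step):

    sysbench --num-threads=16 --test=fileio --file-total-size=1G --file-test-mode=seqrd prepare
    sysbench --num-threads=16 --test=fileio --file-total-size=1G --file-test-mode=seqrd run
    sysbench --num-threads=16 --test=fileio --file-total-size=1G --file-test-mode=seqrd cleanup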

I see a strange phenomenon:

In Xen 4.0 with Linux 2.6.18.8:
  noop in domU:  seqrd1: 34.90s    seqrd2: 32.49s
  CFQ in domU:   seqrd1: 245.94s   seqrd2: 256.09s
In this case the execution time of seqrd under CFQ in domU is much
worse than under noop.

However, in Xen 4.0 with Linux 2.6.31.8:
  noop in domU:  seqrd1: 36.83s    seqrd2: 37.01s
  CFQ in domU:   seqrd1: 35.68s    seqrd2: 35.76s
Here the execution time of seqrd under CFQ in domU is almost the
same as under noop.

The contrast between noop and CFQ in domU is completely different on Linux
2.6.18.8 than on Linux 2.6.31.8, which is very strange. I have also repeated
the experiment on different machines, and the results are the same. Does
anyone know the reason?






 

