
Re: [Xen-users] Please help estimate number of the domUs



Ok. I performed the tests with the .icf from Iometer-config-file.zip (8 workers and a 120 GB maximum file size) on the RAID1+0 LUN; please see the attached results. In these tests the IOPS are much smaller. What is the real-world performance then? I'm a little confused. Also, is it right that I should not create one big LUN for the VMs, but rather a few LUNs sized for 20-30 VMs each, i.e. LUN size = (20-30 VMs) x (average VM size), for better performance?
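(As a worked example of what I mean: assuming an average VM of around 40 GB, which is only an illustration, 25 VMs x 40 GB is roughly 1 TB per LUN, so the 3.6 TB RAID1+0 set would be carved into three or four such LUNs instead of one big one.)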

16.01.2013 02:07, admin@xxxxxxxxxxx wrote:
Those numbers are higher than I would have expected given the hardware
you listed.  For mixed random access, I expected your hardware would
have delivered 2000 to 5000, not 49747.  Of course, I test with 100%
random and 67% writes.  You were testing with 60% random and 35%
writes.  There could be considerable caching involved (especially with
read tests), but it is hard to say without more data points.
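
For what it is worth, here is a rough back-of-the-envelope sketch (in
Python) of where my 2000 to 5000 estimate comes from. The per-disk IOPS
figure is an assumption on my part (typical 10k/15k SAS numbers), not a
measurement of your P2000:

  # Rough spindle-level estimate for 24 disks in RAID10 under a mixed
  # random workload.  The per-disk IOPS figure is an assumption.
  disks = 24
  iops_per_disk = 150            # assume ~140-180 for 10k SAS, more for 15k
  raid10_write_penalty = 2       # each logical write costs two disk writes

  raw_iops = disks * iops_per_disk       # ~3600 back-end IOPS

  def effective_iops(write_fraction):
      # Host-visible random IOPS once the RAID10 write penalty is applied.
      return raw_iops / ((1 - write_fraction) + write_fraction * raid10_write_penalty)

  print(round(effective_iops(0.67)))     # ~2156 -> my 100% random, 67% write mix
  print(round(effective_iops(0.35)))     # ~2667 -> closer to your 35% write mix

Random results far above that almost certainly mean the controller cache
(on the order of 2 GB per controller on the P2000 G3, if I remember
right) is absorbing a large share of the I/O.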

If you want to run more benchmarks with IOmeter, I would suggest trying
the ones that ZFSBuild uses from
http://www.zfsbuild.com/pics/Graphs/Iometer-config-file.zip . That zip
file contains an IOMeter.icf file.  More details about those benchmarks
are at http://www.zfsbuild.com/2012/12/14/zfsbuild2012-benchmark-methods/
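
If it helps, IOmeter can also be driven non-interactively; something
along the lines of "Iometer.exe /c IOMeter.icf /r results.csv" should
work, although the exact switches may differ between IOmeter versions,
so check the built-in help first.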

Anyway, I am a lot more familiar with the benchmarks from ZFSBuild.  If
you run those benchmarks and post those results, then I could give you a
very good idea of what level of real-world performance to expect.

Here are some InfiniBand-based benchmarks using the ZFSBuild IOmeter
file:
http://www.zfsbuild.com/2012/12/15/zfsbuild2012-infiniband-performance/

Here are some graphs of single Ethernet port benchmarks (comparing some
hardware from 2010 with hardware from 2012):
http://www.zfsbuild.com/2012/12/14/zfsbuild2012-performance-compared-to-zfsbuild2010/



On 1/15/2013 2:33 PM, Andrey wrote:
Just finished measuring SAN performance with IOmeter
(http://vmktree.org/iometer/OpenPerformanceTest.icf, 5 minutes per
test) on RAID10 (data, 16GB maximum test file) and RAID50 (backup, 8GB
maximum test file), both 3.6TB with one ext4 partition. The SAN is
configured in a dual-path configuration and the server has multipath
configured with 2 HBA adapters. Here are the results:

RAID 5+0:
----------------------------------------------------
| Test name                  | Avg IOPS | Avg MB/s |
----------------------------------------------------
| Max Throughput-100%Read    |    47528 |     1485 |
| RealLife-60%Rand-65%Read   |    24760 |      193 |
| Max Throughput-50%Read     |     6959 |      217 |
| Random-8k-70%Read          |    26612 |      207 |
----------------------------------------------------

RAID 1+0:
----------------------------------------------------
| Test name                  | Avg IOPS | Avg MB/s |
----------------------------------------------------
| Max Throughput-100%Read    |    44031 |     1375 |
| RealLife-60%Rand-65%Read   |    49474 |      386 |
| Max Throughput-50%Read     |    43002 |     1343 |
| Random-8k-70%Read          |    49930 |      390 |
----------------------------------------------------

Is caching in action, or is something else going on?
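(For context: the ~1400-1500 MB/s sequential reads are about where two
8 Gbit FC paths top out, roughly 800 MB/s each, and 45000-50000 random
IOPS seems far more than 24 spindles can deliver on their own.)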

13.01.2013 23:52, admin@xxxxxxxxxxx wrote:
You should measure the performance of the SAN using something like
IOmeter (running IOmeter on the hardware you plan to run XenServer or
XCP on).  Assuming you configure those drives in RAID10, I would guess
that SAN would deliver about 2,000 to 5,000 IOPS.  If you use RAID5
(please don't), then you will see far fewer IOPS during mixed read and
write tests.
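(Rough reasoning: on RAID5 every random write turns into two reads plus
two writes at the disk level, a penalty of about 4, versus two writes, a
penalty of 2, on RAID10, so under a write-heavy mix the same spindles
deliver roughly half the IOPS.)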

If you want to deploy 100 VMs onto that SAN, then each VM will only have
20-50 IOPS available (assuming RAID10).  The performance in each VM will
be less than fantastic.  If the VMs need to do any IO intensive tasks,
the owners of the VMs are probably going to complain about sluggish
performance.  I don't think the SAN you listed can deliver enough IOPS
to satisfy 100 VMs.
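For scale, 20-50 IOPS is less than what a single 7200 rpm SATA disk can
do on its own (roughly 75-100 random IOPS), so each VM would effectively
get less I/O capability than a desktop drive.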

On 1/13/2013 12:17 PM, Andrey wrote:
Well, the storage is a direct-connect HP P2000 G3 FC dual-controller
array with 24 x 600GB disks in a dual-path configuration (two HBA ports
-> two controller ports). I guess that is quite enough.

13.01.2013 20:45, admin@xxxxxxxxxxx wrote:
You will probably run out of disk IO before you run into any hard
limits
in XenServer or XCP.

What type of SAN are you going to use?  What type of network
interconnect will you use to link your XenServer/XCP nodes to your
SAN?
How many IOPS does your SAN deliver over your chosen network
interconnect?

On 1/13/2013 9:03 AM, Andrey wrote:
Sure, I will try. I see in the XenServer 6.1 FAQ that the maximum
supported number of guests is 150, and that it requires increasing
dom0_mem to a maximum of 4096 MB. It's obvious that the internal limits
are not quite realistic, so it will be a good result for me if we are
able to run at least 100 guests. That seems a more realistic number,
although some resources put the maximum number of VMs at 4-10 per CPU
core (so 32-80 in my case). But in all these cases 192 GB of RAM would
be redundant, I think.
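(If I read the docs correctly, on XenServer this is set on the Xen boot
line, apparently with something like "/opt/xensource/libexec/xen-cmdline
--set-xen dom0_mem=4096M,max:4096M" followed by a host reboot, though
the exact syntax should be checked against the 6.1 documentation.)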

With regards, Andrey

11.01.2013 16:43, Wei Liu wrote:
On Fri, 2013-01-11 at 12:24 +0000, Andrey wrote:
Thank you for the answer

I'm really considering the approach of creating as many DomUs as
possible with a typical load and getting practical info.

What about network capacity? Does this math apply to the network
resources as well? Should we shape the DomUs' bandwidth to prevent
network overload? Can the CPU be a bottleneck in this configuration?
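(On the shaping question: Xen's vif configuration appears to support a
rate= setting, e.g. rate=100Mb/s in the guest's vif line, and XenServer
seems to expose per-VIF rate limiting via xe vif-param-set with
qos_algorithm_type=ratelimit; worth double-checking against the
documentation.)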


The math I did was to show you some internal infrastructure limits
that I know of.

CPU / network overloading is another topic. TBH I haven't done stress
tests on CPUs and network.

Whether you will hit any bottlenecks in CPU / network relates closely
to your use case. Booting up DomUs and running some typical workload
is a good idea.


Wei.




Attached IOmeter results (from the tests described at the top of this message: ZFSBuild IOMeter.icf, 8 workers, 120 GB maximum test file, RAID1+0 LUN):

SERVER TYPE:
CPU TYPE / NUMBER:
HOST TYPE:
STORAGE TYPE / DISK NUMBER / RAID LEVEL:

Test name / Avg resp. time (ms) / Avg IOPS / Avg MB/s / CPU load
4K; 100% Read; 0% random 0.00 129456 505 8%
4K; 100% Read; 0% random 0.00 278037 1086 19%
4K; 100% Read; 0% random 0.00 241715 944 11%
4K; 100% Read; 0% random 0.00 275984 1078 7%
4K; 100% Read; 0% random 0.00 247454 966 16%
4K; 100% Read; 0% random 0.00 231215 903 1%
4K; 100% Read; 0% random 0.00 273517 1068 18%
4K; 100% Read; 0% random 0.00 246220 961 2%
4K; 100% Read; 0% random 0.00 267679 1045 16%
4k random write 223.51 54567 213 2%
4k random write 41.21 10060 39 0%
4k random write 40.97 10003 39 0%
4k random write 44.91 10964 42 0%
4k random write 44.43 10847 42 0%
4k random write 46.89 11448 44 0%
4k random write 48.23 11773 45 0%
4k random write 50.07 12225 47 0%
4k random write 46.74 11411 44 0%
4k random 67%read33%write 13.50 4915 19 0%
4k random 67%read33%write 16.33 5952 23 0%
4k random 67%read33%write 16.70 6077 23 0%
4k random 67%read33%write 16.40 5974 23 0%
4k random 67%read33%write 17.36 6317 24 0%
4k random 67%read33%write 16.03 5845 22 0%
4k random 67%read33%write 16.08 5856 22 0%
4k random 67%read33%write 16.18 5896 23 0%
4k random 67%read33%write 15.09 5497 21 0%
4k random read 0.00 2456 9 0%
4k random read 0.00 2575 10 0%
4k random read 0.00 2724 10 0%
4k random read 0.00 2956 11 0%
4k random read 0.00 3100 12 0%
4k random read 0.00 3394 13 0%
4k random read 0.00 3623 14 0%
4k random read 0.00 3948 15 0%
4k random read 0.00 4242 16 0%
8k random read 0.00 3032 23 0%
8k random read 0.00 3055 23 0%
8k random read 0.00 3034 23 0%
8k random read 0.00 3013 23 0%
8k random read 0.00 2998 23 0%
8k random read 0.00 2962 23 0%
8k random read 0.00 2975 23 0%
8k random read 0.00 2977 23 0%
8k random read 0.00 2999 23 0%
8k sequential read 0.00 71874 561 5%
8k sequential read 0.00 100502 785 5%
8k sequential read 0.00 113958 890 10%
8k sequential read 0.00 122670 958 1%
8k sequential read 0.00 138002 1078 16%
8k sequential read 0.00 83993 656 12%
8k sequential read 0.00 99369 776 11%
8k sequential read 0.00 131833 1029 1%
8k sequential read 0.00 136937 1069 1%
8k random write 266.95 32586 254 3%
8k random write 73.93 9024 70 0%
8k random write 79.13 9659 75 0%
8k random write 79.80 9740 76 0%
8k random write 82.17 10030 78 0%
8k random write 76.26 9309 72 0%
8k random write 80.70 9850 76 0%
8k random write 80.96 9883 77 0%
8k random write 75.95 9271 72 0%
8k random 67%write33%read 21.87 3984 31 0%
8k random 67%write33%read 23.80 4334 33 0%
8k random 67%write33%read 25.92 4723 36 0%
8k random 67%write33%read 25.30 4603 35 0%
8k random 67%write33%read 24.85 4522 35 0%
8k random 67%write33%read 24.58 4479 34 0%
8k random 67%write33%read 25.18 4587 35 0%
8k random 67%write33%read 24.62 4482 35 0%
8k random 67%write33%read 24.57 4470 34 0%
16k random 67%write33%read (1) 27.40 2495 38 0%
16k random 67%write33%read (1) 27.11 2469 38 0%
16k random 67%write33%read (1) 27.65 2517 39 0%
16k random 67%write33%read (1) 27.04 2465 38 0%
16k random 67%write33%read (1) 27.67 2517 39 0%
16k random 67%write33%read (1) 30.04 2738 42 0%
16k random 67%write33%read (1) 29.86 2721 42 0%
16k random 67%write33%read (1) 30.34 2766 43 0%
16k random 67%write33%read (1) 27.95 2547 39 1%
16k random write 112.39 6859 107 0%
16k random write 129.81 7923 123 0%
16k random write 132.35 8077 126 0%
16k random write 128.66 7852 122 0%
16k random write 136.28 8318 129 0%
16k random write 130.38 7957 124 0%
16k random write 134.22 8192 128 0%
16k random write 134.12 8186 127 0%
16k random write 126.41 7715 120 0%
16k sequential read 0.00 9798 153 0%
16k sequential read 0.00 16587 259 1%
16k sequential read 0.00 22844 356 2%
16k sequential read 0.00 34414 537 2%
16k sequential read 0.00 41958 655 3%
16k sequential read 0.00 51061 797 0%
16k sequential read 0.00 63785 996 23%
16k sequential read 0.00 36369 568 2%
16k sequential read 0.00 52557 821 8%
16k random read 0.00 2937 45 0%
16k random read 0.00 2968 46 0%
16k random read 0.00 2929 45 0%
16k random read 0.00 2885 45 0%
16k random read 0.00 2854 44 0%
16k random read 0.00 2814 43 0%
16k random read 0.00 2763 43 0%
16k random read 0.00 2697 42 0%
16k random read 0.00 2619 40 0%
32k random 67%write33%read 79.37 3615 112 1%
32k random 67%write33%read 72.54 3305 103 0%
32k random 67%write33%read 71.76 3272 102 0%
32k random 67%write33%read 65.56 2981 93 1%
32k random 67%write33%read 64.76 2950 92 0%
32k random 67%write33%read 53.11 2420 75 0%
32k random 67%write33%read 39.54 1804 56 0%
32k random 67%write33%read 48.66 2215 69 0%
32k random 67%write33%read 48.86 2219 69 1%
32k random read 0.00 957 29 0%
32k random read 0.00 978 30 0%
32k random read 0.00 981 30 0%
32k random read 0.00 979 30 0%
32k random read 0.00 961 30 0%
32k random read 0.00 985 30 0%
32k random read 0.00 990 30 0%
32k random read 0.00 999 31 0%
32k random read 0.00 1068 33 0%
32k sequential read 0.00 5547 173 1%
32k sequential read 0.00 10661 333 0%
32k sequential read 0.00 17719 553 1%
32k sequential read 0.00 22094 690 0%
32k sequential read 0.00 27978 874 5%
32k sequential read 0.00 33211 1037 0%
32k sequential read 0.00 35150 1098 8%
32k sequential read 0.00 6295 196 1%
32k sequential read 0.00 11433 357 1%
32k random write 366.76 11192 349 3%
32k random write 198.54 6059 189 0%
32k random write 190.92 5826 182 0%
32k random write 192.72 5881 183 0%
32k random write 192.95 5888 184 0%
32k random write 190.54 5814 181 0%
32k random write 193.08 5892 184 0%
32k random write 188.88 5764 180 0%
32k random write 201.50 6149 192 0%

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

