Re: [Xen-users] disk I/O problems under load? (Xen-3.4.1/x86_64)
[resending to list as I erroneously sent directly to Pasi]

On 01/Oct/09, at 09:12, Pasi Kärkkäinen wrote:
> You have only 2x 7200 rpm disks for 7 virtual machines and you're
> wondering why there's a lot of iowait? :)

Actually, no. I just reported the observed behaviour of the system. I fully expect, as load increases, to hit the disk I/O bottleneck before having any CPU/RAM/network problem.

Here we're talking about the MD 'virtual' device; it's a Linux kernel artifact...

>> Normally iostat in Domain-0 shows more or less high tps (200~300 under
>> normal load, even higher if I play around with rsync to artificially
>> trigger the problems) on the md device where all the DomU reside,

...and here we're talking about the infamous 7200rpm disks, which under normal load usually float at 20-30 iops.

>> and much less (usually just 10-20% of the previous value) on the two
>> physical disks sda and sdb that compose the mirror.

What I did _not_ expect was that all domains would literally freeze up for the whole duration of a single I/O-intensive job (the rsync example), or that any qemu-dm / HVM domain would crash (but I'm not sure whether this is related or not). We're not talking about a slow website, but about people not even connecting to the HTTP (or POP, FTP, etc.) servers, without the latter ever hitting any application-level hard limit like the number of worker threads/processes and the like.

I expected the CFQ ("completely fair queuing") I/O scheduler in Linux to be fair, maybe not completely fair but a little fair at least ;), in distributing I/O load in the system, so it would gracefully slow down as load increased. I don't have scientific tests, but my guess is that a single, non-virtualized system would keep up with that load, maybe slowing down during peaks but not freezing up anywhere.

While I'm still learning how to get the most out of Xen, I'm not 100% sure about my choice of kernel configurations (see my questions about I/O schedulers and tickless kernels) and hypervisor usage (see free RAM, CPU pinning, etc.). If my guess that the load should be 'light' on that system is correct, I'm probably just hitting some under-optimization issues in my setup.

Along the same lines, I can add two other disks to that system (no networked storage for now, I have to rely on 4 local SATA bays); I'll study up on how to get the most iops out of the hardware I have.

Thanks,
--
Luca Lesinigo

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users