Re: [Xen-ia64-devel] problems with smp
On Thu, 2007-02-08 at 16:02 -0800, David Brown wrote:
> > What kind of load are you running?  If it involves I/O, then all that
> > has to go through dom0 and can bottleneck the other domains.  There's
> > also some overhead in scheduling between vCPUs and time spent running
> > in Xen itself, so you may not be able to reach that 200% total, but
> > I'd think you could get closer than 150%.  Thanks,
>
> Yeah, it's very I/O driven, running a distributed filesystem under
> xen... so could the xen kernel be switching too much then? Is there
> any way to provide some nice-like priority level to an OS so that it
> can get most of the running time? Would pci sharing rather than block
> sharing help (if there's more than one block per pci device)?

You can tune the Xen credit scheduler to do effectively what you're
asking (I think).  AFAIK, dom0 already gets some scheduling priority.
See here for details on tweaking the credit scheduler:

   http://wiki.xensource.com/xenwiki/CreditScheduler

You can't really "share" PCI devices.  At the PCI function level, one
and only one domain can own a function.  This sometimes means you can
split a dual-port card between two domains, if the device exposes the
two ports as two separate functions.  So you could give each domain its
own NIC and SCSI device, if you have enough free PCI slots.  I'm not
sure that would help your situation, though.

There's probably some tuning you can do on the vbd side too, such as
using LVM volumes or raw block devices instead of disk images, if
you're not already.

	Alex

--
Alex Williamson                             HP Open Source & Linux Org.

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
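For reference, a minimal sketch of the credit-scheduler tuning mentioned
above, using the `xm sched-credit` interface (the domain name "domU1" and
the weight/cap values here are illustrative, not recommendations; this
must run on a Xen host with the credit scheduler active):

```shell
# Show the current weight and cap for dom0
xm sched-credit -d Domain-0

# Give dom0 twice the default weight (default is 256), so it gets
# proportionally more CPU time when domains compete -- useful since
# all guest I/O is serviced through dom0's backends
xm sched-credit -d Domain-0 -w 512

# Optionally cap a guest at 50% of one physical CPU so it cannot
# starve dom0 (0 means no cap)
xm sched-credit -d domU1 -c 50
```

Weights are relative (a domain with weight 512 gets twice the CPU of one
with weight 256 under contention), while caps are absolute limits, so
raising dom0's weight is usually the gentler first step.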
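On the vbd side, the difference is in the domU config's `disk` line. A
sketch, assuming a guest config named as below and an LVM volume group
`vg0` (all paths and names here are illustrative):

```python
# Xen domU config fragment (xm config files are Python syntax).

# File-backed image: served via the loop driver in dom0, which adds
# an extra layer of caching/copying and tends to bottleneck under
# heavy I/O:
#   disk = ['file:/var/xen/domU1.img,xvda,w']

# LVM-backed raw block device: the backend talks to the block layer
# directly, avoiding the loop-driver overhead:
disk = ['phy:/dev/vg0/domU1,xvda,w']
```

The `phy:` prefix works for any raw block device (a whole disk or a
partition), not just LVM volumes; LVM is simply convenient for resizing
and snapshotting guest disks.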