Re: [Xen-devel] Working around xenstored performance?
On Thu, Jul 31, 2008 at 10:41:15AM +0200, Pim van Riezen wrote:
> Good Day,
>
> For monitoring purposes, we have a process that sends a 'xend.domains'
> methodCall to xend at timed intervals. Our problem here is that this
> call, if the extra parameter is not '0', is pretty slow; xenstored
> will run at 40% CPU grinding over its database before coming up with
> a reply, hardly something you want to do every 15 seconds. With the
> parameter set to 0, the response is instantaneous, but it lacks any
> information beyond the list of currently running guests.
>
> Is there a less intrusive way to get just the CPU counter for a
> specific guest through /proc/xen or some such? I'd also be perfectly
> happy to fish the information out of xenstored's tdb file, but all of
> its records seem to be in an undocumented binary encoding, so that's
> not really helping either.

The quickest workaround is to mount /var/lib/xenstore on tmpfs at boot
time. This should reduce the overhead by something like ~50%.

> Our set-up is the Fedora branch of 3.0 (but from what I can read, 3.1
> still has this problem).

If you use libvirt, then once you have fetched a 'virDomainPtr' object,
you can query the CPU / memory utilization of the guest without it ever
hitting xenstore/xend again. The virDomainGetInfo() method is optimized
so that it will either make a direct hypercall (very fast) or, if
running as non-root, use a setuid proxy to make the hypercall (fairly
fast).

Regards,
Daniel

--
|: Red Hat, Engineering, London  -o-  http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
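[Editor's note: the tmpfs workaround mentioned in the reply can be set up roughly as follows. This is a hedged sketch, not from the original mail: the mount size and mode are assumptions, and the exact directory xenstored uses for its tdb database may differ between distributions, so verify the path before applying it.]

```shell
# Sketch of the tmpfs workaround (paths/sizes are assumptions, verify
# against your distribution before using).

# /etc/fstab entry so xenstored's database lives in RAM from boot:
#   tmpfs  /var/lib/xenstore  tmpfs  size=16m,mode=0755  0  0

# One-off equivalent, run as root before xenstored starts:
mount -t tmpfs -o size=16m,mode=0755 tmpfs /var/lib/xenstore
```

Note that a tmpfs-backed database is lost on reboot, which is acceptable here because xenstored rebuilds its state as domains are (re)started.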
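[Editor's note: the virDomainGetInfo() approach from the reply can be sketched as below. This is an illustrative example, not code from the thread: it assumes libvirt development headers are installed (compile with `gcc poll_cpu.c -lvirt`), and the default domain name "guest1" is a placeholder.]

```c
/* Sketch: poll one guest's CPU counter via libvirt instead of sending a
 * 'xend.domains' methodCall to xend, avoiding the xenstored walk.
 * Assumes libvirt is installed; "guest1" is a placeholder domain name. */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(int argc, char **argv)
{
    const char *name = (argc > 1) ? argv[1] : "guest1";
    virConnectPtr conn;
    virDomainPtr dom;
    virDomainInfo info;

    /* A read-only connection suffices for monitoring; as non-root this
     * is the path that goes through the setuid proxy. */
    conn = virConnectOpenReadOnly(NULL);
    if (conn == NULL) {
        fprintf(stderr, "failed to connect to the hypervisor\n");
        return 1;
    }

    dom = virDomainLookupByName(conn, name);
    if (dom == NULL) {
        fprintf(stderr, "domain '%s' not found\n", name);
        virConnectClose(conn);
        return 1;
    }

    /* virDomainGetInfo() fills in state, memory and cumulative CPU time
     * via a direct hypercall where possible, without touching xenstored. */
    if (virDomainGetInfo(dom, &info) == 0) {
        printf("%s: cpuTime=%llu ns, memory=%lu KiB, vcpus=%hu\n",
               name, (unsigned long long)info.cpuTime,
               info.memory, info.nrVirtCpu);
    }

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}
```

Run from cron or a monitoring loop, this replaces the periodic xend query: the per-call cost is one hypercall rather than a scan of xenstored's database.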