Re: [Xen-devel] [PATCH 0 of 3 v5/leftover] Automatic NUMA placement for xl
On Fri, 2012-07-20 at 13:00 +0100, Ian Campbell wrote:
> On Fri, 2012-07-20 at 12:43 +0100, Andre Przywara wrote:
> > On 07/20/2012 01:07 PM, David Vrabel wrote:
> > > On 16/07/12 18:13, Dario Faggioli wrote:
> > >> Hello again,
> > >>
> > > I think the tests should test a representative sample of common system
> > > configurations, available memory and VM memory requirements. I'd
> > > suggest you'd be looking at 100s of test cases here for reasonable
> > > coverage.
> > >
> > > One method would be to start with various 'empty' systems and pile as
> > > many differently sized VMs as will fit. You may want both a fixed set
> > > of reproducible tests and random ones.
> >
> > 1. If we focus on placement only, I have good experience with
> > ttylinux.iso. Those live distros can be killed easily at any time and
> > you just need one instance of the .iso file on the disk.
> > 2. # xl vcpu-list | sed -e 1d | sort -n -k 7 | tr -s \ | cut -d\ -f7 | uniq -c
> > This gives the number of VCPUs per node (sort of ;-)
>
> Ideally you wouldn't need a Xen system at all for this, you just want a
> database of input configurations (host NUMA setup, existing guest
> layout) and hypothetical new guests and their mapping to the expected
> output. You can then feed these offline to the algorithm and validate
> that the output is the expected one.
>
Ok, I see. I like this and I think it should be done. I'll look into it.

Thanks and Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
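
A minimal sketch, in Python, of the kind of offline harness Ian describes:
a database of input configurations mapped to expected placements, fed to
the algorithm with no Xen system underneath. Everything here is
illustrative: the Node/Case types, the case table, and place_domain() are
assumptions standing in for however the real placement code is exposed,
and the greedy placement below is only a stand-in so the harness runs
end-to-end, not the actual algorithm from this series.

    #!/usr/bin/env python
    # Table-driven, offline test harness: input configurations (host
    # NUMA setup, with memory already used by existing guests folded
    # into free_mem) mapped to the expected placement.

    from collections import namedtuple

    Node = namedtuple('Node', ['free_mem', 'ncpus'])   # one host NUMA node
    Case = namedtuple('Case', ['nodes', 'guest_mem', 'expected'])

    CASES = [
        # Empty, symmetric 2-node host: either node fits, but a
        # deterministic algorithm should always give the same answer
        # (assumed here to be node 0).
        Case(nodes=[Node(4096, 4), Node(4096, 4)], guest_mem=1024, expected={0}),
        # Node 0 half full: the emptier node 1 should win.
        Case(nodes=[Node(2048, 4), Node(4096, 4)], guest_mem=1024, expected={1}),
        # Too big for any single node: a two-node placement is expected.
        Case(nodes=[Node(2048, 4), Node(2048, 4)], guest_mem=3072, expected={0, 1}),
    ]

    def place_domain(nodes, guest_mem):
        """Stand-in for the algorithm under test: greedily confine the
        guest to the fewest, emptiest nodes that can hold it. Replace
        with a call into the real placement code."""
        order = sorted(range(len(nodes)), key=lambda i: -nodes[i].free_mem)
        picked, mem = set(), 0
        for i in order:
            picked.add(i)
            mem += nodes[i].free_mem
            if mem >= guest_mem:
                return picked
        return set()                    # does not fit at all

    def main():
        failed = 0
        for i, case in enumerate(CASES):
            got = place_domain(case.nodes, case.guest_mem)
            if got != case.expected:
                failed += 1
                print("case %d: expected %s, got %s" % (i, case.expected, got))
        print("%d/%d cases failed" % (failed, len(CASES)))

    if __name__ == '__main__':
        main()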
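
David's "pile as many differently sized VMs as will fit" suggestion can
feed the same harness. A sketch under the same caveats (all names are
illustrative, not part of any real xl/libxl interface); seeding the RNG
gives the reproducible-yet-random cases he asks for:

    import random

    def generate_layout(nnodes, node_mem, seed,
                        sizes=(128, 256, 512, 1024, 2048)):
        """Start from an empty host and pack randomly sized guests until
        no node can take even the smallest one. Returns the per-node free
        memory left over and the list of (node, mem) guests placed."""
        rng = random.Random(seed)        # fixed seed => reproducible case
        free = [node_mem] * nnodes
        guests = []
        while max(free) >= min(sizes):
            mem = rng.choice([s for s in sizes if s <= max(free)])
            node = free.index(max(free)) # naive: always the emptiest node
            free[node] -= mem
            guests.append((node, mem))
        return free, guests

    # Example: a 4-node host with 4 GiB per node, packed with seed 42.
    free, guests = generate_layout(4, 4096, 42)
    print("%d guests placed, free per node: %s" % (len(guests), free))

Each generated layout translates directly into a Node list for the harness
above; a handful of hand-written fixed cases plus a range of seeds gets
close to the 100s of test cases David suggests.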