Re: [Xen-API] VCPUs-at-startup and VCPUs-max with NUMA node affinity
Anil wrote:
> Is there a memory-swap operation available to exchange pages from one
> NUMA domain for pages from another? I'm thinking of a scenario where
> CPU hotplugs have led to allocated memory being on the wrong NUMA
> domain entirely. Is the only way for the guest to resolve this by live
> migrating back to localhost so that it goes through a suspend/resume
> cycle?

Whilst a localhost migrate would do the job, it needs enough spare memory on
the target node to do it. Dario Faggioli over on xen-devel is working on
memory migration, primarily for rebalancing nodes, but it would apply here
too. Using VCPUs-max to do placement means that vCPU hotplugging would all
stay within a node anyway, so this shouldn't be a problem.

> Right now we see performance like this all the time (on non-NUMA Xen)
> since memory is usually allocated from a single NUMA domain; e.g. on a
> 48-core Magny-Cours, notice unix domain socket latency grows worse as
> it spreads away from vCPU 0 (which also happens to be on NUMA domain
> 0); http://www.cl.cam.ac.uk/research/srg/netos/ipc-bench/details/tmpwlnFNM.html

By non-NUMA I assume you mean numa=off, as was the default before 4.0 (or
thereabouts)? I think since then memory is striped, so everybody should
suffer equally.

Cheers,

James
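
For illustration only, here is a minimal sketch of the two things discussed
above, using the XenAPI Python bindings: capping VCPUs-max (and pinning) so
hotplugged vCPUs stay on one node, and a localhost migrate to get the guest's
memory reallocated. The host URL, credentials, VM label, node size and CPU
mask are all assumptions, not values from this thread.

#!/usr/bin/env python
# Illustrative sketch only; the host, credentials, VM name and mask below
# are placeholders, not values taken from the discussion above.
import XenAPI

def configure_numa_friendly_vcpus(session, vm):
    """Keep the vCPU ceiling within one NUMA node (assumed 6 pCPUs on node 0).

    VCPUs-max can only be changed while the VM is halted.
    """
    session.xenapi.VM.set_VCPUs_max(vm, "6")
    session.xenapi.VM.set_VCPUs_at_startup(vm, "2")
    # Pin all vCPUs to node 0's physical CPUs so hotplugged vCPUs stay local
    # (mask format and values are an assumption; adjust to your topology).
    params = session.xenapi.VM.get_VCPUs_params(vm)
    params["mask"] = "0,1,2,3,4,5"
    session.xenapi.VM.set_VCPUs_params(vm, params)

def localhost_migrate(session, vm):
    """Live-migrate the (running) VM back onto the host it already resides on.

    This is the suspend/resume-style cycle mentioned above: memory is
    reallocated on the way through, so it can land on the right node,
    provided that node has enough free memory for the copy.
    """
    host = session.xenapi.VM.get_resident_on(vm)
    session.xenapi.VM.pool_migrate(vm, host, {"live": "true"})

if __name__ == "__main__":
    session = XenAPI.Session("https://xenserver.example")       # placeholder host
    session.xenapi.login_with_password("root", "password")      # placeholder creds
    try:
        vm = session.xenapi.VM.get_by_name_label("my-guest")[0] # placeholder VM
        configure_numa_friendly_vcpus(session, vm)   # run while the VM is halted
        # ... start the VM, run it, hotplug vCPUs as needed, then if memory
        # has ended up on the wrong node:
        # localhost_migrate(session, vm)
    finally:
        session.xenapi.session.logout()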