Re: [Xen-devel] SMP guest support in unstable tree.
Christian Limpach wrote:
> On Tue, Jan 04, 2005 at 10:41:57AM -0600, Andrew Theurer wrote:
>> Do you think there would be any room for "dedicated" cpus in a domain, like a one-to-one mapping of physical cpu to domain cpu? I am asking because I think there are situations where (a) one would want to discretely divide a large system, in particular one with NUMA characteristics, where one could dedicate cpus and memory close to each other, and
> We have a one-to-one mapping (pinning) of virtual cpus to physical cpus -- if you don't allocate multiple virtual cpus to the same physical cpu, then the physical cpu becomes implicitly dedicated to that domain.

OK, great, this is essentially the option I wanted, thanks! Hopefully soon I can get some performance tests going and we can see if there are any issues here. My other concern on larger (multi-NUMA-node) systems is that, even with one-to-one mapping, the hardware topology (NUMA) information does not make it to the SMP guest. It would be nice to take advantage of the NUMA work developed in the Linux kernel over the last two years. I am not sure exactly what the impact could be.

> This mapping can be changed dynamically, at least at the Xen level -- the tools don't have support for changing the mapping of SMP guests yet. We also don't support enforcing allocation policies yet.
>> (b) perhaps in this one-to-one mapping, there might be less overhead of managing cpus in a domain, vs. (I assume) some sort of timesharing of a physical cpu among many domains, and even more than one virtual cpu in just one domain.
> I don't think there's significant overhead if there's only a single virtual cpu pinned to one physical cpu, so I wouldn't expect a noticeable performance advantage if we handled this case differently.

Anyway, I am mostly curious at this point.
This is just what I have seen in the ppc/POWER5 world: a choice of dedicated cpus (though, if idle, a dedicated cpu can be "shared" if desired) or virtual cpus (up to 64, I think) backed by N physical cpus.

> I think we need load balancing software, and we also need to get measurements to see what the cost of moving virtual cpus between physical cpus (or hyperthreads) is, and what impact service domains have on the scheduling and load balancing decisions.

Agreed, thanks for the info.

-Andrew Theurer
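[Editorial illustration] The one-to-one pinning idea discussed above -- give each virtual cpu its own physical cpu, keeping a domain's cpus on a single NUMA node so memory stays close -- can be sketched as a small planning helper. Everything here (the node layout, the `plan_pinning` name) is illustrative only and was not part of the Xen tools at the time:

```python
# Hypothetical sketch: build a one-to-one vcpu -> physical-cpu pinning
# plan that keeps each domain's virtual cpus on a single NUMA node, so
# each physical cpu becomes implicitly dedicated to one domain.

def plan_pinning(domains, nodes):
    """domains: {name: vcpu_count}; nodes: {node_id: [pcpu, ...]}.
    Returns {name: {vcpu_index: pcpu}}; raises ValueError if a
    domain cannot fit on any single node."""
    # Work on a copy so callers' node lists are not consumed.
    free = {node: list(cpus) for node, cpus in nodes.items()}
    plan = {}
    for name, nvcpus in domains.items():
        # First-fit: pick the first node with enough unallocated cpus.
        for node, cpus in free.items():
            if len(cpus) >= nvcpus:
                plan[name] = {v: cpus.pop(0) for v in range(nvcpus)}
                break
        else:
            raise ValueError(f"no NUMA node can host {name} ({nvcpus} vcpus)")
    return plan

if __name__ == "__main__":
    nodes = {0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}   # assumed 2-node box
    print(plan_pinning({"dom1": 2, "dom2": 4}, nodes))
```

With the assumed two-node layout, dom1 lands on node 0 and dom2 (which no longer fits on node 0) lands whole on node 1, so neither domain's vcpus straddle a memory boundary.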