Re: [Xen-devel] Cpu pools discussion
On 30/07/2009 06:46, "Juergen Gross" <juergen.gross@xxxxxxxxxxxxxx> wrote:
>> Another alternative might be to create a 'hypervisor thread', either
>> dynamically, or a per-cpu worker thread, and do the work in that. Of course
>> that has its own complexities and these threads would also have their own
>> interactions with cpu pools to keep them pinned on the appropriate physical
>> cpu. I don't know whether this would really work out simpler.
>
> There should be an easy solution for this: What you are suggesting here
> sounds like a "hypervisor domain" similar to the idle domain, but with high
> priority and normally all vcpus blocked.
>
> The interactions of this domain with cpupools would be the same as for the
> idle domain.
>
> I think this approach could be attractive, but the question is if the pros
> outweigh the cons. OTOH such a domain could open interesting opportunities.

I think especially if cpupools are added into the mix then this becomes more
attractive than the current approach. The other alternative is to modify the
two existing problematic callers to work okay from softirq context (or not
need continue_hypercall_on_cpu() at all, which might be possible at least in
the case of CPU hotplug). I would be undecided between these two just now --
it depends on how easily those two callers can be fixed up.

CPU hotplug raises a question in relation to cpupools, by the way. What pool
does a cpu get added to when it is brought online? And what do you do when
someone offlines a CPU, especially when it is the last in its pool? In that
latter case, have you not considered it, do you refuse the offline, or do you
somehow break the pool affinity so that domains belonging to it can run
elsewhere?

 -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
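[Editor's illustration] To make the two hotplug questions above concrete, here is a minimal standalone C sketch of one possible policy: a newly onlined CPU joins a designated default pool, and offlining the last CPU of a pool is refused rather than breaking the pool's affinity. This is not Xen code; all names (cpu_pool, pool_cpu_online, pool_cpu_offline) are invented for this example and do not reflect the actual cpupools implementation.

    /*
     * Hypothetical sketch only -- not Xen code.  Shows one possible answer
     * to the hotplug questions: online -> default pool, offline of the
     * last CPU in a pool -> refused.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_POOLS 8

    struct cpu_pool {
        int id;
        unsigned int ncpus;   /* online CPUs currently assigned to the pool */
    };

    static struct cpu_pool pools[MAX_POOLS] = { { .id = 0 } };

    /* Policy: a CPU brought online always lands in pool 0 (the default pool). */
    static struct cpu_pool *pool_cpu_online(int cpu)
    {
        struct cpu_pool *p = &pools[0];
        p->ncpus++;
        printf("cpu%d -> pool%d (now %u cpus)\n", cpu, p->id, p->ncpus);
        return p;
    }

    /* Policy: refuse to offline the last CPU of a pool instead of breaking
     * the affinity of the domains assigned to it. */
    static bool pool_cpu_offline(struct cpu_pool *p, int cpu)
    {
        if (p->ncpus == 1) {
            printf("refusing to offline cpu%d: last cpu in pool%d\n", cpu, p->id);
            return false;
        }
        p->ncpus--;
        printf("cpu%d removed from pool%d (now %u cpus)\n", cpu, p->id, p->ncpus);
        return true;
    }

    int main(void)
    {
        struct cpu_pool *p = pool_cpu_online(1);   /* hotplug: cpu1 appears     */
        pool_cpu_offline(p, 1);                    /* refused: last cpu in pool */
        pool_cpu_online(2);                        /* second cpu joins pool 0   */
        pool_cpu_offline(p, 1);                    /* now allowed               */
        return 0;
    }

The alternative policy hinted at in the mail (breaking affinity so the pool's domains can run elsewhere) would replace the refusal branch with a migration of the pool's domains to another pool; which behaviour is preferable is exactly the open question being asked.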