Re: [Xen-devel] [PATCH v10 5/9] libxl: vnuma types declararion
On Thu, 2014-09-04 at 11:35 +0100, Ian Campbell wrote:
> On Wed, 2014-09-03 at 23:48 -0400, Elena Ufimtseva wrote:
> >
> > Ok, now going back to these types.
> >
> >     ("vnodes",         uint32),
> >     ("vmemranges",     uint32),
> >     ("vnuma_mem",      Array(uint64, "num_vnuma_mem")),
> >     ("vnuma_vcpumap",  Array(uint32, "num_vnuma_vcpumap")),
> >     ("vdistance",      Array(uint32, "num_vdistance")),
> >     ("vnuma_vnodemap", Array(uint32, "num_vnuma_vnondemap")),
> >
> > vnodes - number of vnuma nodes;
> > vmemranges - number of vmemranges (not used, but support was added
> > in Xen); not sure how to deal with this. Should I just remove it,
> > as it's not used by the tool stack yet?
>
> I think that would be best, otherwise we risk tying ourselves to an
> API which might not actually work when we come to try and use it.
>
I agree. FTR, this is mostly down to me and other hypervisor folks
noticing, very late, that it would have been nice to have multirange
support. Elena adapted the hypervisor patches quite quickly, so,
thanks Elena! :-)

For the tools part, I also think it'd be best to keep this out for
now.

> > vnuma_mem - vnode memory sizes (conflicts now with vmemranges);
> > vnuma_vcpumap - map of vnuma nodes and vcpus (length is the number
> > of vcpus, each element is a vnuma node number);
> > vdistance - distances, num_vdistance = vnodes * vnodes;
> > vnuma_vnodemap - mapping of vnuma nodes to physical NUMA nodes;
> >
> > And vnuma_autoplacement is probably not needed here at all.
>
> I *REALLY* don't think this is necessary.
>
> Then I would take the rest and wrap them in a libxl_vnode_info
> struct:
>
> libxl_vnode_info = Struct("libxl_vnode_info", [
>     ("mem", MemKB), # Size of this node's memory
>     ("distances", Array(...)), # Distances from this node to the
>                                # others (single dimensional array)
>     ("pnode", TYPE), # which pnode this vnode is associated with
> ])
>
> Then in the (b|c)_config:
>
>     ("vnuma_nodes", Array(libxl_vnode_info...))
>
> This avoids the issue of having to have various num_* which are
> expected to always be identical. Except for
> vnuma_nodes[N].num_distances, but I think that is unavoidable.
>
My +1 to this way of arranging the interface. (I've put a slightly
more complete sketch of it at the bottom of this mail.)

> The only thing I wasn't sure of was how to fit the vcpumap into
> this. AIUI this is a map from vcpu number to vnode number (i.e. for
> each vcpu it tells you which vnuma node it is part of).
>
Correct.

> I presume it isn't possible/desirable to have a vcpu be part of
> multiple vnuma nodes.
>
> One possibility would be to have a libxl_bitmap/cpumap as part of
> the libxl_vnode_info (containing the set of vcpus which are part of
> this node), but that would be prone to allowing a vcpu to be in
> multiple nodes.
>
Not ideal, but not too bad from my point of view.

> Another possibility would be an array in (b|c)_config as you have
> now, which would be annoying because its num_foo really ought to be
> the ->vcpus count and the IDL doesn't really allow that.
>
> I'm not sure which is the lesser of the two evils here.
>
Sure, neither is ideal. I think I like the former, the one using
libxl_cpumap-s, better. It looks more consistent to me, and it fits
nicely with the array-of-structs way of restructuring the API that
Ian described above.

It does allow the user to ask for a vcpu to be part of more than one
node, yes, but identifying and at least warning about this, or even
sanity checking things, should not be impossible (a bit expensive,
perhaps, but probably not too much for a toolstack-level check).
Something like the second sketch below, for instance.
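Just to make the above concrete, here is how the filled-in IDL could
look, with the cpumap variant folded in. This is only a sketch: the
choice of uint32 for pnode, libxl_bitmap for the per-node vcpus, and
the "num_vnuma_nodes" length field name are my assumptions, nothing
we have settled on. (IIRC the IDL prepends the libxl_ namespace to
the type name itself, hence Struct("vnode_info", ...).)

    libxl_vnode_info = Struct("vnode_info", [
        # Size of this node's memory
        ("memkb",     MemKB),
        # Distances from this node to the others (single dimensional
        # array, so distances[M] is the distance from this node to M)
        ("distances", Array(uint32, "num_distances")),
        # Which pnode this vnode is associated with (assumed uint32)
        ("pnode",     uint32),
        # The cpumap variant: which vcpus belong to this node
        ("vcpus",     libxl_bitmap),
    ])

and then, in the (b|c)_config:

    ("vnuma_nodes", Array(libxl_vnode_info, "num_vnuma_nodes")),

This keeps everything about a vnode in one place, and the only per
node length field left is num_distances, as Ian says.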
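And about the sanity checking, what I have in mind is really simple:
walk the vcpus and count how many nodes each one appears in. Here is
a quick Python mock-up of the logic (names are made up; the real
thing would of course be C in libxl, using the libxl_bitmap helpers):

    def check_vnuma_vcpu_assignment(vnuma_vcpus, nr_vcpus):
        """Check that each vcpu belongs to exactly one vnuma node.

        vnuma_vcpus: one set of vcpu numbers per vnuma node (i.e.,
        the per-node cpumaps), e.g. [{0, 1}, {2, 3}] for two nodes.
        """
        warnings = []
        for vcpu in range(nr_vcpus):
            # All the nodes claiming this vcpu.
            nodes = [n for n, vcpus in enumerate(vnuma_vcpus)
                     if vcpu in vcpus]
            if not nodes:
                warnings.append("vcpu %d is in no vnuma node" % vcpu)
            elif len(nodes) > 1:
                warnings.append("vcpu %d is in multiple vnuma nodes: %s"
                                % (vcpu, nodes))
        return warnings

    # vcpu 1 appears in both nodes, vcpu 3 in none: both get flagged.
    for w in check_vnuma_vcpu_assignment([{0, 1}, {1, 2}], 4):
        print(w)

It is O(nr_vcpus * nr_nodes), which is the "a bit expensive" part,
but both numbers are small enough that I don't think it matters for
a once-per-domain-creation check.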
Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)