Re: [Xen-devel] [RFC v2][PATCH 1/3] docs: design and intended usage for NUMA-aware ballooning
>>>> Dario Faggioli <dario.faggioli@xxxxxxxxxx> 08/17/13 1:31 AM >>>
>On Fri, 2013-08-16 at 10:09 +0100, Jan Beulich wrote:
>> I believe this thinking of yours stems from the fact that in Linux the
>> page control structures are associated with nodes by way of the
>> physical memory map being split into larger pieces, each coming from
>> a particular node. But other OSes don't need to follow this model,
>> and what you propose would also exclude extending the spanned
>> nodes set if memory gets ballooned in that's not associated with
>> any node the domain so far was "knowing" of.
>>
>I agree on the first part of this comment... Too much Linux-ism in the
>description of what should be a generic model.
>
>The second part (the one about what happens if memory comes from an
>"unknown" node), I'm not sure I get what you mean.
>
>Suppose we have a guest G with 2 v-nodes, where pages in v-node 0 (say,
>pages 0,1,2..N-1) are backed by frames on p-node 2, while pages in
>v-node 1 (say, N,N+1,N+2..2N-1) are backed by frames on p-node 4,
>because, at creation time, either the user or the toolstack decided
>this was the way to go.
>So, if page 2 is ballooned down, then when ballooning it up again we
>would like to retain the fact that it is backed by a frame on p-node 2,
>and we could ask Xen to try to make that happen. On failure (e.g., no
>free frames on p-node 2), we could either fail or have Xen allocate the
>memory somewhere else, i.e., not on p-node 2 or p-node 4, and live with
>it (i.e., map G's page 2 there), which I think is what you mean by
><<node the domain so far was "knowing" of>>, isn't it?

Right. Or the guest could choose to create a new node on the fly.

Jan
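To make the "try the original p-node, then fall back" policy above concrete, here is a minimal sketch of what the guest side could look like. This is not code from the patch series: it assumes the XENMEM_populate_physmap hypercall and the XENMEMF_node()/XENMEMF_exact_node() flags from Xen's public memory.h are usable by the guest kernel (a guest's local copy of the interface header may name fields differently), and it leaves out how the guest learns which p-node originally backed a given pfn, since providing that mapping is precisely what the interface discussed in this thread is about.

/*
 * Minimal sketch only, not code from this series.  It shows a guest
 * balloon driver asking Xen to back a ballooned-in page with a frame
 * from the p-node that originally backed it (p-node 2 in the example
 * above), falling back to "any node" if that allocation fails.
 * Flag and field names follow Xen's public xen/include/public/memory.h.
 */
#include <linux/errno.h>
#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>

static int balloon_up_page(xen_pfn_t pfn, unsigned int pnode)
{
    struct xen_memory_reservation res = {
        .nr_extents   = 1,
        .extent_order = 0,
        .domid        = DOMID_SELF,
        /* First attempt: insist on the original p-node. */
        .mem_flags    = XENMEMF_exact_node(pnode),
    };
    long rc;

    set_xen_guest_handle(res.extent_start, &pfn);

    rc = HYPERVISOR_memory_op(XENMEM_populate_physmap, &res);
    if (rc == 1)
        return 0;   /* page is again backed by a frame on pnode */

    /* Fallback: accept a frame from any p-node and live with it. */
    res.mem_flags = 0;
    rc = HYPERVISOR_memory_op(XENMEM_populate_physmap, &res);

    return rc == 1 ? 0 : -ENOMEM;
}

If both attempts fail (or if the fallback frame comes from a p-node the guest has no v-node for), the guest could, as suggested above, create a new v-node on the fly for such frames instead of mapping them into an existing one.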