
Re: [Xen-devel] [RFC v2][PATCH 1/3] docs: design and intended usage for NUMA-aware ballooning

(re-adding xen-devel to Cc)

>>> On 16.08.13 at 12:12, Li Yechen <lccycc123@xxxxxxxxx> wrote:
> On Fri, Aug 16, 2013 at 5:09 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>> And finally, coming back to what Tim had already pointed out - doing
>> things the way you propose can cause an imbalance in the
>> ballooned down guest, penalizing it in favor of not penalizing the
>> intended consumer of the recovered memory. Therefore I wonder
>> whether, without any new xenstore node, it wouldn't be better to
>> simply require conforming balloon drivers to balloon out memory
>> evenly across the domain's virtual nodes.
> I should say sorry here, but I don't quite understand the "whether" part.
> The "new xenstore node" just stores the request from the user, so that
> the balloon driver can read it. It's similar to ~/memory/target. This new
> node could store either a p-nodeid or a v-nodeid, depending on whether the
> interface we discussed above is placed inside Xen or inside the guest OS.
> Do you have a better way to pass this request to the balloon driver,
> instead of creating a new xenstore node? I'd be very happy if you do,
> since I don't like the way I've done it (creating a new node) either!

As said - I'd want you to evaluate a model without such a new node,
with the requirement instead placed on the balloon driver to balloon
out pages evenly across the guest's virtual nodes.

> You are exactly right again: this design is only for the Linux balloon
> driver. On Linux, the balloon driver can choose which pages to balloon
> in/out, so we can associate the pages with a v-nodeid.
> For other kinds of architectures, please forgive me - I haven't thought
> that far ahead...

The abstract model shouldn't take OS implementation details or
policies into account; the implementation later of course can (and
frequently will need to).

