
Re: [Xen-devel] [PATCH v1][RFC] xen balloon driver numa support, libxl interface

On lun, 2013-08-12 at 15:00 +0100, Jan Beulich wrote:
> >>> On 12.08.13 at 15:18, Yechen Li <lccycc123@xxxxxxxxx> wrote:
> > @@ -3754,7 +3761,10 @@ retry_transaction:
> >          abort_transaction = 1;
> >          goto out;
> >      }
> > -
> > +    //lcc:
> > +    LIBXL__LOG(ctx, LIBXL__LOG_DEBUG, "target_nid = %d focus= %d", 
> > node_specify, (int) nodeexact);
> > +    libxl__xs_write(gc, t, libxl__sprintf(gc, "%s/memory/target_nid",
> > +                dompath), "%"PRId32" %"PRId32, node_specify, 
> > (int)nodeexact);
> ... new Xenbus node. Without (so far) a true view into the
> host topology, even if you added a respective consumer to the
> balloon driver, how would that driver honor such a request?
Right. So, as I asked in another e-mail, Yechen, could you provide
(either here or when submitting a new/updated version of the patch)
some more details about the overall design, so that people wanting to
review the patch can get an insight into the big picture? :-)

Really quickly, Jan, this is meant to become effective once:
 1. we have a way to specify and build a virtual NUMA topology for
    the guests;
 2. we have a way for a virtual NUMA enabled guest to figure out
    which physical host NUMA node X has the pages of its virtual NUMA
    node Y.

As Yechen said, Elena is working on the PV side of 1., while Matt has
something for the HVM side of it. Once both are in place, we can figure
out 2., and then integrate Yechen's work with all of that.

Yes, I know, sending this patch as the first item _is_ a bit confusing,
but we thought that, since this involves new interfaces (e.g., the new
xenstore node), it could be useful to start gathering some early
feedback. :-)

Hope Yechen can explain the problem better, and make things even more
clear.
> And even if it knew about host topology, unless the virtual
> topology in the subject domain sufficiently closely resembled the
> host's, it would - afaict - still have no way of obtaining suitable
> memory from the page allocator.
Not sure I get entirely what you mean... Are you saying that, if the
pages for a guest virtual NUMA node X come from multiple host nodes
(rather than from just one single host NUMA node Y), this won't work?

If yes, well, I agree. In that case, we'll more or less fall back to
what we have right now (and if that is not the case in the code, well,
we should make it so), so no better, no worse, right?
However, what about the case in which we do manage to put a guest's
virtual node onto a single host node? That should work a lot better
(than now), right?

In summary, this is meant to be a performance and resource
optimization, for all the cases and setups where a specific set of
conditions can be satisfied. I think that is the case for most
performance and resource optimizations, and I think this would be a
nice one to have.

Thanks for your feedback. Regards,

<<This happens because I choose it to happen!>> (Raistlin Majere)
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

