
Re: [Xen-devel] [PATCH RFC 1/2] linux/vnuma: vnuma support for pv guest



On Thu, Aug 29, 2013 at 12:21:40AM +0200, Dario Faggioli wrote:
> On mer, 2013-08-28 at 09:38 -0700, Matt Wilson wrote:
> > On Wed, Aug 28, 2013 at 12:01:48PM -0400, Konrad Rzeszutek Wilk wrote:
> > > That would also parallel the work you do with ACPI right?
> > 
> > Yes.
> > 
> I see. It's hard to comment, since I have only seen some previous (off
> list) versions of Elena's code (and won't have time to properly review
> this one until Monday), and I haven't seen Matt's code at all...

See the link I gave in my other reply for the older proposed patch I
based this work on. Here's the head of the patchset:
  http://lists.xen.org/archives/html/xen-devel/2010-02/msg00279.html

> > > We could enable ACPI parsing in a PV guest and provide one table - the
> > > SLIT (or SRAT).
> > 
> > Right, it'd be the SRAT table for the resource affinity and a SLIT
> > table for the locality/distance information.
> > 
> ... I see the point in sharing code for HVM and PV. However, I'm still
> not convinced this would be something valuable to do with this specific
> hunk, mostly because hooking up at the numa_init() stage (i.e., what
> Elena is doing) looks so easy and clean that anything else I can think
> of looks like more work. :-P

I agree.
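
For anyone following along at home: the reason that hook point is so
clean is that x86_numa_init() in arch/x86/mm/numa.c just tries a list
of backends in order, so a PV guest only needs to slot one more in.
Roughly (the CONFIG_XEN hunk below is illustrative, not Elena's actual
patch; xen_numa_init() stands in for her function):

  void __init x86_numa_init(void)
  {
          if (!numa_off) {
  #ifdef CONFIG_XEN
                  /* PV guest: one hypercall fills the whole topology. */
                  if (xen_pv_domain() && !numa_init(xen_numa_init))
                          return;
  #endif
  #ifdef CONFIG_ACPI_NUMA
                  if (!numa_init(x86_acpi_numa_init))
                          return;
  #endif
  #ifdef CONFIG_AMD_NUMA
                  if (!numa_init(amd_numa_init))
                          return;
  #endif
          }
          numa_init(dummy_numa_init);
  }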

> > > But I don't know enough about SRAT to know whether it truly
> > > represents everything we need?
> > 
> > The SRAT table has processor objects and memory objects. A processor
> > object maps a logical processor number to its initial APIC ID and
> > provides the node information. A memory object specifies the start and
> > length for a memory region and provides the node information.
> > 
> > For SLIT, the entries are a matrix of distances.
> > 
> > Here are the structs:
> > 
> > [snip]
> >
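
For the archive, those snipped structs are the standard entry
definitions from Linux's include/acpi/actbl1.h (quoted from memory
here, so double-check against your tree):

  struct acpi_srat_cpu_affinity {
          struct acpi_subtable_header header;
          u8 proximity_domain_lo;
          u8 apic_id;
          u32 flags;
          u8 local_sapic_eid;
          u8 proximity_domain_hi[3];
          u32 clock_domain;
  };

  struct acpi_srat_mem_affinity {
          struct acpi_subtable_header header;
          u32 proximity_domain;
          u16 reserved;
          u64 base_address;
          u64 length;
          u32 reserved1;
          u32 flags;
          u64 reserved2;
  };

  struct acpi_table_slit {
          struct acpi_table_header header;   /* common ACPI table header */
          u64 locality_count;
          u8 entry[1];                       /* locality_count^2 bytes */
  };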
> Ok, thanks for the very useful info. What would be interesting to know
> is where and how Linux reads the information from ACPI and fills these
> structures.
> 
> Elena's current approach is one hypercall, during early NUMA
> initialization, and it is pretty self-contained (which is the thing I
> like most about it).
> 
> How easy is it to look up the places where each of the tables gets
> filled, intercept the code/calls doing that, and replace them properly
> for our use case? How easy is it to "xen-ify" those call sites (stuff
> like '#ifdef CONFIG_XEN' and/or is_xen_domain())? How many hypercalls
> would it require? Is it possible to have one do all the work, or would
> we need something like one per table?

I think it wouldn't be too hard to construct the static ACPI tables
and provide them in acpi_os_get_root_pointer().
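
Very roughly, and just as a sketch of the idea rather than tested code
(xen_rsdp_phys and the early-boot helper that would lay out a
checksummed RSDP -> XSDT -> {SRAT, SLIT} chain in guest memory are
hypothetical):

  acpi_physical_address __init acpi_os_get_root_pointer(void)
  {
          acpi_physical_address pa = 0;

          /* PV guest: point the ACPI core at our fabricated tables. */
          if (xen_pv_domain() && xen_rsdp_phys)
                  return xen_rsdp_phys;

          /* Everyone else: the normal RSDP scan. */
          acpi_find_root_pointer(&pa);
          return pa;
  }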

> As I said, I can't check the details right now, but it sounds like
> more work than Elena's xen_numa_init().

Yes, it's probably a bit more work.

--msw





 

