
Re: [Xen-devel] [PATCH RFC 1/2] linux/vnuma: vnuma support for pv guest

On mar, 2013-08-27 at 18:37 -0700, Matt Wilson wrote:
> On Tue, Aug 27, 2013 at 06:27:15PM -0700, Matt Wilson wrote:
> > On Tue, Aug 27, 2013 at 04:52:59AM -0400, Elena Ufimtseva wrote:
> > > Uses subop hypercall to request XEN about vnuma topology.
> > > Sets the memory blocks (aligned by XEN), cpus, distance table
> > > on boot. NUMA support should be compiled in kernel.
> > 
> > Are we *really sure* that we want to go this route for PV vNUMA?
> > Couldn't we build just enough(tm) of the ACPI tables to express the
> > NUMA topology when constructing the domain? That's what we do for the
> > e820 map.
> Ignore me somewhat, since the e820 information is retrieved via
> hypercall similar to what you're proposing.

> Still, if there's some way that we can reuse existing Linux code
> rather than bolting on a completely parallel mechanism to set this up
> under PV I think it'd be better.
Well, it looks to me that Elena is reusing quite a bit of it, isn't she?
All she's providing is a new initialization function ( xen_numa_init()
), as already happens for ACPI NUMA, NUMAQ, and the other NUMA variants.

In practice, while the ACPI-based NUMA code parses the ACPI tables in
acpi_numa_init(), PV vNUMA parses the information coming from a
hypercall in xen_numa_init(). From that point on, it is Linux that
steps in and does everything else "as usual".

Isn't that enough sharing?

Thanks and Regards,

<<This happens because I choose it to happen!>> (Raistlin Majere)
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


Xen-devel mailing list