
Re: [Xen-devel] [PATCH RFC 1/2] linux/vnuma: vnuma support for pv guest

On Wed, Aug 28, 2013 at 2:08 PM, Dario Faggioli
<dario.faggioli@xxxxxxxxxx> wrote:
> On mar, 2013-08-27 at 18:37 -0700, Matt Wilson wrote:
>> On Tue, Aug 27, 2013 at 06:27:15PM -0700, Matt Wilson wrote:
>> > On Tue, Aug 27, 2013 at 04:52:59AM -0400, Elena Ufimtseva wrote:
>> > > Uses a subop hypercall to request the vNUMA topology from Xen.
>> > > Sets the memory blocks (aligned by Xen), CPUs, and distance table
>> > > at boot. NUMA support must be compiled into the kernel.
>> >
>> > Are we *really sure* that we want to go this route for PV vNUMA?
>> > Couldn't we build just enough(tm) of the ACPI tables to express the
>> > NUMA topology when constructing the domain? That's what we do for the
>> > e820 map.
>> Ignore me somewhat, since the e820 information is retrieved via
>> hypercall similar to what you're proposing.
> :-)
>> Still, if there's some way that we can reuse existing Linux code
>> rather than bolting on a completely parallel mechanism to set this up
>> under PV I think it'd be better.
> Well, it looks to me that Elena is reusing quite a bit of it, isn't she?
> All she's providing is a new initialization function ( xen_numa_init()
> ), just as already happens for ACPI NUMA, NUMAQ, and the other NUMA
> implementations.
> In practice, while the ACPI-based NUMA code parses the ACPI tables in
> acpi_numa_init(), PV vNUMA parses the information coming from a
> hypercall in xen_numa_init(). From that point on, it's Linux that
> steps in and does everything else "as usual".
> Isn't that enough sharing?

I think the only way to "share" more would be to have Xen do some
crazy ACPI table fake-up scheme, which sounds like kind of a
nightmare; and also completely pointless, since there are already nice
clean interfaces we can just hook into and pass nice clean data
structures straight from Xen.


Xen-devel mailing list
