Re: [Xen-devel] [PATCH RFC v2 1/7] xen/vNUMA: vNUMA support for PV guests.
On Tue, Sep 17, 2013 at 3:11 AM, Dario Faggioli <dario.faggioli@xxxxxxxxxx> wrote:
> On mar, 2013-09-17 at 08:05 +0100, Jan Beulich wrote:
>> >>> On 17.09.13 at 08:44, Elena Ufimtseva <ufimtseva@xxxxxxxxx> wrote:
>>
>> Please don't top post.
>>
>> > George, after talking to Dario, I think the max number of physical
>> > nodes will not exceed 256. Dario's automatic NUMA
>> > placement works with this number and I think it can easily be u8.
>> > Unless anyone has other thoughts.
>>
>> With nr_vnodes being uint16_t, the vnode numbers should be
>> too. Limiting them to u8 would possibly be even better, but then
>> nr_vnodes would better be unsigned int (perhaps that was the
>> case from the beginning, regardless of the types used for the
>> arrays).
>>
>> The pnode array surely can also be uint8_t for the time being,
>> considering that there are other places where node IDs are
>> limited to 8 bits.
>>
> All agreed.
>
>> And with struct acpi_table_slit having just 8-bit distances, there's
>> no apparent reason why the virtual distances can't be 8 bits too.
>>
>> But - all this is only for the internal representations. Anything in
>> the public interface should be wide enough to allow future
>> extension.
>>
> And, in fact, 'node_to_node_distance' in xen/include/public/sysctl.h
> (http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/sysctl.h)
> is uint32.

Linux has u8 for distance. Ok, thank you for pointing that out.

Elena

> Dario
>
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

--
Elena
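[Editorial note: the following is a minimal sketch of the type split discussed above, not code from the patch series itself; the struct and field names are made up for illustration. It shows narrow 8-bit types for Xen-internal bookkeeping (node counts capped at 256, distances matching ACPI's 8-bit SLIT entries) versus wider fields in the public interface, which, like node_to_node_distance in sysctl.h, stays uint32 so the ABI can grow later.]

    /* Hypothetical sketch only -- names do not come from the vNUMA patches. */
    #include <stdint.h>

    #define MAX_VNODES 256  /* assumption from the thread: node IDs fit in 8 bits */

    /* Internal (hypervisor-private) representation: 8-bit node IDs and
     * SLIT-style 8-bit distances, as suggested above. */
    struct vnuma_info_internal {
        unsigned int nr_vnodes;                     /* a count, not an ID */
        uint8_t vnode_to_pnode[MAX_VNODES];         /* virtual -> physical node map */
        uint8_t vdistance[MAX_VNODES][MAX_VNODES];  /* like acpi_table_slit entries */
    };

    /* Public/ABI-facing representation: fields kept wide for future extension. */
    struct vnuma_info_public {
        uint32_t nr_vnodes;
        uint32_t vnode_to_pnode[MAX_VNODES];
        uint32_t vdistance[MAX_VNODES][MAX_VNODES];
    };

The point of the split, as Jan notes, is that the internal layout can stay compact while the toolstack-visible interface keeps room to widen without breaking the ABI.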