
Re: [Xen-devel] [PATCH v4 17/21] libxl: disallow memory relocation when vNUMA is enabled



On Thu, Jan 29, 2015 at 11:06:07AM +0000, Ian Campbell wrote:
> On Wed, 2015-01-28 at 22:22 +0000, Wei Liu wrote:
> > On Wed, Jan 28, 2015 at 04:41:11PM +0000, Ian Campbell wrote:
> > > On Fri, 2015-01-23 at 11:13 +0000, Wei Liu wrote:
> > > > Disallow memory relocation when vNUMA is enabled, because relocated
> > > > memory ends up off-node. Furthermore, even if we dynamically expand
> > > > node coverage in hvmloader, low memory and high memory may reside
> > > > on different physical nodes, so blindly relocating low memory to high
> > > > memory gives us a sub-optimal configuration.
> > > 
> > > What are the downsides of doing this? Less flexible PCI passthrough/MMIO
> > > hole sizing I think? Anything else?
> > 
> > Yes, PCI passthrough and MMIO hole sizing are affected, but not to
> > the degree that they become unusable. Going forward like this doesn't
> > make things worse than before, I think.
> 
> > Memory relocation is already disabled if we're using QEMU upstream.
> 
> Is that a shortcoming which might get fixed?
> 

Not until proper infrastructure is in place to allow communication among
several parties (hypervisor, QEMU and hvmloader).

> Is this vnuma restriction something which could be fixed in the future?

Presumably, once the aforementioned infrastructure is in place, this can
be fixed at the same time we fix that QEMU problem.

The infrastructure is a common blocker to both issues.
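For the curious, the gist of the change is small. Below is a hedged
sketch, not the committed patch: the hvmloader/allow-memory-relocate
xenstore key and the QEMU-traditional condition exist in libxl today,
but the helper name and its exact placement here are illustrative only.

    /* Sketch: fold the vNUMA test into the existing
     * hvmloader/allow-memory-relocate xenstore knob that hvmloader
     * consults at boot.  Helper name and call site are illustrative. */
    #include "libxl_internal.h"

    static int hvm_set_allow_memory_relocate(libxl__gc *gc, uint32_t domid,
                                     const libxl_domain_build_info *b_info)
    {
        const char *dompath = libxl__xs_get_dompath(gc, domid);

        /* Relocation is only enabled for QEMU traditional today; with
         * vNUMA configured it must stay off as well, because relocated
         * pages would land outside their assigned virtual node. */
        int allow =
            b_info->device_model_version ==
                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL &&
            b_info->num_vnuma_nodes == 0;

        return libxl__xs_write(gc, XBT_NULL,
                               GCSPRINTF("%s/hvmloader/allow-memory-relocate",
                                         dompath),
                               "%d", allow);
    }

hvmloader reads this node early in boot and, when it is 0, leaves low
memory in place rather than relocating it above the MMIO hole.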

>  
> > > Does some user doc somewhere need this adding?
> > > 
> > > (and now I think of it, the PoD vs. vnuma one too).
> > > 
> > 
> > Maybe a release note or the xl manpage?
> 
> Manpage would be good. Perhaps in some vnuma docs on the wiki too.
> 

No problem.
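Something along these lines for xl.cfg.pod.5, say (the wording and the
option stanza below are purely illustrative, to be refined when I
respin the series):

    =item B<vnuma=[ VNODE_SPEC, VNODE_SPEC, ... ]>

    ...

    Note that specifying a virtual NUMA topology for an HVM guest
    disables memory relocation in hvmloader (which constrains MMIO
    hole sizing for PCI passthrough), and that vNUMA cannot currently
    be combined with PoD, i.e. maxmem must equal memory.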

Wei.

> > 
> > Wei.
> > 
> > > Ian.
> 
