
Re: [Xen-devel] [PATCH v2 1/1] tools: Handle xc_maxmem adjustments



On Wed, Apr 15, 2015 at 10:53 AM, Andrew Cooper
<andrew.cooper3@xxxxxxxxxx> wrote:
> On 14/04/15 23:06, Don Slutz wrote:
>> This fixes an issue where "xl save" followed by "xl restore" reports:
>> "xc: error: Failed to allocate memory for batch.!: Internal error"
>>
>> One of the ways to get into this state is to have more than 4 e1000
>> NICs configured.
>>
>> Signed-off-by: Don Slutz <dslutz@xxxxxxxxxxx>
>
> I still don't think this is the correct solution, although I will
> concede that this is a far better patch than v1.
>
> Going back to the original problem, why does Qemu need to change maxmem
> in the first place?  You talk about e1000 option roms, but the option
> roms themselves must be allocated in an appropriate PCI bridge window.
>
> As a result, there is necessarily already ram backing, which can be
> ballooned back in.  Currently, all ram behind the PCI MMIO hole is
> ballooned out by hvmloader but still accounted to the domain, and
> otherwise wasted.

First, I think you should avoid using the word "balloon" unless you're
actually talking about a balloon -- i.e., a pool of memory allocated
from the guest OS by a kernel driver, behind which there is no actual
ram.

The ram behind the PCI MMIO hole is relocated to highmem, not
ballooned, just like it might be in a real BIOS.  Are you saying that
this is not reflected anywhere in the e820 map, so the guest OS
doesn't know that such ram exists?
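For concreteness, this is the sort of layout I mean (purely
illustrative -- the addresses and sizes are made up, and the real map
is built by hvmloader), with the RAM displaced by the MMIO hole
reappearing above 4GiB:

/* Illustrative e820 layout only: the displaced RAM shows up again as
 * a highmem E820_RAM entry.  Numbers are invented for the example. */
#include <stdint.h>

#define E820_RAM       1
#define E820_RESERVED  2

struct e820entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
};

static const struct e820entry example_e820[] = {
    { 0x0000000000000000ULL, 0x00000000000a0000ULL, E820_RAM      }, /* low RAM              */
    { 0x0000000000100000ULL, 0x00000000dff00000ULL, E820_RAM      }, /* RAM below the hole   */
    { 0x00000000f0000000ULL, 0x000000000c000000ULL, E820_RESERVED }, /* PCI MMIO hole        */
    { 0x0000000100000000ULL, 0x0000000020000000ULL, E820_RAM      }, /* relocated RAM, >4GiB */
};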

Secondly, I agree in general that the original solution -- having qemu
ask the hypervisor directly for more ram -- isn't a good one.  It
would have been better if it could have requested that from libxl
somehow.
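For reference, the "ask the hypervisor directly" approach boils down
to something like the following libxc sketch (rough outline only,
error handling trimmed; bump_maxmem() is a made-up helper name):

/* Rough sketch of growing the domain's memory limit via libxc, behind
 * libxl's back -- the toolstack never learns about the adjustment,
 * which is how a later save/restore can come up short of pages. */
#include <xenctrl.h>

static int bump_maxmem(xc_interface *xch, uint32_t domid, uint64_t extra_kb)
{
    xc_dominfo_t info;

    /* Read the current limit so it can be grown incrementally. */
    if (xc_domain_getinfo(xch, domid, 1, &info) != 1 || info.domid != domid)
        return -1;

    return xc_domain_setmaxmem(xch, domid, info.max_memkb + extra_kb);
}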

Without having actually reviewed the patch, I think this solution is a
decent one.  But if we could record the adjustment in the libxl domain
config in a backwards-compatible way, that would be fine too.  I don't
think we should change maxmem in the domain config -- I think there
should be another field, maxpages or something, which describes the
hypervisor side.
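To illustrate the split I have in mind -- and to be clear, max_pages
below is hypothetical and does not exist in libxl today, whereas
max_memkb and target_memkb do:

#include <libxl.h>

/* Hypothetical sketch of keeping "memory" and "pages" separate in the
 * domain build info.  Only the commented-out line is the proposed new
 * field; everything else is existing libxl. */
static void example_build_info(libxl_domain_build_info *b_info)
{
    /* "memory": the illusion presented to the guest. */
    b_info->max_memkb    = 4096 * 1024;   /* 4 GiB visible to the guest */
    b_info->target_memkb = 4096 * 1024;

    /* "pages": what Xen may actually allocate, including slack for
     * option ROMs / firmware.  Hypothetical field, not in libxl today. */
    /* b_info->max_pages = (4096UL * 1024 / 4) + 256; */
}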

Maybe we could get in the habit of saying "memory" when we talk about
the illusion we're giving to the guest, and "pages" when we're talking
about the actual number of pages allocated within Xen?

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel