Re: [Xen-devel] Xen 4.0.1 "xc_map_foreign_batch: mmap failed: Cannot allocate memory"
>>> On 16.12.10 at 21:54, Keir Fraser <keir@xxxxxxx> wrote:
> On 16/12/2010 20:44, "Charles Arnold" <carnold@xxxxxxxxxx> wrote:
>>>> On 12/16/2010 at 01:33 PM, in message <C9302813.2966F%keir@xxxxxxx>, Keir Fraser <keir@xxxxxxx> wrote:
>>> On 16/12/2010 19:23, "Charles Arnold" <carnold@xxxxxxxxxx> wrote:
>>>> The bug is that qemu-dm seems to make the assumption that it can mmap from
>>>> dom0 all the memory with which the guest has been defined, rather than only
>>>> what is actually available on the host.
>>> 32-bit dom0? Hm, I thought the qemu mapcache was supposed to limit the total
>>> amount of guest memory mapped at one time, for a 32-bit qemu. For 64-bit
>>> qemu I wouldn't expect to find a limit as low as 3.25G.
>> Sorry, I should have specified that it is a 64 bit dom0 / hypervisor.
> Okay, well I'm not sure what limit qemu-dm is hitting then. Mapping 3.25G of
> guest memory will only require a few megabytes of pagetables for the qemu
> process in dom0. Perhaps there is a ulimit or something set on the qemu
I don't think a ulimit plays in here - if Dom0 is given more memory,
qemu-dm won't fail. The question really is why qemu-dm actually
uses unbounded mmap()-s in the first place. Even if the memory
went only into page tables (which I doubt), this is a scalability
problem.

Of course, e.g. an address space ulimit put on qemu-dm should
also not cause any failure - clearly there's a lack of error handling
somewhere.
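As a minimal illustration of that failure mode (purely a sketch, not
qemu-dm code; an anonymous mmap() stands in for the foreign mapping), an
address-space ulimit makes a large mapping fail with ENOMEM, and the
caller has to check for that rather than assume success:

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap the address space at 512 MiB, roughly what an
     * "ulimit -v" on the process would do. */
    struct rlimit rl = { .rlim_cur = 512UL << 20, .rlim_max = 512UL << 20 };
    if (setrlimit(RLIMIT_AS, &rl))
        perror("setrlimit");

    /* Try to map 1 GiB; under the limit above this fails with ENOMEM. */
    void *va = mmap(NULL, 1UL << 30, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (va == MAP_FAILED) {
        fprintf(stderr, "mmap failed: %s\n", strerror(errno));
        return 0;   /* handled gracefully instead of crashing later */
    }

    munmap(va, 1UL << 30);
    return 0;
}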
> If we can work out and detect this limit, perhaps 64-bit qemu-dm could have
> a mapping cache similar to 32-bit qemu-dm, limited to some fraction of the
> detected mapping limit. And/or, on mapping failure, we could reclaim
> resources by simply zapping the existing cached mappings. Seems there's a
> few options. I don't really maintain qemu-dm myself -- you might get some
> help from Ian Jackson, Stefano, or Anthony Perard if you need more advice.
Looking forward to their comments.
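If it helps frame the options, here is a minimal sketch of the "zap the
cache on failure" idea, with purely illustrative names and an anonymous
mmap() standing in for the foreign mappings qemu-dm would really make
through libxenctrl:

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define BUCKET_SIZE (1UL << 20)   /* 1 MiB per bucket (arbitrary) */
#define MAX_CACHED  64            /* cap on simultaneously mapped buckets */

static void *cache[MAX_CACHED];
static unsigned int cached;

/* Unmap everything currently cached, releasing address space. */
static void zap_cache(void)
{
    for (unsigned int i = 0; i < cached; i++)
        munmap(cache[i], BUCKET_SIZE);
    cached = 0;
}

/* Stand-in for mapping a chunk of guest memory into the device model. */
static void *map_bucket(void)
{
    void *va = mmap(NULL, BUCKET_SIZE, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return va == MAP_FAILED ? NULL : va;
}

/* Map a new bucket, keeping the total number of live mappings bounded
 * and reclaiming the whole cache once before reporting a failure. */
static void *get_mapping(void)
{
    if (cached == MAX_CACHED)
        zap_cache();

    void *va = map_bucket();
    if (va == NULL) {
        zap_cache();
        va = map_bucket();
    }
    if (va == NULL) {
        fprintf(stderr, "bucket mapping failed: %s\n", strerror(errno));
        return NULL;
    }
    return cache[cached++] = va;
}

int main(void)
{
    /* Exercise the cache: request more buckets than the cap allows. */
    for (int i = 0; i < 4 * MAX_CACHED; i++)
        if (get_mapping() == NULL)
            return 1;
    zap_cache();
    return 0;
}

Whether the right response is to zap everything or to evict entries more
selectively is exactly the kind of detail the maintainers would need to
weigh.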