Re: [Xen-devel] I cannot get any message from domU by console / pv_ops domU kernel crashes with xen_create_contiguous_region failed
On Tue, Dec 22, 2009 at 02:58:47PM +0000, Ian Campbell wrote:
> On Tue, 2009-12-22 at 14:32 +0000, Konrad Rzeszutek Wilk wrote:
> > > > (early) [ 0.000000] Kernel panic - not syncing:
> > > > <3>xen_create_contiguous_region failed
> >
> > This implies you did not have enough DMA32 memory in the hypervisor for the
> > guest. Meaning at least 64MB.
>
> Does this mean that every domU is allocating 64M for swiotlb by default?

Unfortunately yes. We can remove this commit:

commit 4bc6b1a9dd5d7447af8c3d27c1449f73f5f764ec
Author: root <root@xxxxxxxxxxxxxxxxxxxxx>
Date:   Thu Nov 5 16:33:10 2009 -0500

    Enable Xen-SWIOTLB if running in [non-]privileged and disable the
    Xen-IOMMU if an IOMMU is detected. For PCI passthrough to work
    correctly, we need the Xen-SWIOTLB. Otherwise PCI devices in the
    non-privileged domains might not be able to do DMA.

and revert to the old-style behaviour.

> That seems like an awful lot of wastage when the vast majority of guests
> have no I/O device and therefore no need for swiotlb, especially given
> that this is memory from a limited pool (OK, so 64G isn't that low a
> limit).
>
> If we are unable to automatically determine early enough whether a guest
> needs swiotlb or not (which I suspect is the case without additional

That is possible, which is why I am working on the new SWIOTLB that can
be turned on _after_ xen-pcifront has been activated.

> tools support) then I think the old-style behaviour of only enabling
> swiotlb in domU on explicit request matches the common case better.
>
> Ian.
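A minimal sketch of the "old-style" gating Ian describes, where a domU only
reserves the 64MB bounce pool on explicit request while dom0 always gets one.
The `xen_swiotlb` boot flag, the `xen_swiotlb_requested` variable, and
`xen_swiotlb_init_maybe()` are illustrative names for this sketch, not the
actual kernel interface:

    /*
     * Sketch only: the boot flag and function names are illustrative,
     * not the real kernel interface. The idea: dom0 always needs bounce
     * buffers for hardware DMA, but a domU should only pay the 64MB
     * DMA32 cost when explicitly asked to.
     */
    #include <linux/init.h>
    #include <linux/types.h>
    #include <linux/swiotlb.h>
    #include <xen/xen.h>

    static bool xen_swiotlb_requested;

    /* Hypothetical "xen_swiotlb" guest command-line flag. */
    static int __init parse_xen_swiotlb(char *arg)
    {
            xen_swiotlb_requested = true;
            return 0;
    }
    early_param("xen_swiotlb", parse_xen_swiotlb);

    void __init xen_swiotlb_init_maybe(void)
    {
            if (xen_initial_domain()) {
                    /* dom0: always reserve the bounce pool. (The exact
                     * swiotlb_init() signature varies across kernel
                     * versions.) */
                    swiotlb_init(1);
                    return;
            }

            /* domU: reserve the 64MB pool only on explicit request. */
            if (xen_swiotlb_requested)
                    swiotlb_init(1);
    }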
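And a sketch of the late-enable approach Konrad mentions, assuming a
hypothetical hook name `xen_pcifront_enable_swiotlb()`; the point is to defer
the allocation until pcifront actually learns a PCI device was passed
through, instead of reserving 64MB in every guest at boot:

    /*
     * Sketch only: the hook name and call site are assumptions, not the
     * actual xen-pcifront code.
     */
    #include <linux/kernel.h>
    #include <linux/swiotlb.h>

    int xen_pcifront_enable_swiotlb(void)
    {
            /*
             * swiotlb_late_init_with_default_size() grabs the bounce
             * pool with regular page allocations after boot; it can fail
             * if memory is already too fragmented, which is the hard
             * part of enabling swiotlb this late.
             */
            return swiotlb_late_init_with_default_size(64 * 1024 * 1024);
    }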
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel