Re: [Xen-devel] [PATCH] xen/balloon: set ballooned out pages as invalid in p2m
On Tue, 1 Jul 2014, Konrad Rzeszutek Wilk wrote:
> On Tue, Jul 01, 2014 at 03:00:16PM +0100, David Vrabel wrote:
> > On 01/07/14 14:58, Konrad Rzeszutek Wilk wrote:
> > > On Tue, Jul 01, 2014 at 02:37:48PM +0100, David Vrabel wrote:
> > >> Since cd9151e26d31048b2b5e00fd02e110e07d2200c9 (xen/balloon: set a
> > >> mapping for ballooned out pages), a ballooned out page had its entry
> > >> in the p2m set to the MFN of one of the scratch pages. This means
> > >> that the p2m will contain many entries pointing to the same MFN.
> > >>
> > >> During a domain save, these many-to-one entries are not identified as
> > >> such and the scratch page is saved multiple times. On restore the
> > >> ballooned pages are populated with new frames and the domain may use
> > >> up its allocation before all pages can be restored.
> > >>
> > >> Set ballooned out pages as INVALID_P2M_ENTRY in the p2m (as they were
> > >> before), preventing them from being saved and re-populated on restore.
> > >>
> > >
> > > Won't that invalidate the primary purpose of the scratch page code?
> > >
> > > That is cd9151e26d31048b2b5e00fd02e110e07d2200c9
> > >
> > > xen/balloon: set a mapping for ballooned out pages
> > > "Allocate a page per cpu and map all the ballooned out pages to the
> > > corresponding mfn. Set the p2m accordingly. This way reading from a
> > > ballooned out page won't cause a kernel crash (see
> > > http://lists.xen.org/archives/html/xen-devel/2012-12/msg01154.html)."
> > > ?
> >
> > No, because we still have a virtual mapping for the ballooned out page.
>
> If you could add a comment in the code reflecting that, IMHO it would be
> good.

Agreed. Aside from that:

Acked-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
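[For context, a minimal kernel-style sketch of the two p2m behaviours being
discussed. set_phys_to_machine(), INVALID_P2M_ENTRY, xen_feature() and
XENFEAT_auto_translated_physmap are the real x86 Xen interfaces named in the
thread; the mark_ballooned_out() helper and the scratch_page_mfn() lookup are
hypothetical stand-ins, and this is not the actual patch.]

/*
 * Minimal illustration (not the actual patch) of the change under
 * discussion, in the style of drivers/xen/balloon.c.
 */
#include <xen/features.h>	/* xen_feature(), XENFEAT_auto_translated_physmap */
#include <xen/page.h>		/* set_phys_to_machine(), INVALID_P2M_ENTRY on x86 */

/* Hypothetical helper: called for each pfn handed back to Xen. */
static void mark_ballooned_out(unsigned long pfn)
{
	/* PV guests manage their own p2m; auto-translated guests do not. */
	if (xen_feature(XENFEAT_auto_translated_physmap))
		return;

	/*
	 * Behaviour introduced by cd9151e26d31: alias the pfn to a
	 * per-cpu scratch frame so that stray reads do not fault.
	 * scratch_page_mfn() is a hypothetical stand-in for that lookup.
	 *
	 *	set_phys_to_machine(pfn, scratch_page_mfn());
	 *
	 * Behaviour proposed here: mark the slot as not populated, so the
	 * save code skips it instead of writing the shared scratch frame
	 * once per aliased pfn.
	 */
	set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
}

[Per David's reply above, the guest still keeps a kernel virtual mapping for
the ballooned-out page, so reads through that mapping remain safe even with
the p2m entry set to INVALID_P2M_ENTRY.]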