Re: [Xen-devel] [PATCH 07/12] x86/virt/guest/xen: Remove use of pgd_list from the Xen guest code
On 16/06/15 15:19, Boris Ostrovsky wrote:
> On 06/16/2015 10:15 AM, David Vrabel wrote:
>> On 15/06/15 21:35, Ingo Molnar wrote:
>>> * David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
>>>
>>>> On 15/06/15 10:05, Ian Campbell wrote:
>>>>> On Sat, 2015-06-13 at 11:49 +0200, Ingo Molnar wrote:
>>>>>> xen_mm_pin_all()/unpin_all() are used to implement full guest instance
>>>>>> suspend/restore. It's a stop-all method that needs to iterate through
>>>>>> all allocated pgds in the system to fix them up for Xen's use.
>>>>>>
>>>>>> This code uses pgd_list, probably because it was an easy interface.
>>>>>>
>>>>>> But we want to remove the pgd_list, so convert the code over to walk
>>>>>> all tasks in the system. This is an equivalent method.
>>>> It is not equivalent because pgd_alloc() now populates entries in pgds
>>>> that are not visible to xen_mm_pin_all() (note how the original code
>>>> adds the pgd to the pgd_list in pgd_ctor() before calling
>>>> pgd_prepopulate_pmd()). These newly allocated page tables won't be
>>>> correctly converted on suspend/resume and the new process will die
>>>> after resume.
>>> So how should the Xen logic be fixed for the new scheme? I can't say I
>>> can see through the paravirt complexity here.
>> Actually, since we freeze_processes() before trying to pin page tables,
>> I think it should be ok as-is.
>>
>> I'll put the patch through some tests.
>
> Actually, I just ran this through a couple of boot/suspend/resume tests
> and didn't see any issues (with the one fix I mentioned to Ingo
> earlier). On unstable Xen only.

In which case this can have a:

Reviewed-by: David Vrabel <david.vrabel@xxxxxxxxxx>

Thanks.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
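For readers following the discussion, here is a minimal sketch of the direction the patch takes: pinning every pgd by walking the task list rather than iterating the global pgd_list. The names __xen_pgd_pin(), PagePinned() and pgd_lock are taken from the existing Xen pinning code in arch/x86/xen/mmu.c; the function name and the task walk itself are illustrative assumptions, not the actual patch under review.

```c
/*
 * Illustrative sketch only -- not the patch being reviewed.
 *
 * Pin the pgd of every user mm by walking all tasks instead of the
 * global pgd_list.  Kernel threads are skipped because they have no
 * mm of their own, and threads sharing an mm are handled once via
 * the PagePinned() check.  The real Xen code additionally records
 * which pgds it pinned here so that xen_mm_unpin_all() can undo
 * only those.
 */
static void xen_mm_pin_all_sketch(void)
{
	struct task_struct *g, *p;
	unsigned long flags;

	spin_lock_irqsave(&pgd_lock, flags);
	rcu_read_lock();

	for_each_process_thread(g, p) {
		struct mm_struct *mm = p->mm;

		if (!mm)	/* kernel thread: only borrows an mm */
			continue;

		if (!PagePinned(virt_to_page(mm->pgd)))
			__xen_pgd_pin(mm, mm->pgd);
	}

	rcu_read_unlock();
	spin_unlock_irqrestore(&pgd_lock, flags);
}
```

As the thread points out, a walk like this only covers pgds that are already reachable from a task, which is why David's observation about freeze_processes() matters: suspend freezes all processes before pinning, so no new pgd can be allocated and prepopulated by pgd_alloc() while the walk is in progress.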