Re: [Xen-devel] [PATCH 1 of 3] x86/mm/sharing: Clean ups for relinquishing shared pages on destroy
> At 10:16 -0400 on 12 Apr (1334225772), Andres Lagar-Cavilla wrote:
>>  xen/arch/x86/domain.c             |  16 ++++++++++++-
>>  xen/arch/x86/mm/mem_sharing.c     |  45 +++++++++++++++++++++++++++++++++++++++
>>  xen/arch/x86/mm/p2m.c             |   4 +++
>>  xen/include/asm-arm/mm.h          |   4 +++
>>  xen/include/asm-x86/domain.h      |   1 +
>>  xen/include/asm-x86/mem_sharing.h |  10 ++++++++
>>  xen/include/asm-x86/p2m.h         |   4 +++
>>  7 files changed, 82 insertions(+), 2 deletions(-)
>>
>> When a domain is destroyed, its pages are freed in relinquish_resources
>> in a preemptible mode, in the context of a synchronous domctl.
>>
>> P2m entries pointing to shared pages are, however, released during p2m
>> cleanup in an RCU callback, and in non-preemptible mode.
>>
>> This is an O(n) operation for a very large n, which may include actually
>> freeing shared pages for which the domain is the last holder.
>>
>> To improve responsiveness, move this operation to the preemptible portion
>> of domain destruction, during the synchronous domain_kill hypercall, and
>> remove the bulk of the work from the RCU callback.
>>
>> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
>
> I've applied this as a bug-fix. The other two seem more like new
> development and I'm less happy about taking them before 4.2

Thanks for applying this one. Can I ask you to reconsider the other two?

Right now, without those two patches, I can freeze up the hypervisor for
several minutes. Soft lockups start popping up in dom0. I've had two
instances in which the block IO driver bails out and the dom0 file system
becomes read-only, forcing a host reboot. This is fixing a horrible
performance regression.

Thanks,
Andres

>
> Cheers,
>
> Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
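For readers following the thread, here is a minimal, self-contained sketch of the pattern the commit message describes: doing an O(n) teardown in bounded chunks with a progress cursor, so a synchronous call can be restarted instead of all the work happening in one non-preemptible RCU callback. This is not the actual Xen patch; TOTAL_PAGES, CHUNK, free_one_page(), preempt_check() and struct teardown_state are illustrative stand-ins.

/* Sketch only: chunked, restartable teardown of an O(n) set of pages.
 * The caller simply re-invokes the function until it reports completion,
 * so the long-running work stays preemptible. */
#include <stdbool.h>
#include <stdio.h>

#define TOTAL_PAGES 1000000UL
#define CHUNK          4096UL   /* work done between preemption checks */

struct teardown_state {
    unsigned long next;         /* progress cursor, survives restarts */
};

/* Stand-in for freeing one shared page; in the hypervisor this would drop
 * the p2m entry and free the page if this domain is the last holder. */
static void free_one_page(unsigned long idx) { (void)idx; }

/* Stand-in for "should we yield back to the caller now?" */
static bool preempt_check(void) { return true; }

/* Returns 0 when done, -1 when preempted (caller should retry). */
static int relinquish_shared_pages(struct teardown_state *s)
{
    unsigned long done = 0;

    while (s->next < TOTAL_PAGES) {
        free_one_page(s->next++);

        if (++done >= CHUNK && preempt_check())
            return -1;          /* progress is saved in s->next */
    }
    return 0;
}

int main(void)
{
    struct teardown_state s = { 0 };
    int restarts = 0;

    while (relinquish_shared_pages(&s) != 0)
        restarts++;             /* synchronous caller just retries */

    printf("freed %lu pages across %d restarts\n", TOTAL_PAGES, restarts);
    return 0;
}

In the hypervisor itself the analogous yield test is hypercall_preempt_check(), and the "try again" result propagates back through the domain_kill path mentioned in the commit message rather than a simple retry loop.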