[Xen-devel] [PATCH v2] x86/memshr: fix preemption in relinquish_shared_pages()
For one, should hypercall_preempt_check() return false the first time it
gets called, it would never have got called again (because count, being
checked for equality, didn't get reset to zero). And then, if there were
a huge range of unshared pages, with count not getting incremented at
all in that case there would also not be any preemption. Fix this by
using a biased increment (ratio 1:16 for unshared vs shared pages), and
flushing the count to zero in case of a "false" return from
hypercall_preempt_check().

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
v2: Adjust continuation detection as per Tim's request.

--- b/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1290,15 +1290,21 @@ int relinquish_shared_pages(struct domai
             set_rc = p2m->set_entry(p2m, gfn, _mfn(0), PAGE_ORDER_4K,
                                     p2m_invalid, p2m_access_rwx);
             ASSERT(set_rc != 0);
-            count++;
+            count += 0x10;
         }
+        else
+            ++count;
 
-        /* Preempt every 2MiB. Arbitrary */
-        if ( (count == 512) && hypercall_preempt_check() )
+        /* Preempt every 2MiB (shared) or 32MiB (unshared) - arbitrary. */
+        if ( count >= 0x2000 )
         {
-            p2m->next_shared_gfn_to_relinquish = gfn + 1;
-            rc = -EAGAIN;
-            break;
+            if ( hypercall_preempt_check() )
+            {
+                p2m->next_shared_gfn_to_relinquish = gfn + 1;
+                rc = -EAGAIN;
+                break;
+            }
+            count = 0;
         }
     }

Attachment: x86-memshr-cleanup-preemption.patch
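[Editor's note on the arithmetic: with 4KiB pages, each shared page adds
0x10 to count, so the 0x2000 threshold is reached after 512 shared pages
(2MiB), while each unshared page adds 1, reaching it after 8192 pages
(32MiB) - matching the new comment in the patch. Below is a minimal
standalone sketch of the biased-counter preemption pattern the patch
adopts. It is not Xen source: the loop body, the page mix, and the
should_preempt() helper are invented stand-ins for illustration; only
the 1:16 bias, the threshold test, and the flush-to-zero behaviour
mirror the patch.]

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for Xen's hypercall_preempt_check(). */
static bool should_preempt(void)
{
    static unsigned int calls;

    return ++calls % 3 == 0;   /* pretend work is pending every 3rd check */
}

int main(void)
{
    unsigned int count = 0;

    for ( unsigned long gfn = 0; gfn < 100000; ++gfn )
    {
        bool shared = (gfn % 7) == 0;   /* arbitrary mix of page types */

        if ( shared )
            count += 0x10;   /* shared pages are expensive: weight 16 */
        else
            ++count;         /* unshared pages are cheap: weight 1 */

        /*
         * 0x2000 is reached after 512 shared pages (2MiB of 4KiB pages)
         * or 8192 unshared ones (32MiB), per the patch's comment.
         */
        if ( count >= 0x2000 )
        {
            if ( should_preempt() )
            {
                printf("would preempt at gfn %lu\n", gfn);
                /* Xen records gfn + 1 and returns -EAGAIN here. */
                break;
            }
            count = 0;   /* the fix: reset so the check can fire again */
        }
    }

    return 0;
}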