
[Xen-devel] [PATCH] x86/memshr: fix preemption in relinquish_shared_pages()



For one, if hypercall_preempt_check() returned false the first time it got
called, it would never be called again (because count, being checked for
equality and never reset to zero, would move past 512 and the condition
could never become true again).

And then, if a huge range of unshared pages was encountered, count would
not get incremented at all, so there would be no preemption in that case
either.
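
To illustrate the first problem, here is a minimal standalone sketch (not
Xen code; fake_preempt_check() and the page counts are made up for the
example) of the old logic: once the equality test passes without
preempting, count keeps growing and never equals 512 again.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for hypercall_preempt_check(): refuse the first
 * preemption opportunity, accept any later one. */
static bool fake_preempt_check(void)
{
    static int calls;
    return calls++ > 0;
}

int main(void)
{
    unsigned long count = 0;

    /* Walk a range of (all shared) pages with the old check. */
    for ( unsigned long gfn = 0; gfn < 4096; gfn++ )
    {
        count++;
        if ( (count == 512) && fake_preempt_check() )
        {
            printf("preempted at gfn %lu\n", gfn);
            return 0;
        }
    }

    /* count moved past 512 after the single "false" return, so the
     * equality check can never fire again. */
    printf("never preempted\n");
    return 0;
}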

Fix this by using a biased increment (ratio 1:16 for unshared vs. shared
pages) and by checking for certain bits of count being clear rather than
for a specific value (the alternative would be a greater-or-equal
comparison, flushing the value back to zero when hypercall_preempt_check()
returns false).
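
For reference, a small standalone sketch (again not Xen code, just the
arithmetic) of the new scheme: with shared pages adding 0x10 and unshared
pages adding 1, the !(count & 0x1ff0) test fires after 512 shared pages
(2MiB worth of 4K pages) or 8192 unshared ones (32MiB).

#include <stdio.h>

int main(void)
{
    unsigned long count = 0;

    /* 512 shared pages, each adding 0x10, reach count == 0x2000 ... */
    for ( int i = 0; i < 512; i++ )
        count += 0x10;
    printf("shared:   count=%#lx, preempt check=%d\n",
           count, !(count & 0x1ff0));

    /* ... while unshared pages, adding 1 each, need 8192 for the same. */
    count = 0;
    for ( int i = 0; i < 8192; i++ )
        ++count;
    printf("unshared: count=%#lx, preempt check=%d\n",
           count, !(count & 0x1ff0));

    return 0;
}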

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1290,11 +1290,13 @@ int relinquish_shared_pages(struct domai
             set_rc = p2m->set_entry(p2m, gfn, _mfn(0), PAGE_ORDER_4K,
                                     p2m_invalid, p2m_access_rwx);
             ASSERT(set_rc != 0);
-            count++;
+            count += 0x10;
         }
+        else
+            ++count;
 
-        /* Preempt every 2MiB. Arbitrary */
-        if ( (count == 512) && hypercall_preempt_check() )
+        /* Preempt every 2MiB (shared) or 32MiB (unshared) - arbitrary. */
+        if ( !(count & 0x1ff0) && hypercall_preempt_check() )
         {
             p2m->next_shared_gfn_to_relinquish = gfn + 1;
             rc = -EAGAIN;



