Re: [Xen-devel] [PATCH] x86/memshr: fix preemption in relinquish_shared_pages()
At 09:07 +0000 on 17 Dec (1387267627), Jan Beulich wrote:
> For one, should hypercall_preempt_check() return false the first time
> it gets called, it would never have got called again (because count,
> being checked for equality, didn't get reset to zero).
>
> And then, if there were a huge range of unshared pages, with count not
> getting incremented at all in that case there would also not be any
> preemption.
>
> Fix this by using a biased increment (ratio 1:16 for unshared vs shared
> pages), and check for certain bits in count to be clear rather than for
> a specific value (the alternative would be to do a greater-or-equal
> comparison and flush the value to zero in case of a "false" return from
> hypercall_preempt_check()).
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1290,11 +1290,13 @@ int relinquish_shared_pages(struct domai
>              set_rc = p2m->set_entry(p2m, gfn, _mfn(0), PAGE_ORDER_4K,
>                                      p2m_invalid, p2m_access_rwx);
>              ASSERT(set_rc != 0);
> -            count++;
> +            count += 0x10;
>          }
> +        else
> +            ++count;
>
> -        /* Preempt every 2MiB. Arbitrary */
> -        if ( (count == 512) && hypercall_preempt_check() )
> +        /* Preempt every 2MiB (shared) or 32MiB (unshared) - arbitrary. */
> +        if ( !(count & 0x1ff0) && hypercall_preempt_check() )

This will check after each of the first 15 gfns unless one of them is
shared (and potentially again after each rollover).  I think it would be
clearer just to check for > 2000 and reset to 0.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel