Re: [Xen-devel] [PATCH 1/5] IOMMU: make page table population preemptible
>>> On 13.12.13 at 13:18, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
> On 13/12/2013 09:51, Jan Beulich wrote:
>>>>> On 11.12.13 at 19:40, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 10/12/2013 15:45, Jan Beulich wrote:
>>>> @@ -321,7 +342,32 @@ static int iommu_populate_page_table(str
>>>>                  d, mfn_to_gmfn(d, page_to_mfn(page)), page_to_mfn(page),
>>>>                  IOMMUF_readable|IOMMUF_writable);
>>>>              if ( rc )
>>>> +            {
>>>> +                page_list_add(page, &d->page_list);
>>>>                  break;
>>>> +            }
>>>> +        }
>>>> +        page_list_add_tail(page, &d->arch.relmem_list);
>>>> +        if ( !(++n & 0xff) && !page_list_empty(&d->page_list) &&
>>> Why the forced restart here?  If nothing needs pre-empting, surely it is
>>> better to continue?
>>>
>>> Or is this about equality on the pcidevs_lock ?
>>>
>>>> +             hypercall_preempt_check() )
>> Did you overlook this part of the condition?
>
> No, but I did mentally get the logic inverted when trying to work out
> what was going on.
>
> How about (++n > 0xff) ?
>
> If we have already spent a while in this loop, and the
> hypercall_preempt_check() doesn't flip to 1 until a few iterations after
> n is congruent with 0x100, waiting for another 0x100 iterations before
> checking again seems a little long.

The thing I'm trying to avoid is calling hypercall_preempt_check()
overly often - especially when the prior if()'s body doesn't get
entered, we might otherwise do very little useful work per check.

While 256 isn't overly much (covering merely 1MB of guest space), I
could see two possibilities to get this a little closer to what you
want: either we right-shift the mask each time we actually call
hypercall_preempt_check(), or we bias the increment (adding e.g. 10
when going the expensive route, else 1).

Jan
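To make the second option concrete, here is a minimal standalone sketch of
the biased-increment idea. The stub functions, the 10-vs-1 weights, and the
use of a >= comparison with an explicit reset (the exact !(++n & 0xff) mask
test from the patch no longer works once an increment can be larger than 1)
are illustrative assumptions, not the actual Xen code:

    /* Standalone sketch of rate-limiting a preemption check with a
     * biased counter.  hypercall_preempt_check() and
     * mapping_was_expensive() are stubs standing in for the real
     * hypervisor routines. */
    #include <stdbool.h>
    #include <stdio.h>

    /* Stub: pretend preemption becomes necessary after a few checks. */
    static bool hypercall_preempt_check(void)
    {
        static unsigned int calls;
        return ++calls > 3;
    }

    /* Stub: whether this iteration took the expensive path, i.e. the
     * page actually needed an IOMMU mapping. */
    static bool mapping_was_expensive(unsigned int i)
    {
        return !(i % 4);
    }

    int main(void)
    {
        unsigned int n = 0;

        for ( unsigned int i = 0; i < 10000; ++i )
        {
            /* Cheap iterations advance the counter by 1, expensive ones
             * by 10, so the preemption check is reached sooner when real
             * work is being done. */
            n += mapping_was_expensive(i) ? 10 : 1;

            if ( n >= 0x100 )
            {
                n = 0;
                if ( hypercall_preempt_check() )
                {
                    printf("would return -ERESTART after page %u\n", i);
                    break;
                }
            }
        }

        return 0;
    }

With weights like these, a run of cheap iterations still only reaches the
check every 256 pages, while a run of expensive map_page() calls gets there
after roughly 26 iterations - closer to the more responsive checking Andrew
was asking for, without calling hypercall_preempt_check() on every page.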