Re: [Xen-devel] [PATCH] mm, oom: distinguish blockable mode for mmu notifiers
On Fri, Aug 24, 2018 at 07:54:19PM +0900, Tetsuo Handa wrote:
> Two more worries for this patch.
[...]
>
> > --- a/mm/hmm.c
> > +++ b/mm/hmm.c
> > @@ -177,16 +177,19 @@ static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
> >  	up_write(&hmm->mirrors_sem);
> >  }
> >
> > -static void hmm_invalidate_range_start(struct mmu_notifier *mn,
> > +static int hmm_invalidate_range_start(struct mmu_notifier *mn,
> >  				       struct mm_struct *mm,
> >  				       unsigned long start,
> > -				       unsigned long end)
> > +				       unsigned long end,
> > +				       bool blockable)
> >  {
> >  	struct hmm *hmm = mm->hmm;
> >
> >  	VM_BUG_ON(!hmm);
> >
> >  	atomic_inc(&hmm->sequence);
> > +
> > +	return 0;
> >  }
> >
> >  static void hmm_invalidate_range_end(struct mmu_notifier *mn,
>
> This assumes that hmm_invalidate_range_end() does not have a memory
> allocation dependency. But hmm_invalidate_range() from
> hmm_invalidate_range_end() involves the
>
> 	down_read(&hmm->mirrors_sem);
> 	list_for_each_entry(mirror, &hmm->mirrors, list)
> 		mirror->ops->sync_cpu_device_pagetables(mirror, action,
> 							start, end);
> 	up_read(&hmm->mirrors_sem);
>
> sequence. What is surprising is that there is no in-tree user that assigns
> the sync_cpu_device_pagetables field.
>
> $ grep -Fr sync_cpu_device_pagetables *
> Documentation/vm/hmm.rst: /* sync_cpu_device_pagetables() - synchronize page tables
> include/linux/hmm.h: * will get callbacks through sync_cpu_device_pagetables() operation (see
> include/linux/hmm.h: /* sync_cpu_device_pagetables() - synchronize page tables
> include/linux/hmm.h: void (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror,
> include/linux/hmm.h: * hmm_mirror_ops.sync_cpu_device_pagetables() callback, so that CPU page
> mm/hmm.c: mirror->ops->sync_cpu_device_pagetables(mirror, action,
>
> That is, this API seems to be used only by out-of-tree users at the moment.
> Since we can't check that none of them has a memory allocation dependency,
> I think that hmm_invalidate_range_start() should return -EAGAIN if
> blockable == false for now.

So you can see the update and its users here:

https://cgit.freedesktop.org/~glisse/linux/log/?h=hmm-intel-v00
https://cgit.freedesktop.org/~glisse/linux/log/?h=hmm-nouveau-v01
https://cgit.freedesktop.org/~glisse/linux/log/?h=hmm-radeon-v00

I am still working on the Mellanox and AMD GPU patchsets. I will post the HMM
changes that adapt to Michal's patch shortly, as they have been sufficiently
tested by now.

https://cgit.freedesktop.org/~glisse/linux/commit/?h=hmm-4.20&id=78785dcb5ba0924c2c5e7be027793f99ebbc39f3
https://cgit.freedesktop.org/~glisse/linux/commit/?h=hmm-4.20&id=4fc25571dc893f2b278e90cda9e71e139e01de70

Cheers,
Jérôme
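
For illustration, below is a minimal sketch of the -EAGAIN bail-out Tetsuo
suggests above, applied on top of the quoted hunk. This is only a sketch of
the suggestion, not the posted patch and not code from the branches linked
above:

	/*
	 * Sketch only (Tetsuo's suggestion): fail a non-blockable
	 * invalidation instead of claiming it cannot block. The paired
	 * hmm_invalidate_range_end() takes hmm->mirrors_sem and calls
	 * out-of-tree sync_cpu_device_pagetables() callbacks whose
	 * allocation behaviour cannot be audited, so in !blockable
	 * context the safe answer is to ask the caller (the OOM reaper)
	 * to give up and retry later.
	 */
	static int hmm_invalidate_range_start(struct mmu_notifier *mn,
					      struct mm_struct *mm,
					      unsigned long start,
					      unsigned long end,
					      bool blockable)
	{
		struct hmm *hmm = mm->hmm;

		VM_BUG_ON(!hmm);

		if (!blockable)
			return -EAGAIN;

		atomic_inc(&hmm->sequence);

		return 0;
	}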