Re: [Xen-devel] [PATCH V7 2/5] x86/mm: allocate logdirty_ranges for altp2ms
>>> On 19.11.18 at 18:26, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
> For now, only do allocation/deallocation; keeping them in sync
> will be done in subsequent patches.
>
> Logdirty synchronization will only be done for active altp2ms;
> so allocate logdirty rangesets (copying the host logdirty
> rangeset) when an altp2m is activated, and free it when
> deactivated.
>
> Write a helper function to do altp2m activation (appropriately
> handling failures). Also, refactor p2m_reset_altp2m() so that it
> can be used to remove redundant codepaths, fixing the locking
> while we're at it.

Perhaps this should have been a separate patch again, such that
e.g. ...

> +static void p2m_reset_altp2m(struct domain *d, unsigned int idx,
> +                             enum altp2m_reset_type reset_type)
> +{
> +    struct p2m_domain *p2m;
> +
> +    ASSERT(idx < MAX_ALTP2M);
> +    p2m = d->arch.altp2m_p2m[idx];
> +
> +    p2m_lock(p2m);
> +
> +    p2m_flush_table_locked(p2m);
> +
> +    if ( reset_type == ALTP2M_DEACTIVATE )
> +        p2m_free_logdirty(p2m);
> +
> +    /* Uninit and reinit ept to force TLB shootdown */
> +    ept_p2m_uninit(p2m);
> +    ept_p2m_init(p2m);
> +
> +    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
> +    p2m->max_remapped_gfn = 0;

... the addition of these can be properly associated with either
part of the change. Looking at the code you remove from e.g.
p2m_flush_altp2m(), it's not part of the refactoring, but part of
what this patch's actual purpose is. But this is guesswork of mine
without the split and without the addition getting explained, not
least because this getting moved here from the original instance
of the function might also mean that it's part of the refactoring,
but would then need to be done only in the ALTP2M_RESET case.

Jan
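For context, the activation helper described in the quoted commit
message might look roughly like the sketch below. This is an
illustration only, not the code under review: p2m_activate_altp2m(),
p2m_init_logdirty() and copy_logdirty_ranges() are assumed or
hypothetical names (only p2m_free_logdirty() appears in the quoted
hunk); the point is merely the allocate, copy, back-out-on-failure
flow the message describes.

    /*
     * Sketch only, under the assumptions stated above: activate altp2m
     * 'idx', giving it its own logdirty rangeset seeded from the host
     * p2m, and undo the allocation if activation fails part-way.
     */
    static int p2m_activate_altp2m(struct domain *d, unsigned int idx)
    {
        struct p2m_domain *hostp2m, *p2m;
        int rc;

        ASSERT(idx < MAX_ALTP2M);

        hostp2m = p2m_get_hostp2m(d);
        p2m = d->arch.altp2m_p2m[idx];

        p2m_lock(p2m);

        /* Allocate the altp2m's own logdirty rangeset (assumed helper). */
        rc = p2m_init_logdirty(p2m);
        if ( rc )
            goto out;

        /* Seed it with a copy of the host p2m's ranges (hypothetical helper). */
        rc = copy_logdirty_ranges(p2m, hostp2m);
        if ( rc )
        {
            /* Don't leak the rangeset if activation fails here. */
            p2m_free_logdirty(p2m);
            goto out;
        }

        p2m_init_altp2m_ept(d, idx);

     out:
        p2m_unlock(p2m);

        return rc;
    }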