Re: [PATCH 5/5] x86/p2m: split write_p2m_entry() hook
On Thu, Nov 12, 2020 at 01:29:33PM +0100, Jan Beulich wrote:
> On 11.11.2020 13:17, Roger Pau Monné wrote:
> > On Tue, Nov 10, 2020 at 03:50:44PM +0100, Jan Beulich wrote:
> >> On 10.11.2020 14:59, Roger Pau Monné wrote:
> >>> On Wed, Oct 28, 2020 at 10:24:53AM +0100, Jan Beulich wrote:
> >>>> --- a/xen/arch/x86/mm/p2m-pt.c
> >>>> +++ b/xen/arch/x86/mm/p2m-pt.c
> >>>> @@ -122,17 +122,55 @@ static int write_p2m_entry(struct p2m_do
> >>>>  {
> >>>>      struct domain *d = p2m->domain;
> >>>>      struct vcpu *v = current;
> >>>> -    int rc = 0;
> >>>>
> >>>>      if ( v->domain != d )
> >>>>          v = d->vcpu ? d->vcpu[0] : NULL;
> >>>>      if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) ||
> >>>>           p2m_is_nestedp2m(p2m) )
> >>>> -        rc = p2m->write_p2m_entry(p2m, gfn, p, new, level);
> >>>> +    {
> >>>> +        unsigned int oflags;
> >>>> +        mfn_t omfn;
> >>>> +        int rc;
> >>>> +
> >>>> +        paging_lock(d);
> >>>> +
> >>>> +        if ( p2m->write_p2m_entry_pre )
> >>>> +            p2m->write_p2m_entry_pre(d, gfn, p, new, level);
> >>>> +
> >>>> +        oflags = l1e_get_flags(*p);
> >>>> +        omfn = l1e_get_mfn(*p);
> >>>> +
> >>>> +        rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
> >>>> +                              p2m_flags_to_type(oflags), l1e_get_mfn(new),
> >>>> +                              omfn, level);
> >>>> +        if ( rc )
> >>>> +        {
> >>>> +            paging_unlock(d);
> >>>> +            return rc;
> >>>> +        }
> >>>> +
> >>>> +        safe_write_pte(p, new);
> >>>> +
> >>>> +        if ( p2m->write_p2m_entry_post )
> >>>> +            p2m->write_p2m_entry_post(p2m, oflags);
> >>>> +
> >>>> +        paging_unlock(d);
> >>>> +
> >>>> +        if ( nestedhvm_enabled(d) && !p2m_is_nestedp2m(p2m) &&
> >>>> +             (oflags & _PAGE_PRESENT) &&
> >>>> +             !p2m_get_hostp2m(d)->defer_nested_flush &&
> >>>> +             /*
> >>>> +              * We are replacing a valid entry so we need to flush nested p2ms,
> >>>> +              * unless the only change is an increase in access rights.
> >>>> +              */
> >>>> +             (!mfn_eq(omfn, l1e_get_mfn(new)) ||
> >>>> +              !perms_strictly_increased(oflags, l1e_get_flags(new))) )
> >>>> +            p2m_flush_nestedp2m(d);
> >>>
> >>> It feels slightly weird to have a nested p2m post hook, and yet have
> >>> nested-specific code here.
> >>>
> >>> Have you considered if the post hook could be moved outside of the
> >>> locked region, so that we could put this chunk there in the nested p2m
> >>> case?
> >>
> >> Yes, I did, but I don't think the post hook can be moved out. The
> >> only alternative therefore would be a 3rd hook. And this hook would
> >> then need to be installed on the host p2m for nested guests, as
> >> opposed to nestedp2m_write_p2m_entry_post, which gets installed in
> >> the nested p2m-s. As said in the description, the main reason I
> >> decided against a 3rd hook is that I suppose the code here isn't
> >> HAP-specific (while prior to this patch it was).
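
For illustration, such a 3rd hook would basically just carry the nested-flush
chunk from the hunk above, and would need installing on the host p2m whenever
nested HVM is enabled (name and signature below are made up, this is only a
sketch):

/* Hypothetical 3rd hook, installed on the host p2m of nested guests only. */
static void nested_flush_p2m_entry_post(struct p2m_domain *p2m,
                                        l1_pgentry_t new,
                                        unsigned int oflags, mfn_t omfn)
{
    struct domain *d = p2m->domain;

    /*
     * Same condition as the open-coded check in write_p2m_entry() above;
     * the nestedhvm_enabled() / !p2m_is_nestedp2m() parts would become the
     * condition for installing the hook instead.
     */
    if ( (oflags & _PAGE_PRESENT) &&
         !p2m_get_hostp2m(d)->defer_nested_flush &&
         (!mfn_eq(omfn, l1e_get_mfn(new)) ||
          !perms_strictly_increased(oflags, l1e_get_flags(new))) )
        p2m_flush_nestedp2m(d);
}
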
> >
> > I'm not convinced the guest TLB flush needs to be performed while
> > holding the paging lock. The point of such flush is to invalidate any
> > intermediate guest visible translations that might now be invalid as a
> > result of the p2m change, but the paging lock doesn't affect the guest
> > in any way.
> >
> > It's true that the dirty_cpumask might change, but I think we only
> > care that when returning from the function there are no stale cache
> > entries that contain the now invalid translation, and this can be
> > achieved equally by doing the flush outside of the locked region.
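
Put differently, the tail of the hunk above could become something like this
(just a sketch; it assumes the post hook body is safe to run without the
paging lock):

        safe_write_pte(p, new);

        paging_unlock(d);

        /*
         * Flush outside the locked region: anything that starts using the
         * p2m after the write above already sees the new entry, so a
         * dirty_cpumask sampled slightly later is still good enough.
         */
        if ( p2m->write_p2m_entry_post )
            p2m->write_p2m_entry_post(p2m, oflags);
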
>
> I agree with all this. If only it was merely about TLB flushes. In
> the shadow case, shadow_blow_all_tables() gets invoked, and that
> one - looking at the other call sites - wants the paging lock held.

You got me confused here; I think you meant shadow_blow_tables()?

The post hook for shadow could take the paging lock again, as I don't
think the removal of the tables strictly needs to be done inside of the
same locked region?

Something to consider from a performance PoV.
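
Such a post hook for shadow could then look something like this, if it gets
invoked with the paging lock already dropped (sketch only, assuming the
current field and helper names):

static void
shadow_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags)
{
    struct domain *d = p2m->domain;

    /* Re-acquire the paging lock just for blowing the shadow tables. */
    paging_lock(d);
    if ( d->arch.paging.shadow.has_fast_mmio_entries )
    {
        shadow_blow_tables(d);
        d->arch.paging.shadow.has_fast_mmio_entries = false;
    }
    paging_unlock(d);
}
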
> Additionally moving the stuff out of the locked region wouldn't
> allow us to achieve the goal of moving the nested flush into the
> hook, unless we introduced further hook handlers to be installed
> on the host p2m-s of nested guests.

Right, or else we would also need to add that chunk to the non-nested
p2m hook?

Maybe you could join both the nested and non-nested hooks and use a
different dirty bitmap for the flush?
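
Something along these lines, i.e. a single post hook that just picks the
right mask (sketch only; it assumes guest_flush_tlb_mask() is what the HAP
hook ends up using and that nested p2m-s keep their own dirty_cpumask):

static void
hap_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags)
{
    struct domain *d = p2m->domain;

    if ( oflags & _PAGE_PRESENT )
        /*
         * Nested p2m-s track their own dirty mask, while the host p2m uses
         * the domain-wide one.
         */
        guest_flush_tlb_mask(d, p2m_is_nestedp2m(p2m) ? p2m->dirty_cpumask
                                                      : d->dirty_cpumask);
}
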
Thanks, Roger.