Re: [Xen-devel] [PATCH V4 3/3] x86/altp2m: fix display frozen when switching to a new view early
On 11/01/2018 02:45 PM, Razvan Cojocaru wrote:
> When a new altp2m view is created very early on guest boot, the
> display will freeze (although the guest will run normally). This
> may also happen on resizing the display. The reason is the way
> Xen currently (mis)handles logdirty VGA: it intentionally
> misconfigures VGA pages so that they will fault.
>
> The problem is that it only does this in the host p2m. Once we
> switch to a new altp2m, the misconfigured entries will no longer
> fault, so the display will not be updated.
>
> This patch:
> * updates ept_handle_misconfig() to use the active altp2m instead
> of the hostp2m;
> * modifies p2m_change_entry_type_global(), p2m_memory_type_changed
> and p2m_change_type_range() to propagate their changes to all
> valid altp2ms.
>
> Signed-off-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
> Suggested-by: George Dunlap <george.dunlap@xxxxxxxxxx>
Hey Razvan,
Sorry for taking so long to get to this. At a high level this looks
good. One answer and one other comment...
>
> ---
> CC: Jun Nakajima <jun.nakajima@xxxxxxxxx>
> CC: Kevin Tian <kevin.tian@xxxxxxxxx>
> CC: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
> CC: Jan Beulich <jbeulich@xxxxxxxx>
> CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> CC: Wei Liu <wei.liu2@xxxxxxxxxx>
>
> ---
> Changes since V3:
> - RFC: We need George's opinion on Jan's suggestion to update
> p2m-pt.c as well.
Hmm -- if the only change is to add the `p2m_get_altp2m()` clause from
ept_handle_misconfig(), it wouldn't be terrible; still, it's extra code
and an extra branch that will never be executed.
I think I'd rather put an ASSERT(!altp2m_active()) there instead, with a
comment to the effect that if altp2m is ever enabled for NPT / shadow,
we'll have to use the altp2m; and to see ept_handle_misconfig().
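Roughly what I have in mind -- an untested sketch, assuming the p2m
pointer (and hence p2m->domain) is in scope at that point in p2m-pt.c:

    /*
     * altp2m is not supported on NPT / shadow.  If that ever changes,
     * this code will need to act on the active altp2m rather than the
     * hostp2m -- see ept_handle_misconfig().
     */
    ASSERT(!altp2m_active(p2m->domain));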
> - Fixed mis-indented line in change_type_range().
> - Moved p2m_memory_type_changed() (and static helper) under
> #ifdef CONFIG_HVM.
> ---
> xen/arch/x86/mm/p2m-ept.c | 8 +++++
> xen/arch/x86/mm/p2m.c | 83 +++++++++++++++++++++++++++++++++++++++--------
> xen/include/asm-x86/p2m.h | 6 ++--
> 3 files changed, 81 insertions(+), 16 deletions(-)
>
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index fabcd06..e6fa85f 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -657,6 +657,9 @@ bool_t ept_handle_misconfig(uint64_t gpa)
> bool_t spurious;
> int rc;
>
> + if ( altp2m_active(curr->domain) )
> + p2m = p2m_get_altp2m(curr);
> +
> p2m_lock(p2m);
>
> spurious = curr->arch.hvm.vmx.ept_spurious_misconfig;
> @@ -1440,6 +1443,11 @@ void p2m_init_altp2m_ept(struct domain *d, unsigned int i)
> struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
> struct ept_data *ept;
>
> + p2m->max_mapped_pfn = hostp2m->max_mapped_pfn;
> + p2m->default_access = hostp2m->default_access;
> + p2m->domain = hostp2m->domain;
> +
> + p2m->global_logdirty = hostp2m->global_logdirty;
> p2m->ept.ad = hostp2m->ept.ad;
> p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
> p2m->max_remapped_gfn = 0;
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index bc6e543..70e436d 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -279,7 +279,6 @@ int p2m_init(struct domain *d)
> int p2m_is_logdirty_range(struct p2m_domain *p2m, unsigned long start,
> unsigned long end)
> {
> - ASSERT(p2m_is_hostp2m(p2m));
> if ( p2m->global_logdirty ||
> rangeset_contains_range(p2m->logdirty_ranges, start, end) )
> return 1;
> @@ -288,24 +287,49 @@ int p2m_is_logdirty_range(struct p2m_domain *p2m, unsigned long start,
> return 0;
> }
>
> +static void change_entry_type_global(struct p2m_domain *p2m,
> + p2m_type_t ot, p2m_type_t nt)
> +{
> + p2m->change_entry_type_global(p2m, ot, nt);
> + p2m->global_logdirty = (nt == p2m_ram_logdirty);
> +}
> +
> void p2m_change_entry_type_global(struct domain *d,
> p2m_type_t ot, p2m_type_t nt)
> {
> - struct p2m_domain *p2m = p2m_get_hostp2m(d);
> + struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
>
> ASSERT(ot != nt);
> ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
>
> - p2m_lock(p2m);
> - p2m->change_entry_type_global(p2m, ot, nt);
> - p2m->global_logdirty = (nt == p2m_ram_logdirty);
> - p2m_unlock(p2m);
> + p2m_lock(hostp2m);
> +
> + change_entry_type_global(hostp2m, ot, nt);
> +
> +#ifdef CONFIG_HVM
> + if ( unlikely(altp2m_active(d)) )
> + {
> + unsigned int i;
> +
> + for ( i = 0; i < MAX_ALTP2M; i++ )
> + if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
> + {
> + struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
> +
> + p2m_lock(p2m);
> + change_entry_type_global(p2m, ot, nt);
> + p2m_unlock(p2m);
> + }
> + }
> +#endif
> +
> + p2m_unlock(hostp2m);
So here you hold the hostp2m lock while updating all the altp2ms,
taking each altp2m lock in turn. That sounds like it's probably
necessary; but do you remember the locking discipline? Is that
allowed, and do we ever grab an altp2m lock and then the hostp2m lock?
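(To illustrate the concern -- a made-up schematic, not actual Xen code:
if any other path were to nest the two locks the other way around, we'd
have a classic AB/BA deadlock.)

    /*
     * This patch:                      Hypothetical other path:
     *
     *   p2m_lock(hostp2m);               p2m_lock(altp2m);
     *   ...                              ...
     *   p2m_lock(altp2m);   <- waits     p2m_lock(hostp2m);  <- waits
     */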
But then...
> }
>
> -void p2m_memory_type_changed(struct domain *d)
> +#ifdef CONFIG_HVM
> +/* There's already a memory_type_changed() in asm/mtrr.h. */
> +static void _memory_type_changed(struct p2m_domain *p2m)
> {
> - struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -
> if ( p2m->memory_type_changed )
> {
> p2m_lock(p2m);
> @@ -314,6 +338,21 @@ void p2m_memory_type_changed(struct domain *d)
> }
> }
>
> +void p2m_memory_type_changed(struct domain *d)
> +{
> + _memory_type_changed(p2m_get_hostp2m(d));
> +
> + if ( unlikely(altp2m_active(d)) )
> + {
> + unsigned int i;
> +
> + for ( i = 0; i < MAX_ALTP2M; i++ )
> + if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
> + _memory_type_changed(d->arch.altp2m_p2m[i]);
> + }
> +}
> +#endif
...here and...
> +
> int p2m_set_ioreq_server(struct domain *d,
> unsigned int flags,
> struct hvm_ioreq_server *s)
> @@ -994,12 +1033,12 @@ int p2m_change_type_one(struct domain *d, unsigned long gfn_l,
> }
>
> /* Modify the p2m type of a range of gfns from ot to nt. */
> -void p2m_change_type_range(struct domain *d,
> - unsigned long start, unsigned long end,
> - p2m_type_t ot, p2m_type_t nt)
> +static void change_type_range(struct p2m_domain *p2m,
> + unsigned long start, unsigned long end,
> + p2m_type_t ot, p2m_type_t nt)
> {
> unsigned long gfn = start;
> - struct p2m_domain *p2m = p2m_get_hostp2m(d);
> + struct domain *d = p2m->domain;
> int rc = 0;
>
> ASSERT(ot != nt);
> @@ -1052,6 +1091,24 @@ void p2m_change_type_range(struct domain *d,
> p2m_unlock(p2m);
> }
>
> +void p2m_change_type_range(struct domain *d,
> + unsigned long start, unsigned long end,
> + p2m_type_t ot, p2m_type_t nt)
> +{
> + change_type_range(p2m_get_hostp2m(d), start, end, ot, nt);
> +
> +#ifdef CONFIG_HVM
> + if ( unlikely(altp2m_active(d)) )
> + {
> + unsigned int i;
> +
> + for ( i = 0; i < MAX_ALTP2M; i++ )
> + if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
> + change_type_range(d->arch.altp2m_p2m[i], start, end, ot, nt);
> + }
> +#endif
> +}
> +
...here you grab & release each lock separately, inside the update
function. memory_type_changed() is probably more or less idempotent,
so it won't matter if two different calls race; but it seems likely
that if two p2m_change_type_range() calls happen concurrently, the
various altp2ms could end up with different results. Is it worth
refactoring both of these so that, as in p2m_change_entry_type_global(),
you hold the hostp2m lock while you update the individual altp2ms?
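For concreteness, the shape I'm imagining is roughly the below -- an
untested sketch, assuming change_type_range() is reworked so that it no
longer takes the p2m lock itself (the caller holds it), mirroring
p2m_change_entry_type_global() above:

void p2m_change_type_range(struct domain *d,
                           unsigned long start, unsigned long end,
                           p2m_type_t ot, p2m_type_t nt)
{
    struct p2m_domain *hostp2m = p2m_get_hostp2m(d);

    p2m_lock(hostp2m);

    /* change_type_range() assumed to expect the lock already held. */
    change_type_range(hostp2m, start, end, ot, nt);

#ifdef CONFIG_HVM
    if ( unlikely(altp2m_active(d)) )
    {
        unsigned int i;

        for ( i = 0; i < MAX_ALTP2M; i++ )
            if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
            {
                struct p2m_domain *altp2m = d->arch.altp2m_p2m[i];

                p2m_lock(altp2m);
                change_type_range(altp2m, start, end, ot, nt);
                p2m_unlock(altp2m);
            }
    }
#endif

    p2m_unlock(hostp2m);
}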
-George
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel