Re: [Xen-devel] [PATCH v9 5/5] x86/ioreq server: Synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.
> From: Yu Zhang
> Sent: Tuesday, March 21, 2017 10:53 AM
>
> After an ioreq server has unmapped, the remaining p2m_ioreq_server
> entries need to be reset back to p2m_ram_rw. This patch does this
> synchronously by iterating the p2m table.
>
> The synchronous resetting is necessary because we need to guarantee
> the p2m table is clean before another ioreq server is mapped. And
> since sweeping the p2m table can be time consuming, it is done with
> hypercall continuation.
>
> Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
> ---
> Cc: Paul Durrant <paul.durrant@xxxxxxxxxx>
> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
>
> changes in v2:
> - According to comments from Jan and Andrew: do not use the
> HVMOP type hypercall continuation method. Instead, add an
> opaque field in xen_dm_op_map_mem_type_to_ioreq_server to
> store the gfn.
> - According to comments from Jan: change routine's comments
> and name of parameters of p2m_finish_type_change().
>
> changes in v1:
> - This patch is split from patch 4 of the last version.
> - According to comments from Jan: update the gfn_start when using
> hypercall continuation to reset the p2m type.
> - According to comments from Jan: use min() to compare gfn_end
> and max mapped pfn in p2m_finish_type_change().
> ---
> xen/arch/x86/hvm/dm.c | 41 ++++++++++++++++++++++++++++++++++++++---
> xen/arch/x86/mm/p2m.c | 29 +++++++++++++++++++++++++++++
> xen/include/asm-x86/p2m.h | 7 +++++++
> 3 files changed, 74 insertions(+), 3 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
> index 3f9484d..a24d0f8 100644
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -385,16 +385,51 @@ static int dm_op(domid_t domid,
>
> case XEN_DMOP_map_mem_type_to_ioreq_server:
> {
> - const struct xen_dm_op_map_mem_type_to_ioreq_server *data =
> + struct xen_dm_op_map_mem_type_to_ioreq_server *data =
> &op.u.map_mem_type_to_ioreq_server;
> + unsigned long first_gfn = data->opaque;
> + unsigned long last_gfn;
> +
> + const_op = false;
>
> rc = -EOPNOTSUPP;
> /* Only support for HAP enabled hvm. */
> if ( !hap_enabled(d) )
> break;
>
> - rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
> - data->type, data->flags);
> + if ( first_gfn == 0 )
> + rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
> + data->type, data->flags);
> + /*
> +  * Iterate p2m table when an ioreq server unmaps from p2m_ioreq_server,
> +  * and reset the remaining p2m_ioreq_server entries back to p2m_ram_rw.
> +  */
Can you elaborate on how the device model is expected to use this
new extension, i.e. on deciding first_gfn?
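
For reference, here is a minimal device-model-side sketch of the call
pattern as I read it, assuming the libxendevicemodel wrapper (the wrapper
call, the helper name and the trimmed error handling are illustrative and
not part of this patch). Per the hunk above, the hypervisor stores the
resume gfn in data->opaque and returns -ERESTART, so the sweep continues
across preemptions without the caller driving it explicitly:

    /*
     * Illustrative device-model usage only -- not part of this patch.
     * Assumes libxendevicemodel; error handling is trimmed for brevity.
     */
    #include <xendevicemodel.h>

    static int toggle_ioreq_mem_type(xendevicemodel_handle *dmod,
                                     domid_t domid, ioservid_t id)
    {
        int rc;

        /* Claim p2m_ioreq_server: forward write accesses to this server. */
        rc = xendevicemodel_map_mem_type_to_ioreq_server(
            dmod, domid, id, HVMMEM_ioreq_server,
            XEN_DMOP_IOREQ_MEM_ACCESS_WRITE);
        if ( rc )
            return rc;

        /* ... pages are switched to p2m_ioreq_server and emulated ... */

        /*
         * Unmap (flags == 0).  The hypervisor then sweeps the p2m and
         * resets outstanding p2m_ioreq_server entries to p2m_ram_rw,
         * using the opaque field internally as its continuation cursor.
         */
        return xendevicemodel_map_mem_type_to_ioreq_server(
            dmod, domid, id, HVMMEM_ioreq_server, 0);
    }
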
> + if ( (first_gfn > 0) || (data->flags == 0 && rc == 0) )
> + {
> + struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> + while ( read_atomic(&p2m->ioreq.entry_count) &&
> + first_gfn <= p2m->max_mapped_pfn )
> + {
> + /* Iterate p2m table for 256 gfns each time. */
> + last_gfn = first_gfn + 0xff;
> +
> + p2m_finish_type_change(d, first_gfn, last_gfn,
> + p2m_ioreq_server, p2m_ram_rw);
> +
> + first_gfn = last_gfn + 1;
> +
> + /* Check for continuation if it's not the last iteration. */
> + if ( first_gfn <= p2m->max_mapped_pfn &&
> + hypercall_preempt_check() )
> + {
> + rc = -ERESTART;
> + data->opaque = first_gfn;
> + break;
> + }
> + }
> + }
> +
> break;
> }
>
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index e3e54f1..0a2f276 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1038,6 +1038,35 @@ void p2m_change_type_range(struct domain *d,
> p2m_unlock(p2m);
> }
>
> +/* Synchronously modify the p2m type for a range of gfns from ot to nt. */
> +void p2m_finish_type_change(struct domain *d,
> + unsigned long first_gfn, unsigned long last_gfn,
> + p2m_type_t ot, p2m_type_t nt)
> +{
> + struct p2m_domain *p2m = p2m_get_hostp2m(d);
> + p2m_type_t t;
> + unsigned long gfn = first_gfn;
> +
> + ASSERT(first_gfn <= last_gfn);
> + ASSERT(ot != nt);
> + ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
> +
> + p2m_lock(p2m);
> +
> + last_gfn = min(last_gfn, p2m->max_mapped_pfn);
> + while ( gfn <= last_gfn )
> + {
> + get_gfn_query_unlocked(d, gfn, &t);
> +
> + if ( t == ot )
> + p2m_change_type_one(d, gfn, t, nt);
> +
> + gfn++;
> + }
> +
> + p2m_unlock(p2m);
> +}
> +
> /*
> * Returns:
> * 0 for success
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 395f125..3d665e8 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -611,6 +611,13 @@ void p2m_change_type_range(struct domain *d,
> int p2m_change_type_one(struct domain *d, unsigned long gfn,
> p2m_type_t ot, p2m_type_t nt);
>
> +/* Synchronously change the p2m type for a range of gfns:
> + * [first_gfn ... last_gfn]. */
> +void p2m_finish_type_change(struct domain *d,
> + unsigned long first_gfn,
> + unsigned long last_gfn,
> + p2m_type_t ot, p2m_type_t nt);
> +
> /* Report a change affecting memory types. */
> void p2m_memory_type_changed(struct domain *d);
>
> --
> 1.9.1
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel