Re: [Xen-devel] [PATCH v10 6/6] x86/ioreq server: Synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.
> -----Original Message-----
> From: Yu Zhang [mailto:yu.c.zhang@xxxxxxxxxxxxxxx]
> Sent: 02 April 2017 13:24
> To: xen-devel@xxxxxxxxxxxxx
> Cc: zhiyuan.lv@xxxxxxxxx; Paul Durrant <Paul.Durrant@xxxxxxxxxx>; Jan
> Beulich <jbeulich@xxxxxxxx>; Andrew Cooper
> <Andrew.Cooper3@xxxxxxxxxx>; George Dunlap
> <George.Dunlap@xxxxxxxxxx>
> Subject: [PATCH v10 6/6] x86/ioreq server: Synchronously reset outstanding
> p2m_ioreq_server entries when an ioreq server unmaps.
>
> After an ioreq server has unmapped, the remaining p2m_ioreq_server
> entries need to be reset back to p2m_ram_rw. This patch does this
> synchronously by iterating the p2m table.
>
> The synchronous resetting is necessary because we need to guarantee
> the p2m table is clean before another ioreq server is mapped. And
> since sweeping the p2m table could be time consuming, it is done
> with hypercall continuation.
>
> Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>

Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

> ---
> Cc: Paul Durrant <paul.durrant@xxxxxxxxxx>
> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
>
> changes in v3:
>   - According to comments from Paul: use max_nr, instead of
>     last_gfn, for p2m_finish_type_change().
>   - According to comments from Jan: use gfn_t as the type of
>     first_gfn in p2m_finish_type_change().
>   - According to comments from Jan: simplify the if condition
>     before using p2m_finish_type_change().
>
> changes in v2:
>   - According to comments from Jan and Andrew: do not use the
>     HVMOP-type hypercall continuation method. Instead, add an
>     opaque field to xen_dm_op_map_mem_type_to_ioreq_server to
>     store the gfn.
>   - According to comments from Jan: change the routine's comments
>     and the names of the parameters of p2m_finish_type_change().
>
> changes in v1:
>   - This patch is split from patch 4 of the last version.
>   - According to comments from Jan: update gfn_start when using
>     hypercall continuation to reset the p2m type.
>   - According to comments from Jan: use min() to compare gfn_end
>     and the max mapped pfn in p2m_finish_type_change().
> ---
>  xen/arch/x86/hvm/dm.c     | 41 ++++++++++++++++++++++++++++++++++++++---
>  xen/arch/x86/mm/p2m.c     | 29 +++++++++++++++++++++++++++++
>  xen/include/asm-x86/p2m.h |  6 ++++++
>  3 files changed, 73 insertions(+), 3 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
> index 7e0da81..d72b7bd 100644
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -384,15 +384,50 @@ static int dm_op(domid_t domid,
>
>      case XEN_DMOP_map_mem_type_to_ioreq_server:
>      {
> -        const struct xen_dm_op_map_mem_type_to_ioreq_server *data =
> +        struct xen_dm_op_map_mem_type_to_ioreq_server *data =
>              &op.u.map_mem_type_to_ioreq_server;
> +        unsigned long first_gfn = data->opaque;
> +
> +        const_op = false;
>
>          rc = -EOPNOTSUPP;
>          if ( !hap_enabled(d) )
>              break;
>
> -        rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
> -                                              data->type, data->flags);
> +        if ( first_gfn == 0 )
> +            rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
> +                                                  data->type, data->flags);
> +        else
> +            rc = 0;
> +
> +        /*
> +         * Iterate p2m table when an ioreq server unmaps from p2m_ioreq_server,
> +         * and reset the remaining p2m_ioreq_server entries back to p2m_ram_rw.
> +         */
> +        if ( rc == 0 && data->flags == 0 )
> +        {
> +            struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +            while ( read_atomic(&p2m->ioreq.entry_count) &&
> +                    first_gfn <= p2m->max_mapped_pfn )
> +            {
> +                /* Iterate p2m table for 256 gfns each time. */
> +                p2m_finish_type_change(d, _gfn(first_gfn), 256,
> +                                       p2m_ioreq_server, p2m_ram_rw);
> +
> +                first_gfn += 256;
> +
> +                /* Check for continuation if it's not the last iteration. */
> +                if ( first_gfn <= p2m->max_mapped_pfn &&
> +                     hypercall_preempt_check() )
> +                {
> +                    rc = -ERESTART;
> +                    data->opaque = first_gfn;
> +                    break;
> +                }
> +            }
> +        }
> +
>          break;
>      }
>
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 7a1bdd8..0daaa86 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1031,6 +1031,35 @@ void p2m_change_type_range(struct domain *d,
>      p2m_unlock(p2m);
>  }
>
> +/* Synchronously modify the p2m type for a range of gfns from ot to nt. */
> +void p2m_finish_type_change(struct domain *d,
> +                            gfn_t first_gfn, unsigned long max_nr,
> +                            p2m_type_t ot, p2m_type_t nt)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    p2m_type_t t;
> +    unsigned long gfn = gfn_x(first_gfn);
> +    unsigned long last_gfn = gfn + max_nr - 1;
> +
> +    ASSERT(ot != nt);
> +    ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
> +
> +    p2m_lock(p2m);
> +
> +    last_gfn = min(last_gfn, p2m->max_mapped_pfn);
> +    while ( gfn <= last_gfn )
> +    {
> +        get_gfn_query_unlocked(d, gfn, &t);
> +
> +        if ( t == ot )
> +            p2m_change_type_one(d, gfn, t, nt);
> +
> +        gfn++;
> +    }
> +
> +    p2m_unlock(p2m);
> +}
> +
>  /*
>   * Returns:
>   *  0 for success
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index e7e390d..0e670af 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -611,6 +611,12 @@ void p2m_change_type_range(struct domain *d,
>  int p2m_change_type_one(struct domain *d, unsigned long gfn,
>                          p2m_type_t ot, p2m_type_t nt);
>
> +/* Synchronously change the p2m type for a range of gfns */
> +void p2m_finish_type_change(struct domain *d,
> +                            gfn_t first_gfn,
> +                            unsigned long max_nr,
> +                            p2m_type_t ot, p2m_type_t nt);
> +
>  /* Report a change affecting memory types. */
>  void p2m_memory_type_changed(struct domain *d);
>
> --
> 1.9.1

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel