Re: [Xen-devel] [PATCH v9 5/5] x86/ioreq server: Synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.
On 3/21/2017 6:00 PM, Paul Durrant wrote:

-----Original Message-----
From: Yu Zhang [mailto:yu.c.zhang@xxxxxxxxxxxxxxx]
Sent: 21 March 2017 02:53
To: xen-devel@xxxxxxxxxxxxx
Cc: zhiyuan.lv@xxxxxxxxx; Paul Durrant <Paul.Durrant@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>
Subject: [PATCH v9 5/5] x86/ioreq server: Synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.

After an ioreq server has unmapped, the remaining p2m_ioreq_server entries
need to be reset back to p2m_ram_rw. This patch does this synchronously by
iterating the p2m table.

The synchronous resetting is necessary because we need to guarantee the p2m
table is clean before another ioreq server is mapped. And since the sweeping
of the p2m table could be time consuming, it is done with hypercall
continuation.

Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
---
Cc: Paul Durrant <paul.durrant@xxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>

changes in v2:
  - According to comments from Jan and Andrew: do not use the HVMOP type
    hypercall continuation method. Instead, add an opaque field in
    xen_dm_op_map_mem_type_to_ioreq_server to store the gfn.
  - According to comments from Jan: change the routine's comments and the
    names of the parameters of p2m_finish_type_change().

changes in v1:
  - This patch is split from patch 4 of the last version.
  - According to comments from Jan: update the gfn_start when using
    hypercall continuation to reset the p2m type.
  - According to comments from Jan: use min() to compare gfn_end and the
    max mapped pfn in p2m_finish_type_change().
---
 xen/arch/x86/hvm/dm.c     | 41 ++++++++++++++++++++++++++++++++++++++---
 xen/arch/x86/mm/p2m.c     | 29 +++++++++++++++++++++++++++++
 xen/include/asm-x86/p2m.h |  7 +++++++
 3 files changed, 74 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 3f9484d..a24d0f8 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -385,16 +385,51 @@ static int dm_op(domid_t domid,

     case XEN_DMOP_map_mem_type_to_ioreq_server:
     {
-        const struct xen_dm_op_map_mem_type_to_ioreq_server *data =
+        struct xen_dm_op_map_mem_type_to_ioreq_server *data =
             &op.u.map_mem_type_to_ioreq_server;
+        unsigned long first_gfn = data->opaque;
+        unsigned long last_gfn;
+
+        const_op = false;

         rc = -EOPNOTSUPP;
         /* Only support for HAP enabled hvm. */
         if ( !hap_enabled(d) )
             break;

-        rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
-                                              data->type, data->flags);
+        if ( first_gfn == 0 )
+            rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
+                                                  data->type, data->flags);
+
+        /*
+         * Iterate p2m table when an ioreq server unmaps from p2m_ioreq_server,
+         * and reset the remaining p2m_ioreq_server entries back to p2m_ram_rw.
+         */
+        if ( (first_gfn > 0) || (data->flags == 0 && rc == 0) )
+        {
+            struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+            while ( read_atomic(&p2m->ioreq.entry_count) &&
+                    first_gfn <= p2m->max_mapped_pfn )
+            {
+                /* Iterate p2m table for 256 gfns each time. */
+                last_gfn = first_gfn + 0xff;
+

Might be worth a comment here to say that p2m_finish_type_change() limits
last_gfn appropriately, because it kind of looks wrong to be blindly calling
it with first_gfn + 0xff. Or perhaps, rather than passing last_gfn, pass a
'max_nr' parameter of 256 instead. Then you can drop last_gfn altogether. If
you prefer the parameters as they are, then at least limit the scope of
last_gfn to this while loop.
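The quoted hunk stops at the last_gfn assignment; a minimal sketch of how the
rest of the loop presumably continues is shown below — the current 256-gfn
chunk is reset, and if more work remains the next gfn is stashed in
data->opaque and -ERESTART is returned so the hypercall gets continued. The
exact p2m_finish_type_change() signature and the preemption path here are
assumptions for illustration, not text from the patch.

                /*
                 * Sketch only: reset the p2m_ioreq_server entries in
                 * [first_gfn, last_gfn] back to p2m_ram_rw.
                 */
                p2m_finish_type_change(d, first_gfn, last_gfn,
                                       p2m_ioreq_server, p2m_ram_rw);

                first_gfn = last_gfn + 1;

                /*
                 * If entries remain and we have run for a while, record the
                 * next gfn in the opaque field and ask for a hypercall
                 * continuation, so the next invocation resumes from here.
                 */
                if ( first_gfn <= p2m->max_mapped_pfn &&
                     hypercall_preempt_check() )
                {
                    data->opaque = first_gfn;
                    rc = -ERESTART;
                    break;
                }
            }
        }
        break;
    }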
Thanks for your comments, Paul. :) Well, setting last_gfn to first_gfn + 0xff does look a bit awkward. But why would using a 'max_nr' with a magic number, say 256, look better? Or does it bring any other benefits? :-)

Yu
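For comparison, the 'max_nr' interface Paul suggests would look roughly like
the following — a sketch, assuming the helper clamps the count against the max
mapped pfn internally; none of this is from the posted patch:

/* Hypothetical variant of the helper taking a count rather than an end gfn. */
void p2m_finish_type_change(struct domain *d,
                            unsigned long first_gfn, unsigned long max_nr,
                            p2m_type_t ot, p2m_type_t nt);

                /* Call site: reset at most 256 gfns per loop iteration. */
                p2m_finish_type_change(d, first_gfn, 256,
                                       p2m_ioreq_server, p2m_ram_rw);
                first_gfn += 256;

The 256 is a tuning constant either way; the difference is mainly that it
stays confined to the call site, the callee no longer receives an end gfn that
may lie beyond the p2m, and last_gfn disappears altogether.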