[Xen-devel] [PATCH v8 1/5] x86/ioreq server: Release the p2m lock after mmio is handled.
Routine hvmemul_do_io() may need to peek at the p2m type of a gfn to
select the ioreq server.  For example, operations on gfns with the
p2m_ioreq_server type will be delivered to a corresponding ioreq server,
and this requires that the p2m type not be switched back to p2m_ram_rw
during the emulation process.  To avoid this race condition, we delay
the release of the p2m lock in hvm_hap_nested_page_fault() until the
mmio is handled.

Note: previously in hvm_hap_nested_page_fault(), put_gfn() was moved
before the handling of mmio, due to a deadlock risk between the p2m
lock and the event lock (see commit 77b8dfe).  Later, a per-event-channel
lock was introduced in commit de6acb7 to send events, so we no longer
need to worry about that deadlock.

Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
---
Cc: Paul Durrant <paul.durrant@xxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
 xen/arch/x86/hvm/hvm.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index ccfae4f..a9db7f7 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1870,18 +1870,14 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
          (npfec.write_access &&
           (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
     {
-        __put_gfn(p2m, gfn);
-        if ( ap2m_active )
-            __put_gfn(hostp2m, gfn);
-
         rc = 0;
         if ( unlikely(is_pvh_domain(currd)) )
-            goto out;
+            goto out_put_gfn;
 
         if ( !handle_mmio_with_translation(gla, gpa >> PAGE_SHIFT, npfec) )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
         rc = 1;
-        goto out;
+            goto out_put_gfn;
     }
 
     /* Check if the page has been paged out */
-- 
1.9.1
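
To make the race described above concrete, here is a small standalone C
model.  It is not Xen code: page_type_t, p2m_lock, emulate() and
switch_type() are names invented for illustration, with a pthread mutex
playing the role of the p2m lock.  It only demonstrates the ordering the
patch relies on: as long as the lock guarding the type is held across the
emulation step, a concurrent switch of the type back to "ram_rw" cannot be
observed half-way through.

/*
 * Toy model, NOT Xen code: all names below are invented for
 * illustration.  It shows the ordering this patch establishes: the
 * lock guarding the page type is only dropped after the emulation
 * step that depends on that type.
 */
#include <pthread.h>
#include <stdio.h>

typedef enum { RAM_RW, IOREQ_SERVER } page_type_t;

static pthread_mutex_t p2m_lock = PTHREAD_MUTEX_INITIALIZER;
static page_type_t type = IOREQ_SERVER;

/* Stand-in for hvmemul_do_io(): must observe a stable type.
   The caller is expected to hold p2m_lock. */
static void emulate(void)
{
    if ( type == IOREQ_SERVER )
        printf("forward the access to the ioreq server\n");
    else
        printf("treat the access as plain RAM\n");
}

/* Stand-in for a concurrent switch of the type back to ram_rw. */
static void *switch_type(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&p2m_lock);
    type = RAM_RW;
    pthread_mutex_unlock(&p2m_lock);
    return NULL;
}

int main(void)
{
    pthread_t t;

    pthread_mutex_lock(&p2m_lock);               /* get_gfn() counterpart    */
    pthread_create(&t, NULL, switch_type, NULL); /* racer blocks on the lock */
    emulate();                                   /* sees IOREQ_SERVER        */
    pthread_mutex_unlock(&p2m_lock);             /* the "out_put_gfn" step   */

    pthread_join(t, NULL);
    return 0;
}

Releasing the lock before emulate() (the counterpart of the old early
put_gfn()) would let switch_type() run in between, which is exactly the
window the delayed release closes.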