Re: [Xen-devel] [PATCH v6 1/4] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.
>>> On 09.09.16 at 07:55, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>
> On 9/5/2016 9:31 PM, Jan Beulich wrote:
>>>>> On 02.09.16 at 12:47, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
>>> @@ -178,8 +179,27 @@ static int hvmemul_do_io(
>>>          break;
>>>      case X86EMUL_UNHANDLEABLE:
>>>      {
>>> -        struct hvm_ioreq_server *s =
>>> -            hvm_select_ioreq_server(curr->domain, &p);
>>> +        struct hvm_ioreq_server *s = NULL;
>>> +        p2m_type_t p2mt = p2m_invalid;
>>> +
>>> +        if ( is_mmio )
>>> +        {
>>> +            unsigned long gmfn = paddr_to_pfn(addr);
>>> +
>>> +            (void) get_gfn_query_unlocked(currd, gmfn, &p2mt);
>>> +
>>> +            if ( p2mt == p2m_ioreq_server && dir == IOREQ_WRITE )
>>> +            {
>>> +                unsigned int flags;
>>> +
>>> +                s = p2m_get_ioreq_server(currd, &flags);
>>> +                if ( !(flags & XEN_HVMOP_IOREQ_MEM_ACCESS_WRITE) )
>>> +                    s = NULL;
>>> +            }
>>> +        }
>>> +
>>> +        if ( !s && p2mt != p2m_ioreq_server )
>>> +            s = hvm_select_ioreq_server(currd, &p);
>> What I recall is that we had agreed on p2m_ioreq_server pages
>> to be treated as ordinary RAM ones as long as no server can be
>> found. The type check here contradicts that. Is there a reason?
>
> Thanks Jan. I had given my explanation in my Sep 6 reply. :)
> If s is NULL for a p2m_ioreq_server page, we do not wish to traverse
> the rangeset by hvm_select_ioreq_server() again, as it may be a read
> emulation pass in a read-modify-write scenario.

Well, yes, I recall. Yet my request stands: At the very least there
should be a comment here outlining the intended behavior. That'll
make it quite a bit easier, I think, for the reader to figure whether
the code actually matches those intentions (and also help people
needing to touch this code again down the road).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
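For illustration, here is one way the quoted hunk might look with the kind
of comment Jan is asking for. The code is the v6 hunk exactly as posted;
the comment wording is only a sketch of the intended behavior, inferred
from Yu's explanation, and is not part of the actual patch:

    case X86EMUL_UNHANDLEABLE:
    {
        struct hvm_ioreq_server *s = NULL;
        p2m_type_t p2mt = p2m_invalid;

        if ( is_mmio )
        {
            unsigned long gmfn = paddr_to_pfn(addr);

            (void) get_gfn_query_unlocked(currd, gmfn, &p2mt);

            /*
             * A write to a p2m_ioreq_server page is forwarded to the
             * ioreq server which has claimed that page type, but only
             * if the server has registered for write accesses.
             */
            if ( p2mt == p2m_ioreq_server && dir == IOREQ_WRITE )
            {
                unsigned int flags;

                s = p2m_get_ioreq_server(currd, &flags);
                if ( !(flags & XEN_HVMOP_IOREQ_MEM_ACCESS_WRITE) )
                    s = NULL;
            }
        }

        /*
         * Do not fall back to hvm_select_ioreq_server() for a
         * p2m_ioreq_server page: the access may be the read half of a
         * read-modify-write emulation, which is meant to be handled
         * like ordinary RAM rather than re-routed through the MMIO
         * rangesets.
         */
        if ( !s && p2mt != p2m_ioreq_server )
            s = hvm_select_ioreq_server(currd, &p);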