[Xen-devel] [PATCH RFC] ioreq-server: handle the lack of a default emulator properly
I started porting QEMU over to use the new ioreq server API and hit a problem with PCI bus enumeration. Because, with my patches, QEMU only registers to handle config space accesses for the PCI device it implements, all other guest accesses to 0xcfc go nowhere, and this was causing the vcpu to wedge because nothing was completing the I/O.

This patch introduces an I/O completion handler into the hypervisor for the case where no ioreq server matches a particular request. Read requests are completed with 0xFFs in the data buffer; writes and all other I/O request types are ignored.

Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
Cc: Keir Fraser <keir@xxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
---
This fix should probably go in 4.5, but the patch is RFC for the moment because of uncertainty about what to do for unemulated MMIO accesses. Originally I forced a domain crash in this case, but hvmloader actually hit the crash: the code that builds the ACPI TPM info reads from 0xFED40F00 looking for a signature value, and nothing backs this access in my configuration. So the question is whether to whitelist this access in some way, or to make building that table optional so that it is only done if an emulated TPM is definitely present.
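For reviewers, the rep-read completion semantics described above (fill each element of the destination with all 1s, walking backwards when the direction flag is set) can be sketched as a small standalone C mock. This is illustrative only, not Xen code: `complete_rep_read` and its plain-buffer destination stand in for the real `hvm_copy_to_guest_phys()` path, and the caller models a df=1 ioreq by passing a pointer to the *last* element, as the guest would:

```c
#include <stdint.h>
#include <string.h>

/*
 * Illustrative stand-in for completing an unclaimed rep read: write an
 * all-1s pattern into each element of the destination buffer.  'df' is
 * the x86 direction flag; when set, successive elements are at
 * descending addresses, so 'buf' must point at the first (i.e. highest-
 * addressed) element, mirroring how the ioreq address behaves.
 */
static void complete_rep_read(uint8_t *buf, unsigned int count,
                              unsigned int size, int df)
{
    int i, sign = df ? -1 : 1;
    uint32_t data = ~0u;        /* all 1s, as for an unclaimed read */

    for ( i = 0; i < (int)count; i++ )
        memcpy(buf + (long)sign * i * (long)size, &data, size);
}
```

The non-rep (`!data_is_ptr`) case is simpler still: the all-1s value is placed directly in the ioreq data field, which is what the patch does with `p->data = ~0ul`.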
 xen/arch/x86/hvm/hvm.c | 41 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 38 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index bb45593..5d653d8 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2386,8 +2386,7 @@ static struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     if ( list_empty(&d->arch.hvm_domain.ioreq_server.list) )
         return NULL;
 
-    if ( list_is_singular(&d->arch.hvm_domain.ioreq_server.list) ||
-         (p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO) )
+    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
         return d->arch.hvm_domain.default_ioreq_server;
 
     cf8 = d->arch.hvm_domain.pci_cf8;
@@ -2618,12 +2617,48 @@ bool_t hvm_send_assist_req_to_ioreq_server(struct hvm_ioreq_server *s,
     return 0;
 }
 
+static bool_t hvm_complete_assist_req(ioreq_t *p)
+{
+    switch (p->type)
+    {
+    case IOREQ_TYPE_COPY:
+        gdprintk(XENLOG_WARNING, "unemulated MMIO (%s %d bytes @ %lx) %u reps\n",
+                 (p->dir == IOREQ_READ) ? "read" : "write",
+                 p->size,
+                 p->addr,
+                 p->count);
+        /* FALLTHRU */
+    case IOREQ_TYPE_PIO:
+        if ( p->dir == IOREQ_READ )
+        {
+            if ( !p->data_is_ptr )
+                p->data = ~0ul;
+            else
+            {
+                int i, sign = p->df ? -1 : 1;
+                uint32_t data = ~0;
+
+                for ( i = 0; i < p->count; i++ )
+                    hvm_copy_to_guest_phys(p->data + sign * i * p->size, &data,
+                                           p->size);
+            }
+        }
+        /* FALLTHRU */
+    default:
+        p->state = STATE_IORESP_READY;
+        hvm_io_assist(p);
+        break;
+    }
+
+    return 1;
+}
+
 bool_t hvm_send_assist_req(ioreq_t *p)
 {
     struct hvm_ioreq_server *s = hvm_select_ioreq_server(current->domain,
                                                          p);
 
     if ( !s )
-        return 0;
+        return hvm_complete_assist_req(p);
 
     return hvm_send_assist_req_to_ioreq_server(s, p);
 }
-- 
1.7.10.4

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel