Re: [Xen-devel] [PATCH 1/1] xen-hvm.c: Add support for Xen access to vmport
On 09/30/14 06:35, Stefano Stabellini wrote:
> On Mon, 29 Sep 2014, Don Slutz wrote:
>> On 09/29/14 06:25, Stefano Stabellini wrote:
>>> On Mon, 29 Sep 2014, Stefano Stabellini wrote:
>>>> On Fri, 26 Sep 2014, Don Slutz wrote:
>>>>> This adds synchronisation of the vcpu registers
>>>>> between Xen and QEMU.
>>>>>
>>>>> Signed-off-by: Don Slutz <dslutz@xxxxxxxxxxx>
>>>> [...]
>>>>
>>>>> diff --git a/xen-hvm.c b/xen-hvm.c
>>>>> index 05e522c..e1274bb 100644
>>>>> --- a/xen-hvm.c
>>>>> +++ b/xen-hvm.c
>>>>> @@ -857,14 +857,48 @@ static void cpu_handle_ioreq(void *opaque)
>>>>>      handle_buffered_iopage(state);
>>>>>      if (req) {
>>>>> +#ifdef IOREQ_TYPE_VMWARE_PORT
>>>> Is there any reason to #ifdef this code?
>>>> Couldn't we just always build it in QEMU?
>> This allows QEMU 2.2 (or later) to build on a system that has Xen 4.5
>> or earlier installed; and work.
> I would rather remove the #ifdef from here and add any necessary
> compatibility stuff to include/hw/xen/xen_common.h.
>

Ok, will do.

>>>>> +        if (req->type == IOREQ_TYPE_VMWARE_PORT) {
>>>> I think it would make more sense to check for IOREQ_TYPE_VMWARE_PORT
>>>> from within handle_ioreq.
>>>>
>> Ok, I can move it.
>>
>>
>>>>> +            CPUX86State *env;
>>>>> +            ioreq_t fake_req = {
>>>>> +                .type = IOREQ_TYPE_PIO,
>>>>> +                .addr = (uint16_t)req->size,
>>>>> +                .size = 4,
>>>>> +                .dir = IOREQ_READ,
>>>>> +                .df = 0,
>>>>> +                .data_is_ptr = 0,
>>>>> +            };
>>> Why do you need a fake req?
>> To transport the 6 VCPU registers (only 32 bits of them) that vmport.c
>> needs to do its work.
>>
>>> Couldn't Xen send the real req instead?
>> Yes, but then a 2nd exchange between QEMU and Xen would be needed
>> to fetch the 6 VCPU registers. The ways I know of to fetch the VCPU
>> registers from Xen all need many cycles to do their work and return
>> a lot of data that is not needed.
>>
>> The other option I have considered was to extend the ioreq_t type
>> to have room for these registers, but that reduces by almost half the
>> maximum number of VCPUs that are supported (they all live on 1 page).
> Urgh. Now that I understand the patch better I think it's horrible, no
> offense :-)

None taken.

> Why don't you add another new ioreq type to send out the vcpu state?
> Something like IOREQ_TYPE_VCPU_SYNC_REGS? You could send it to QEMU
> before IOREQ_TYPE_VMWARE_PORT. Actually this solution looks very similar
> to Alex's suggestion.
>

I can, it is just slower. This would require 2 new types: 1 for regs to
QEMU, 1 for regs from QEMU. So instead of 1 round trip (Xen to QEMU to
Xen), you now have 3 (Xen to QEMU (regs to QEMU) to Xen, Xen to QEMU
(PIO) to Xen, Xen to QEMU (regs from QEMU) to Xen).

   -Don Slutz

>>> In any case you should spend a
>>> few more words on why you are doing this.
>>>
>> Sure. Will add some form of the above as part of the commit message.
>>
>>>>> +            if (!xen_opaque_env) {
>>>>> +                xen_opaque_env = g_malloc(sizeof(CPUX86State));
>>>>> +            }
>>>>> +            env = xen_opaque_env;
>>>>> +            env->regs[R_EAX] = (uint32_t)(req->addr >> 32);
>>>>> +            env->regs[R_EBX] = (uint32_t)(req->addr);
>>>>> +            env->regs[R_ECX] = req->count;
>>>>> +            env->regs[R_EDX] = req->size;
>>>>> +            env->regs[R_ESI] = (uint32_t)(req->data >> 32);
>>>>> +            env->regs[R_EDI] = (uint32_t)(req->data);
>>>>> +            handle_ioreq(&fake_req);
>>>>> +            req->addr = ((uint64_t)fake_req.data << 32) |
>>>>> +                (uint32_t)env->regs[R_EBX];
>>>>> +            req->count = env->regs[R_ECX];
>>>>> +            req->size = env->regs[R_EDX];
>>>>> +            req->data = ((uint64_t)env->regs[R_ESI] << 32) |
>>>>> +                (uint32_t)env->regs[R_EDI];
>>>> This code could be moved to a separate helper function called by
>>>> handle_ioreq.
>>>>
>> Ok.
>>
>> -Don Slutz
>>
>>>>> +        } else {
>>>>> +            handle_ioreq(req);
>>>>> +        }
>>>>> +#else
>>>>>      handle_ioreq(req);
>>>>> +#endif
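
For reference, a minimal sketch of the include/hw/xen/xen_common.h
compatibility approach agreed above; the numeric fallback value is an
assumption and must match Xen's public hvm/ioreq.h:

    /* Sketch for include/hw/xen/xen_common.h: let the vmport handling in
     * xen-hvm.c build unchanged against Xen headers that predate
     * IOREQ_TYPE_VMWARE_PORT.  Assumes <xen/hvm/ioreq.h> has already been
     * included; the value below is an assumption and must match Xen's
     * public hvm/ioreq.h. */
    #ifndef IOREQ_TYPE_VMWARE_PORT
    # define IOREQ_TYPE_VMWARE_PORT 3  /* never sent by older hypervisors */
    #endif

With a fallback definition in place, the #ifdef in cpu_handle_ioreq() can go
away; on an older hypervisor the vmport branch simply never triggers.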
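
Likewise, a sketch of what the "separate helper function called by
handle_ioreq" suggestion might look like; handle_vmport_ioreq is a
hypothetical name, and the body just reuses the register marshalling from the
hunk above (including the xen_opaque_env scratch CPUX86State), so it is meant
to live in xen-hvm.c next to handle_ioreq():

    static void handle_vmport_ioreq(ioreq_t *req)
    {
        /* Hypothetical helper, called from handle_ioreq() when
         * req->type == IOREQ_TYPE_VMWARE_PORT. */
        CPUX86State *env;
        ioreq_t fake_req = {
            .type = IOREQ_TYPE_PIO,
            .addr = (uint16_t)req->size,
            .size = 4,
            .dir = IOREQ_READ,
            .df = 0,
            .data_is_ptr = 0,
        };

        if (!xen_opaque_env) {
            xen_opaque_env = g_malloc(sizeof(CPUX86State));
        }
        env = xen_opaque_env;

        /* Unpack the six 32-bit guest registers Xen packed into the req. */
        env->regs[R_EAX] = (uint32_t)(req->addr >> 32);
        env->regs[R_EBX] = (uint32_t)req->addr;
        env->regs[R_ECX] = req->count;
        env->regs[R_EDX] = req->size;
        env->regs[R_ESI] = (uint32_t)(req->data >> 32);
        env->regs[R_EDI] = (uint32_t)req->data;

        handle_ioreq(&fake_req);

        /* Pack the (possibly modified) registers back for Xen. */
        req->addr = ((uint64_t)fake_req.data << 32) |
            (uint32_t)env->regs[R_EBX];
        req->count = env->regs[R_ECX];
        req->size = env->regs[R_EDX];
        req->data = ((uint64_t)env->regs[R_ESI] << 32) |
            (uint32_t)env->regs[R_EDI];
    }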
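
Finally, a rough back-of-the-envelope check of the "they all live on 1 page"
constraint discussed above; the 32-byte ioreq_t and the 4 KiB shared iopage
are assumptions taken from Xen's public hvm/ioreq.h, and the grown layout with
six extra 32-bit registers is hypothetical:

    /* How many per-VCPU ioreq_t slots fit in the 4 KiB shared iopage today
     * versus with six extra 32-bit registers appended.  Sizes are
     * assumptions; see xen/include/public/hvm/ioreq.h for the real layout. */
    #include <stdio.h>

    struct ioreq_now   { unsigned char bytes[32]; };          /* current ioreq_t */
    struct ioreq_grown { unsigned char bytes[32 + 6 * 4]; };  /* + EAX..EDI */

    int main(void)
    {
        printf("VCPU slots per page now:   %zu\n",
               (size_t)4096 / sizeof(struct ioreq_now));      /* 128 */
        printf("VCPU slots per page grown: %zu\n",
               (size_t)4096 / sizeof(struct ioreq_grown));    /* 73 */
        return 0;
    }

Any extra padding or alignment would shrink the second number further, which
is the cost Don refers to when he says the maximum number of supported VCPUs
would drop by almost half.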