
Re: [Xen-devel] [RFC][PATCH v2x prototype 1/1] Add IOREQ_TYPE_VMWARE_PORT



> -----Original Message-----
> From: Don Slutz [mailto:dslutz@xxxxxxxxxxx]
> Sent: 13 October 2014 18:11
> To: Paul Durrant; Don Slutz; xen-devel@xxxxxxxxxxxxx
> Cc: Jan Beulich; Keir (Xen.org); Ian Campbell
> Subject: Re: [RFC][PATCH v2x prototype 1/1] Add IOREQ_TYPE_VMWARE_PORT
> 
> On 10/13/14 09:26, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Don Slutz [mailto:dslutz@xxxxxxxxxxx]
> >> Sent: 09 October 2014 15:26
> >> To: xen-devel@xxxxxxxxxxxxx; Paul Durrant
> >> Cc: Jan Beulich; Keir (Xen.org); Ian Campbell; Don Slutz
> >> Subject: [RFC][PATCH v2x prototype 1/1] Add IOREQ_TYPE_VMWARE_PORT
> >>
> >> This adds synchronisation of the 6 vcpu registers (only 32 bits of
> >> them) that vmport.c needs between Xen and QEMU.
> >>
> >> This is to avoid a 2nd and 3rd exchange between QEMU and Xen to
> >> fetch and put these 6 vcpu registers used by the code in vmport.c
> >> and vmmouse.c.
> >>
> >> The QEMU patch is named "xen-hvm.c: Add support for Xen access to vmport"
> >>
> >> Signed-off-by: Don Slutz <dslutz@xxxxxxxxxxx>
> >> ---
> >> As requested by Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> >>
> >> Here is a prototype of the QEMU change using a 2nd shared page.
> >> I picked adding HVM_PARAM_VMPORT_IOREQ_PFN as the simple and
> >> fast way to handle QEMU building on older Xen versions.
> >>
> >> There is xentrace and debug logging that is TBD for the Xen 4.6
> >> submission of this.
> ...
> >> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> >> index c0f47d2..4b8ea8f 100644
> >> --- a/xen/arch/x86/hvm/emulate.c
> >> +++ b/xen/arch/x86/hvm/emulate.c
> >> @@ -52,12 +52,14 @@ static void hvmtrace_io_assist(int is_mmio, ioreq_t *p)
> >>
> >>   static int hvmemul_do_io(
> >>       int is_mmio, paddr_t addr, unsigned long *reps, int size,
> >> -    paddr_t ram_gpa, int dir, int df, void *p_data)
> >> +    paddr_t ram_gpa, int dir, int df, void *p_data,
> >> +    struct cpu_user_regs *regs)
> >>   {
> >>       struct vcpu *curr = current;
> >>       struct hvm_vcpu_io *vio;
> >>       ioreq_t p = {
> >> -        .type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO,
> >> +        .type = regs ? IOREQ_TYPE_VMWARE_PORT :
> >> +                is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO,
> >>           .addr = addr,
> >>           .size = size,
> >>           .dir = dir,
> >> @@ -65,11 +67,15 @@ static int hvmemul_do_io(
> >>           .data = ram_gpa,
> >>           .data_is_ptr = (p_data == NULL),
> >>       };
> >> +    vmware_ioreq_t vp;
> >> +    vmware_ioreq_t *vpp;
> >>       unsigned long ram_gfn = paddr_to_pfn(ram_gpa);
> >>       p2m_type_t p2mt;
> >>       struct page_info *ram_page;
> >>       int rc;
> >>
> >> +    BUILD_BUG_ON(sizeof(ioreq_t) < sizeof(vmware_ioreq_t));
> >> +
> >>       /* Check for paged out page */
> >>       ram_page = get_page_from_gfn(curr->domain, ram_gfn, &p2mt, P2M_UNSHARE);
> >>       if ( p2m_is_paging(p2mt) )
> >> @@ -101,7 +107,17 @@ static int hvmemul_do_io(
> >>           return X86EMUL_UNHANDLEABLE;
> >>       }
> >>
> >> -    if ( !p.data_is_ptr && (dir == IOREQ_WRITE) )
> >> +    if ( regs )
> >> +    {
> >> +        vpp = &vp;
> >> +        p.data = regs->rax;
> >> +        vp.ebx = regs->rbx;
> >> +        vp.ecx = regs->rcx;
> >> +        vp.edx = regs->rdx;
> >> +        vp.esi = regs->rsi;
> >> +        vp.edi = regs->rdi;
> >> +    }
> >> +    else if ( !p.data_is_ptr && (dir == IOREQ_WRITE) )
> >>       {
> >>           memcpy(&p.data, p_data, size);
> >>           p_data = NULL;
> >> @@ -161,7 +177,19 @@ static int hvmemul_do_io(
> >>                   put_page(ram_page);
> >>               return X86EMUL_RETRY;
> >>           }
> >> +    case HVMIO_awaiting_completion:
> >> +        if ( regs )
> >> +        {
> >> +            /* May have to wait for previous cycle of a multi-write to complete. */
> >> +            if ( vio->mmio_retry ) {
> >> +                gdprintk(XENLOG_WARNING, "WARNING: mmio_retry io_state=%d?\n", vio->io_state);
> >> +                return X86EMUL_RETRY;
> >> +            }
> >> +            /* These are expected if we get here via hvmemul_do_io() */
> >> +            break;
> >> +        }
> >>       default:
> >> +        gdprintk(XENLOG_WARNING, "WARNING: io_state=%d?\n", vio->io_state);
> >>           if ( ram_page )
> >>               put_page(ram_page);
> >>           return X86EMUL_UNHANDLEABLE;
> >> @@ -175,7 +203,7 @@ static int hvmemul_do_io(
> >>           return X86EMUL_UNHANDLEABLE;
> >>       }
> >>
> >> -    vio->io_state =
> >> +    vio->io_state = regs ? HVMIO_handle_vmport_awaiting_completion :
> >>           (p_data == NULL) ? HVMIO_dispatched : HVMIO_awaiting_completion;
> >>       vio->io_size = size;
> >>
> >> @@ -197,6 +225,9 @@ static int hvmemul_do_io(
> >>           if ( rc == X86EMUL_UNHANDLEABLE )
> >>               rc = hvm_buffered_io_intercept(&p);
> >>       }
> >> +    else if ( regs ) {
> >> +        rc = X86EMUL_UNHANDLEABLE;
> >> +    }
> >>       else
> >>       {
> >>           rc = hvm_portio_intercept(&p);
> >> @@ -210,7 +241,7 @@ static int hvmemul_do_io(
> >>           p.state = STATE_IORESP_READY;
> >>           if ( !vio->mmio_retry )
> >>           {
> >> -            hvm_io_assist(&p);
> >> +            hvm_io_assist(&p, vpp);
> >>               vio->io_state = HVMIO_none;
> >>           }
> >>           else
> >> @@ -226,13 +257,19 @@ static int hvmemul_do_io(
> >>           }
> >>           else
> >>           {
> >> -            rc = X86EMUL_RETRY;
> >> -            if ( !hvm_send_assist_req(&p) )
> >> +            if ( regs )
> >> +                rc = X86EMUL_VMPORT_RETRY;
> >> +            else
> >> +                rc = X86EMUL_RETRY;
> >> +            if ( !hvm_send_assist_req(&p, vpp) )
> >>                   vio->io_state = HVMIO_none;
> >>               else if ( p_data == NULL )
> >>                   rc = X86EMUL_OKAY;
> >>           }
> >>           break;
> >> +    case X86EMUL_VMPORT_RETRY:
> >> +        rc = X86EMUL_RETRY;
> >> +        break;
> >>       default:
> >>           BUG();
> >>       }
> > I still fail to see why you need to make such intrusive modifications to this code.
> 
> I am confused.  The code is doing what you say:
> 

Ok, I guess I was confused as to why you had to pass the vmware_ioreq_t all the 
way into send_assist_req. My initial reading suggested that you were still 
overlaying the ioreq_t at that point, but re-reading, I can see you're copying 
it elsewhere. It would seem like the register info could be copied into the 
shared page much earlier, though, so you would not need to modify 
send_assist_req at all.
My thinking is that, to get register info to QEMU, you could copy data into the 
shared page in hvmemul_do_vmport() and then basically just send an assist req 
with your new type. The code in hvm_io_assist could then be modified to take 
note of the ioreq type being completed, and if it detects your new type, it 
can then retrieve the updated register info from the shared page and copy it 
back into the guest registers (roughly as sketched below). So, no need for a 
new io state. Would that not work?
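
To be a bit more concrete, the completion side could look something like the
rough sketch below. None of this is meant to be definitive: vmware_regs_t is
just my guess at a per-vcpu register block living in your
HVM_PARAM_VMPORT_IOREQ_PFN page (mirroring the registers your vmware_ioreq_t
carries), and get_vmport_regs() is a hypothetical helper for finding the
current vcpu's slot in that page.

/* Per-vcpu register block in the VMPORT_IOREQ_PFN page (names are
 * illustrative; it simply mirrors the registers your patch touches). */
typedef struct vmware_regs {
    uint32_t ebx, ecx, edx, esi, edi;
} vmware_regs_t;

void hvm_io_assist(ioreq_t *p)
{
    struct vcpu *curr = current;

    if ( p->type == IOREQ_TYPE_VMWARE_PORT )
    {
        struct cpu_user_regs *regs = guest_cpu_user_regs();
        vmware_regs_t *vr = get_vmport_regs(curr);  /* hypothetical lookup */

        /* QEMU has written the results back into the shared page (and, as
         * in your patch, rax travels in p->data itself). */
        regs->rax = p->data;
        regs->rbx = vr->ebx;
        regs->rcx = vr->ecx;
        regs->rdx = vr->edx;
        regs->rsi = vr->esi;
        regs->rdi = vr->edi;
    }

    /* ... the existing completion handling carries on as before ... */
}

That way the ioreq_t itself stays a perfectly ordinary one and none of the
send/completion plumbing needs to change.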

> >   I would have thought you could add a new ioreq type, as you've done, but
> not make it 'data bearing'. I.e. you use your new VMPORT_IOREQ_PFN to
> carry the register values back and forth between Xen and QEMU, but you still
> issue a 'normal' ioreq_t structure (with your new type) via the 'normal'
> shared IOREQ_PFN. That way you need do nothing to the majority of the
> emulation code path - you'd just need to add code to copy the register values
> into and out of your new shared page at start and completion of I/O.
> 
> I did adjust hvmemul_do_io() instead of duplicating its code.
> hvmemul_do_io() cannot be used without adjustment because of
> the new type.

Well, I think duplicating whatever you actually need from hvmemul_do_io() would 
be better because, given what I've said above, you need so little of what it 
does.
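
The new routine would then only need a fraction of what hvmemul_do_io() does;
very roughly (same caveats as the sketch further up, and I'm ignoring the
retry/paging corner cases here):

static int hvmemul_do_vmport(uint16_t port, int size, int dir,
                             struct cpu_user_regs *regs)
{
    struct vcpu *curr = current;
    struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
    vmware_regs_t *vr = get_vmport_regs(curr);     /* hypothetical lookup */
    ioreq_t p = {
        .type = IOREQ_TYPE_VMWARE_PORT,
        .addr = port,
        .size = size,
        .dir  = dir,
        .data = regs->rax,
    };

    /* Stash the extra registers in the shared page up front... */
    vr->ebx = regs->rbx;
    vr->ecx = regs->rcx;
    vr->edx = regs->rdx;
    vr->esi = regs->rsi;
    vr->edi = regs->rdi;

    /* ...and reuse the existing io_state machinery; no new state needed. */
    vio->io_state = HVMIO_awaiting_completion;
    vio->io_size = size;

    /* A plain ioreq via the normal shared page; hvm_send_assist_req()
     * keeps its current signature. */
    if ( !hvm_send_assist_req(&p) )
    {
        vio->io_state = HVMIO_none;
        return X86EMUL_UNHANDLEABLE;
    }
    return X86EMUL_RETRY;
}

i.e. the only genuinely new pieces are the extra shared page and the new
ioreq type.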

  Paul

> 
> I will code this up with a new routine.
> 
>      -Don Slutz
> 
> 
> >    Paul
> >
> >
> 
> 
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

