[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH v4 2/8] ioreq-server: centralize access to ioreq structures



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 07 April 2014 12:10
> To: Paul Durrant
> Cc: Eddie Dong; Jun Nakajima; Kevin Tian; xen-devel@xxxxxxxxxxxxx; Keir
> (Xen.org)
> Subject: Re: [PATCH v4 2/8] ioreq-server: centralize access to ioreq 
> structures
> 
> >>> On 02.04.14 at 17:11, <paul.durrant@xxxxxxxxxx> wrote:
> > @@ -173,15 +160,6 @@ static int hvmemul_do_io(
> >          return X86EMUL_UNHANDLEABLE;
> >      }
> >
> > -    if ( p->state != STATE_IOREQ_NONE )
> > -    {
> > -        gdprintk(XENLOG_WARNING, "WARNING: io already pending
> (%d)?\n",
> > -                 p->state);
> > -        if ( ram_page )
> > -            put_page(ram_page);
> > -        return X86EMUL_UNHANDLEABLE;
> > -    }
> > -
> 
> Shouldn't this be replaced with a call to hvm_io_pending() instead of
> just getting deleted?

Yes, that's probably better.

> 
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -352,6 +352,26 @@ void hvm_migrate_pirqs(struct vcpu *v)
> >      spin_unlock(&d->event_lock);
> >  }
> >
> > +static ioreq_t *get_ioreq(struct vcpu *v)
> > +{
> > +    struct domain *d = v->domain;
> > +    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
> > +
> > +    ASSERT((v == current) || spin_is_locked(&d-
> >arch.hvm_domain.ioreq.lock));
> > +
> > +    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
> > +}
> > +
> > +bool_t hvm_io_pending(struct vcpu *v)
> > +{
> > +    ioreq_t *p = get_ioreq(v);
> > +
> > +    if ( !p )
> > +        return 0;
> > +
> > +    return ( p->state != STATE_IOREQ_NONE );
> 
> The parentheses are pointless but a matter of taste (but then again
> using them here makes little sense if you don't also use them around
> the conditional expression in the function right above), but the blanks
> inside them clearly don't belong there according to our coding style.
> 

Ok.

> > @@ -1432,14 +1533,23 @@ bool_t hvm_send_assist_req(struct vcpu *v)
> >          return 0;
> >      }
> >
> > -    prepare_wait_on_xen_event_channel(v->arch.hvm_vcpu.xen_port);
> > +    p->dir = proto_p->dir;
> > +    p->data_is_ptr = proto_p->data_is_ptr;
> > +    p->type = proto_p->type;
> > +    p->size = proto_p->size;
> > +    p->addr = proto_p->addr;
> > +    p->count = proto_p->count;
> > +    p->df = proto_p->df;
> > +    p->data = proto_p->data;
> 
> I realize that you do this piecemeal copying because of wanting to
> leave alone ->state. If you didn't have the input pointer const, and
> if no caller depended on that field having any specific contents (I'm
> sure none of them cares), you could set the input structure's field
> first to STATE_IOREQ_NONE and then copy the whole structure in
> one go. And that all if the field's value is meaningful at all across the
> call to prepare_wait_on_xen_event_channel(), as it gets set to
> STATE_IOREQ_READY right afterwards.
> 
> Hmm, I see vp_eport also needs to be left untouched. I wonder
> whether there shouldn't be a sub-structure with all the fields set
> above, and with only that sub-structure getting passed around
> after setting up on-stack.
> 

The layout of ioreq_t is stable (it's part of the ABI with QEMU) so I can't 
change it. I could introduce a new variant without vp_eport or state and change 
Xen to use that internally, but that would be quite a big patch. Alternatively, 
I could copy vp_eport aside here and then assign the whole struct - which 
should lead to a smaller hunk, so I'll do that.

  Paul


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

