
Re: [Xen-devel] [PATCH v4 4/8] ioreq-server: on-demand creation of ioreq server



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 07 April 2014 12:51
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxx; Keir (Xen.org)
> Subject: Re: [PATCH v4 4/8] ioreq-server: on-demand creation of ioreq
> server
> 
> >>> On 02.04.14 at 17:11, <paul.durrant@xxxxxxxxxx> wrote:
> > @@ -645,13 +643,68 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
> >      spin_unlock(&s->lock);
> >  }
> >
> > -static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
> > +static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
> >  {
> > -    struct hvm_ioreq_server *s;
> > +    struct list_head *entry, *next;
> >
> > -    s = xzalloc(struct hvm_ioreq_server);
> > -    if ( !s )
> > -        return -ENOMEM;
> > +    spin_lock(&s->lock);
> > +
> > +    list_for_each_safe ( entry, next, &s->ioreq_vcpu_list )
> > +    {
> > +        struct hvm_ioreq_vcpu *sv = container_of(entry,
> > +                                                 struct hvm_ioreq_vcpu,
> > +                                                 list_entry);
> 
> list_for_each_entry_safe() avoids the need for the explicit use of
> container_of(), making the code easier to read.
> 

It also expands the scope of sv, since sv would then have to be declared outside the loop.
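
For reference, a quick sketch of how that loop might then look with list_for_each_entry_safe(); the teardown body here (list_del()/xfree()) is just a placeholder, not taken from the patch:

    struct hvm_ioreq_vcpu *sv, *next;

    spin_lock(&s->lock);

    /* Safe variant: entries may be removed while walking the list. */
    list_for_each_entry_safe ( sv, next, &s->ioreq_vcpu_list, list_entry )
    {
        /* ... per-vcpu teardown as in the patch ... */
        list_del(&sv->list_entry);
        xfree(sv);
    }

    spin_unlock(&s->lock);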

> > +static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
> > +{
> > +    struct hvm_ioreq_server *s;
> > +    int rc;
> > +
> > +    spin_lock(&d->arch.hvm_domain.ioreq_server_lock);
> > +
> > +    rc = -EEXIST;
> > +    if ( d->arch.hvm_domain.ioreq_server != NULL )
> > +        goto fail1;
> > +
> > +    rc = -ENOMEM;
> > +    s = xzalloc(struct hvm_ioreq_server);
> 
> Similar comment as on an earlier patch: Please try to avoid allocations
> with lock held.
> 

Ok.

> > +    if ( !s )
> > +        goto fail2;
> > +
> > +    domain_pause(d);
> 
> And with that adjusted I would then again wonder whether taking
> the lock after pausing the domain wouldn't be the better model.
> 

Yes, it would.
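
Taking the two points together (allocation hoisted outside the lock, domain paused before the lock is taken), something like the following rough sketch is what I have in mind; the labels and the initialization body are placeholders rather than the final patch:

    static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
    {
        struct hvm_ioreq_server *s;
        int rc;

        /* Allocate before taking any locks or pausing the domain. */
        rc = -ENOMEM;
        s = xzalloc(struct hvm_ioreq_server);
        if ( !s )
            goto fail1;

        /* Pause the domain first, then serialize against other creators. */
        domain_pause(d);
        spin_lock(&d->arch.hvm_domain.ioreq_server_lock);

        rc = -EEXIST;
        if ( d->arch.hvm_domain.ioreq_server != NULL )
            goto fail2;

        /* ... initialize s and install it as d->arch.hvm_domain.ioreq_server ... */

        spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
        domain_unpause(d);

        return 0;

     fail2:
        spin_unlock(&d->arch.hvm_domain.ioreq_server_lock);
        domain_unpause(d);
        xfree(s);

     fail1:
        return rc;
    }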

> > @@ -4570,6 +4724,18 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >              case HVM_PARAM_ACPI_S_STATE:
> >                  a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
> >                  break;
> > +            case HVM_PARAM_IOREQ_PFN:
> > +            case HVM_PARAM_BUFIOREQ_PFN:
> > +            case HVM_PARAM_BUFIOREQ_EVTCHN: {
> > +                domid_t domid;
> > +
> > +                /* May need to create server */
> > +                domid = d->arch.hvm_domain.params[HVM_PARAM_DM_DOMAIN];
> > +                rc = hvm_create_ioreq_server(d, domid);
> 
> Pretty odd that you do this on reads, but not on writes. What's the
> rationale behind this?
> 

The default server does not actually need to exist until something (e.g. QEMU) looks for it by reading one of these params. In future I hope QEMU can be modified to use the explicit ioreq server creation API, at which point the default server can eventually be dropped.
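
To illustrate, the sort of read that triggers on-demand creation is a param get from the emulator at start-of-day, e.g. via libxc (a sketch only, error handling omitted; xch and domid are assumed to have been set up by the caller):

    #include <xenctrl.h>
    #include <xen/hvm/params.h>

    /* Reading these params is what makes Xen create the default
     * ioreq server on demand for the domain. */
    unsigned long ioreq_pfn, bufioreq_pfn;

    xc_get_hvm_param(xch, domid, HVM_PARAM_IOREQ_PFN, &ioreq_pfn);
    xc_get_hvm_param(xch, domid, HVM_PARAM_BUFIOREQ_PFN, &bufioreq_pfn);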

  Paul

> Jan

