
Re: [Xen-devel] [PATCH] x86/hvm: wait for at least one ioreq server to be enabled



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 06 February 2015 14:45
> To: Paul Durrant
> Cc: Andrew Cooper; Ian Campbell; Wei Liu; xen-devel@xxxxxxxxxxxxx; Keir
> (Xen.org)
> Subject: Re: [PATCH] x86/hvm: wait for at least one ioreq server to be
> enabled
> 
> >>> On 06.02.15 at 13:27, <paul.durrant@xxxxxxxxxx> wrote:
> > In the case where a stub domain is providing emulation for an HVM
> > guest, there is no interlock in the toolstack to make sure that
> > the stub domain is up and running before the guest is unpaused.
> >
> > Prior to the introduction of ioreq servers this was not a problem,
> > since there was only ever one emulator so ioreqs were simply
> > created anyway and the vcpu remained blocked until the stub domain
> > started and picked up the ioreq.
> >
> > Since ioreq servers allow for multiple emulators for a single guest
> > it's not possible to know a priori which emulator will handle a
> > particular ioreq, so emulators must attach to a guest before the
> > guest runs.
> >
> > This patch works around the lack of interlock in the toolstack for
> > stub domains by keeping the domain paused until at least one ioreq
> > server is created and enabled, which in practice means the stub
> > domain is indeed up and running.
> 
> Do I understand correctly that this only deals correctly with the
> single server case, and the multi server case becoming functional
> depends on the tool stack change to be done?
> 

Yes, and the toolstack change was deemed too invasive for backport to stable; 
plus xl/libxl have not yet been taught about multiple emulators.

> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -885,6 +885,12 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s,
> >
> >    done:
> >      spin_unlock(&s->lock);
> > +
> > +    if ( d->arch.hvm_domain.ioreq_server.waiting )
> > +    {
> > +        d->arch.hvm_domain.ioreq_server.waiting = 0;
> > +        domain_unpause(d);
> > +    }
> >  }
> 
> So it takes looking up all callers to understand that this doesn't need
> to be atomic. This would have been avoided by mentioning this in
> the description or in a comment.

Ok, I'll stick a comment in.
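Something along these lines, perhaps (just a sketch; it assumes the reason the
plain check-and-clear is safe is that all callers of hvm_ioreq_server_enable()
hold the per-domain ioreq_server.lock):

  done:
    spin_unlock(&s->lock);

    /*
     * No need for an atomic test-and-clear here: all callers of
     * hvm_ioreq_server_enable() hold d->arch.hvm_domain.ioreq_server.lock,
     * so only one enable can observe 'waiting' set and do the unpause.
     */
    if ( d->arch.hvm_domain.ioreq_server.waiting )
    {
        d->arch.hvm_domain.ioreq_server.waiting = 0;
        domain_unpause(d);
    }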

> 
> > @@ -1435,6 +1441,19 @@ int hvm_domain_initialise(struct domain *d)
> >
> >      spin_lock_init(&d->arch.hvm_domain.ioreq_server.lock);
> >      INIT_LIST_HEAD(&d->arch.hvm_domain.ioreq_server.list);
> > +
> > +    /*
> > +     * In the case where a stub domain is providing emulation for
> > +     * the guest, there is no interlock in the toolstack to prevent
> > +     * the guest from running before the stub domain is ready.
> > +     * Hence the domain must remain paused until at least one ioreq
> > +     * server is created and enabled.
> > +     */
> > +    if ( !is_pvh_domain(d) ) {
> 
> Coding style.
> 

Ok.
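I.e. the brace moves onto its own line per CODING_STYLE; for v2 that hunk
would look something like this (the body shown is just the waiting/pause
setup described in the commit message):

    if ( !is_pvh_domain(d) )
    {
        /* Keep the guest paused until the first ioreq server is enabled. */
        d->arch.hvm_domain.ioreq_server.waiting = 1;
        domain_pause(d);
    }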

> > --- a/xen/include/asm-x86/hvm/domain.h
> > +++ b/xen/include/asm-x86/hvm/domain.h
> > @@ -84,6 +84,7 @@ struct hvm_domain {
> >          spinlock_t       lock;
> >          ioservid_t       id;
> >          struct list_head list;
> > +        bool_t           waiting;
> 
> At least in non-debug builds you've got a slot of 32 bits
> between id and list - please fill such gaps when adding new
> members rather than adding further slack space.

True, I'll re-order.
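For v2, something like the following, so that 'waiting' sits in the padding
between 'id' and the pointer-aligned 'list' rather than adding further slack
space at the end:

        spinlock_t       lock;
        ioservid_t       id;
        bool_t           waiting;   /* fills the hole before 'list' */
        struct list_head list;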

   Paul

> 
> Jan

