
Re: [Xen-devel] [PATCH v6 for-xen-4.5 1/3] dpci: Move from domain centric model to hvm_dirq_dpci model.



On Thu, Sep 25, 2014 at 04:04:46PM +0100, Jan Beulich wrote:
> >>> On 25.09.14 at 16:48, <konrad.wilk@xxxxxxxxxx> wrote:
> > On Thu, Sep 25, 2014 at 03:24:53PM +0100, Jan Beulich wrote:
> >> >>> On 23.09.14 at 04:10, <konrad.wilk@xxxxxxxxxx> wrote:
> >> > @@ -130,6 +146,7 @@ int pt_irq_create_bind(
> >> >          return -ENOMEM;
> >> >      }
> >> >      pirq_dpci = pirq_dpci(info);
> >> > +    pt_pirq_reset(d, pirq_dpci);
> >> >  
> >> >      switch ( pt_irq_bind->irq_type )
> >> >      {
> >> > @@ -232,7 +249,6 @@ int pt_irq_create_bind(
> >> >          {
> >> >              unsigned int share;
> >> >  
> >> > -            pirq_dpci->dom = d;
> >> >              if ( pt_irq_bind->irq_type == PT_IRQ_TYPE_MSI_TRANSLATE )
> >> >              {
> >> >                  pirq_dpci->flags = HVM_IRQ_DPCI_MAPPED |
> >> > @@ -258,7 +274,6 @@ int pt_irq_create_bind(
> >> >              {
> >> >                  if ( pt_irq_need_timer(pirq_dpci->flags) )
> >> >                      kill_timer(&pirq_dpci->timer);
> >> > -                pirq_dpci->dom = NULL;
> >> >                  list_del(&girq->list);
> >> >                  list_del(&digl->list);
> >> >                  hvm_irq_dpci->link_cnt[link]--;
> >> > @@ -391,7 +406,6 @@ int pt_irq_destroy_bind(
> >> >          msixtbl_pt_unregister(d, pirq);
> >> >          if ( pt_irq_need_timer(pirq_dpci->flags) )
> >> >              kill_timer(&pirq_dpci->timer);
> >> > -        pirq_dpci->dom   = NULL;
> >> >          pirq_dpci->flags = 0;
> >> >          pirq_cleanup_check(pirq, d);
> >> >      }
> >> 
> >> Is all of the above really necessary? I.e. I can neither see why setting
> >> ->dom earlier is needed, nor why clearing it on the error paths should
> >> be dropped.
> > 
> > Yes. We need the ->dom so that hvm_dirq_assist can run without
> > hitting a NULL pointer dereference. Please keep in mind that the
> > moment we set up the IRQ action handler, we are "live" - [...]
> 
> But all you need is that this happens before the respective
> pirq_guest_bind() calls. I.e. in the PT_IRQ_TYPE_PCI and
> PT_IRQ_TYPE_MSI_TRANSLATE cases it was already done early enough
> (avoiding it remaining set on error paths), so all you'd need is to
> add it for the PT_IRQ_TYPE_MSI path too.

.. and with the below statement ["pt_pirq_reset() is just replacing
that assignment"] I believe having just

    pirq_dpci->dom = d;

before the switch would be correct.
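
Concretely, something like this (untested sketch; the surrounding
context is the hunk quoted above, the comment is mine):

    pirq_dpci = pirq_dpci(info);
    /*
     * Set ->dom before the switch: once pirq_guest_bind() registers
     * the IRQ action handler we are "live", and hvm_dirq_assist()
     * may dereference it at any point after that.
     */
    pirq_dpci->dom = d;

    switch ( pt_irq_bind->irq_type )
    {
    ...
    }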

> 
> I agree that the clearing of the field in error paths might need a
> little care, but otoh you could equally well have hvm_dirq_assist()
> bail when it finds it to be NULL?

Can't do it as-is. I added 'get_knownalive_domain()' and 'put_domain()'
calls in 'schedule_softirq_for' and 'dpci_softirq' respectively.

That means we would be left with an outstanding refcount: if '->dom'
were NULL, 'dpci_softirq' could not reach the domain to drop the
reference it took.

Unless I rip out the refcounting there, which I believe is OK.
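
For reference, the pairing looks roughly like this (a sketch using the
names from this series - the softirq number and the exact call sites
are introduced in patch #3):

    static void schedule_softirq_for(struct hvm_pirq_dpci *pirq_dpci)
    {
        /* The reference taken here keeps pirq_dpci->dom valid ... */
        get_knownalive_domain(pirq_dpci->dom);
        ...
        raise_softirq(HVM_DPCI_SOFTIRQ);
    }

    static void dpci_softirq(void)
    {
        ...
        hvm_dirq_assist(pirq_dpci->dom, pirq_dpci);
        /* ... and is dropped here - which is why '->dom' must not be
         * cleared (or tested for NULL and bailed on) in between. */
        put_domain(pirq_dpci->dom);
    }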

The reason the refcounting was added was to make sure that domain
destruction is delayed until the last outstanding 'hvm_dirq_assist'
has been called.

The 'domain_destroy' path (as of patch #3) returns -EAGAIN if it finds
an outstanding 'hvm_dirq_assist' still to be called (with or without
the refcounting).

That means that even if we outright kill the guest we still have the
guarantee that 'hvm_dirq_assist' has run - and we won't get a crash
(as pirq_dpci would not be cleared underneath it).
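
That is, the destruction path boils down to (pt_pirq_softirq_active()
being a placeholder name for the "is an hvm_dirq_assist still pending"
check from patch #3):

    if ( pt_pirq_softirq_active(pirq_dpci) )
        return -EAGAIN;  /* caller retries until dpci_softirq has run */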

> 
> > The '->dom = d' in pt_irq_create_bind was redundant, since
> > pt_pirq_reset() does it now.
> 
> Yeah, effectively pt_pirq_reset() is just replacing that assignment.
> It's not even clear wrapping this in a function is really worthwhile.

OK. Moved it out to be a simple assignment.
> 
> Jan
> 
