
Re: [Xen-devel] [PATCH 5/5] xen/arm: Only enable physical IRQs when the guest asks



On Tue, 25 Jun 2013, Julien Grall wrote:
> On 06/25/2013 05:28 PM, Ian Campbell wrote:
> 
> > On Tue, 2013-06-25 at 00:04 +0100, Julien Grall wrote:
> >> If the guest VCPU receives an interrupts when it's disabled, it will throw
> >> away the IRQ with EOIed it.
> > 
> > Did you mean "without EOIing it" or perhaps "having EOIed it"?
> > 
> >>  This is result to lost IRQ forever.
> > 
> > "This results in losing the IRQ forever".
> > 
> >> Directly EOIed the interrupt doesn't help because the IRQ could be fired
> > 
> > EOIing in this context.
> > 
> >> again and result to an infinited loop.
> > 
> >                         infinite
> > 
> >> It happens during dom0 boot on the versatile express TC2 with the ethernet
> >> card.
> >>
> >> Let the interrupt disabled when Xen setups the route and enable it when Linux
> > 
> >   "Lets ... when Xen sets up the route..."
> > 
> >> asks to enable it.
> >>
> >> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
> >> ---
> >>  xen/arch/arm/domain_build.c |   14 ++++++++++++++
> >>  xen/arch/arm/gic.c          |   25 +++++++++++++++++++++++--
> >>  xen/arch/arm/vgic.c         |    7 +++----
> >>  xen/include/asm-arm/gic.h   |    4 ++++
> >>  xen/include/asm-arm/irq.h   |    6 ++++++
> >>  5 files changed, 50 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> >> index f5befbd..0470a2d 100644
> >> --- a/xen/arch/arm/domain_build.c
> >> +++ b/xen/arch/arm/domain_build.c
> >> @@ -408,6 +408,20 @@ static int map_device(struct domain *d, const struct dt_device_node *dev)
> >>          }
> >>  
> >>          DPRINT("irq %u = %u type = 0x%x\n", i, irq.irq, irq.type);
> >> +
> >> +        /*
> >> +         * Only map SGI interrupt in the guest as XEN won't handle
> >> +         * it correctly.
> >> +         * TODO: Fix it
> >> +         */
> >> +        if ( !irq_is_sgi(irq.irq) )
> >> +        {
> >> +            printk(XENLOG_WARNING "WARNING: Don't map irq %u for %s has "
> > 
> > s/has/as/ I think.
> > 
> > Did you really mean SGI here? I'd have thought from the context that you
> > would mean SPIs. SGIs aren't anything to do with any real devices almost
> > by definition -- if you saw one in the device tree I'd be very surprised!
> 
> 
> It was SPIs. I will fix it in the next patch series.
> 
> > 
> >> +                   "XEN doesn't handle properly non-SGI interrupt\n",
> >> +                   i, dt_node_full_name(dev));
> >> +            continue;
> >> +        }
> >> +
> >>          /* Don't check return because the IRQ can be use by multiple device */
> >>          gic_route_irq_to_guest(d, &irq, dt_node_name(dev));
> >>      }
> >> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> >> index b16ba8c..e7d082a 100644
> >> --- a/xen/arch/arm/gic.c
> >> +++ b/xen/arch/arm/gic.c
> >> @@ -179,6 +179,17 @@ static hw_irq_controller gic_guest_irq_type = {
> >>      .set_affinity = gic_irq_set_affinity,
> >>  };
> >>  
> >> +void gic_irq_enable(struct irq_desc *desc)
> >> +{
> >> +    spin_lock_irq(&desc->lock);
> >> +    spin_lock(&gic.lock);
> >> +
> >> +    desc->handler->enable(desc);
> >> +
> >> +    spin_unlock(&gic.lock);
> >> +    spin_unlock_irq(&desc->lock);
> >> +}
> >> +
> >>  /* needs to be called with gic.lock held */
> >>  static void gic_set_irq_properties(unsigned int irq, bool_t level,
> >>          unsigned int cpu_mask, unsigned int priority)
> >> @@ -543,8 +554,6 @@ static int __setup_irq(struct irq_desc *desc, unsigned int irq,
> >>      desc->status &= ~IRQ_DISABLED;
> >>      dsb();
> >>  
> >> -    desc->handler->startup(desc);
> >> -
> >>      return 0;
> >>  }
> >>  
> >> @@ -560,6 +569,8 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
> >>  
> >>      rc = __setup_irq(desc, irq->irq, new);
> >>  
> >> +    desc->handler->startup(desc);
> >> +
> >>      spin_unlock_irqrestore(&desc->lock, flags);
> >>  
> >>      return rc;
> >> @@ -711,6 +722,7 @@ void gic_inject(void)
> >>          gic_inject_irq_start();
> >>  }
> >>  
> >> +/* TODO: Handle properly non SGI-interrupt */
> >>  int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
> >>                             const char * devname)
> >>  {
> >> @@ -719,11 +731,18 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
> >>      unsigned long flags;
> >>      int retval;
> >>      bool_t level;
> >> +    struct pending_irq *p;
> >> +    /* XXX: handler other VCPU than 0 */
> > 
> > That should be something like "XXX: handle VCPUs other than 0".
> > 
> > This only matters if we can route SGIs or PPIs to the guest though I
> > think, since they are the only banked interrupts? For SPIs we actually
> > want to actively avoid doing this multiple times, don't we?
> 
> 
> Yes. Here the VCPU is only used to retrieve the struct pending_irq.
> 
> > 
> > For the banked interrupts I think we just need a loop here, or for
> > p->desc to not be part of the pending_irq struct but actually part of
> > some separate per-domain datastructure, since it would be very weird to
> > have a domain where the PPIs differed between CPUs. (I'm not sure if
> > that is allowed by the hardware, I bet it is, but it would be a
> > pathological case IMHO...).
> 
> > I think a perdomain irq_desc * array is probably the right answer,
> > unless someone can convincingly argue that PPI routing differing between
> > VCPUs in a guest is a useful thing...
> 
> 
> Until now, I have only seen PPIs used by the arch timers and the GIC.
> I don't know if other uses are possible, but pending_irq structures are
> also banked for PPIs, so that's not an issue.
> 
> The issue is how we link the physical PPI to the virtual PPI. Is it a
> 1:1 mapping? How does Xen handle a PPI when it arrives on a VCPU which
> doesn't handle it (for instance in a domU)?

We should already be able to handle this case: vgic_vcpu_inject_irq uses
smp_send_event_check_mask to make sure the other physical processor
receives the notification and injects the virq.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

