Re: [Xen-devel] [PATCH v4 05/17] xen/arm: ITS: implement hw_irq_controller for LPIs
Hi,

On 10/07/2015 09:42, vijay.kilari@xxxxxxxxx wrote:

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index c41e82e..4f3801b 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
+static inline hw_irq_controller *get_host_hw_irq_controller(unsigned int irq)
+{
+    if ( is_lpi(irq) )
+        return its_hw_ops->lpi_host_irq_type;
+    else
+        return gic_hw_ops->gic_host_irq_type;
+}

This is not what I asked on v3 [1]. The ITS hardware controller shouldn't be
exposed to the common GIC code. We have to keep a clean and comprehensible
interface.

What I asked is to replace the gic_host_irq_type variable with a new callback
which will return the correct hw_irq_controller. For GICv2, it will return
the same hw_irq_controller as today. For GICv3, it will check whether the IRQ
is an LPI and return the correct controller.

FWIW, it was acked by Ian [2].

+
+static inline hw_irq_controller *get_guest_hw_irq_controller(unsigned int irq)
+{
+    if ( is_lpi(irq) )
+        return its_hw_ops->lpi_guest_irq_type;
+    else
+        return gic_hw_ops->gic_guest_irq_type;
+}
+
 /*
  * needs to be called with a valid cpu_mask, ie each cpu in the mask has
  * already called gic_cpu_init
@@ -104,7 +126,8 @@ static void gic_set_irq_properties(struct irq_desc *desc,
                                    const cpumask_t *cpu_mask,
                                    unsigned int priority)
 {
-    gic_hw_ops->set_irq_properties(desc, cpu_mask, priority);
+    if ( desc->irq < gic_number_lines() )
+        gic_hw_ops->set_irq_properties(desc, cpu_mask, priority);
 }

Why can't this function be called for an LPI? The configuration should likely
be the same...
@@ -149,7 +173,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
          test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
         goto out;

-    desc->handler = gic_hw_ops->gic_guest_irq_type;
+    desc->handler = get_guest_hw_irq_controller(desc->irq);
     set_bit(_IRQ_GUEST, &desc->status);
     gic_set_irq_properties(desc, cpumask_of(v_target->processor), priority);

diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 2dd43ee..ba8528a 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -35,7 +35,13 @@ static DEFINE_SPINLOCK(local_irqs_type_lock);

 struct irq_guest
 {
     struct domain *d;
-    unsigned int virq;
+    union
+    {
+        /* virq refer to virtual irq in case of spi */
+        unsigned int virq;
+        /* virq refer to event ID in case of lpi */
+        unsigned int vid;

Why can't we store the event ID in the irq_guest? As said on v3, this is not
domain specific [3]. Furthermore, you add support to route LPIs in Xen (see
gic_route_irq_to_xen), where you will clearly need the event ID.

 void irq_set_affinity(struct irq_desc *desc, const cpumask_t *cpu_mask)
 {
     if ( desc != NULL )

diff --git a/xen/include/asm-arm/gic-its.h b/xen/include/asm-arm/gic-its.h
index b5e09bd..e8d244f 100644
--- a/xen/include/asm-arm/gic-its.h
+++ b/xen/include/asm-arm/gic-its.h
@@ -161,6 +161,10 @@ typedef union {
  * The ITS view of a device.
  */
 struct its_device {
+    /* Physical ITS */
+    struct its_node *its;
+    /* Number of Physical LPIs assigned */
+    int nr_lpis;

Why didn't you add this field directly in patch #4? It would be more logical.

 /*
  * ITS registers, offsets from ITS_base

diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
index 34b492b..55e219f 100644
--- a/xen/include/asm-arm/irq.h
+++ b/xen/include/asm-arm/irq.h
@@ -17,6 +17,8 @@ struct arch_pirq

 struct arch_irq_desc {
     int eoi_cpu;
     unsigned int type;
+    struct its_device *dev;
+    u16 col_id;

Ian suggested moving col_id into its_device in the previous version [4]. Any
reason not to do it?
Regards,

[1] http://www.gossamer-threads.com/lists/xen/devel/386493#386493
[2] http://www.gossamer-threads.com/lists/xen/devel/386771#386771
[3] http://www.gossamer-threads.com/lists/xen/devel/385521#385521
[4] http://www.gossamer-threads.com/lists/xen/devel/387586#387586

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel