Re: [Xen-devel] [PATCH v4 12/17] xen/arm: ITS: Initialize LPI irq descriptors and route
Hi Vijay,

On 10/07/2015 09:42, vijay.kilari@xxxxxxxxx wrote:

> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> index 3806d98..c8ea627 100644
> --- a/xen/arch/arm/irq.c
> +++ b/xen/arch/arm/irq.c

[...]

> int __init arch_init_one_irq_desc(struct irq_desc *desc)
> @@ -88,6 +97,15 @@ static int __init init_irq_data(void)
>          desc->action = NULL;
>      }
>
> +#ifdef CONFIG_ARM_64
> +    for ( irq = NR_GIC_LPI; irq < MAX_LPI; irq++ )

NR_GIC_LPI is 4096, which is the maximum number of LPIs supported, a value you've hardcoded. Here you want to use FIRST_GIC_LPI.

> +    {
> +        struct irq_desc *desc = irq_to_desc(irq);
> +        init_one_irq_desc(desc);
> +        desc->irq = irq;
> +        desc->action = NULL;
> +    }
> +#endif
>      return 0;
> }
>
> @@ -208,7 +226,7 @@ int request_irq(unsigned int irq, unsigned int irqflags,
>       * which interrupt is which (messes up the interrupt freeing
>       * logic etc).
>       */
> -    if ( irq >= nr_irqs )
> +    if ( irq >= nr_irqs && !is_lpi(irq) )

nr_irqs is expected to encompass SPIs, LPIs and so on. In this particular case, you want to use gic_is_valid_irq.

> @@ -436,7 +459,8 @@ err:
> bool_t is_assignable_irq(unsigned int irq)
> {
>      /* For now, we can only route SPIs to the guest */
> -    return ((irq >= NR_LOCAL_IRQS) && (irq < gic_number_lines()));
> +    return (((irq >= NR_LOCAL_IRQS) && (irq < gic_number_lines())) ||
> +            is_lpi(irq));
> }
>
> /*
> @@ -452,7 +476,7 @@ int route_irq_to_guest(struct domain *d, unsigned int virq,

It's not right to re-use route_irq_to_guest for a completely different meaning: virq stands for virtual IRQ, not eventID. For more details, please see my remarks on patches #8 and #5, and on the feature freeze thread. IMHO, the prototype to route an LPI to a guest should be:

route_lpi_to_guest(struct domain *d, unsigned int lpi);

[...]

>      spin_unlock_irqrestore(&desc->lock, flags);

[...]
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index 25b69a0..4e14439 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -1111,12 +1111,19 @@ static const struct mmio_handler_ops vgic_distr_mmio_handler = {
> static int vgic_v3_get_irq_priority(struct vcpu *v, unsigned int irq)
> {
> -    int priority;
> -    struct vgic_irq_rank *rank = vgic_rank_irq(v, irq);
> +    int priority = 0;
> +    struct vgic_irq_rank *rank;
>
> -    ASSERT(spin_is_locked(&rank->lock));
> -    priority = vgic_byte_read(rank->ipriority[REG_RANK_INDEX(8,
> +    if ( !is_lpi(irq) )

is_lpi is checking the validity of the host LPI, not the guest LPI.

> +    {
> +        rank = vgic_rank_irq(v, irq);
> +
> +        ASSERT(spin_is_locked(&rank->lock));
> +        priority = vgic_byte_read(rank->ipriority[REG_RANK_INDEX(8,
>                                                 irq, DABT_WORD)], 0, irq & 0x3);
> +    }
> +    if ( is_lpi(irq) && gic_lpi_supported() )
> +        priority = vgic_its_get_priority(v, irq);

I'm wondering how you tested the series... To get the priority from the LPI configuration table, you have to subtract 8192 from the IRQ number before reading the table. But here you are using the value directly to index the table.

>      return priority;
> }
>
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index a5f66f6..8190a46 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -30,6 +30,7 @@
> #include <asm/mmio.h>
> #include <asm/gic.h>
> +#include <asm/gic-its.h>

I'm not in favour of seeing any include of gic-its.h in the common code. It likely means that the build on arm32 is broken...
> #include <asm/vgic.h>
>
> static inline struct vgic_irq_rank *vgic_get_rank(struct vcpu *v, int rank)
> @@ -111,6 +112,15 @@ int domain_vgic_init(struct domain *d, unsigned int nr_spis)
>      for (i=0; i<d->arch.vgic.nr_spis; i++)
>          vgic_init_pending_irq(&d->arch.vgic.pending_irqs[i], i + 32);
>
> +#ifdef CONFIG_ARM_64
> +    d->arch.vgic.pending_lpis = xzalloc_array(struct pending_irq, NR_GIC_LPI);
> +    if ( d->arch.vgic.pending_lpis == NULL )
> +        return -ENOMEM;
> +
> +    for ( i = 0; i < NR_GIC_LPI; i++ )
> +        vgic_init_pending_irq(&d->arch.vgic.pending_lpis[i], i);
> +#endif
> +

This should go in the vITS code.

>      for (i=0; i<DOMAIN_NR_RANKS(d); i++)
>          spin_lock_init(&d->arch.vgic.shared_irqs[i].lock);
>
> @@ -157,6 +167,7 @@ void domain_vgic_free(struct domain *d)
> #ifdef CONFIG_ARM_64
>      free_xenheap_pages(d->arch.vits->prop_page,
>                         get_order_from_bytes(d->arch.vits->prop_size));
> +    xfree(d->arch.vgic.pending_lpis);
> #endif
> }
>
> @@ -381,13 +392,17 @@ int vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode, int
>
> struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
> {
> -    struct pending_irq *n;
> +    struct pending_irq *n = NULL;
>      /* Pending irqs allocation strategy: the first vgic.nr_spis irqs
>       * are used for SPIs; the rests are used for per cpu irqs */

Please update this comment.

>      if ( irq < 32 )
>          n = &v->arch.vgic.pending_irqs[irq];
> -    else
> +    else if ( irq < NR_IRQS )

You should use vgic_num_irqs, not NR_IRQS, which is Xen specific.

>          n = &v->domain->arch.vgic.pending_irqs[irq - 32];
> +#ifdef CONFIG_ARM_64
> +    else if ( is_lpi(irq) )

Your helper is_lpi checks that the IRQ is valid based on the number of LPIs supported by the host. In the case of the guest, the number of LPIs may be different. Please check this IRQ against the virtual GIC.

> +        n = &v->domain->arch.vgic.pending_lpis[irq - FIRST_GIC_LPI];
> +#endif

Please add ASSERT(n != NULL), given that with your change it may now be possible to return NULL.
>      return n;
> }
>
> @@ -413,14 +428,20 @@ void vgic_clear_pending_irqs(struct vcpu *v)
> void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
> {
>      uint8_t priority;
> -    struct vgic_irq_rank *rank = vgic_rank_irq(v, virq);
> +    struct vgic_irq_rank *rank;
>      struct pending_irq *iter, *n = irq_to_pending(v, virq);
>      unsigned long flags;
>      bool_t running;
>
> -    vgic_lock_rank(v, rank, flags);
> -    priority = v->domain->arch.vgic.handler->get_irq_priority(v, virq);
> -    vgic_unlock_rank(v, rank, flags);
> +    if ( virq < NR_GIC_LPI )
> +    {
> +        rank = vgic_rank_irq(v, virq);
> +        vgic_lock_rank(v, rank, flags);
> +        priority = v->domain->arch.vgic.handler->get_irq_priority(v, virq);
> +        vgic_unlock_rank(v, rank, flags);
> +    }
> +    else
> +        priority = v->domain->arch.vgic.handler->get_irq_priority(v, virq);

Why didn't you push the vgic_rank_* calls into get_irq_priority? IMHO, a function should be called the same way for all values of its arguments, rather than having the caller take a lock depending on the value of an argument.

Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel