[Xen-devel] [PATCH v6 3/6] xen/arm: vgic-v2: Don't ignore a write in ITARGETSR if one field is 0
The current implementation ignores the whole write if one of the fields is 0. However, based on the spec (4.3.12 IHI 0048B.b), 0 is a valid value when:
    - The interrupt is not wired in the distributor. From the Xen point of
      view, it means that the corresponding bit is not set in
      d->arch.vgic.allocated_irqs.
    - The user wants to disable IRQ forwarding in the distributor, i.e. the
      IRQ stays pending in the distributor and is never received by the
      guest.

Implementing the latter would require more work in Xen because we always assume the interrupt is forwarded to a valid vCPU. So for now, ignore only the fields whose value is 0, rather than the whole write.

The emulation of write accesses to ITARGETSR has been reworked and moved to a new function, because it would have been difficult to implement the behavior properly in the current code.

The new implementation breaks the register into 4 distinct bytes. For each byte, it checks the validity of the target list, finds the new target, migrates the interrupt and stores the value if necessary.

The new implementation makes nearly no distinction on the access size, to avoid having too many different paths, which are harder to test.

Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>

---
    This change used to be embedded in "xen/arm: vgic: Optimize the way to
    store the target vCPU in the rank". It has been moved out to avoid
    having too many functional changes in a single patch.

    I'm not sure whether this patch should be backported to Xen 4.6.
    Without it, any guest writing 0 in one of the fields won't be able to
    migrate other interrupts. Although, in all the use cases I've seen, the
    guest reads ITARGETSR first and writes back the value with the
    corresponding byte changed.

    Note that NR_BITS_PER_TARGET has been kept as a define because it will
    be used in another function in the next patch and I wanted to avoid
    duplicating const var = ...

    Also, I didn't bother to move the incrementation of the offset to the
    next patch as it's currently harmless.
Changes in v5:
    - s/NR_TARGET_PER_REG/NR_TARGETS_PER_ITARGETSR/
    - s/NR_BIT_PER_TARGET/NR_BITS_PER_TARGET/
    - Compute NR_BITS_PER_TARGET based on NR_TARGETS_PER_ITARGETSR
    - Typos in the comments
    - Compute the shift for itargetsr on every loop
    - Use gprintk(XENLOG_WARNING,...)
    - Clarify commit message

Changes in v4:
    - Patch added
---
 xen/arch/arm/vgic-v2.c | 145 ++++++++++++++++++++++++++++++++++---------------
 1 file changed, 102 insertions(+), 43 deletions(-)

diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 486e497..ad2ea0a 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -57,6 +57,98 @@ void vgic_v2_setup_hw(paddr_t dbase, paddr_t cbase, paddr_t csize,
     vgic_v2_hw.aliased_offset = aliased_offset;
 }
 
+#define NR_TARGETS_PER_ITARGETSR 4U
+#define NR_BITS_PER_TARGET (32U / NR_TARGETS_PER_ITARGETSR)
+
+/*
+ * Store an ITARGETSR register. This function only deals with ITARGETSR8
+ * and onwards.
+ *
+ * Note the offset will be aligned to the appropriate boundary.
+ */
+static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
+                                 unsigned int offset, uint32_t itargetsr)
+{
+    unsigned int i;
+    unsigned int regidx = REG_RANK_INDEX(8, offset, DABT_WORD);
+    unsigned int virq;
+
+    ASSERT(spin_is_locked(&rank->lock));
+
+    /*
+     * The ITARGETSR0-7, used for SGIs/PPIs, are implemented RO in the
+     * emulation and should never call this function.
+     *
+     * They all live in the first rank.
+     */
+    BUILD_BUG_ON(NR_INTERRUPT_PER_RANK != 32);
+    ASSERT(rank->index >= 1);
+
+    offset &= INTERRUPT_RANK_MASK;
+    offset &= ~(NR_TARGETS_PER_ITARGETSR - 1);
+
+    virq = rank->index * NR_INTERRUPT_PER_RANK + offset;
+
+    for ( i = 0; i < NR_TARGETS_PER_ITARGETSR; i++, offset++, virq++ )
+    {
+        unsigned int new_target, old_target;
+        uint8_t new_mask, old_mask;
+
+        /*
+         * Don't need to mask as we rely on new_mask to fit for only one
+         * target.
+         */
+        BUILD_BUG_ON((sizeof (new_mask) * 8) != NR_BITS_PER_TARGET);
+
+        new_mask = itargetsr >> (i * NR_BITS_PER_TARGET);
+        old_mask = vgic_byte_read(rank->v2.itargets[regidx], i);
+
+        /*
+         * SPIs are using the 1-N model (see 1.4.3 in ARM IHI 0048B).
+         * While the interrupt could be set pending to all the vCPUs in
+         * target list, it's not guaranteed by the spec.
+         * For simplicity, always route the vIRQ to the first interrupt
+         * in the target list
+         */
+        new_target = ffs(new_mask);
+        old_target = ffs(old_mask);
+
+        /* The current target should always be valid */
+        ASSERT(old_target && (old_target <= d->max_vcpus));
+
+        /*
+         * Ignore the write request for this interrupt if the new target
+         * is invalid.
+         * XXX: From the spec, if the target list is not valid, the
+         * interrupt should be ignored (i.e not forwarded to the
+         * guest).
+         */
+        if ( !new_target || (new_target > d->max_vcpus) )
+        {
+            gprintk(XENLOG_WARNING,
+                    "No valid vCPU found for vIRQ%u in the target list (%#x). Skip it\n",
+                    virq, new_mask);
+            continue;
+        }
+
+        /* The vCPU ID always starts from 0 */
+        new_target--;
+        old_target--;
+
+        /* Only migrate the vIRQ if the target vCPU has changed */
+        if ( new_target != old_target )
+        {
+            vgic_migrate_irq(d->vcpu[old_target],
+                             d->vcpu[new_target],
+                             virq);
+        }
+
+        /* Bit corresponding to unimplemented CPU is write-ignore. */
+        new_mask &= (1 << d->max_vcpus) - 1;
+        vgic_byte_write(&rank->v2.itargets[regidx], new_mask, i);
+    }
+}
+
 static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
                                    register_t *r, void *priv)
 {
@@ -344,56 +436,23 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
 
     case GICD_ITARGETSR8 ... GICD_ITARGETSRN:
     {
-        /* unsigned long needed for find_next_bit */
-        unsigned long target;
-        int i;
+        uint32_t itargetsr;
+
         if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
         rank = vgic_rank_offset(v, 8, gicd_reg - GICD_ITARGETSR, DABT_WORD);
         if ( rank == NULL) goto write_ignore;
-        /* 8-bit vcpu mask for this domain */
-        BUG_ON(v->domain->max_vcpus > 8);
-        target = (1 << v->domain->max_vcpus) - 1;
-        target = target | (target << 8) | (target << 16) | (target << 24);
+        vgic_lock_rank(v, rank, flags);
         if ( dabt.size == DABT_WORD )
-            target &= r;
+            itargetsr = r;
         else
-            target &= (r << (8 * (gicd_reg & 0x3)));
-        /* ignore zero writes */
-        if ( !target )
-            goto write_ignore;
-        /* For word reads ignore writes where any single byte is zero */
-        if ( dabt.size == 2 &&
-             !((target & 0xff) && (target & (0xff << 8)) &&
-               (target & (0xff << 16)) && (target & (0xff << 24))))
-            goto write_ignore;
-        vgic_lock_rank(v, rank, flags);
-        i = 0;
-        while ( (i = find_next_bit(&target, 32, i)) < 32 )
-        {
-            unsigned int irq, new_target, old_target;
-            unsigned long old_target_mask;
-            struct vcpu *v_target, *v_old;
-
-            new_target = i % 8;
-            old_target_mask = vgic_byte_read(rank->v2.itargets[REG_RANK_INDEX(8,
-                                      gicd_reg - GICD_ITARGETSR, DABT_WORD)], i/8);
-            old_target = find_first_bit(&old_target_mask, 8);
-
-            if ( new_target != old_target )
-            {
-                irq = (gicd_reg & ~0x3) - GICD_ITARGETSR + (i / 8);
-                v_target = v->domain->vcpu[new_target];
-                v_old = v->domain->vcpu[old_target];
-                vgic_migrate_irq(v_old, v_target, irq);
-            }
-            i += 8 - new_target;
+        {
+            itargetsr = rank->v2.itargets[REG_RANK_INDEX(8,
+                                          gicd_reg - GICD_ITARGETSR,
+                                          DABT_WORD)];
+            vgic_byte_write(&itargetsr, r, gicd_reg);
         }
-        if ( dabt.size == DABT_WORD )
-            rank->v2.itargets[REG_RANK_INDEX(8, gicd_reg - GICD_ITARGETSR,
-                                             DABT_WORD)] = target;
-        else
-            vgic_byte_write(&rank->v2.itargets[REG_RANK_INDEX(8,
-                       gicd_reg - GICD_ITARGETSR, DABT_WORD)], r, gicd_reg);
+        vgic_store_itargetsr(v->domain, rank, gicd_reg - GICD_ITARGETSR,
+                             itargetsr);
         vgic_unlock_rank(v, rank, flags);
         return 1;
     }
-- 
2.1.4

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel