Re: [Xen-devel] [PATCH v4 2/5] xen/arm: vgic-v2: Don't ignore a write in ITARGETSR if one field is 0
On Mon, 2015-10-12 at 15:22 +0100, Julien Grall wrote:
> The current implementation ignores the whole write if one of the fields is
> 0. However, based on the spec (4.3.12 IHI 0048B.b), 0 is a valid value
> when:
> - The interrupt is not wired in the distributor. From the Xen
> point of view, it means that the corresponding bit is not set in
> d->arch.vgic.allocated_irqs.
> - The user wants to disable the IRQ forwarding in the distributor,
> i.e. the IRQ stays pending in the distributor and is never received
> by the guest.
>
> Implementing the latter will require more work in Xen because we always
> assume the interrupt is forwarded to a vCPU. So for now, ignore any field
> whose value is 0.
>
> The emulation of write accesses to ITARGETSR has been reworked and
> moved to a new function because it would have been difficult to
> implement the behavior properly with the current code.
>
> The new implementation breaks the register into 4 distinct bytes. For
> each byte, it will check the validity of the target list, find the new
> target, migrate the interrupt and store the value if necessary.
>
> In the new implementation there is nearly no distinction based on the
> access size, to avoid having too many different paths, which would be
> harder to test.
>
> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
>
> ---
> This change used to be embedded in "xen/arm: vgic: Optimize the way
> to store the target vCPU in the rank". It has been moved out to
> avoid having too many functional changes in a single patch.
>
> I'm not sure if this patch should be backported to Xen 4.6. Without
> it, any guest writing 0 in one of the fields won't be able to migrate
> the other interrupts. That said, in all the use cases I've seen, the
> guest reads ITARGETSR first and writes back the value with the
> corresponding byte changed.
>
> Changes in v4:
> - Patch added
> ---
> xen/arch/arm/vgic-v2.c | 141 ++++++++++++++++++++++++++++++++++---------------
> 1 file changed, 98 insertions(+), 43 deletions(-)
>
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index 665afeb..6b7eab3 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -50,6 +50,94 @@ void vgic_v2_setup_hw(paddr_t dbase, paddr_t cbase, paddr_t vbase)
> vgic_v2_hw.vbase = vbase;
> }
>
> +#define NR_TARGET_PER_REG 4U
> +#define NR_BIT_PER_TARGET 8U
NR_TARGETS_ and NR_BITS_...
"REG" there is a bit generic, given this only works for registers with 8
-bit fields, _ITARGETSR instead?
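For illustration, the suggested renaming might end up looking something like
this (the exact names here are only indicative, not taken from the patch):

    #define NR_TARGETS_PER_ITARGETSR 4U
    #define NR_BITS_PER_TARGET       8U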
> +/*
> + * Store a ITARGETSR register. This function only deals with ITARGETSR8
> + * and onwards.
> + *
> + * Note the offset will be aligned to the appropriate boundary.
> + */
> +static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
> + unsigned int offset, uint32_t itargetsr)
> +{
> + unsigned int i;
> + unsigned int regidx = REG_RANK_INDEX(8, offset, DABT_WORD);
> + unsigned int virq;
> +
> + ASSERT(spin_is_locked(&rank->lock));
> +
> + /*
> + * The ITARGETSR0-7, used for SGIs/PPIs, are implemented RO in the
> + * emulation and should never call this function.
> + *
> + * They all live in the first rank.
> + */
> + BUILD_BUG_ON(NR_INTERRUPT_PER_RANK != 32);
> + ASSERT(rank->index >= 1);
> +
> + offset &= INTERRUPT_RANK_MASK;
> + offset &= ~(NR_TARGET_PER_REG - 1);
> +
> + virq = rank->index * NR_INTERRUPT_PER_RANK + offset;
> +
> + for ( i = 0; i < NR_TARGET_PER_REG; i++, offset++, virq++ )
> + {
> + unsigned int new_target, old_target;
> + uint8_t new_mask, old_mask;
> +
> + new_mask = itargetsr & ((1 << NR_BIT_PER_TARGET) - 1);
> + old_mask = vgic_byte_read(rank->v2.itargets[regidx], i);
> +
> + itargetsr >>= NR_BIT_PER_TARGET;
This is really part of the for() iteration expression, but it is oddly placed
here.
If you turn the "((1 << NR_BIT_PER_TARGET) - 1)" thing into a #define or
constant, then you may find that extracting the relevant byte from the
unshifted itargetsr using (itargetsr >> (i*NR_BITS_PER_TARGET) and then
applying the mask is clean enough to use here instead.
I don't think you rely on the shifting of itargetsr apart from when
calculating new_mask.
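To make that concrete, a rough sketch of what such a change could look like
(the mask #define and the helper name below are made up for illustration;
NR_BIT_PER_TARGET is the constant from the patch as posted):

    #define ITARGETSR_FIELD_MASK ((1U << NR_BIT_PER_TARGET) - 1)

    /* Illustrative helper: extract target field i from the unshifted register. */
    static inline uint8_t itargetsr_field(uint32_t itargetsr, unsigned int i)
    {
        return (itargetsr >> (i * NR_BIT_PER_TARGET)) & ITARGETSR_FIELD_MASK;
    }

new_mask would then simply be itargetsr_field(itargetsr, i) and the shift of
itargetsr inside the loop body would no longer be needed.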
> + /*
> + * SPIs are using the 1-N model (see 1.4.3 in ARM IHI 0048B).
> + * While the interrupt could be set pending to all the vCPUs in
> + * target list, it's not guarantee by the spec.
"guaranteed"
> + * For simplicity, always route the vIRQ to the first interrupt
> + * in the target list
> + */
> + new_target = ffs(new_mask);
> + old_target = ffs(old_mask);
> +
> + /* The current target should always be valid */
> + ASSERT(old_target && (old_target <= d->max_vcpus));
> +
> + /*
> + * Ignore the write request for this interrupt if the new target
> + * is invalid.
> + * XXX: From the spec, if the target list is not valid, the
> + * interrupt should be ignored (i.e not forwarding to the
"forwarded".
> + * guest).
> + */
> + if ( !new_target || (new_target > d->max_vcpus) )
> + {
> + printk(XENLOG_G_DEBUG
gdprintk?
> + "d%d: No valid vCPU found for vIRQ%u in the target list
> (%#x). Skip it\n",
> + d->domain_id, virq, new_mask);
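If gdprintk() were used here, the explicit "d%d" prefix could presumably be
dropped, since gdprintk() already tags the message with the current
domain/vCPU. Roughly (untested, just to show the shape):

    gdprintk(XENLOG_DEBUG,
             "No valid vCPU found for vIRQ%u in the target list (%#x). Skip it\n",
             virq, new_mask);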
> + continue;
> + }
> +
> + /* The vCPU ID always starts from 0 */
> + new_target--;
> + old_target--;
> +
> + /* Only migrate the vIRQ if the target vCPU has changed */
> + if ( new_target != old_target )
> + {
> + vgic_migrate_irq(d->vcpu[old_target],
> + d->vcpu[new_target],
> + virq);
> + }
> +
> + /* Bit corresponding to unimplemented CPU is write-ignored. */
"write-ignore"
> + new_mask &= (1 << d->max_vcpus) - 1;
> + vgic_byte_write(&rank->v2.itargets[regidx], new_mask, i);
> + }
> +}
> +
> static int vgic_v2_distr_mmio_read(struct vcpu *v, mmio_info_t *info,
> register_t *r, void *priv)
> {
> @@ -337,56 +425,23 @@ static int vgic_v2_distr_mmio_write(struct vcpu *v, mmio_info_t *info,
>
> case GICD_ITARGETSR + 8 ... GICD_ITARGETSRN:
> {
> - /* unsigned long needed for find_next_bit */
> - unsigned long target;
> - int i;
> + uint32_t itargetsr;
> +
> if ( dabt.size != DABT_BYTE && dabt.size != DABT_WORD ) goto bad_width;
> rank = vgic_rank_offset(v, 8, gicd_reg - GICD_ITARGETSR, DABT_WORD);
> if ( rank == NULL) goto write_ignore;
> - /* 8-bit vcpu mask for this domain */
> - BUG_ON(v->domain->max_vcpus > 8);
> - target = (1 << v->domain->max_vcpus) - 1;
> - target = target | (target << 8) | (target << 16) | (target << 24);
> + vgic_lock_rank(v, rank, flags);
> if ( dabt.size == DABT_WORD )
> - target &= r;
> + itargetsr = r;
> else
> - target &= (r << (8 * (gicd_reg & 0x3)));
> - /* ignore zero writes */
> - if ( !target )
> - goto write_ignore;
> - /* For word reads ignore writes where any single byte is zero */
> - if ( dabt.size == 2 &&
> - !((target & 0xff) && (target & (0xff << 8)) &&
> - (target & (0xff << 16)) && (target & (0xff << 24))))
> - goto write_ignore;
> - vgic_lock_rank(v, rank, flags);
> - i = 0;
> - while ( (i = find_next_bit(&target, 32, i)) < 32 )
> {
> - unsigned int irq, new_target, old_target;
> - unsigned long old_target_mask;
> - struct vcpu *v_target, *v_old;
> -
> - new_target = i % 8;
> - old_target_mask = vgic_byte_read(rank->v2.itargets[REG_RANK_INDEX(8,
> - gicd_reg - GICD_ITARGETSR, DABT_WORD)], i/8);
> - old_target = find_first_bit(&old_target_mask, 8);
> -
> - if ( new_target != old_target )
> - {
> - irq = (gicd_reg & ~0x3) - GICD_ITARGETSR + (i / 8);
> - v_target = v->domain->vcpu[new_target];
> - v_old = v->domain->vcpu[old_target];
> - vgic_migrate_irq(v_old, v_target, irq);
> - }
> - i += 8 - new_target;
> + itargetsr = rank->v2.itargets[REG_RANK_INDEX(8,
> + gicd_reg - GICD_ITARGETSR,
> + DABT_WORD)];
> + vgic_byte_write(&itargetsr, r, gicd_reg);
> }
> - if ( dabt.size == DABT_WORD )
> - rank->v2.itargets[REG_RANK_INDEX(8, gicd_reg - GICD_ITARGETSR,
> - DABT_WORD)] = target;
> - else
> - vgic_byte_write(&rank->v2.itargets[REG_RANK_INDEX(8,
> - gicd_reg - GICD_ITARGETSR, DABT_WORD)], r, gicd_reg);
> + vgic_store_itargetsr(v->domain, rank, gicd_reg - GICD_ITARGETSR,
> + itargetsr);
> vgic_unlock_rank(v, rank, flags);
> return 1;
> }