Re: [Xen-devel] [PATCH v5 16/30] ARM: vGICv3: handle disabled LPIs
On Thu, 6 Apr 2017, Andre Przywara wrote:
> On 06/04/17 00:58, Stefano Stabellini wrote:
> > On Thu, 6 Apr 2017, Andre Przywara wrote:
> >> If a guest disables an LPI, we do not forward this to the associated
> >> host LPI to avoid queueing commands to the host ITS command queue.
> >> So it may happen that an LPI nevertheless fires on the host. In this
> >> case we can bail out early, but have to save the pending state on the
> >> virtual side. We do this by storing the pending bit in struct
> >> pending_irq, which is associated with mapped LPIs.
> >>
> >> Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
> >> ---
> >>  xen/arch/arm/gic-v3-lpi.c | 26 +++++++++++++++++++++++++-
> >>  1 file changed, 25 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
> >> index d8baebc..7d20986 100644
> >> --- a/xen/arch/arm/gic-v3-lpi.c
> >> +++ b/xen/arch/arm/gic-v3-lpi.c
> >> @@ -136,6 +136,22 @@ uint64_t gicv3_get_redist_address(unsigned int cpu, bool use_pta)
> >>      return per_cpu(lpi_redist, cpu).redist_id << 16;
> >>  }
> >>
> >> +static bool vgic_can_inject_lpi(struct vcpu *vcpu, uint32_t vlpi)
> >> +{
> >> +    struct pending_irq *p;
> >> +
> >> +    p = vcpu->domain->arch.vgic.handler->lpi_to_pending(vcpu->domain, vlpi);
> >> +    if ( !p )
> >> +        return false;
> >> +
> >> +    if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
> >> +        return true;
> >> +
> >> +    set_bit(GIC_IRQ_GUEST_LPI_PENDING, &p->status);
> >
> > See alpine.DEB.2.10.1701051422020.2866@sstabellini-ThinkPad-X260
> > (from this email:)
> > I suggest vgic_can_inject_lpi doesn't only return true or false, but
> > also if the vlpi is already set to pending. In that case, I think we
> > should disable the plpi to avoid storms (also see
> > http://marc.info/?l=xen-devel&m=148055519432739).
>
> So I can surely change the function to return that information, but I
> think we don't have a good way of handling the storm easily.
> First, sending the required INV command to let the host know about our
> change to the property table might take some time (we have a timeout in
> there) and also takes a spinlock, which doesn't sound too good in the
> interrupt injection path.
> But secondly, re-enabling the interrupt is not easily possible currently.
> Ideally one would expect this to happen when the guest enables the
> corresponding virtual LPI, but that would again require sending an INV
> command to the host ITS, which is something we avoid when triggered by a
> guest (the MAPD exception is only for Dom0 and will hopefully go away
> later).
> So a guest could send virtual INVs (disabling and enabling the virtual
> LPI), trying to flood the host command queue.

I was thinking of clearing the enable bit in the LPI configuration table
for the physical LPI, and clearing GIC_IRQ_GUEST_ENABLED. We need to set
a new flag to say "this LPI has been forcefully disabled." Re-enabling
the LPI is not as important, but it could be done on EOI
(gic_update_one_lr).

> So shall I change the function anyway and add a TODO comment, so that we
> can later revisit this scenario?

That would be the minimum requirement for me, adding a reference to
http://marc.info/?l=xen-devel&m=148055519432739.
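As a rough illustration of the idea under discussion (this is only a
sketch, not code from this series): the GIC_IRQ_GUEST_LPI_THROTTLED flag,
the gicv3_lpi_set_plpi_enable() helper and the extra plpi parameter below
are hypothetical names that a real patch would have to introduce, and
Andre's concern about issuing the INV from the injection path is set aside
here.

/* Sketch only: throttle the physical LPI when a disabled vLPI fires again. */
static bool vgic_can_inject_lpi(struct vcpu *vcpu, uint32_t vlpi, uint32_t plpi)
{
    struct pending_irq *p;

    p = vcpu->domain->arch.vgic.handler->lpi_to_pending(vcpu->domain, vlpi);
    if ( !p )
        return false;

    if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
        return true;

    /*
     * The vLPI is disabled: record the pending state. If it was already
     * pending, this looks like a storm, so forcefully disable the host LPI.
     * gicv3_lpi_set_plpi_enable() is a hypothetical helper that would clear
     * the enable bit in the host LPI property table and send the INV.
     */
    if ( test_and_set_bit(GIC_IRQ_GUEST_LPI_PENDING, &p->status) &&
         !test_and_set_bit(GIC_IRQ_GUEST_LPI_THROTTLED, &p->status) )
        gicv3_lpi_set_plpi_enable(plpi, false);

    return false;
}

/* Counterpart on EOI, e.g. called from gic_update_one_lr(): */
static void vgic_lpi_unthrottle(struct pending_irq *p, uint32_t plpi)
{
    if ( test_and_clear_bit(GIC_IRQ_GUEST_LPI_THROTTLED, &p->status) )
        gicv3_lpi_set_plpi_enable(plpi, true);
}

Whether the property table update and INV can safely be done from the
injection context is exactly the open question raised above.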
> >> +    return false;
> >> +}
> >> +
> >>  /*
> >>   * Handle incoming LPIs, which are a bit special, because they are potentially
> >>   * numerous and also only get injected into guests. Treat them specially here,
> >> @@ -187,7 +203,15 @@ void gicv3_do_LPI(unsigned int lpi)
> >>
> >>      /* Check if the VCPU is ready to receive LPIs. */
> >>      if ( vcpu->arch.vgic.flags & VGIC_V3_LPIS_ENABLED )
> >> -        vgic_vcpu_inject_irq(vcpu, hlpi.virt_lpi);
> >> +    {
> >> +        /*
> >> +         * We keep all host LPIs enabled, so check if it's disabled on the
> >> +         * guest side and just record this LPI in the virtual pending table
> >> +         * in this case. The guest picks it up once it gets enabled again.
> >> +         */
> >> +        if ( vgic_can_inject_lpi(vcpu, hlpi.virt_lpi) )
> >> +            vgic_vcpu_inject_irq(vcpu, hlpi.virt_lpi);
> >> +    }
> >>
> >>      rcu_unlock_domain(d);
> >>  }
> >> --
> >> 2.8.2
> >>
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel