Re: [Xen-devel] [PATCH v10 26/32] ARM: vITS: handle MOVI command
On Wed, 31 May 2017, Julien Grall wrote:
> Hi Stefano,
>
> On 30/05/17 23:35, Stefano Stabellini wrote:
> > > +    /*
> > > +     * TODO: lookup currently-in-guest virtual IRQs and migrate them,
> > > +     * as the locking may be fragile otherwise.
> > > +     * This is not easy to do at the moment, but should become easier
> > > +     * with the introduction of a per-IRQ lock.
> > > +     */
> >
> > Sure, but at least we can handle the inflight, but not in guest, case.
> > It is just a matter of adding (within the arch.vgic.lock locked region):
> >
> >     if ( !list_empty(&p->lr_queue) )
> >     {
> >         gic_remove_irq(old, p);
> >         clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
> >         list_del_init(&p->lr_queue);
> >         list_del_init(&p->inflight);
> >
> >         spin_unlock_irqrestore(&old->arch.vgic.lock, flags);
> >         vgic_vcpu_inject_irq(new, irq);
> >     }
>
> Please no more hardcoding of migration code. If we are going to support
> migration, we should rework the current function vgic_migrate_irq to
> support LPIs.

I agree; in fact, that is what I suggested last time. My preference
would be to just call vgic_migrate_irq, providing an empty
implementation of irq_set_affinity for LPIs.

> But ... I don't see any drawback to not supporting this today. Per the
> specification, when you migrate an interrupt, the only thing you have
> to ensure is that a pending interrupt fires only once, either on the
> old or on the new vCPU.
>
> IMHO, if the interrupt has already been queued then it is too late. We
> should aim at simplifying the code if there is no drawback to doing so.
> In this case, what would be the drawback of leaving it pending on the
> old vCPU?

It is inconsistent with what we do elsewhere (vgic_migrate_irq). It is
also inconsistent with the TODO comment (!list_empty(&p->lr_queue)
interrupts are not yet currently-in-guest). I don't want to introduce
any more hardcoding; I just would like the existing vgic_migrate_irq to
be called.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel