RE: [PATCH] x86/dpci: remove the dpci EOI timer
> From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
> Sent: Wednesday, January 13, 2021 1:33 AM
>
> Current interrupt pass through code will set up a timer for each
> interrupt injected to the guest that requires an EOI from the guest.
> Such timer would perform two actions if the guest doesn't EOI the
> interrupt before a given period of time. The first one is deasserting
> the virtual line, the second is performing an EOI of the physical
> interrupt source if it requires such.
>
> The deasserting of the guest virtual line is wrong, since it messes
> with the interrupt status of the guest. It's not clear why this was
> done in the first place; it should be the guest that EOIs the
> interrupt and thus deasserts the line.
>
> Performing an EOI of the physical interrupt source is redundant, since
> there's already a timer that takes care of this for all interrupts,
> not just the HVM dpci ones, see the irq_guest_action_t struct eoi_timer
> field.
>
> Since both of the actions performed by the dpci timer are not
> required, remove it altogether.
>
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> ---
> As with previous patches, I'm having a hard time figuring out why this
> was required in the first place. I see no reason for Xen to be
> deasserting the guest virtual line. There's a comment:
>
> /*
>  * Set a timer to see if the guest can finish the interrupt or not. For
>  * example, the guest OS may unmask the PIC during boot, before the
>  * guest driver is loaded. hvm_pci_intx_assert() may succeed, but the
>  * guest will never deal with the irq, then the physical interrupt line
>  * will never be deasserted.
>  */
>
> Did this happen because the device was passed through in a bogus state
> where it would generate interrupts without the guest requesting them?

It could be a case where two devices share the same interrupt line and
are assigned to different domains. In this case, the interrupt activity
of the two devices interferes with each other (a toy sketch of this
scenario is appended after the quoted patch below).

> Won't the guest face the same issues when booted on bare metal, and
> thus would already have the means to deal with such issues?

The original commit was added by me ~13 years ago (0f843ba00c95), when
enabling Xen in a client virtualization environment where interrupt
sharing is common. I believe the above comment was recorded for a real
problem at the time (the deassert resets the INTx line to unblock
further interrupts). But I'm not sure whether that is still the case
after both Xen and guest OSes have changed a lot. At least some testing
from people who still use Xen in shared-interrupt scenarios would be
helpful. Or, if such usage is already niche, maybe we can consider
disallowing passthrough of devices that share the same interrupt line
to different domains, and then safely remove this dpci EOI trick.

Thanks,
Kevin

> ---
>  xen/drivers/passthrough/vtd/x86/hvm.c |  3 -
>  xen/drivers/passthrough/x86/hvm.c     | 95 +--------------------------
>  xen/include/asm-x86/hvm/irq.h         |  3 -
>  xen/include/xen/iommu.h               |  5 --
>  4 files changed, 2 insertions(+), 104 deletions(-)
>
> diff --git a/xen/drivers/passthrough/vtd/x86/hvm.c b/xen/drivers/passthrough/vtd/x86/hvm.c
> index f77b35815c..b531fe907a 100644
> --- a/xen/drivers/passthrough/vtd/x86/hvm.c
> +++ b/xen/drivers/passthrough/vtd/x86/hvm.c
> @@ -36,10 +36,7 @@ static int _hvm_dpci_isairq_eoi(struct domain *d,
>          {
>              hvm_pci_intx_deassert(d, digl->device, digl->intx);
>              if ( --pirq_dpci->pending == 0 )
> -            {
> -                stop_timer(&pirq_dpci->timer);
>                  pirq_guest_eoi(dpci_pirq(pirq_dpci));
> -            }
>          }
>      }
>
> diff --git a/xen/drivers/passthrough/x86/hvm.c b/xen/drivers/passthrough/x86/hvm.c
> index edc8059518..5d901db50c 100644
> --- a/xen/drivers/passthrough/x86/hvm.c
> +++ b/xen/drivers/passthrough/x86/hvm.c
> @@ -136,77 +136,6 @@ static void pt_pirq_softirq_reset(struct hvm_pirq_dpci *pirq_dpci)
>      pirq_dpci->masked = 0;
>  }
>
> -bool pt_irq_need_timer(uint32_t flags)
> -{
> -    return !(flags & (HVM_IRQ_DPCI_GUEST_MSI | HVM_IRQ_DPCI_TRANSLATE |
> -                      HVM_IRQ_DPCI_NO_EOI));
> -}
> -
> -static int pt_irq_guest_eoi(struct domain *d, struct hvm_pirq_dpci *pirq_dpci,
> -                            void *arg)
> -{
> -    if ( __test_and_clear_bit(_HVM_IRQ_DPCI_EOI_LATCH_SHIFT,
> -                              &pirq_dpci->flags) )
> -    {
> -        pirq_dpci->masked = 0;
> -        pirq_dpci->pending = 0;
> -        pirq_guest_eoi(dpci_pirq(pirq_dpci));
> -    }
> -
> -    return 0;
> -}
> -
> -static void pt_irq_time_out(void *data)
> -{
> -    struct hvm_pirq_dpci *irq_map = data;
> -    const struct hvm_irq_dpci *dpci;
> -    const struct dev_intx_gsi_link *digl;
> -
> -    spin_lock(&irq_map->dom->event_lock);
> -
> -    if ( irq_map->flags & HVM_IRQ_DPCI_IDENTITY_GSI )
> -    {
> -        ASSERT(is_hardware_domain(irq_map->dom));
> -        /*
> -         * Identity mapped, no need to iterate over the guest GSI list to find
> -         * other pirqs sharing the same guest GSI.
> -         *
> -         * In the identity mapped case the EOI can also be done now, this way
> -         * the iteration over the list of domain pirqs is avoided.
> -         */
> -        hvm_gsi_deassert(irq_map->dom, dpci_pirq(irq_map)->pirq);
> -        irq_map->flags |= HVM_IRQ_DPCI_EOI_LATCH;
> -        pt_irq_guest_eoi(irq_map->dom, irq_map, NULL);
> -        spin_unlock(&irq_map->dom->event_lock);
> -        return;
> -    }
> -
> -    dpci = domain_get_irq_dpci(irq_map->dom);
> -    if ( unlikely(!dpci) )
> -    {
> -        ASSERT_UNREACHABLE();
> -        spin_unlock(&irq_map->dom->event_lock);
> -        return;
> -    }
> -    list_for_each_entry ( digl, &irq_map->digl_list, list )
> -    {
> -        unsigned int guest_gsi = hvm_pci_intx_gsi(digl->device, digl->intx);
> -        const struct hvm_girq_dpci_mapping *girq;
> -
> -        list_for_each_entry ( girq, &dpci->girq[guest_gsi], list )
> -        {
> -            struct pirq *pirq = pirq_info(irq_map->dom, girq->machine_gsi);
> -
> -            pirq_dpci(pirq)->flags |= HVM_IRQ_DPCI_EOI_LATCH;
> -        }
> -        hvm_pci_intx_deassert(irq_map->dom, digl->device, digl->intx);
> -    }
> -
> -    pt_pirq_iterate(irq_map->dom, pt_irq_guest_eoi, NULL);
> -
> -    spin_unlock(&irq_map->dom->event_lock);
> -}
> -
>  struct hvm_irq_dpci *domain_get_irq_dpci(const struct domain *d)
>  {
>      if ( !d || !is_hvm_domain(d) )
> @@ -568,15 +497,10 @@ int pt_irq_create_bind(
>              }
>          }
>
> -        /* Init timer before binding */
> -        if ( pt_irq_need_timer(pirq_dpci->flags) )
> -            init_timer(&pirq_dpci->timer, pt_irq_time_out, pirq_dpci, 0);
>          /* Deal with gsi for legacy devices */
>          rc = pirq_guest_bind(d->vcpu[0], info, share);
>          if ( unlikely(rc) )
>          {
> -            if ( pt_irq_need_timer(pirq_dpci->flags) )
> -                kill_timer(&pirq_dpci->timer);
>              /*
>               * There is no path for __do_IRQ to schedule softirq as
>               * IRQ_GUEST is not set. As such we can reset 'dom' directly.
> @@ -743,8 +667,6 @@ int pt_irq_destroy_bind(
>      {
>          pirq_guest_unbind(d, pirq);
>          msixtbl_pt_unregister(d, pirq);
> -        if ( pt_irq_need_timer(pirq_dpci->flags) )
> -            kill_timer(&pirq_dpci->timer);
>          pirq_dpci->flags = 0;
>          /*
>           * See comment in pt_irq_create_bind's PT_IRQ_TYPE_MSI before the
> @@ -934,16 +856,6 @@ static void hvm_dirq_assist(struct domain *d, struct hvm_pirq_dpci *pirq_dpci)
>              __msi_pirq_eoi(pirq_dpci);
>              goto out;
>          }
> -
> -        /*
> -         * Set a timer to see if the guest can finish the interrupt or not. For
> -         * example, the guest OS may unmask the PIC during boot, before the
> -         * guest driver is loaded. hvm_pci_intx_assert() may succeed, but the
> -         * guest will never deal with the irq, then the physical interrupt line
> -         * will never be deasserted.
> -         */
> -        ASSERT(pt_irq_need_timer(pirq_dpci->flags));
> -        set_timer(&pirq_dpci->timer, NOW() + PT_IRQ_TIME_OUT);
>      }
>
>  out:
> @@ -967,10 +879,10 @@ static void hvm_pirq_eoi(struct pirq *pirq)
>       * since interrupt is still not EOIed
>       */
>      if ( --pirq_dpci->pending ||
> -         !pt_irq_need_timer(pirq_dpci->flags) )
> +         /* When the interrupt source is MSI no Ack should be performed. */
> +         pirq_dpci->flags & HVM_IRQ_DPCI_TRANSLATE )
>          return;
>
> -    stop_timer(&pirq_dpci->timer);
>      pirq_guest_eoi(pirq);
>  }
>
> @@ -1038,9 +950,6 @@ static int pci_clean_dpci_irq(struct domain *d,
>
>      pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
>
> -    if ( pt_irq_need_timer(pirq_dpci->flags) )
> -        kill_timer(&pirq_dpci->timer);
> -
>      list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
>      {
>          list_del(&digl->list);
> diff --git a/xen/include/asm-x86/hvm/irq.h b/xen/include/asm-x86/hvm/irq.h
> index 532880d497..d40e13de6e 100644
> --- a/xen/include/asm-x86/hvm/irq.h
> +++ b/xen/include/asm-x86/hvm/irq.h
> @@ -117,7 +117,6 @@ struct dev_intx_gsi_link {
>  #define _HVM_IRQ_DPCI_MACH_PCI_SHIFT        0
>  #define _HVM_IRQ_DPCI_MACH_MSI_SHIFT        1
>  #define _HVM_IRQ_DPCI_MAPPED_SHIFT          2
> -#define _HVM_IRQ_DPCI_EOI_LATCH_SHIFT       3
>  #define _HVM_IRQ_DPCI_GUEST_PCI_SHIFT       4
>  #define _HVM_IRQ_DPCI_GUEST_MSI_SHIFT       5
>  #define _HVM_IRQ_DPCI_IDENTITY_GSI_SHIFT    6
> @@ -126,7 +125,6 @@ struct dev_intx_gsi_link {
>  #define HVM_IRQ_DPCI_MACH_PCI        (1u << _HVM_IRQ_DPCI_MACH_PCI_SHIFT)
>  #define HVM_IRQ_DPCI_MACH_MSI        (1u << _HVM_IRQ_DPCI_MACH_MSI_SHIFT)
>  #define HVM_IRQ_DPCI_MAPPED          (1u << _HVM_IRQ_DPCI_MAPPED_SHIFT)
> -#define HVM_IRQ_DPCI_EOI_LATCH       (1u << _HVM_IRQ_DPCI_EOI_LATCH_SHIFT)
>  #define HVM_IRQ_DPCI_GUEST_PCI       (1u << _HVM_IRQ_DPCI_GUEST_PCI_SHIFT)
>  #define HVM_IRQ_DPCI_GUEST_MSI       (1u << _HVM_IRQ_DPCI_GUEST_MSI_SHIFT)
>  #define HVM_IRQ_DPCI_IDENTITY_GSI    (1u << _HVM_IRQ_DPCI_IDENTITY_GSI_SHIFT)
> @@ -173,7 +171,6 @@ struct hvm_pirq_dpci {
>      struct list_head digl_list;
>      struct domain *dom;
>      struct hvm_gmsi_info gmsi;
> -    struct timer timer;
>      struct list_head softirq_list;
>  };
>
> diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> index f0295fd6c3..4f3098b6ed 100644
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -184,11 +184,6 @@ int pt_irq_destroy_bind(struct domain *, const struct xen_domctl_bind_pt_irq *);
>  void hvm_dpci_isairq_eoi(struct domain *d, unsigned int isairq);
>  struct hvm_irq_dpci *domain_get_irq_dpci(const struct domain *);
>  void free_hvm_irq_dpci(struct hvm_irq_dpci *dpci);
> -#ifdef CONFIG_HVM
> -bool pt_irq_need_timer(uint32_t flags);
> -#else
> -static inline bool pt_irq_need_timer(unsigned int flags) { return false; }
> -#endif
>
>  struct msi_desc;
>  struct msi_msg;
> --
> 2.29.2
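[Appended for illustration only: a minimal, self-contained C sketch of the
shared-line scenario described above. It is not Xen code and not part of the
patch; all names in it (shared_line, inject, guest_eoi, timeout_force_eoi)
are invented for the example. It models one level-triggered physical line
routed to several guests: the line stays blocked until every guest EOIs its
virtual copy, so a guest that never EOIs starves the others unless some
timeout forces the EOI.]

#include <stdbool.h>
#include <stdio.h>

/* One physical, level-triggered line shared by several guests. */
struct shared_line {
    unsigned int pending;   /* guests that still owe a virtual EOI */
    bool masked;            /* physical line blocked until the final EOI */
};

/* Interrupt fires: every sharing guest gets a virtual copy, line blocks. */
static void inject(struct shared_line *l, unsigned int nr_guests)
{
    l->pending = nr_guests;
    l->masked = true;
}

/* A guest EOIs its virtual line; the last one unblocks the physical line. */
static void guest_eoi(struct shared_line *l)
{
    if ( l->pending && --l->pending == 0 )
        l->masked = false;
}

/* Fallback timeout: force the EOI if some guest never answers. */
static void timeout_force_eoi(struct shared_line *l)
{
    l->pending = 0;
    l->masked = false;
}

int main(void)
{
    struct shared_line line = { 0, false };

    inject(&line, 2);
    guest_eoi(&line);           /* the well-behaved guest EOIs */
    printf("masked after one EOI: %d\n", line.masked);   /* prints 1 */

    timeout_force_eoi(&line);   /* the other guest never does */
    printf("masked after timeout: %d\n", line.masked);   /* prints 0 */

    return 0;
}

With the dpci timer removed by this patch, only the generic eoi_timer is
left to play the timeout_force_eoi() role above, which is why testing the
shared-interrupt case is the open question in this thread.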