Re: [PATCH for-4.14 v2 2/2] x86/passthrough: introduce a flag for GSIs not requiring an EOI or unmask
On Tue, Jun 16, 2020 at 08:27:54AM +0200, Jan Beulich wrote:
> On 10.06.2020 16:29, Roger Pau Monne wrote:
> > @@ -558,6 +559,12 @@ int pt_irq_create_bind(
> >               */
> >              ASSERT(!mask);
> >              share = trigger_mode;
> > +            if ( trigger_mode == VIOAPIC_EDGE_TRIG )
> > +                /*
> > +                 * Edge IO-APIC interrupt, no EOI or unmask to perform
> > +                 * and hence no timer needed.
> > +                 */
> > +                pirq_dpci->flags |= HVM_IRQ_DPCI_NO_EOI;
>
> Is this really limited to edge triggered IO-APIC interrupts?
> MSI ones are effectively edge triggered too, aren't they?

MSIs do sometimes require an EOI, depending on whether they can be
masked, see irq_acktype.

> Along the lines of irq_acktype() maskable MSI may then also
> not need any such arrangements? The pirq_guest_eoi() ->
> desc_guest_eoi() path looks to confirm this.

Yes, that's correct. AFAICT MSI interrupts won't get the EOI timer,
since pt_irq_need_timer will return false because the
HVM_IRQ_DPCI_GUEST_MSI flag will be set.

> > @@ -920,6 +923,8 @@ static void hvm_dirq_assist(struct domain *d, struct hvm_pirq_dpci *pirq_dpci)
> >      if ( pirq_dpci->flags & HVM_IRQ_DPCI_IDENTITY_GSI )
> >      {
> >          hvm_gsi_assert(d, pirq->pirq);
> > +        if ( pirq_dpci->flags & HVM_IRQ_DPCI_NO_EOI )
> > +            goto out;
>
> Immediately ahead of this there's a similar piece of code
> dealing with PCI INTx. They're commonly level triggered, but
> I don't think there's a strict need for this to be the case.
> At least hvm_pci_intx_assert() -> assert_gsi() ->
> vioapic_irq_positive_edge() also covers the edge triggered case.

Hm, I'm not sure it's safe to pass through edge triggered IO-APIC
interrupts, as Xen will always mark those as 'shared', and sharing edge
interrupts cannot work reliably. In any case the EOI timer is
definitely set for those, and needs to be disabled before exiting
hvm_dirq_assist.

Thanks, Roger.