Re: [PATCHv2] x86/hvm: add more callback/upcall info to 'I' debug key
On 07/01/2022 12:55, David Vrabel wrote:
> @@ -630,9 +634,46 @@ static void irq_dump(struct domain *d)
> hvm_irq->pci_link_assert_count[1],
> hvm_irq->pci_link_assert_count[2],
> hvm_irq->pci_link_assert_count[3]);
> - printk("Callback via %i:%#"PRIx32",%s asserted\n",
> - hvm_irq->callback_via_type, hvm_irq->callback_via.gsi,
> - hvm_irq->callback_via_asserted ? "" : " not");
> +
> + printk("Per-VCPU upcall vector:\n");
> + for_each_vcpu ( d, v )
> + {
> + if ( v->arch.hvm.evtchn_upcall_vector )
> + {
> + printk(" v%u: %u\n",
> + v->vcpu_id, v->arch.hvm.evtchn_upcall_vector);
Here, and...
> + have_upcall_vector = true;
> + }
> + }
> + if ( !have_upcall_vector )
> + printk(" none\n");
> +
> + via_asserted = hvm_irq->callback_via_asserted ? " (asserted)" : "";
> + switch( hvm_irq->callback_via_type )
> + {
> + case HVMIRQ_callback_none:
> + printk("Callback via none\n");
> + break;
> +
> + case HVMIRQ_callback_gsi:
> + printk("Callback via GSI %u%s\n",
> + hvm_irq->callback_via.gsi,
> + via_asserted);
> + break;
> +
> + case HVMIRQ_callback_pci_intx:
> + printk("Callback via PCI dev %u INTx %u%s\n",
PCI 00:%02x.0 ?
Also, how about INT%c with 'A' + intx as a parameter?
> + hvm_irq->callback_via.pci.dev,
> + hvm_irq->callback_via.pci.intx,
> + via_asserted);
> + break;
> +
> + case HVMIRQ_callback_vector:
> + printk("Callback via vector %u%s\n",
> + hvm_irq->callback_via.vector,
> + via_asserted);
... here, vectors ought to be 0x%02x. Amongst other things, it makes
the priority class instantly readable.
I realise this is all a complete mess, but is via_asserted correct for
HVMIRQ_callback_vector? It's mismatched between the two, and the best
metric that exists is "is pending in IRR". Also, looking at struct
hvm_irq, all the callback information is in the wrong structure, because
it absolutely shouldn't be duplicated for each GSI.
~Andrew