Re: [Xen-devel] Xen optimization
Hi,
> On Wed, Oct 24, 2018 at 2:24 AM Stefano Stabellini
> <stefano.stabellini@xxxxxxxxxx> wrote:
> It is good that there are no physical interrupts interrupting the cpu.
> serrors=panic makes the context switch faster. I guess there are not
> enough context switches to make a measurable difference.
Yes, when I did:
grep ctxt /proc/2153/status
I got:
voluntary_ctxt_switches: 5
nonvoluntary_ctxt_switches: 3
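To double-check that the rate stays negligible over a longer run, I could also sample the counters twice and diff them. This is just a quick sketch I put together (not something from the thread; PID 2153 is only the one from the grep above):

/* Sample the context-switch counters of a PID twice, 10 seconds apart,
 * and print how many switches happened in between. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static unsigned long ctxt_total(const char *path)
{
    FILE *f = fopen(path, "r");
    char line[256];
    unsigned long val, total = 0;

    if ( !f )
        return 0;
    while ( fgets(line, sizeof(line), f) )
        if ( strstr(line, "ctxt_switches") &&
             sscanf(line, "%*[^:]: %lu", &val) == 1 )
            total += val;
    fclose(f);
    return total;
}

int main(void)
{
    const char *path = "/proc/2153/status";
    unsigned long before = ctxt_total(path);

    sleep(10);
    printf("context switches in 10s: %lu\n", ctxt_total(path) - before);
    return 0;
}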
> I don't have any other things to suggest right now. You should be able
> to measure an overall 2.5us IRQ latency (if the interrupt rate is not
> too high).
This bare-metal application is the most suspicious part, indeed. I am still
waiting for an answer on the Xilinx forum.
> Just to be paranoid, we might also want to check the following, again it
> shouldn't get printed:
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 5a4f082..6cf6814 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -532,6 +532,8 @@ void vgic_inject_irq(struct domain *d, struct vcpu *v, unsigned int virq,
>      struct pending_irq *iter, *n;
>      unsigned long flags;
> +    if ( d->domain_id != 0 && virq != 68 )
> +        printk("DEBUG virq=%d local=%d\n",virq,v == current);
>      /*
>       * For edge triggered interrupts we always ignore a "falling edge".
>       * For level triggered interrupts we shouldn't, but do anyways.
Checked it again: no prints. I was hoping to discover some vIRQs or pIRQs
slowing things down, but no, there are no prints at all.
I might try something else instead of this bare-metal application
because this Xilinx SDK example is very suspicious.
Thank you for your time.
Milan