Re: [Xen-devel] [XenARM] XEN tools for ARM with Virtualization Extensions
> >
> > I added code in the arm domain.c to release the IRQs when a domain is
> > destroyed. I am providing my changes, but I believe there may be more
> > work to have a clean solution. Specifically, the following items may
> > need to be addressed.
>
> Thanks for that patch!
>
> > 1. The dom.cfg "irqs" section has the 32 added to the device-tree IRQ
> >    number because I wasn't sure where to add the translation. This can
> >    be cleaned up when guests are able to be given DTB files and Xen can
> >    parse them.
> >
> > 2. I assume level-triggered IRQs since the "irqs" list is only irq
> >    numbers. This also could be left until guest DTB files are supported.
> >
> > 3. I added clearing of the IRQ_GUEST bit in the desc->status in
> >    release_irq because I didn't see where it would be unset. This is
> >    probably not a big deal since not many situations arise where an IRQ
> >    is sometimes host and sometimes guest.
>
> Could you send a separate patch for this fix?
>
I have included a separate patch for just this fix.
> > 4. I changed vgic.c to use d->nr_irqs instead of assuming guests have
> >    no SPIs. This allowed me to use the extra_domu_irqs boot param to
> >    allow guests to have SPIs.
>
> How do you define the number of SPIs for dom0? Also with extra_dom0_irqs?
>
> By default, nr_pirqs = static_nr_irqs + extra_dom{0,U}_irqs.
>
> On ARM, static_nr_irqs equals 1024, so you are allocating many more IRQs
> than requested/supported by the GIC.
>
> In any case, the number of IRQs per guest must not be "hardcoded" via the
> command line. This value is different on each board.
>
> For dom0, we can use the same number as the host, and for a guest we can
> pass a parameter during domain creation to let Xen know how many SPIs are
> needed for the guest.
>
I will use gic_number_lines() - 32, as you mention in your comment on the
vgic.c changes below, and only support 1:1 IRQ mapping for now. This and
other changes based on your comments below will be sent in a new patch
shortly.
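For reference, the vgic.c side of that change would look roughly like the
following untested sketch (same names as in the current tree):
    /* Rough sketch: with a 1:1 mapping every domain gets the same number
     * of SPIs as the host GIC; the first 32 interrupts are SGIs/PPIs and
     * are handled separately, hence the subtraction. */
    d->arch.vgic.nr_lines = gic_number_lines() - 32;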
> > 5. I added a check for whether an IRQ was already in use, earlier in
> >    the flow of gic_route_irq_to_guest(), so that the desc->handler is
> >    not messed with before __setup_irq does the check and returns an
> >    error. Also, gic_set_irq_properties will be avoided in the error
> >    case as well.
> >
> > I rebased to commit 9eabb07 and verified my changes. I needed the fix
> > in gic_irq_shutdown or my release_irq changes caused other IRQs to be
> > disabled when a domain was destroyed.
>
> I'm surprised; this issue should have been corrected by commit 751554b.
> I don't see a fix for it in this patch. Do you have one?
>
My apologies for the confusion. I was originally testing on master before
your 751554b commit and spent an hour or so debugging the issue until I
remembered your comments about the fix relating to ICENABLER. I rebased to
master yesterday and, with your fix, was able to get guests
created/destroyed/created, etc.
> > From 551f174c9653def01890fadd2dd769ab8a9acde7 Mon Sep 17 00:00:00 2001
> > From: Eric Trudeau <etrudeau@xxxxxxxxxxxx>
> > Date: Thu, 11 Jul 2013 20:03:51 -0400
> > Subject: [PATCH] Add support for Guest physdev irqs
> >
> > ---
> > xen/arch/arm/domain.c | 16 ++++++++++++++++
> > xen/arch/arm/gic.c | 15 ++++++++++-----
> > xen/arch/arm/physdev.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++--
> > xen/arch/arm/vgic.c | 5 +----
> > 4 files changed, 73 insertions(+), 11 deletions(-)
> >
> > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > index 4c434a1..52d3429 100644
> > --- a/xen/arch/arm/domain.c
> > +++ b/xen/arch/arm/domain.c
> > @@ -31,6 +31,8 @@
> > #include <asm/gic.h>
> > #include "vtimer.h"
> > #include "vpl011.h"
> > +#include <xen/iocap.h>
> > +#include <xen/irq.h>
> >
> > DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
> >
> > @@ -513,8 +515,22 @@ fail:
> > return rc;
> > }
> >
> > +static int release_domain_irqs(struct domain *d)
> > +{
> > +    int i;
> > +    for (i = 0; i < d->nr_pirqs; i++) {
> > +        if (irq_access_permitted(d, i)) {
> > +            release_irq(i);
> > +        }
> > +    }
> > +    return 0;
> > +}
>
> As you may know, release_irq will spin until the IRQ_INPROGRESS flag
> is unset. This is done by the maintenance handler.
>
> Now, let's say the domain has crashed and the IRQ is not yet EOIed; then
> you will spin forever.
>
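For context, the wait you are referring to in release_irq is roughly the
following (paraphrased, not the exact code), which shows why a crashed guest
that never EOIs the interrupt would leave IRQ_INPROGRESS set and make this
spin forever:
    /* Paraphrased sketch of the wait in release_irq: it spins until the
     * maintenance handler clears IRQ_INPROGRESS, which never happens if
     * the guest died before EOIing the interrupt. */
    do {
        smp_mb();
    } while ( desc->status & IRQ_INPROGRESS );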
> > +
> > void arch_domain_destroy(struct domain *d)
> > {
> > +    if (d->irq_caps != NULL)
> You don't need this check.
> During domain creation, Xen ensures that irq_caps is not NULL.
>
> > +        release_domain_irqs(d);
> > p2m_teardown(d);
> > domain_vgic_free(d);
> > domain_uart0_free(d);
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index cafb681..1f576d1 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -510,7 +510,7 @@ void gic_route_spis(void)
> > }
> > }
> >
> > -void __init release_irq(unsigned int irq)
> > +void release_irq(unsigned int irq)
> > {
> > struct irq_desc *desc;
> > unsigned long flags;
> > @@ -522,6 +522,7 @@ void __init release_irq(unsigned int irq)
> > action = desc->action;
> > desc->action = NULL;
> > desc->status |= IRQ_DISABLED;
> > + desc->status &= ~IRQ_GUEST;
> >
> > spin_lock(&gic.lock);
> > desc->handler->shutdown(desc);
> > @@ -707,6 +708,12 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
> > spin_lock_irqsave(&desc->lock, flags);
> > spin_lock(&gic.lock);
> >
> > + if ( desc->action != NULL )
> > + {
> > + retval = -EBUSY;
> > + goto out;
> > + }
> > +
> > desc->handler = &gic_guest_irq_type;
> > desc->status |= IRQ_GUEST;
> >
> > @@ -715,12 +722,10 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
> > gic_set_irq_properties(irq->irq, level, 1u << smp_processor_id(),
> > 0xa0);
> >
> > retval = __setup_irq(desc, irq->irq, action);
> > - if (retval) {
> > - xfree(action);
> > - goto out;
> > - }
> >
> > out:
> > + if (retval)
> > + xfree(action);
> > spin_unlock(&gic.lock);
> > spin_unlock_irqrestore(&desc->lock, flags);
> > return retval;
> > diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
> > index 61b4a18..8a5f770 100644
> > --- a/xen/arch/arm/physdev.c
> > +++ b/xen/arch/arm/physdev.c
> > @@ -9,12 +9,56 @@
> > #include <xen/lib.h>
> > #include <xen/errno.h>
> > #include <asm/hypercall.h>
> > +#include <public/physdev.h>
> > +#include <xen/guest_access.h>
> > +#include <xen/irq.h>
> > +#include <xen/sched.h>
> > +#include <asm/gic.h>
> >
> >
> > int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
> > {
> > -    printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
> > - return -ENOSYS;
> > + int ret;
> > +
> > + switch ( cmd )
> > + {
> > + case PHYSDEVOP_map_pirq: {
> > + physdev_map_pirq_t map;
> > + struct dt_irq irq;
> > + struct domain *d;
> > +
> > + ret = -EFAULT;
> > + if ( copy_from_guest(&map, arg, 1) != 0 )
> > + break;
> > +
> > + d = rcu_lock_domain_by_any_id(map.domid);
> > + if ( d == NULL ) {
> > + ret = -ESRCH;
> > + break;
> > + }
>
> Missing sanity check on the map.pirq value.
>
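A minimal sanity check could look roughly like this (a sketch only; the
exact bounds would need confirming against the host GIC, and it would sit
before the rcu_lock_domain_by_any_id() call so no unlock is needed on the
error path):
    /* Sketch of a bounds check on the guest-supplied IRQ number:
     * SPIs start at 32 and the upper bound comes from the host GIC. */
    if ( map.pirq < 32 || map.pirq >= gic_number_lines() )
    {
        ret = -EINVAL;
        break;
    }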
> > + irq.irq = map.pirq;
> > + irq.type = DT_IRQ_TYPE_LEVEL_MASK;
> > +
> > + ret = gic_route_irq_to_guest(d, &irq, "GuestIRQ");
>
> Do you plan to handle non-1:1 IRQ mappings?
> How does your IRQ mapping work if the IRQ is already mapped to dom0?
>
> > + if (!ret)
> > + printk("%s gic_route_irq_to_guest added IRQ %d to dom%d\n",
> > + __FUNCTION__, irq.irq, d->domain_id);
> > +
> > + rcu_unlock_domain(d);
> > +
> > + if (!ret && __copy_to_guest(arg, &map, 1) )
> > + ret = -EFAULT;
> > + break;
> > + }
> > +
> > + default:
> > +        printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
> > + ret = -ENOSYS;
> > + break;
> > + }
> > +
> > + return ret;
> > }
> >
> > /*
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index 2e4b11f..0ebcdac 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -82,10 +82,7 @@ int domain_vgic_init(struct domain *d)
> > /* Currently nr_lines in vgic and gic doesn't have the same meanings
> > * Here nr_lines = number of SPIs
> > */
> > - if ( d->domain_id == 0 )
> > - d->arch.vgic.nr_lines = gic_number_lines() - 32;
> > - else
> > - d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
> > + d->arch.vgic.nr_lines = d->nr_pirqs - 32;
>
> If you want to stick on the 1:1 mapping, the best solution
> is to set "nr_lines to gic_number_lines() - 32" for all the domains.
>
> --
> Julien Grall
Here is the separate patch for the clearing of IRQ_GUEST in desc->status
during release_irq execution.
From 84cf2265c5eabffa9de538e9b35bca20fe8f55ef Mon Sep 17 00:00:00 2001
From: Eric Trudeau <etrudeau@xxxxxxxxxxxx>
Date: Fri, 12 Jul 2013 13:30:48 -0400
Subject: [PATCH] xen/arm: Clear the IRQ_GUEST bit in desc->status when
releasing an IRQ
While adding support for guest domU IRQs, I noticed that release_irq did
not clear the IRQ_GUEST bit in the IRQ's desc->status field.
This is probably not a big deal since not many situations are likely to arise
where an IRQ is sometimes host and sometimes guest.
Signed-off-by: Eric Trudeau <etrudeau@xxxxxxxxxxxx>
---
xen/arch/arm/gic.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 1487f27..ed15ec3 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -524,6 +524,7 @@ void release_irq(unsigned int irq)
     action = desc->action;
     desc->action = NULL;
     desc->status |= IRQ_DISABLED;
+    desc->status &= ~IRQ_GUEST;
 
     spin_lock(&gic.lock);
     desc->handler->shutdown(desc);
--
1.8.1.2
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel