Re: [Xen-devel] [RFC 06/19] xen/arm: Implement hypercall PHYSDEVOP_map_pirq
On Mon, 16 Jun 2014, Julien Grall wrote:
> The physdev sub-hypercall PHYSDEVOP_map_pirq allows the toolstack to route
> a physical IRQ to the guest (via the config option "irqs" for xl).
> For now, we allow only SPIs to be mapped to the guest.
> The type MAP_PIRQ_TYPE_GSI is used for this purpose.
>
> The virtual IRQ number is equal to the physical one. This avoids adding
> logic in Xen to allocate the vIRQ number. The drawback is that we
> unconditionally allocate the same number of SPIs as the host. This value
> will never be more than 1024 with GICv2.
>
> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
>
> ---
> I'm wondering if we should introduce an alias of MAP_PIRQ_TYPE_GSI
> for ARM. It would be less confusing for the user.
> ---
>  xen/arch/arm/physdev.c | 77 ++++++++++++++++++++++++++++++++++++++++++++++--
>  xen/arch/arm/vgic.c    |  5 +---
>  2 files changed, 76 insertions(+), 6 deletions(-)
>
> diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
> index 61b4a18..d17589c 100644
> --- a/xen/arch/arm/physdev.c
> +++ b/xen/arch/arm/physdev.c
> @@ -8,13 +8,86 @@
>  #include <xen/types.h>
>  #include <xen/lib.h>
>  #include <xen/errno.h>
> +#include <xen/iocap.h>
> +#include <xen/guest_access.h>
> +#include <xsm/xsm.h>
> +#include <asm/current.h>
>  #include <asm/hypercall.h>
> +#include <public/physdev.h>
> +
> +static int physdev_map_pirq(domid_t domid, int type, int index, int *pirq_p)
> +{
> +    struct domain *d;
> +    int ret;
> +    int irq = index;
> +
> +    d = rcu_lock_domain_by_any_id(domid);
> +    if ( d == NULL )
> +        return -ESRCH;
> +
> +    ret = xsm_map_domain_pirq(XSM_TARGET, d);
> +    if ( ret )
> +        goto free_domain;
> +
> +    /* For now we only support GSI */
> +    if ( type != MAP_PIRQ_TYPE_GSI )
> +    {
> +        ret = -EINVAL;
> +        dprintk(XENLOG_G_ERR, "dom%u: wrong map_pirq type 0x%x\n",
> +                d->domain_id, type);
> +        goto free_domain;
> +    }
> +
> +    if ( !is_routable_irq(irq) )
> +    {
> +        ret = -EINVAL;
> +        dprintk(XENLOG_G_ERR, "IRQ%u is not routable to a guest\n", irq);
> +        goto free_domain;
> +    }
> +
> +    ret = -EPERM;
> +    if ( !irq_access_permitted(current->domain, irq) )
> +        goto free_domain;
> +
> +    ret = route_irq_to_guest(d, irq, "routed IRQ");
> +
> +    /* GSIs are mapped 1:1 to the guest */
> +    if ( !ret )
> +        *pirq_p = irq;
> +
> +free_domain:
> +    rcu_unlock_domain(d);
> +
> +    return ret;
> +}
>
>
>  int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>  {
> -    printk("%s %d cmd=%d: not implemented yet\n", __func__, __LINE__, cmd);
> -    return -ENOSYS;
> +    int ret;
> +
> +    switch ( cmd )
> +    {
> +    case PHYSDEVOP_map_pirq:
> +    {
> +        physdev_map_pirq_t map;
> +
> +        ret = -EFAULT;
> +        if ( copy_from_guest(&map, arg, 1) != 0 )
> +            break;
> +
> +        ret = physdev_map_pirq(map.domid, map.type, map.index,
> +                               &map.pirq);
> +
> +        if ( __copy_to_guest(arg, &map, 1) )
> +            ret = -EFAULT;
> +    }
> +    break;
> +    default:
> +        ret = -ENOSYS;
> +        break;
> +    }
> +
> +    return ret;
>  }
>
>  /*
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index e451324..c18b2ca 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -82,10 +82,7 @@ int domain_vgic_init(struct domain *d)
>      /* Currently nr_lines in vgic and gic doesn't have the same meanings
>       * Here nr_lines = number of SPIs
>       */
> -    if ( is_hardware_domain(d) )
> -        d->arch.vgic.nr_lines = gic_number_lines() - 32;
> -    else
> -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
> +    d->arch.vgic.nr_lines = gic_number_lines() - 32;
>
>      d->arch.vgic.shared_irqs =
>          xzalloc_array(struct vgic_irq_rank,
>                        DOMAIN_NR_RANKS(d));

I see what you mean about virq != pirq.

It seems to me that setting d->arch.vgic.nr_lines = gic_number_lines() - 32
for the hardware domain is OK, but it is really a waste for the others. We
could find a way to pass down the info about how many SPIs we need from
libxl. Or we could delay the vgic allocations until the first SPI is
assigned to the domU.

As with the MMIO hole sizing, I don't think it would be a requirement for
this patch series, but it is something to keep in mind.
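For reference, the toolstack side of the new hypercall can be driven through
the existing libxc wrapper. This is a minimal sketch, not the actual xl
"irqs" plumbing, with error handling reduced to a message:

    /* Minimal sketch of a toolstack caller. xc_physdev_map_pirq() issues
     * PHYSDEVOP_map_pirq with MAP_PIRQ_TYPE_GSI, which is the only type
     * this patch accepts. With the 1:1 GSI mapping, on success the
     * returned pirq equals the physical SPI number passed as index. */
    #include <stdio.h>
    #include <xenctrl.h>

    static int route_spi_to_guest(xc_interface *xch, domid_t domid, int spi)
    {
        int pirq = spi;
        int rc = xc_physdev_map_pirq(xch, domid, spi, &pirq);

        if ( rc )
            fprintf(stderr, "mapping SPI%d to dom%d failed: %d\n",
                    spi, domid, rc);
        else
            printf("SPI%d routed to dom%d as pirq %d\n", spi, domid, pirq);
        return rc;
    }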
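The deferred-allocation alternative could look something like the sketch
below. vgic_ensure_spis() is a hypothetical helper; it glosses over locking
against vCPUs already touching the vGIC, cleanup on the error paths, and the
per-rank/per-IRQ initialisation that domain_vgic_init() performs today:

    /* Hypothetical helper: size the vGIC SPI state when the first SPI is
     * routed to the domain, instead of unconditionally at domain build.
     * Locking, error-path cleanup and the per-rank/per-IRQ init done in
     * domain_vgic_init() are deliberately omitted in this sketch. */
    static int vgic_ensure_spis(struct domain *d, unsigned int nr_spis)
    {
        /* Already sized (e.g. the hardware domain): just check capacity. */
        if ( d->arch.vgic.shared_irqs )
            return d->arch.vgic.nr_lines >= nr_spis ? 0 : -ENOSPC;

        /* DOMAIN_NR_RANKS() is derived from nr_lines, so set it first. */
        d->arch.vgic.nr_lines = nr_spis;

        d->arch.vgic.shared_irqs =
            xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
        d->arch.vgic.pending_irqs =
            xzalloc_array(struct pending_irq, nr_spis);
        if ( !d->arch.vgic.shared_irqs || !d->arch.vgic.pending_irqs )
            return -ENOMEM;

        return 0;
    }

physdev_map_pirq() would then call something like vgic_ensure_spis(d, irq - 31)
before route_irq_to_guest(), since SPI n is vGIC line n - 32.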