Re: [Xen-devel] [RFC PATCH v3 07/18] xen/arm: ITS: implement hw_irq_controller for LPIs
On Fri, Jun 26, 2015 at 8:35 PM, Julien Grall <julien.grall@xxxxxxxxxx> wrote:
> Hi Vijay,
>
> On 26/06/2015 14:54, Vijay Kilari wrote:
>>
>> On Tue, Jun 23, 2015 at 8:02 PM, Julien Grall <julien.grall@xxxxxxxxxx>
>> wrote:
>>>
>>> Hi Vijay,
>>>
>>> On 22/06/15 13:01, vijay.kilari@xxxxxxxxx wrote:
>>>>
>>>> From: Vijaya Kumar K <Vijaya.Kumar@xxxxxxxxxxxxxxxxxx>
>>>>
>>>> Implements the hw_irq_controller APIs required
>>>> to handle LPIs
>>>
>>> This patch doesn't implement a hw_irq_controller for LPIs but just hacks
>>> around the current GICv3 host hw_irq_controller.
>>>
>>> As said on the previous version, the goal of hw_irq_controller is to
>>> keep things simple (i.e. little conditional code). Please introduce a
>>> separate hw_irq_controller for LPIs.
>>
>> If a new hw_irq_controller is introduced for LPIs, then it has to
>> be exported using some LPI structure which holds pointers to the
>> hw_irq_controller for the guest & host types, similar to gic_hw_ops.
>
> The interface is not set in stone, you are free to change what you want as
> long as we keep something clean and comprehensible. It's the same for the
> functions (I have in mind route_irq_to_guest).
>
> In this case, I would prefer to see 2 callbacks (one for the host, the
> other for the guest) which return the correct IRQ controller for a
> specific IRQ. I have in mind something like:
>
> get_guest_hw_irq_controller(unsigned int irq)
> {
>     if ( !is_lpi(irq) )
>         return &gicv3_guest_irq_controller;
>     else
>         return &gicv3_guest_lpi_controller;
> }
>
> Same for the host IRQ controller. So the selection of the IRQ controller
> would be hidden from gic.c and the code kept as generic as possible.
>
>>>> +    /*
>>>> +     * Make the above write visible to the redistributors.
>>>> +     * And yes, we're flushing exactly: One. Single. Byte.
>>>> +     * Humpf...
>>>> +     */
>>>> +    if ( gic_rdists->flags & RDIST_FLAGS_PROPBASE_NEEDS_FLUSHING )
>>>> +        clean_and_invalidate_dcache_va_range(cfg, sizeof(*cfg));
>>>> +    else
>>>> +        dsb(ishst);
>>>> +
>>>> +    /* Get collection id for this event id */
>>>> +    col = &its_dev->its->collections[virq % num_online_cpus()];
>>>
>>> This is fragile, you are assuming that num_online_cpus() will never
>>> change. Why don't you store the collection in every irq_desc?
>>
>> This will add an additional 8 bytes of overhead for each irq_desc.
>>
>> Also, is there a macro to get the actual number of processors in the
>> system? AIUI, nr_cpu_ids always returns 128.
>
> AFAIU, nr_cpu_ids should reflect the number of CPUs of the platform. x86
> correctly sets it when parsing the ACPI tables. So I think this is a bug
> in the ARM code.
>
> In fact, I wasn't able to find a place in the ARM code where this value
> is changed.

nr_cpu_ids is not changed in the case of ARM. I think this value has to be
updated in start_xen with the cpus value:

void __init start_xen(unsigned long boot_phys_offset,
                      unsigned long fdt_paddr,
                      unsigned long cpuid)
{
    ...
    cpus = smp_get_max_cpus();
    ...
}

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
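[Editorial note] Julien's suggested controller-selection callback can be fleshed out as below. This is only a sketch of the idea, not Xen code: `struct hw_irq_controller` here is a stand-in for Xen's real structure (which carries enable/disable/set_affinity/etc. callbacks), and `LPI_BASE` encodes the fact that LPI INTIDs start at 8192 in the GICv3 architecture. The point is that gic.c asks for the controller per IRQ and never needs conditional LPI logic itself.

```c
#include <stdbool.h>

/* First LPI INTID per the GICv3 architecture. */
#define LPI_BASE 8192

/* Stand-in for Xen's hw_irq_controller; the real one holds the
 * enable/disable/set_affinity/... callbacks for an IRQ class. */
struct hw_irq_controller {
    const char *typename;
};

static struct hw_irq_controller gicv3_guest_irq_controller = {
    .typename = "gic-v3",
};

static struct hw_irq_controller gicv3_guest_lpi_controller = {
    .typename = "gic-v3-lpi",
};

static bool is_lpi(unsigned int irq)
{
    return irq >= LPI_BASE;
}

/* Selection callback: callers get the right controller for the IRQ,
 * so the SPI/PPI-vs-LPI distinction stays out of generic code. */
static struct hw_irq_controller *
get_guest_hw_irq_controller(unsigned int irq)
{
    if ( !is_lpi(irq) )
        return &gicv3_guest_irq_controller;
    else
        return &gicv3_guest_lpi_controller;
}
```

A matching `get_host_hw_irq_controller` would follow the same pattern for the host-side controllers, as Julien notes.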