Re: [Xen-devel] [RFC PATCH v3 07/18] xen/arm: ITS: implement hw_irq_controller for LPIs
On Thu, 2015-07-02 at 17:51 +0530, Vijay Kilari wrote:
> On Mon, Jun 29, 2015 at 5:29 PM, Ian Campbell <ian.campbell@xxxxxxxxxx> wrote:
> > On Tue, 2015-06-23 at 15:32 +0100, Julien Grall wrote:
> > [...]
> >> > +{
> >> > +    struct its_collection *col;
> >> > +    struct its_device *its_dev = get_irq_device(desc);
> >> > +    u8 *cfg;
> >> > +    u32 virq = irq_to_virq(desc);
> >> > +
> >> > +    ASSERT(virq < its_dev->nr_lpis);
> >> > +
> >> > +    cfg = gic_rdists->prop_page + desc->irq - NR_GIC_LPI;
> >> > +    if ( enable )
> >> > +        *cfg |= LPI_PROP_ENABLED;
> >> > +    else
> >> > +        *cfg &= ~LPI_PROP_ENABLED;
> >> > +
> >> > +    /*
> >> > +     * Make the above write visible to the redistributors.
> >> > +     * And yes, we're flushing exactly: One. Single. Byte.
> >> > +     * Humpf...
> >> > +     */
> >> > +    if ( gic_rdists->flags & RDIST_FLAGS_PROPBASE_NEEDS_FLUSHING )
> >> > +        clean_and_invalidate_dcache_va_range(cfg, sizeof(*cfg));
> >> > +    else
> >> > +        dsb(ishst);
> >> > +
> >> > +    /* Get collection id for this event id */
> >> > +    col = &its_dev->its->collections[virq % num_online_cpus()];
> >>
> >> This is fragile: you are assuming that num_online_cpus() will never
> >> change. Why don't you store the collection in every irq_desc?
> >
> > The original Linux code upon which this is based doesn't seem to need
> > to look up the collection here; why is flushing needed for us but not
> > for Linux?
>
> We are writing to the LPI property table. Even the Linux code flushes it.

Sorry, I was referring to the collection look-up and INV, not the cache
flush, i.e. this bit:

+    /* Get collection id for this event id */
+    col = &its_dev->its->collections[virq % num_online_cpus()];
+    its_send_inv(its_dev, col, virq);

Linux doesn't seem to do that INV there.

Ian.
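For concreteness, below is a rough sketch of the per-irq_desc approach Julien
suggests: resolve the event's target collection once, when the LPI is routed,
and cache it so the enable/disable path no longer depends on num_online_cpus()
staying constant. This is not part of the posted patch; struct lpi_irq_cfg,
irq_get_lpi_cfg() and lpi_route_irq() are hypothetical names, while
its_send_inv() follows the call shown in the quoted hunk above.

    /*
     * Hypothetical per-IRQ state, stored alongside (or reachable from)
     * the irq_desc rather than recomputed on every config change.
     */
    struct lpi_irq_cfg {
        struct its_device *its_dev;    /* owning ITS device */
        struct its_collection *col;    /* cached target collection */
        u32 event;                     /* event ID within the device */
    };

    /*
     * Run once when the LPI is routed to a CPU.  The ITS MAPVI/MOVI
     * command that actually binds the event to the collection in
     * hardware is omitted here.
     */
    static void lpi_route_irq(struct irq_desc *desc, unsigned int cpu)
    {
        struct lpi_irq_cfg *cfg = irq_get_lpi_cfg(desc);

        cfg->col = &cfg->its_dev->its->collections[cpu];
    }

    static void lpi_set_config(struct irq_desc *desc, bool enable)
    {
        struct lpi_irq_cfg *cfg = irq_get_lpi_cfg(desc);
        u8 *prop = gic_rdists->prop_page + desc->irq - NR_GIC_LPI;

        if ( enable )
            *prop |= LPI_PROP_ENABLED;
        else
            *prop &= ~LPI_PROP_ENABLED;

        /* Make the property byte visible to the redistributor. */
        if ( gic_rdists->flags & RDIST_FLAGS_PROPBASE_NEEDS_FLUSHING )
            clean_and_invalidate_dcache_va_range(prop, sizeof(*prop));
        else
            dsb(ishst);

        /* Use the cached collection instead of virq % num_online_cpus(). */
        its_send_inv(cfg->its_dev, cfg->col, cfg->event);
    }

One upside of caching at routing time is that a later change in the set of
online CPUs only has to update the cached pointer in one well-defined place,
rather than silently changing the result of the modulo in every caller.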