Re: [Xen-devel] [PATCH v2 05/15] xen/arm: segregate GIC low level functionality
On 04/09/2014 12:34 PM, Vijay Kilari wrote:
>> GICD_SGI_TARGET_OTHERS will send an SGI to every CPU, even if the CPU is
>> not yet online (i.e. not registered by Xen). It's used during secondary boot
>> (cpu_up_send_sgi).
>
> cpumask_andnot(&all_others_mask,
>                &cpu_possible_map, cpumask_of(smp_processor_id()));
>
> In my understanding, with the above statement I am using
> cpu_possible_map (all possible CPUs), which should contain the masks of
> every possible CPU, so this part is fine.
>
> The issue could be in gic_send_sgi(), which always "and"s with
> cpu_online_map:
>
> static void gic_send_sgi(const cpumask_t *cpumask, enum gic_sgi sgi)
> {
>     unsigned int mask = 0;
>     cpumask_t online_mask;
>
>     ASSERT(sgi < 16); /* There are only 16 SGIs */
>
>     cpumask_and(&online_mask, cpumask, &cpu_online_map);
>     mask = gic_cpu_mask(&online_mask);
>
>     dsb(sy);
>
>     GICD[GICD_SGIR] = GICD_SGI_TARGET_LIST
>         | (mask << GICD_SGI_TARGET_SHIFT)
>         | sgi;
> }
>
> I think gic_send_sgi() should be passed the absolute mask value and the
> "and" with cpu_online_map should be dropped.

It won't work... gic_cpu_mask translates the set of CPU IDs into a set of
GIC CPU IDs. The mapping is initialized by gic_cpu_init, and that function
is only called when the CPU boots. So the mask is invalid for an offline
CPU...

>>> -static void do_sgi(struct cpu_user_regs *regs, int othercpu, enum gic_sgi
>>> sgi)
>>> +static void do_sgi(struct cpu_user_regs *regs, enum gic_sgi sgi)
>>
>> Why did you drop othercpu here?
>>
> othercpu is not used at all, and it is also computed from IAR field
> #defines, which are not required in this generic code.

This should not be part of this patch. Please send a separate patch for this
change.

Regards,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
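[Editor's note] To make Julien's point concrete, below is a minimal sketch of
why gic_cpu_mask() can only produce a meaningful value for CPUs that have
already run gic_cpu_init(). This is not the actual Xen implementation: the
per-CPU variable gic_cpu_id, the use of GICD_ITARGETSR, and the surrounding
details are assumptions for illustration only.

    /*
     * Sketch only -- assumed names, not the real Xen code.  Each CPU learns
     * its own GIC CPU interface bit by reading the distributor while it
     * boots; until then its per-CPU value holds nothing useful.
     */
    static DEFINE_PER_CPU(unsigned int, gic_cpu_id);   /* assumed variable */

    static void gic_cpu_init(void)
    {
        /* Runs on the booting CPU only: the distributor's target register
         * reports the bit of the CPU interface that performed the read. */
        this_cpu(gic_cpu_id) = GICD[GICD_ITARGETSR] & 0xff;
        /* ... rest of the per-CPU GIC setup ... */
    }

    static unsigned int gic_cpu_mask(const cpumask_t *cpumask)
    {
        unsigned int cpu, mask = 0;

        for_each_cpu ( cpu, cpumask )
            /* An offline CPU has never run gic_cpu_init(), so its entry is
             * still zero (or stale) and the SGI would silently miss it.
             * That is why gic_send_sgi() restricts the caller's mask to
             * cpu_online_map before calling this function. */
            mask |= per_cpu(gic_cpu_id, cpu);

        return mask;
    }

In other words, dropping the cpumask_and() in gic_send_sgi() would not make
the SGI reach CPUs that are still offline: the CPU-ID-to-GIC-ID translation
simply has no valid data for them yet.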