Re: [Xen-devel] [PATCH v3 13/16] xen/arm: Add support for GIC v3
On Wed, Apr 23, 2014 at 10:31 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Tue, 2014-04-15 at 16:47 +0530, vijay.kilari@xxxxxxxxx wrote:
>
>> +
>> +/* per-cpu re-distributor base */
>> +static DEFINE_PER_CPU(void __iomem*, rbase);
>
> Does this end up containing one of the gic.rdist_regions[i].map entries?
> Any reason to duplicate this in that map field as well? Can't each
> processor map it as it is initialised and store it here directly?
>
> I suppose we have plenty of ioremap space on v8, so nr_cpus*2*64k isn't
> too bad and there's no need to go for per-pcpu pagetables with a
> dedicated virtual address region for the redistributors.
>
The complete redistributor region is already mapped. Here we are just storing
each CPU's re-distributor base so that the per-CPU GICR SGI region can be
accessed.
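Roughly, this amounts to something like the sketch below (not the actual patch;
'map' and 'rdist_stride' come from the series, while the helper name and 'idx'
are made up for illustration):

static void gicv3_set_redist_base(unsigned int region, unsigned int idx)
{
    /* Each redistributor occupies rdist_stride bytes inside the region that
     * was ioremap'ed at probe time; record this CPU's slice so the GICR SGI
     * frame can later be reached via this_cpu(rbase). */
    this_cpu(rbase) = gic.rdist_regions[region].map + idx * gic.rdist_stride;
}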
>> +/* Wait for completion of a distributor change */
>> +static void gicv3_do_wait_for_rwp(void __iomem *base)
>> +{
>> + u32 val;
>> + u32 count = 1000000;
>> +
>> + do {
>> + val = readl_relaxed(base + GICD_CTLR);
>> + if ( !(val & GICD_CTLR_RWP) )
>> + break;
>> + cpu_relax();
>> + udelay(1);
>
> Ick. Is there no event when rwp changes, so we could do wfe here?
>
> Could we at least use NOW() and MILLISECS() to construct a delay of a
> known length?
Do you mean to use a timer handler? I thought this was simple enough.
[...]
>
>> + } while ( count-- );
>> +
>> + if ( !count )
>> + dprintk(XENLOG_WARNING, "RWP timeout\n");
>
> Shouldn't we panic here?
>
> And if we are going to panic, we might as well wait forever? (Perhaps
> with a warning after some number of iterations.
>
There is already a warning after 1 second. I think a warning is enough here;
this is not such a critical scenario. A sketch of the NOW()/MILLISECS() variant
is below.
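Something along these lines is presumably what was suggested (a sketch only,
reusing the register and macro names from the quoted hunk and keeping the
1-second bound mentioned above):

static void gicv3_do_wait_for_rwp(void __iomem *base)
{
    /* Bound the wait by wall-clock time instead of an iteration count. */
    s_time_t deadline = NOW() + MILLISECS(1000);

    while ( readl_relaxed(base + GICD_CTLR) & GICD_CTLR_RWP )
    {
        if ( NOW() > deadline )
        {
            dprintk(XENLOG_WARNING, "RWP timeout\n");
            return;
        }
        cpu_relax();
        udelay(1);
    }
}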
[...]
>> +static u16 gicv3_compute_target_list(int *base_cpu, const struct cpumask *mask,
>> + u64 cluster_id)
>> +{
>> + int cpu = *base_cpu;
>> + u64 mpidr = cpu_logical_map(cpu);
>> + u16 tlist = 0;
>> +
>> + while ( cpu < nr_cpu_ids )
>> + {
>> + /*
>> + * If we ever get a cluster of more than 16 CPUs, just
>> + * scream and skip that CPU.
>> + */
>> + if ( (mpidr & 0xff) >= 16 )
>> + {
>> + dprintk(XENLOG_WARNING, "Cluster has more than 16 CPUs\n");
>> + goto out;
>> + }
>> + tlist |= 1 << (mpidr & 0xf);
>> +
>> + cpu = cpumask_next(cpu, mask);
>
> Aren't you essentially opencoding for_each_cpu with this check and the
> while loop?
>
> The caller of this function already has a for_each_cpu in it. Can you
> explain the algorithm please.
Though the caller of this function uses for_each_cpu(), gicv3_compute_
target_list() advances *base_cpu to the last CPU it handled in the current
cluster, so each iteration of the caller's for_each_cpu() starts at the first
CPU of a new cluster. Essentially the loop runs once per cluster (see the
sketch after the quoted function below).
>
>> + if ( cpu == nr_cpu_ids )
>> + {
>> + cpu--;
>> + goto out;
>> + }
>> +
>> + mpidr = cpu_logical_map(cpu);
>> + if ( cluster_id != (mpidr & ~0xffUL) ) {
>> + cpu--;
>> + goto out;
>> + }
>> + }
>> +out:
>> + *base_cpu = cpu;
>> + return tlist;
>> +}
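For clarity, the caller would look roughly like this (a sketch, not the actual
patch; send_sgi() is a hypothetical stand-in for the ICC_SGI1R write):

static void gicv3_send_sgi_list(enum gic_sgi sgi, const cpumask_t *mask)
{
    int cpu;

    for_each_cpu(cpu, mask)
    {
        /* cpu is the first CPU of a (new) cluster here; the helper collects
         * the rest of that cluster into tlist and advances cpu past it. */
        u64 cluster_id = cpu_logical_map(cpu) & ~0xffUL;
        u16 tlist = gicv3_compute_target_list(&cpu, mask, cluster_id);

        send_sgi(cluster_id, tlist, sgi);  /* one ICC_SGI1R write per cluster */
    }
}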
>> +
>> + dsb();
>
> The dsb() macro needs a scope parameter for a while now. Which version
> of Xen is all this based on?
This is based on Stefano's tree, where your scope parameter patch is not present.
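Once rebased, the hunk would presumably become something like:

    dsb(sy);   /* full-system barrier, using the scope-parameter form of the macro */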
[..]
>> +int gicv_v3_init(struct domain *d)
>> +{
>> + int i;
>> + /*
>> + * Domain 0 gets the hardware address.
>> + * Guests get the virtual platform layout.
>> + */
>> + if ( d->domain_id == 0 )
>> + {
>> + d->arch.vgic.dbase = gic.dbase;
>> + d->arch.vgic.dbase_size = gic.dbase_size;
>> + for ( i = 0; i < gic.rdist_count; i++ )
>> + {
>> + d->arch.vgic.rbase[i] = gic.rdist_regions[i].rdist_base;
>> + d->arch.vgic.rbase_size[i] = gic.rdist_regions[i].rdist_base_size;
>> + }
>> + d->arch.vgic.rdist_stride = gic.rdist_stride;
>> + d->arch.vgic.rdist_count = gic.rdist_count;
>> + }
>> + else
>> + {
>
> Does this need a comment like "Currently guests see a GICv2"? Otherwise
> doesn't it need an rbase etc?
Yes, I will add such a comment and update this once I test DomU booting.
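For reference, the guest branch would presumably look something like this
(a sketch only; GUEST_GICD_BASE/GUEST_GICC_BASE assumed from the virtual
platform layout in public/arch-arm.h):

    else
    {
        /* Currently guests see a GICv2, so expose the fixed virtual platform
         * layout rather than the host redistributor addresses. */
        d->arch.vgic.dbase = GUEST_GICD_BASE;
        d->arch.vgic.cbase = GUEST_GICC_BASE;
    }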
- Vijay
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel