
Re: [Xen-devel] [PATCH v2 13/15] xen/arm: Add support for GIC v3



On Mon, Apr 14, 2014 at 1:57 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Fri, 2014-04-11 at 18:29 +0530, Vijay Kilari wrote:
>> On Thu, Apr 10, 2014 at 3:30 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> 
>> wrote:
>> >> +
>> >> +static void gic_clear_lr(int lr)
>> >> +{
>> >> +    gich_write_lr(lr, 0);
>> >> +}
>> >> +
>> >> +static void gic_read_lr(int lr, struct gic_lr *lr_reg)
>> >
>> > I can't find struct gic_lr anywhere.
>> Already defined in previous patch
>
> Which? I looked at the time and couldn't find it.
>
>> >
>> >> +{
>> >> +    u64 lrv;
>> >> +    lrv = gich_read_lr(lr);
>> >> +    lr_reg->pirq = (lrv >> GICH_LR_PHYSICAL_SHIFT) & GICH_LR_PHYSICAL_MASK;
>> >> +    lr_reg->virq = (lrv >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
>> >> +    lr_reg->priority = (lrv >> GICH_LR_PRIORITY_SHIFT) & GICH_LR_PRIORITY_MASK;
>> >> +    lr_reg->state    = (lrv >> GICH_LR_STATE_SHIFT) & GICH_LR_STATE_MASK;
>> >> +    lr_reg->hw_status = (lrv >> GICH_LR_HW_SHIFT) & GICH_LR_HW_MASK;
>> >> +    lr_reg->grp = (lrv >> GICH_LR_GRP_SHIFT) & GICH_LR_GRP_MASK;
>> >
>> > If you want to be able to do field-by-field accesses then please do what
>> > the page table code does and use a union and bit fields. See lpae_pt_t.
>> >
>> >         typedef union __packed {
>> >                 uint64_t bits;
>> >                 struct {
>> >                         unsigned long pirq:4;
>> >                         unsigned long virq:4;
>> >                 etc, including explicit padding
>> >                 };
>> >         } gicv3_lr;
>> >
>> > Then:
>> >         gicv3 lrv = {.bits = gich_read_lr(lr)};
>> >
>> The complexity is that the LR register is 64 bit in v3 and 32 bit in v2.
>> Even if I define it like the above for both v2 & v3, the generic code would
>> still need to access it based on the hw version. So I have made it without
>> bit-fields.
>
> How does the generic code access without knowing the size? And if it
> knows the size it can equally choose between gicv3_lr and gicv2_lr at
> the appropriate point.
There is one common struct gic_lr, shown below, which the respective GIC (v2/v3)
driver decodes into in its callback; the generic code then just accesses its
fields. A rough sketch of how the v3 callback could fill it in follows the struct.

struct gic_lr {
   uint32_t pirq;
   uint32_t virq;
   uint8_t priority;
   uint8_t state;
   uint8_t hw_status;
   uint8_t grp;
};
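
For illustration only, here is how Ian's union/bit-field suggestion could sit
behind that common struct in the v3 read_lr callback. The bit positions and
widths below are assumptions for the sake of the sketch, not taken from the
patch; gich_read_lr() and struct gic_lr are the names used above.

typedef union {
    uint64_t bits;
    struct {
        uint64_t virq:32;      /* virtual INTID */
        uint64_t pirq:13;      /* physical INTID */
        uint64_t res0_0:3;     /* explicit padding */
        uint64_t priority:8;
        uint64_t res0_1:4;     /* explicit padding */
        uint64_t grp:1;
        uint64_t hw:1;
        uint64_t state:2;
    };
} gicv3_lr;

static void gicv3_read_lr(int lr, struct gic_lr *lr_reg)
{
    gicv3_lr lrv = { .bits = gich_read_lr(lr) };

    lr_reg->pirq      = lrv.pirq;
    lr_reg->virq      = lrv.virq;
    lr_reg->priority  = lrv.priority;
    lr_reg->state     = lrv.state;
    lr_reg->hw_status = lrv.hw;
    lr_reg->grp       = lrv.grp;
}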

>
>> >> +      WRITE_SYSREG32((READ_SYSREG32(ICH_HCR_EL2) | flag), ICH_HCR_EL2);
>> >> +    else
>> >> +      WRITE_SYSREG32((READ_SYSREG32(ICH_HCR_EL2) & (~flag)), ICH_HCR_EL2);
>> >> +}
>> >> +
>> >> +    rdist_regs = xzalloc_array(struct rdist_region, gic.rdist_count);
>> >
>> > I thought I saw a comment at the top saying that only a single region
>> > was supported? Shouldn't this be checked somewhere, or even better
>> > fixed.
>> Yes, only rdist_region[0] is read from dt, which supports up to 32 cores.
>> One can add more regions if more than 32 cores are required.
>
> Something should error out if more than 32 cores are present then. Does
> it?
>
>> > Is there a limit on gic.rdist_count? How large is it? Can rdist_regs be
>> > a static array?
>> The rdist count is read from dt, so it is implementation defined.
>
> Surely there must be some architectural limit which sets the maximum
> size an implementation can use?

I fixed this in the next version. The GIC v3 driver will read all the
re-distributor regions from dt. However, I set MAX_RDIST_COUNT to 4 because
of the static allocation in the domain struct that holds these addresses.
A check is made in the GICv3 driver so that it errors out if the number of
regions in dt is more than 4 (a rough sketch of such a check follows the
vgic fields below). If we go for dynamic allocation we can remove this
constraint as well.

         paddr_t cbase; /* CPU base address */
+        /* GIC V3 addressing */
+        paddr_t dbase_size; /* Distributor base size */
+        paddr_t rbase[MAX_RDIST_COUNT];      /* Re-Distributor base address */
+        paddr_t rbase_size[MAX_RDIST_COUNT]; /* Re-Distributor size */
+        uint32_t rdist_stride;               /* Re-Distributor stride */
+        int rdist_count;                     /* No. of Re-Distributors */
     } vgic;
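
As a rough illustration of that check (not the actual patch; the helper name,
the "#redistributor-regions" property and the use of panic() are assumptions
for illustration), the probe path could look something like:

#define MAX_RDIST_COUNT 4

/* Hypothetical sketch: count the redistributor regions from dt and
 * refuse to continue if the static arrays above cannot hold them. */
static int __init gicv3_count_rdist_regions(struct dt_device_node *node)
{
    uint32_t count;

    /* Default to a single region if the property is absent */
    if ( !dt_property_read_u32(node, "#redistributor-regions", &count) )
        count = 1;

    if ( count > MAX_RDIST_COUNT )
        panic("GICv3: %u redistributor regions in dt exceed MAX_RDIST_COUNT (%u)\n",
              count, MAX_RDIST_COUNT);

    return count;
}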


>
> Ian.
>
