Re: [Xen-devel] [PATCH 2/3] xen/arm: vgic-v3: Correctly retrieve the vCPU associated to a re-distributor



On Mon, 2015-09-28 at 21:31 +0100, Julien Grall wrote:
> When the guest is accessing the re-distributor, Xen retrieves the base
> of the re-distributor using a mask based on the stride.
> 
> Although, when the stride contains multiple set, the corresponding mask

                                                 ^bits?

(Also, drop the "Although, ").

> will be computed incorrectly [1] and therefore giving invalid vCPU and
> offset:
> 
> (XEN) d0v0: vGICR: unknown gpa read address 000000008d130008
> (XEN) traps.c:2447:d0v1 HSR=0x93c08006 pc=0xffffffc00032362c
> gva=0xffffff80000b0008 gpa=0x0000008d130008
> 
> For instance if the region of re-distributor is starting at 0x8d100000
> and the stride is 0x30000, an access to the address 0x8d130008 should
> be valid and use the re-distributor of vCPU1 with an offset of 0x8.
> Although, Xen is returning the vCPU0 and an offset of 0x20008.
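
Not part of the patch -- just a tiny standalone C sketch, with made-up
variable names, to illustrate the arithmetic above: with a stride of 0x30000
the old mask-based computation lands on vCPU0 with an offset of 0x20008,
while computing relative to the start of the re-distributor region gives the
expected vCPU1 with an offset of 0x8.

#include <stdio.h>

int main(void)
{
    unsigned long long region_base = 0x8d100000ULL;
    unsigned long long stride      = 0x30000ULL;    /* two bits set: not a power of two */
    unsigned long long gpa         = 0x8d130008ULL; /* should hit vCPU1 at offset 0x8 */

    /* Old scheme: mask with (stride - 1), only valid for a power-of-two stride. */
    unsigned long long old_offset = gpa & (stride - 1);
    unsigned long long old_base   = gpa & ~(stride - 1);
    unsigned long long old_vcpu   = (old_base - region_base) / stride;

    /* New scheme: compute the vCPU and offset from the start of the region. */
    unsigned long long new_vcpu   = (gpa - region_base) / stride;
    unsigned long long new_offset = gpa - (region_base + new_vcpu * stride);

    printf("old: vCPU%llu, offset 0x%llx\n", old_vcpu, old_offset); /* vCPU0, 0x20008 */
    printf("new: vCPU%llu, offset 0x%llx\n", new_vcpu, new_offset); /* vCPU1, 0x8 */
    return 0;
}
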
> 
> I didn't find a way to replace the current computation of the mask with
> a valid. The only solution I have found is to pass the region in private

        ^one

> data of the handler. So we can directly get the offset from the
> beginning of the region and find the corresponding vCPU/offset in the
> re-distributor.
> 
> This is also make the code simpler and avoid fast/slow path.
> 
> [1] http://lists.xen.org/archives/html/xen-devel/2015-09/msg03372.html
> 
> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
> 
> ---
> Cc: shameerali.kolothum.thodi@xxxxxxxxxx
> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> 
>     This is technically a good candidate for Xen 4.6. Without it DOM0
>     may not be able to bring secondary CPU on platform using a stride
>     with multiple bit set [1]. Although, we don't support such platform
>     right now. So I would rather defer this change to Xen 4.6.1 and
>     avoid possible downside/bug in this patch.

Agreed.

Other than the text:

Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
> 
>     Shamaeerali, I've only tested quickly on the foundation model. Can
>     you give a try on your platform to check if it fixes DOM0 boot?
> 
>     [1] http://lists.xen.org/archives/html/xen-devel/2015-09/msg03372.html
> ---
>  xen/arch/arm/vgic-v3.c | 58 +++++++++++++++++---------------------------------
>  1 file changed, 20 insertions(+), 38 deletions(-)
> 
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index 6a4feb2..0a14184 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -593,55 +593,34 @@ write_ignore_32:
>      return 1;
>  }
>  
> -static inline struct vcpu *get_vcpu_from_rdist(paddr_t gpa,
> -                                               struct vcpu *v,
> -                                               uint32_t *offset)
> +static struct vcpu *get_vcpu_from_rdist(struct domain *d,
> +    const struct vgic_rdist_region *region,
> +    paddr_t gpa, uint32_t *offset)
>  {
> -    struct domain *d = v->domain;
> +    struct vcpu *v;
>      uint32_t stride = d->arch.vgic.rdist_stride;
> -    paddr_t base;
> -    int i, vcpu_id;
> -    struct vgic_rdist_region *region;
> -
> -    *offset = gpa & (stride - 1);
> -    base = gpa & ~((paddr_t)stride - 1);
> -
> -    /* Fast path: the VCPU is trying to access its re-distributor */
> -    if ( likely(v->arch.vgic.rdist_base == base) )
> -        return v;
> -
> -    /* Slow path: the VCPU is trying to access another re-distributor */
> -
> -    /*
> -     * Find the region where the re-distributor lives. For this purpose,
> -     * we look one region ahead as only MMIO range for redistributors
> -     * traps here.
> -     * Note: The region has been ordered during the GIC initialization
> -     */
> -    for ( i = 1; i < d->arch.vgic.nr_regions; i++ )
> -    {
> -        if ( base < d->arch.vgic.rdist_regions[i].base )
> -            break;
> -    }
> -
> -    region = &d->arch.vgic.rdist_regions[i - 1];
> -
> -    vcpu_id = region->first_cpu + ((base - region->base) / stride);
> +    unsigned int vcpu_id;
>  
> +    vcpu_id = region->first_cpu + ((gpa - region->base) / stride);
>      if ( unlikely(vcpu_id >= d->max_vcpus) )
>          return NULL;
>  
> -    return d->vcpu[vcpu_id];
> +    v = d->vcpu[vcpu_id];
> +
> +    *offset = gpa - v->arch.vgic.rdist_base;
> +
> +    return v;
>  }
>  
>  static int vgic_v3_rdistr_mmio_read(struct vcpu *v, mmio_info_t *info,
>                                      void *priv)
>  {
>      uint32_t offset;
> +    const struct vgic_rdist_region *region = priv;
>  
>      perfc_incr(vgicr_reads);
>  
> -    v = get_vcpu_from_rdist(info->gpa, v, &offset);
> +    v = get_vcpu_from_rdist(v->domain, region, info->gpa, &offset);
>      if ( unlikely(!v) )
>          return 0;
>  
> @@ -661,10 +640,11 @@ static int vgic_v3_rdistr_mmio_write(struct vcpu *v, mmio_info_t *info,
>                                       void *priv)
>  {
>      uint32_t offset;
> +    const struct vgic_rdist_region *region = priv;
>  
>      perfc_incr(vgicr_writes);
>  
> -    v = get_vcpu_from_rdist(info->gpa, v, &offset);
> +    v = get_vcpu_from_rdist(v->domain, region, info->gpa, &offset);
>      if ( unlikely(!v) )
>          return 0;
>  
> @@ -1212,10 +1192,12 @@ static int vgic_v3_domain_init(struct domain *d)
>       * redistributor is targeted.
>       */
>      for ( i = 0; i < d->arch.vgic.nr_regions; i++ )
> +    {
> +        struct vgic_rdist_region *region = &d->arch.vgic.rdist_regions[i];
> +
>          register_mmio_handler(d, &vgic_rdistr_mmio_handler,
> -            d->arch.vgic.rdist_regions[i].base,
> -            d->arch.vgic.rdist_regions[i].size,
> -            NULL);
> +                              region->base, region->size, region);
> +    }
>  
>      d->arch.vgic.ctlr = VGICD_CTLR_DEFAULT;
>  
