
Re: [Xen-devel] [Patch RFC 04/13] vt-d: Clear invalidation table in invalidation interrupt handler



>>> On 16.09.15 at 15:23, <quan.xu@xxxxxxxxx> wrote:
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1098,6 +1098,28 @@ static void _qi_msi_mask(struct iommu *iommu)
>  
>  static void _do_iommu_qi(struct iommu *iommu)
>  {
> +    unsigned long nr_dom, i;
> +    struct domain *d = NULL;
> +
> +    nr_dom = cap_ndoms(iommu->cap);
> +    i = find_first_bit(iommu->domid_bitmap, nr_dom);
> +    while ( i < nr_dom )
> +    {
> +        if ( iommu->domid_map[i] > 0 )

This is a pointless check when the bit was already found set. What
you instead need to consider are races with table entries getting
removed (unless following the suggestions made on the previous
patch already makes this impossible).

> +        {
> +            d = rcu_lock_domain_by_id(iommu->domid_map[i]);
> +            if ( d == NULL )
> +                continue;
> +
> +            if ( qi_table_pollslot(d) == qi_table_data(d) )

So qi_table_data() gets (non-atomically) incremented in the
previous patch when a new wait command gets issued. How are this
check and the zapping below safe against races, and against
out-of-order completion of invalidations?
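
Just as a sketch of one way this could be made race-free (names
hypothetical - e.g. a per-domain qi_table_lock(d), which the previous
patch would then also need to take around its increment): making the
issue side accounting and the reset here mutually exclusive closes
the window where a wait command gets counted between the comparison
and the zapping. Whether that's sufficient to deal with out-of-order
completion is a separate question.

    /* Issue side (previous patch), under the same lock: */
    spin_lock(&qi_table_lock(d));
    qi_table_data(d)++;
    spin_unlock(&qi_table_lock(d));

    /*
     * Completion side (here): comparison and reset are now atomic
     * with respect to the issue path.
     */
    spin_lock(&qi_table_lock(d));
    if ( qi_table_pollslot(d) == qi_table_data(d) )
    {
        qi_table_data(d) = 0;
        qi_table_pollslot(d) = 0;
    }
    spin_unlock(&qi_table_lock(d));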

Jan

> +            {
> +                qi_table_data(d) = 0;
> +                qi_table_pollslot(d) = 0;
> +            }
> +            rcu_unlock_domain(d);
> +        }
> +        i = find_next_bit(iommu->domid_bitmap, nr_dom, i+1);
> +    }
>  }
>  
>  static void do_iommu_qi_completion(unsigned long data)


