
Re: [Xen-devel] [PATCH v2 1/3] x86/physdev: factor out the code to allocate and map a pirq



On Wed, Apr 19, 2017 at 04:11:26PM +0100, Roger Pau Monne wrote:
[...]
> +    if ( *pirq_p < 0 )
> +    {
> +        if ( pirq )
> +        {
> +            dprintk(XENLOG_G_ERR, "dom%d: %d:%d already mapped to %d\n",
> +                    d->domain_id, *index, *pirq_p, pirq);
> +            if ( pirq < 0 )
> +            {
> +                ret = -EBUSY;
> +                goto done;
> +            }
> +        }
> +        else if ( msi->entry_nr > 1 && !msi->table_base )
> +        {
> +            if ( msi->entry_nr <= 0 || msi->entry_nr > 32 )
> +                ret = -EDOM;
> +            else if ( msi->entry_nr != 1 && !iommu_intremap )
> +                ret = -EOPNOTSUPP;
> +            else
> +            {
> +                while ( msi->entry_nr & (msi->entry_nr - 1) )
> +                    msi->entry_nr += msi->entry_nr & -msi->entry_nr;
> +                pirq = get_free_pirqs(d, msi->entry_nr);
> +                ret = 0;
> +                if ( pirq < 0 )
> +                {
> +                    while ( (msi->entry_nr >>= 1) > 1 )
> +                        if ( get_free_pirqs(d, msi->entry_nr) > 0 )
> +                            break;

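(For context: the while loop just before the first get_free_pirqs call rounds
entry_nr up to the next power of two by repeatedly adding the lowest set bit,
entry_nr & -entry_nr; e.g. 5 -> 6 -> 8, at which point 8 & 7 == 0 and the loop
exits.)
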
Maybe I'm missing something, but I think the fallback loop at the end of the
hunk above should be:

                        if ( (pirq = get_free_pirqs(d, msi->entry_nr)) > 0 )
                            break;

Otherwise the pirq returned by get_free_pirqs is not stored anywhere, so after
the loop pirq still holds the -ENOSPC from the first call even when a smaller
batch of PIRQs has been successfully allocated.
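
To make this concrete, here is a minimal standalone sketch (with a stubbed
get_free_pirqs that drops the domain argument and uses made-up numbers; not
the real Xen code):

    #include <errno.h>
    #include <stdio.h>

    /* Stub: pretend only batches of up to 4 contiguous PIRQs are free,
     * starting at PIRQ 16 (made-up numbers). */
    static int get_free_pirqs(int nr)
    {
        return nr <= 4 ? 16 : -ENOSPC;
    }

    int main(void)
    {
        int entry_nr = 16;
        int pirq = get_free_pirqs(entry_nr);        /* fails: -ENOSPC */

        if ( pirq < 0 )
        {
            while ( (entry_nr >>= 1) > 1 )
                /* The assignment below is the proposed fix; without it
                 * pirq keeps the initial -ENOSPC. */
                if ( (pirq = get_free_pirqs(entry_nr)) > 0 )
                    break;
        }

        /* Prints the allocated pirq (16) and entry_nr (4); with the
         * assignment dropped, pirq would still hold -ENOSPC here even
         * though the entry_nr == 4 allocation succeeded. */
        printf("pirq=%d entry_nr=%d\n", pirq, entry_nr);
        return 0;
    }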

Let me know whether this is indeed the case, so that I can send a patch to fix
the current code (which has the same issue).

Roger.
