
Re: [Xen-devel] [v4 11/17] vt-d: Add API to update IRTE when VT-d PI is used




> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Friday, July 24, 2015 11:28 PM
> To: Wu, Feng
> Cc: Andrew Cooper; Tian, Kevin; Zhang, Yang Z; xen-devel@xxxxxxxxxxxxx; Keir
> Fraser
> Subject: Re: [v4 11/17] vt-d: Add API to update IRTE when VT-d PI is used
> 
> >>> On 23.07.15 at 13:35, <feng.wu@xxxxxxxxx> wrote:
> > +int pi_update_irte(struct vcpu *v, struct pirq *pirq, uint8_t gvec)
> 
> More constification is possible here.
> 
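For what it's worth, a more constified prototype could look like the
following (just a sketch; it assumes callees such as
pirq_spin_lock_irq_desc() accept const pointers, which would need checking):

    /* Sketch only: the function never writes through v or pirq. */
    int pi_update_irte(const struct vcpu *v, const struct pirq *pirq,
                       uint8_t gvec);
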
> > +{
> > +    struct irq_desc *desc;
> > +    const struct msi_desc *msi_desc;
> > +    int remap_index;
> > +    int rc = 0;
> > +    const struct pci_dev *pci_dev;
> > +    const struct acpi_drhd_unit *drhd;
> > +    struct iommu *iommu;
> > +    struct ir_ctrl *ir_ctrl;
> > +    struct iremap_entry *iremap_entries = NULL, *p = NULL;
> > +    struct iremap_entry new_ire, old_ire;
> > +    const struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
> > +    unsigned long flags;
> > +    __uint128_t ret;
> > +
> > +    desc = pirq_spin_lock_irq_desc(pirq, NULL);
> > +    if ( !desc )
> > +        return -EINVAL;
> > +
> > +    msi_desc = desc->msi_desc;
> > +    if ( !msi_desc )
> > +    {
> > +        rc = -EBADSLT;
> > +        goto unlock_out;
> > +    }
> > +
> > +    pci_dev = msi_desc->dev;
> > +    if ( !pci_dev )
> > +    {
> > +        rc = -ENODEV;
> > +        goto unlock_out;
> > +    }
> > +
> > +    remap_index = msi_desc->remap_index;
> > +
> > +    /*
> > +     * For performance concern, we will store the 'iommu' pointer in
> > +     * 'struct msi_desc' in some other place, so we don't need to waste
> > +     * time searching it here. I will fix this soon.
> > +     */
> > +    drhd = acpi_find_matched_drhd_unit(pci_dev);
> > +    if ( !drhd )
> > +    {
> > +        rc = -ENODEV;
> > +        goto unlock_out;
> > +    }
> > +
> > +    iommu = drhd->iommu;
> > +    ir_ctrl = iommu_ir_ctrl(iommu);
> > +    if ( !ir_ctrl )
> > +    {
> > +        rc = -ENODEV;
> > +        goto unlock_out;
> > +    }
> > +
> > +    spin_unlock_irq(&desc->lock);
> > +
> > +    spin_lock_irqsave(&ir_ctrl->iremap_lock, flags);
> 
> So dropping the lock like this eliminates the lock nesting, but doesn't
> address my concern, namely that acpi_find_matched_drhd_unit() is
> (apparently pointlessly) being called with the lock held. As I think I
> said before - perhaps what you really want here is to hold
> pcidevs_lock (and maybe your caller(s) already do so, in which case
> you'd just want to add a respective [documenting] ASSERT()).
> 
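If the callers already hold pcidevs_lock, the documenting ASSERT() could be
as simple as this at the top of the function (sketch only, assuming the
global pcidevs_lock spinlock is what protects the pdev/MSI data here):

    /* Callers are expected to hold pcidevs_lock across this function. */
    ASSERT(spin_is_locked(&pcidevs_lock));
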
> Furthermore, having used spin_unlock_irq() right before, I can't see
> the point in then using spin_lock_irqsave() instead of just
> spin_lock_irq().
> 
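I.e. something along these lines, I think (sketch), since interrupts are
known to be enabled right after the spin_unlock_irq() above:

    spin_unlock_irq(&desc->lock);

    spin_lock_irq(&ir_ctrl->iremap_lock);
    /* ... update and flush the IRTE ... */
    spin_unlock_irq(&ir_ctrl->iremap_lock);
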
> > +    GET_IREMAP_ENTRY(ir_ctrl->iremap_maddr, remap_index, iremap_entries, p);
> > +
> > +    old_ire = new_ire = *p;
> > +
> > +    /* Setup/Update interrupt remapping table entry. */
> > +    setup_posted_irte(&new_ire, pi_desc, gvec);
> > +    ret = cmpxchg16b(p, &old_ire, &new_ire);
> > +
> > +    ASSERT(ret == *(__uint128_t *)&old_ire);
> > +
> > +    iommu_flush_cache_entry(p, sizeof(struct iremap_entry));
> 
> sizeof(*p) please.
> 
> > +    iommu_flush_iec_index(iommu, 0, remap_index);
> > +
> > +    if ( iremap_entries )
> > +        unmap_vtd_domain_page(iremap_entries);
> 
> The conditional comes way too late: Either GET_IREMAP_ENTRY()
> can produce NULL, in which case you're hosed above. Or it can't,
> in which case the check here is pointless.

I cannot find a case where GET_IREMAP_ENTRY() produces NULL for
"iremap_entries"; if it did, GET_IREMAP_ENTRY() itself would have a
bigger problem, right? So this check is not needed; maybe I can add
an ASSERT() right after GET_IREMAP_ENTRY() instead.
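
Something along these lines, I mean (sketch; the later
"if ( iremap_entries )" check would then be dropped and the unmap done
unconditionally):

    GET_IREMAP_ENTRY(ir_ctrl->iremap_maddr, remap_index, iremap_entries, p);
    ASSERT(iremap_entries);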

Thanks,
Feng

> 
> Jan

