
Re: [Xen-devel] [PATCH v9 21/28] ARM: vITS: handle MAPTI command



On Tue, 23 May 2017, Andre Przywara wrote:
> Hi,
> 
> On 23/05/17 00:39, Stefano Stabellini wrote:
> > On Thu, 11 May 2017, Andre Przywara wrote:
> >> @@ -556,6 +583,93 @@ static int its_handle_mapd(struct virt_its *its, uint64_t *cmdptr)
> >>      return ret;
> >>  }
> >>  
> >> +static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
> >> +{
> >> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
> >> +    uint32_t eventid = its_cmd_get_id(cmdptr);
> >> +    uint32_t intid = its_cmd_get_physical_id(cmdptr), _intid;
> >> +    uint16_t collid = its_cmd_get_collection(cmdptr);
> >> +    struct pending_irq *pirq;
> >> +    struct vcpu *vcpu = NULL;
> >> +    int ret = -1;
> > 
> > I think we need to check the eventid to be valid, don't we?
> 
> Yes, but this is done below as part of write_itte_locked(), and in fact
> already in read_itte_locked().
> Shall I add a comment about this?

No need, thanks
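
A minimal sketch of the bounds check those helpers are expected to do, for
reference (the structure and field names below are illustrative stand-ins,
not the actual Xen vITS code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Stand-in for the per-device state recorded at MAPD time. */
    struct sketch_its_device {
        uint32_t eventids;   /* number of events the guest's ITT can hold */
    };

    /*
     * read_itte_locked()/write_itte_locked() are expected to reject an
     * out-of-range event ID along these lines, so its_handle_mapti()
     * does not need a separate check of its own.
     */
    static bool sketch_eventid_in_range(const struct sketch_its_device *dev,
                                        uint32_t eventid)
    {
        return eventid < dev->eventids;
    }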

> 
> >> +    if ( its_cmd_get_command(cmdptr) == GITS_CMD_MAPI )
> >> +        intid = eventid;
> >> +
> >> +    spin_lock(&its->its_lock);
> >> +    /*
> >> +     * Check whether there is a valid existing mapping. If yes, behavior is
> >> +     * unpredictable, we choose to ignore this command here.
> >> +     * This makes sure we start with a pristine pending_irq below.
> >> +     */
> >> +    if ( read_itte_locked(its, devid, eventid, &vcpu, &_intid) &&
> >> +         _intid != INVALID_LPI )
> >> +    {
> >> +        spin_unlock(&its->its_lock);
> >> +        return -1;
> >> +    }
> >> +
> >> +    /* Enter the mapping in our virtual ITS tables. */
> >> +    if ( !write_itte_locked(its, devid, eventid, collid, intid, &vcpu) )
> >> +    {
> >> +        spin_unlock(&its->its_lock);
> >> +        return -1;
> >> +    }
> >> +
> >> +    spin_unlock(&its->its_lock);
> >> +
> >> +    /*
> >> +     * Connect this virtual LPI to the corresponding host LPI, which is
> >> +     * determined by the same device ID and event ID on the host side.
> >> +     * This returns us the corresponding, still unused pending_irq.
> >> +     */
> >> +    pirq = gicv3_assign_guest_event(its->d, its->doorbell_address,
> >> +                                    devid, eventid, vcpu, intid);
> >> +    if ( !pirq )
> >> +        goto out_remove_mapping;
> >> +
> >> +    vgic_init_pending_irq(pirq, intid);
> >> +
> >> +    /*
> >> +     * Now read the guest's property table to initialize our cached state.
> >> +     * It can't fire at this time, because it is not known to the host yet.
> >> +     * We don't need the VGIC VCPU lock here, because the pending_irq isn't
> >> +     * in the radix tree yet.
> >> +     */
> >> +    ret = update_lpi_property(its->d, pirq);
> >> +    if ( ret )
> >> +        goto out_remove_host_entry;
> >> +
> >> +    pirq->lpi_vcpu_id = vcpu->vcpu_id;
> >> +    /*
> >> +     * Mark this LPI as new, so any older (now unmapped) LPI in any LR
> >> +     * can be easily recognised as such.
> >> +     */
> >> +    set_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &pirq->status);
> >> +
> >> +    /*
> >> +     * Now insert the pending_irq into the domain's LPI tree, so that
> >> +     * it becomes live.
> >> +     */
> >> +    write_lock(&its->d->arch.vgic.pend_lpi_tree_lock);
> >> +    ret = radix_tree_insert(&its->d->arch.vgic.pend_lpi_tree, intid, pirq);
> >> +    write_unlock(&its->d->arch.vgic.pend_lpi_tree_lock);
> >> +
> >> +    if ( !ret )
> >> +        return 0;
> >> +
> >> +out_remove_host_entry:
> >> +    gicv3_remove_guest_event(its->d, its->doorbell_address, devid, eventid);
> >> +
> >> +out_remove_mapping:
> >> +    spin_lock(&its->its_lock);
> >> +    write_itte_locked(its, devid, eventid,
> >> +                      UNMAPPED_COLLECTION, INVALID_LPI, NULL);
> >> +    spin_unlock(&its->its_lock);
> >> +
> >> +    return ret;
> >> +}
> 
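
For context on the "becomes live" comment above: once the pending_irq is in
pend_lpi_tree, the LPI injection path can translate a virtual LPI number back
into its pending_irq with a radix tree lookup under the read lock. A rough
sketch of that lookup side, using the same tree and lock as in the patch (the
helper name here is illustrative, not an existing Xen function):

    /* Resolve a virtual LPI number to its pending_irq, or NULL if unmapped. */
    static struct pending_irq *sketch_lpi_to_pending_irq(struct domain *d,
                                                         uint32_t vlpi)
    {
        struct pending_irq *p;

        read_lock(&d->arch.vgic.pend_lpi_tree_lock);
        p = radix_tree_lookup(&d->arch.vgic.pend_lpi_tree, vlpi);
        read_unlock(&d->arch.vgic.pend_lpi_tree_lock);

        return p;
    }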
