
Re: [Xen-devel] [PATCH v9 19/28] ARM: vITS: handle MAPD command



Hi,

On 25/05/17 21:44, Stefano Stabellini wrote:
> On Thu, 25 May 2017, Julien Grall wrote:
>> On 25/05/2017 19:55, Stefano Stabellini wrote:
>>> On Wed, 24 May 2017, Andre Przywara wrote:
>>>> Hi,
>>>>
>>>> On 24/05/17 10:56, Julien Grall wrote:
>>>>> Hi Andre,
>>>>>
>>>>> On 05/24/2017 10:10 AM, Andre Przywara wrote:
>>>>>> On 17/05/17 19:07, Julien Grall wrote:
>>>>>>>>  /*
>>>>>>>>   * Lookup the address of the Interrupt Translation Table associated
>>>>>>>>   * with that device ID.
>>>>>>>> @@ -414,6 +429,133 @@ out_unlock:
>>>>>>>>      return ret;
>>>>>>>>  }
>>>>>>>>
>>>>>>>> +/* Must be called with the ITS lock held. */
>>>>>>>> +static int its_discard_event(struct virt_its *its,
>>>>>>>> +                             uint32_t vdevid, uint32_t vevid)
>>>>>>>> +{
>>>>>>>> +    struct pending_irq *p;
>>>>>>>> +    unsigned long flags;
>>>>>>>> +    struct vcpu *vcpu;
>>>>>>>> +    uint32_t vlpi;
>>>>>>>> +
>>>>>>>> +    ASSERT(spin_is_locked(&its->its_lock));
>>>>>>>> +
>>>>>>>> +    if ( !read_itte_locked(its, vdevid, vevid, &vcpu, &vlpi) )
>>>>>>>> +        return -ENOENT;
>>>>>>>> +
>>>>>>>> +    if ( vlpi == INVALID_LPI )
>>>>>>>> +        return -ENOENT;
>>>>>>>> +
>>>>>>>> +    /* Lock this VCPU's VGIC to make sure nobody is using the pending_irq. */
>>>>>>>> +    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
>>>>>>>
>>>>>>> There is an interesting issue happening with this code. You don't
>>>>>>> check the content of the memory provided by the guest. So a malicious
>>>>>>> guest could craft the memory in order to set up a mapping with a known
>>>>>>> vlpi and a different vCPU.
>>>>>>>
>>>>>>> This would lead to using the wrong lock here and corrupting the list.
>>>>>>
>>>>>> What about this:
>>>>>> Right now (mostly due to the requirements of the INVALL implementation)
>>>>>> we store the VCPU ID in our struct pending_irq, populated upon MAPTI.
>>>>>> So originally this was just for caching (INVALL being the only user of
>>>>>> this), but I was wondering if we should move the actual instance of
>>>>>> this information to pending_irq instead of relying on the collection ID
>>>>>> from the ITS table. So we would never need to look up and trust the ITS
>>>>>> tables for this information anymore. Later with the VGIC rework we will
>>>>>> need this field anyway (even for SPIs).
>>>>>>
>>>>>> I think this should solve this threat, where a guest can manipulate Xen
>>>>>> by crafting the tables. Tinkering with the other information stored in
>>>>>> the tables should not harm Xen; the guest would just shoot itself in
>>>>>> the foot.
>>>>>>
>>>>>> Does that make sense?
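
To illustrate the idea (just a sketch, not the actual patch: the field name
lpi_vcpu_id and the its->d domain pointer are assumptions made up for this
example):

struct pending_irq {
    /* ... existing fields (vLPI number, status, list heads, ...) ... */
    uint8_t lpi_vcpu_id;    /* hypothetical: VCPU this vLPI is routed to,
                             * written by Xen on MAPTI/MOVI */
};

With that in place the lock to take comes from Xen-owned data rather than
from anything the guest can scribble over (p being the pending_irq for the
event in question):

    vcpu = its->d->vcpu[p->lpi_vcpu_id];
    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
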
>>>>>
>>>>> I think so. If I understand correctly, with that solution we would not
>>>>> need to protect the memory provided by the guest?
>>>>
>>>> Well, it gets better (though also a bit scary):
>>>> Currently we use the guest's ITS tables to translate a DeviceID/EventID
>>>> pair to a vLPI/vCPU pair. Now there is this new
>>>> gicv3_its_get_event_pending_irq() function, which also takes an ITS and
>>>> a DeviceID/EventID pair and gives us a struct pending_irq.
>>>> And here we have both the vLPI number and the VCPU number in there
>>>> already, so actually we don't need read_itte() anymore. And if we don't
>>>> read, we don't need to write. And if we don't write, we don't need to
>>>> access guest memory. So this seems to ripple through and allows us to
>>>> possibly dump the guest memory tables altogether.
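
With that, its_discard_event() could shrink to something along these lines
(again only a sketch: the exact arguments of
gicv3_its_get_event_pending_irq(), the its->doorbell_address field and the
pending_irq field names used here are assumptions, not the real signatures):

/* Must be called with the ITS lock held. */
static int its_discard_event(struct virt_its *its,
                             uint32_t vdevid, uint32_t vevid)
{
    struct pending_irq *p;
    unsigned long flags;
    struct vcpu *vcpu;

    ASSERT(spin_is_locked(&its->its_lock));

    /*
     * Look up Xen's own pending_irq for this DeviceID/EventID instead of
     * reading the guest's ITT (the call shape here is an assumption).
     */
    p = gicv3_its_get_event_pending_irq(its->d, its->doorbell_address,
                                        vdevid, vevid);
    if ( !p || p->irq == INVALID_LPI )
        return -ENOENT;

    /*
     * The target VCPU now comes from Xen-owned data, so this is the
     * right lock even if the guest tampers with its tables.
     */
    vcpu = its->d->vcpu[p->lpi_vcpu_id];

    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);
    /* ... remove the vLPI from any inflight/LR queue, invalidate the ITTE ... */
    spin_unlock_irqrestore(&vcpu->arch.vgic.lock, flags);

    return 0;
}
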
>>>
>>> Sounds like a good idea to me for DeviceID/EventID to vLPI/vCPU
>>> translations.
>>>
>>>
>>>> Now we still use the collection table in guest memory, but I was
>>>> wondering if we could store the collection ID in the vcpu struct and use
>>>> some hashing scheme to do the reverse lookup. But that might be
>>>> something for some future cleanup / optimization series.
>>>
>>> Leaving the security angle aside for a moment, I would prefer to keep
>>> the guest memory accesses rather than adding another hashing scheme to
>>> Xen for collection IDs.
>>
>> The spec only requires you to implement max cpus + 1 collections. I don't
>> think a hashing scheme would be necessary here. It is a simple array
>> (1 byte per entry today).
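
For illustration, such a Xen-owned array could look roughly like this (a
sketch only, sized max_vcpus + 1; all names here are made up and both
helpers assume the ITS lock is held by the caller):

#define UNMAPPED_COLLECTION      ((uint8_t)~0)

struct virt_its {
    /* ... existing fields ... */
    unsigned int nr_collections;   /* hypothetical: max_vcpus + 1 */
    uint8_t *coll_table;           /* hypothetical: one byte per collection */
};

static int vits_map_collection(struct virt_its *its, unsigned int collid,
                               unsigned int vcpu_id)
{
    if ( collid >= its->nr_collections || vcpu_id >= its->d->max_vcpus )
        return -EINVAL;

    its->coll_table[collid] = vcpu_id;

    return 0;
}

static struct vcpu *vits_get_collection_vcpu(struct virt_its *its,
                                             unsigned int collid)
{
    if ( collid >= its->nr_collections ||
         its->coll_table[collid] == UNMAPPED_COLLECTION )
        return NULL;

    return its->d->vcpu[its->coll_table[collid]];
}
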
>>
>>>
>>> Going back to security: it looks like it should be possible to check for
>>> the validity of collection IDs without too much trouble?
>>
>> If we store everything in the pending_irq, the use of the collection would
>> be limited to a few commands (e.g. MOVI, MAPTI...). We don't much care if
>> the guest modifies the collection table as long as we check that the vCPU
>> is valid.
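
Concretely, that check could be as simple as the following (a sketch;
read_coll_table_entry() is a made-up helper standing in for whatever fetches
the entry from the guest's collection table):

static struct vcpu *get_vcpu_from_collection(struct virt_its *its,
                                             uint16_t collid)
{
    uint32_t vcpu_id;

    if ( !read_coll_table_entry(its, collid, &vcpu_id) )
        return NULL;

    /* Never trust the value: the table lives in guest memory. */
    if ( vcpu_id >= its->d->max_vcpus )
        return NULL;

    return its->d->vcpu[vcpu_id];
}
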
> 
> That's what I thought. In that case we might as well keep the info in
> guest memory.

Well, if we have a chance to drop guest memory accesses from the ITS
altogether, we should consider this.
But when trying to implement it I saw that it requires quite a few code
changes, which Julien suggested postponing until the rework, and I agree.
So I kept it as it is for now and added TODOs.

Cheers,
Andre
