
Re: [PATCH v4 2/5] xen/arm: ffa: Introduce VM to VM support


  • To: Julien Grall <julien@xxxxxxx>
  • From: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Date: Tue, 1 Apr 2025 07:45:52 +0000
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "jens.wiklander@xxxxxxxxxx" <jens.wiklander@xxxxxxxxxx>, Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>
  • Delivery-date: Tue, 01 Apr 2025 07:46:29 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v4 2/5] xen/arm: ffa: Introduce VM to VM support

Hi Julien,

> On 30 Mar 2025, at 23:26, Julien Grall <julien@xxxxxxx> wrote:
> 
> Hi Bertrand,
> 
> On 27/03/2025 08:25, Bertrand Marquis wrote:
>>> On 27 Mar 2025, at 00:14, Julien Grall <julien@xxxxxxx> wrote:
>>>> +static int32_t ffa_get_sp_partinfo(uint32_t *uuid, uint32_t *sp_count,
>>>> +                                   void *dst_buf, void *end_buf,
>>>> +                                   uint32_t dst_size)
>>>>  {
>>>>      int32_t ret;
>>>> +    uint32_t src_size, real_sp_count;
>>>> +    void *src_buf = ffa_rx;
>>>> +    uint32_t count = 0;
>>>> +
>>>> +    /* Do we have a RX buffer with the SPMC */
>>>> +    if ( !ffa_rx )
>>>> +        return FFA_RET_DENIED;
>>>> +
>>>> +    /* We need to use the RX buffer to receive the list */
>>>> +    spin_lock(&ffa_rx_buffer_lock);
>>>> +
>>>> +    ret = ffa_partition_info_get(uuid, 0, &real_sp_count, &src_size);
>>>> +    if ( ret )
>>>> +        goto out;
>>>> +
>>>> +    /* We now own the RX buffer */
>>>> +
>>>> +    /* We only support a 1.1 firmware version */
>>>> +    if ( src_size < sizeof(struct ffa_partition_info_1_0) )
>>>> +    {
>>>> +        ret = FFA_RET_NOT_SUPPORTED;
>>>> +        goto out_release;
>>>> +    }
>>>> +
>>>> +    for ( uint32_t sp_num = 0; sp_num < real_sp_count; sp_num++ )
>>> 
>>> What's the upper limit of real_sp_count? Asking just in case we need to 
>>> handle preemption.
>> In theory that would be 32k, but in practice the number of entries we can
>> receive is limited by the size of the exchange area we have with the
>> secure world.
>> This area is currently defined to be one page, and each entry is 8 bytes
>> when the firmware is 1.0 (24 bytes otherwise).
>> This gives an upper limit of around 500 entries.
>> Now I do think it is completely unrealistic to imagine a secure world with
>> 500 SPs, so if you are OK I would rather define an upper limit in Xen
>> (I would say 64 SPs) that can be changed in the code (through a define).
>> This call currently does not support preemption in the FF-A spec (and that
>> is something I will check for future versions), so I would have no solution
>> to "continue" it.
> 
>> Would the "define" solution be acceptable for now?
> 
> I think the define solution is acceptable for now and possibly longer. It is 
> an easy way to avoid dealing with preemption.

I will add a define with a default value of 64, which should fit for now (and
maybe longer).
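
Something along these lines, as a rough sketch (the define name and the chosen
error code are just what came to mind, not final):

/* Upper bound on the number of SPs we accept from the SPMC (sketch only). */
#define FFA_MAX_SP_COUNT 64

    /* In ffa_get_sp_partinfo(), once ffa_partition_info_get() succeeded: */
    if ( real_sp_count > FFA_MAX_SP_COUNT )
    {
        /* More SPs than we are willing to walk through without preemption. */
        ret = FFA_RET_NO_MEMORY;
        goto out_release;
    }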

> 
> [...]
> 
> 
>>>> +static uint32_t ffa_get_vm_count(void)
>>> 
>>> Is this meant to be called often? If so, can we instead have a counter that 
>>> will be incremented when the vTEE is first initialized and then decremented 
>>> when the VM is destroyed?
>> As of now we could have a global counter that we increase or decrease
>> when a domain version is negotiated and when a domain is destroyed.
>> We could also have some kind of global save of the result to be returned
>> to a guest.
>> But I did not do that because:
>> - CPU time required to update the list would be taken out of an FF-A call
>> to FFA_VERSION from a VM, which would require a global lock to protect
>> the data
> 
> I would have thought an atomic counter would be sufficient. Is there anything 
> you had in mind?

Very good idea. I only need to increase it when an FFA_VERSION has been
negotiated and decrease it on VM destruction.

I will do that.
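
Roughly something like this (the counter name is only illustrative):

/* Number of VMs that have negotiated an FF-A version (sketch only). */
static atomic_t ffa_vm_count = ATOMIC_INIT(0);

/* When FFA_VERSION negotiation completes for a guest: */
atomic_inc(&ffa_vm_count);

/* On domain destruction, if that domain had negotiated a version: */
atomic_dec(&ffa_vm_count);

/* ffa_get_vm_count() then becomes a lock-free read: */
return atomic_read(&ffa_vm_count);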

> 
>> - when we have filtering, the data will be per VM (which would make the
>> initial update more complex)
>> - coming up, we have a notion of UUID and filtering depending on the
>> requested UUID, which will make the global value usable only in a limited
>> number of cases.
>> I have 2 pending series on top of this one which would have to remove such
>> optimisations.
>> In the end this is definitely not something supposed to be called often
>> (the Linux driver calls it once during init).
> 
> I think it was a mistake for me to ask whether this is called often or not. 
> When it comes to security, what matters is whether a malicious guest could 
> indirectly call ffa_get_vm_count() and delay any work on the pCPU (Xen is not 
> preemptible).
> 
> We don't have to resolve this now. But this will need to be addressed before 
> we can consider FFA security supported. So we should at least add it to the 
> list of issues at the top of the file.

No, this is really useful and I will fix it, and also the partition info part,
because:
- all information is read-only and known when the VM is created (VM ID, max
vcpus and 64-bit or not)
- I do not need anything more than that to build the result
- if we can avoid taking a lock, this will make the code better.

So I will (see the sketch after this list):
- add an atomic counter to count the number of VMs with FF-A
- create a linked list of VM FF-A contexts
- add a context to the list when a version is negotiated
- put the info I need in the ffa ctx structure
- just go through the list to build the partinfo result
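
Here is a rough sketch of what I have in mind (field and variable names are
illustrative, and the protection of the list updates is still to be decided):

/* Sketch only, not final code. */
struct ffa_ctx {
    /* ... existing fields ... */
    /* The fields below are read-only once the FF-A version is negotiated. */
    uint16_t ffa_id;             /* VM ID reported in partition info */
    uint16_t nr_vcpus;           /* Execution context count */
    bool     is_64bit;           /* Used for the partition properties */
    struct list_head ctx_list;   /* Link in the global context list */
};

static LIST_HEAD(ffa_ctx_list);
static DEFINE_SPINLOCK(ffa_ctx_list_lock);  /* Taken only on add/remove */
static atomic_t ffa_vm_count = ATOMIC_INIT(0);

/* When a guest negotiates its FF-A version: */
spin_lock(&ffa_ctx_list_lock);
list_add_tail(&ctx->ctx_list, &ffa_ctx_list);
spin_unlock(&ffa_ctx_list_lock);
atomic_inc(&ffa_vm_count);

Building the partinfo result would then be a walk over ffa_ctx_list reading
only the read-only fields above, without taking domlist_read_lock; domain
destruction would do the reverse (list_del() under the same lock and
atomic_dec()).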

> 
>>> 
>>>> +{
>>>> +    struct domain *d = current->domain;
>>>> +    struct domain *dom;
>>> 
>>> NIT: "d" and "dom" are a bit too close. Could we rename "d" with "curr_d"?
>> I will go with curr_d and dest_d to make this clearer.
>>> 
>>>> +    uint32_t vm_count = 0;
>>>> +
>>>> +    /* Count the number of VM with FF-A support */
>>> 
>>> This comment implies this is including the current VM. But ...
>>> 
>>>> +    rcu_read_lock(&domlist_read_lock);
>>>> +    for_each_domain( dom )
>>>> +    {
>>>> +        struct ffa_ctx *vm = dom->arch.tee;
>>>> +
>>>> +        if ( dom != d && vm != NULL && vm->guest_vers != 0 )
>>> 
>>> ... here you are excluding it. Also, it sounds like this is supported by the 
>>> OS rather than the VM itself. Is that correct?
>> I have a comment to explain that one in a different series that I will put
>> here.
>> Basically, before 1.2 the spec was a bit blurry on whether or not the result
>> should include the calling VM, and in fact the Linux driver (before 1.2) was
>> ending with an error if its own data was included in the result, hence this
>> filter.
> 
> Thanks for the clarification. Just to clarify...
> 
>> I will add a comment for that.
> 
> ... will the comment be added in this patch?

Yes, I will add the comment in the next version of the patch.

> 
>>> 
>>>> +            vm_count++;
>>>> +    }
>>>> +    rcu_read_unlock(&domlist_read_lock);
>>>> +
>>>> +    return vm_count;
>>> 
>>> OOI, I guess this value is just used as a hint? Asking because the number 
>>> of domains can change at any point.
>> Definitely yes. The data is what it is when called but anything could change 
>> after.
>> This is mostly used as a hint by callers.
> 
> Does this mean we would always return a fixed number? Asking because this 
> would nicely solve the preemption problem in ffa_get_vm_count().

Well, the number changes when VMs are created or destroyed, but as explained
earlier I will modify the design to avoid the lock, using an atomic counter and
a linked list.

Now the problem of the potentially high number of VMs still stands, so I will
add a TODO for that and discuss with others to check if this could be solved in
the FF-A spec in the future.

Thanks a lot for the comments.

I will work on v4 for when you get back :-)

Cheers
Bertrand




 

