
Re: [PATCH v2 2/5] xen/arm: ffa: Introduce VM to VM support


  • To: Jens Wiklander <jens.wiklander@xxxxxxxxxx>
  • From: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Date: Fri, 21 Mar 2025 10:49:24 +0000
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>
  • Delivery-date: Fri, 21 Mar 2025 10:49:51 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Jens,

> On 20 Mar 2025, at 18:06, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
> 
> Hi,
> 
> On Thu, Mar 20, 2025 at 4:47 PM Bertrand Marquis
> <Bertrand.Marquis@xxxxxxx> wrote:
>> 
>> Hi Jens,
>> 
>> Thanks a lot for the review.
>> 
>>> On 20 Mar 2025, at 15:20, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
>>> 
>>> Hi Bertrand,
>>> 
>>> On Mon, Mar 10, 2025 at 3:51 PM Bertrand Marquis
>>> <bertrand.marquis@xxxxxxx> wrote:
>>>> 
>>>> Create a CONFIG_FFA_VM_TO_VM parameter to activate FFA communication
>>>> between VMs.
>>>> When activated, list VMs in the system with FF-A support in part_info_get.
>>>> 
>>>> WARNING: There is no filtering for now and all VMs are listed!
>>>> 
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
>>>> ---
>>>> Changes in v2:
>>>> - Switch ifdef to IS_ENABLED
>>>> - dom was not switched to d as requested by Jan because there is already
>>>> a variable d pointing to the current domain and it must not be
>>>> shadowed.
>>>> ---
>>>> xen/arch/arm/tee/Kconfig        |  11 +++
>>>> xen/arch/arm/tee/ffa_partinfo.c | 144 +++++++++++++++++++++++++-------
>>>> xen/arch/arm/tee/ffa_private.h  |  12 +++
>>>> 3 files changed, 137 insertions(+), 30 deletions(-)
>>>> 
>>>> diff --git a/xen/arch/arm/tee/Kconfig b/xen/arch/arm/tee/Kconfig
>>>> index c5b0f88d7522..88a4c4c99154 100644
>>>> --- a/xen/arch/arm/tee/Kconfig
>>>> +++ b/xen/arch/arm/tee/Kconfig
>>>> @@ -28,5 +28,16 @@ config FFA
>>>> 
>>>>         [1] https://developer.arm.com/documentation/den0077/latest
>>>> 
>>>> +config FFA_VM_TO_VM
>>>> +    bool "Enable FF-A between VMs (UNSUPPORTED)" if UNSUPPORTED
>>>> +    default n
>>>> +    depends on FFA
>>>> +    help
>>>> +      This option enables the use of FF-A between VMs.
>>>> +      This is experimental and there is no access control so any
>>>> +      guest can communicate with any other guest.
>>>> +
>>>> +      If unsure, say N.
>>>> +
>>>> endmenu
>>>> 
>>>> diff --git a/xen/arch/arm/tee/ffa_partinfo.c b/xen/arch/arm/tee/ffa_partinfo.c
>>>> index c0510ceb8338..7af1eca2d0c4 100644
>>>> --- a/xen/arch/arm/tee/ffa_partinfo.c
>>>> +++ b/xen/arch/arm/tee/ffa_partinfo.c
>>>> @@ -77,7 +77,23 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
>>>>    };
>>>>    uint32_t src_size, dst_size;
>>>>    void *dst_buf;
>>>> -    uint32_t ffa_sp_count = 0;
>>>> +    uint32_t ffa_vm_count = 0, ffa_sp_count = 0;
>>>> +
>>> 
>>> This function is getting quite large and it's hard to follow the
>>> different lock states. How about splitting it into various helper
>>> functions?
>> 
>> Yes I agree.
>> I will try to find a good way to split this in smaller chunks.
>> 
>>> 
>>>> +    if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) )
>>>> +    {
>>>> +        struct domain *dom;
>>>> +
>>>> +        /* Count the number of VMs with FF-A support */
>>> 
>>> Why do we need this now? Isn't it enough to count them below when we
>>> fill in dst_buf?
>> 
>> We need it in 3 places later in the code:
>> - to return the number of endpoints in case the flag is set (see final return)
>> - to check that there is enough space in the RX buffer
>> - to prevent going through the list of domains if none are to be listed anyway
> 
> Got it, thanks. The first point is important, the others are
> essentially optimizations.

With your previous point and me splitting things this will not be the case 
anymore.

> 
>> 
>>> 
>>>> +        rcu_read_lock(&domlist_read_lock);
>>>> +        for_each_domain( dom )
>>>> +        {
>>>> +            struct ffa_ctx *vm = dom->arch.tee;
>>>> +
>>>> +            if (dom != d && vm != NULL && vm->guest_vers != 0)
>>>> +                ffa_vm_count++;
>>>> +        }
>>>> +        rcu_read_unlock(&domlist_read_lock);
>>>> +    }
>>>> 
>>>>    /*
>>>>     * If the guest is v1.0, he does not get back the entry size so we must
>>>> @@ -127,33 +143,38 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
>>>> 
>>>>    dst_buf = ctx->rx;
>>>> 
>>>> -    if ( !ffa_rx )
>>>> +    /* If not supported, we have ffa_sp_count=0 */
>>>> +    if ( ffa_fw_supports_fid(FFA_PARTITION_INFO_GET) )
>>>>    {
>>>> -        ret = FFA_RET_DENIED;
>>>> -        goto out_rx_release;
>>>> -    }
>>>> +        if ( !ffa_rx )
>>>> +        {
>>>> +            ret = FFA_RET_DENIED;
>>>> +            goto out_rx_release;
>>>> +        }
>>>> 
>>>> -    spin_lock(&ffa_rx_buffer_lock);
>>>> +        spin_lock(&ffa_rx_buffer_lock);
>>>> 
>>>> -    ret = ffa_partition_info_get(uuid, 0, &ffa_sp_count, &src_size);
>>>> +        ret = ffa_partition_info_get(uuid, 0, &ffa_sp_count, &src_size);
>>>> 
>>>> -    if ( ret )
>>>> -        goto out_rx_hyp_unlock;
>>>> +        if ( ret )
>>>> +            goto out_rx_hyp_unlock;
>>>> 
>>>> -    /*
>>>> -     * ffa_partition_info_get() succeeded so we now own the RX buffer we
>>>> -     * share with the SPMC. We must give it back using ffa_hyp_rx_release()
>>>> -     * once we've copied the content.
>>>> -     */
>>>> +        /*
>>>> +         * ffa_partition_info_get() succeeded so we now own the RX buffer we
>>>> +         * share with the SPMC. We must give it back using ffa_hyp_rx_release()
>>>> +         * once we've copied the content.
>>>> +         */
>>>> 
>>>> -    /* we cannot have a size smaller than 1.0 structure */
>>>> -    if ( src_size < sizeof(struct ffa_partition_info_1_0) )
>>>> -    {
>>>> -        ret = FFA_RET_NOT_SUPPORTED;
>>>> -        goto out_rx_hyp_release;
>>>> +        /* we cannot have a size smaller than 1.0 structure */
>>>> +        if ( src_size < sizeof(struct ffa_partition_info_1_0) )
>>>> +        {
>>>> +            ret = FFA_RET_NOT_SUPPORTED;
>>>> +            goto out_rx_hyp_release;
>>>> +        }
>>>>    }
>>>> 
>>>> -    if ( ctx->page_count * FFA_PAGE_SIZE < ffa_sp_count * dst_size )
>>>> +    if ( ctx->page_count * FFA_PAGE_SIZE <
>>>> +         (ffa_sp_count + ffa_vm_count) * dst_size )
>>>>    {
>>>>        ret = FFA_RET_NO_MEMORY;
>>>>        goto out_rx_hyp_release;
>>>> @@ -182,25 +203,88 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
>>>>        }
>>>>    }
>>>> 
>>>> +    if ( ffa_fw_supports_fid(FFA_PARTITION_INFO_GET) )
>>>> +    {
>>>> +        ffa_hyp_rx_release();
>>>> +        spin_unlock(&ffa_rx_buffer_lock);
>>>> +    }
>>>> +
>>>> +    if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) && ffa_vm_count )
>>>> +    {
>>>> +        struct domain *dom;
>>>> +        uint32_t curr = 0;
>>>> +
>>>> +        /* add the VM information, if any */
>>>> +        rcu_read_lock(&domlist_read_lock);
>>>> +        for_each_domain( dom )
>>>> +        {
>>>> +            struct ffa_ctx *vm = dom->arch.tee;
>>>> +
>>>> +            /*
>>>> +             * We do not add the calling VM to the list, and we only
>>>> +             * include VMs with FF-A support
>>>> +             */
>>>> +            if (dom != d && vm != NULL && vm->guest_vers != 0)
>>>> +            {
>>>> +                /*
>>>> +                 * We do not have UUID info for VMs so use
>>>> +                 * the 1.0 structure so that we set UUIDs to
>>>> +                 * zero using memset
>>>> +                 */
>>>> +                struct ffa_partition_info_1_0 srcvm;
>>>> +
>>>> +                if ( curr == ffa_vm_count )
>>>> +                {
>>>> +                    /*
>>>> +                     * The number of domains changed since we counted them, we
>>>> +                     * can add new ones if there is enough space in the
>>>> +                     * destination buffer so check it or go out with an error.
>>>> +                     */
>>> 
>>> Why do we care if the number has changed? If it fits, all is good
>>> anyway and we're also updating ffa_vm_count with curr after the loop.
>> 
>> Well, this is exactly the point here: we check whether we have enough space
>> to return the data with the new domains and, if not, we return an error.
>> 
>> The point here is to make sure that if there are more domains than when we
>> first counted, we check that we have enough space; the update at the end
>> handles the case where there are actually fewer domains than when we first
>> counted.
>> 
>> If the number has changed, we only care because we need to make sure we
>> have enough space and because we need to return the right number to the
>> caller.
>> 
>> Please tell me how you would like me to change this because I do not
>> understand what I should modify here.
> 
> I was thinking something like:
>        for_each_domain( dom )
>        {
>            struct ffa_ctx *vm = dom->arch.tee;
> 
>            if (dom != d && vm != NULL && vm->guest_vers != 0)
>            {
>                ...
>                ffa_vm_count++;
>                if ( ctx->page_count * FFA_PAGE_SIZE <
>                     (ffa_sp_count + ffa_vm_count) * dst_size )
>                    ... error out
> 
>                ... copy data
>                dst_buf += dst_size;
>            }
>        }

Ok, got you.
I will simplify things and just check along the way whether there is enough
space to add an extra entry, instead of trying to handle the corner case
where the number changes in the middle. That will make things simpler.
Stay tuned for v3 ;-)

Cheers
Bertrand

> 
> Cheers,
> Jens
> 
>> 
>> Cheers
>> Bertrand
>> 
>>> 
>>> Cheers,
>>> Jens
>>> 
>>>> +                    ffa_vm_count++;
>>>> +                    if ( ctx->page_count * FFA_PAGE_SIZE <
>>>> +                         (ffa_sp_count + ffa_vm_count) * dst_size )
>>>> +                    {
>>>> +                        ret = FFA_RET_NO_MEMORY;
>>>> +                        rcu_read_unlock(&domlist_read_lock);
>>>> +                        goto out_rx_release;
>>>> +                    }
>>>> +                }
>>>> +
>>>> +                srcvm.id = ffa_get_vm_id(dom);
>>>> +                srcvm.execution_context = dom->max_vcpus;
>>>> +                srcvm.partition_properties = FFA_PART_VM_PROP;
>>>> +                if ( is_64bit_domain(dom) )
>>>> +                    srcvm.partition_properties |= FFA_PART_PROP_AARCH64_STATE;
>>>> +
>>>> +                memcpy(dst_buf, &srcvm, MIN(sizeof(srcvm), dst_size));
>>>> +
>>>> +                if ( dst_size > sizeof(srcvm) )
>>>> +                    memset(dst_buf + sizeof(srcvm), 0,
>>>> +                           dst_size - sizeof(srcvm));
>>>> +
>>>> +                dst_buf += dst_size;
>>>> +                curr++;
>>>> +            }
>>>> +        }
>>>> +        rcu_read_unlock(&domlist_read_lock);
>>>> +
>>>> +        /* the number of domains could have decreased since the initial count */
>>>> +        ffa_vm_count = curr;
>>>> +    }
>>>> +
>>>> +    goto out;
>>>> +
>>>> out_rx_hyp_release:
>>>>    ffa_hyp_rx_release();
>>>> out_rx_hyp_unlock:
>>>>    spin_unlock(&ffa_rx_buffer_lock);
>>>> out_rx_release:
>>>> -    /*
>>>> -     * The calling VM RX buffer only contains data to be used by the VM if the
>>>> -     * call was successful, in which case the VM has to release the buffer
>>>> -     * once it has used the data.
>>>> -     * If something went wrong during the call, we have to release the RX
>>>> -     * buffer back to the SPMC as the VM will not do it.
>>>> -     */
>>>> -    if ( ret != FFA_RET_OK )
>>>> -        ffa_rx_release(d);
>>>> +    ffa_rx_release(d);
>>>> out:
>>>>    if ( ret )
>>>>        ffa_set_regs_error(regs, ret);
>>>>    else
>>>> -        ffa_set_regs_success(regs, ffa_sp_count, dst_size);
>>>> +        ffa_set_regs_success(regs, ffa_sp_count + ffa_vm_count, dst_size);
>>>> }
>>>> 
>>>> static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
>>>> diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h
>>>> index c4cd65538908..bd6877d8c632 100644
>>>> --- a/xen/arch/arm/tee/ffa_private.h
>>>> +++ b/xen/arch/arm/tee/ffa_private.h
>>>> @@ -187,6 +187,18 @@
>>>> */
>>>> #define FFA_PARTITION_INFO_GET_COUNT_FLAG BIT(0, U)
>>>> 
>>>> +/*
>>>> + * Partition properties we give for a normal world VM:
>>>> + * - can send direct message but not receive them
>>>> + * - can handle indirect messages
>>>> + * - can receive notifications
>>>> + * The 32/64-bit flag is set depending on the VM
>>>> + */
>>>> +#define FFA_PART_VM_PROP    (FFA_PART_PROP_DIRECT_REQ_SEND | \
>>>> +                             FFA_PART_PROP_INDIRECT_MSGS | \
>>>> +                             FFA_PART_PROP_RECV_NOTIF | \
>>>> +                             FFA_PART_PROP_IS_PE_ID)
>>>> +
>>>> /* Flags used in calls to FFA_NOTIFICATION_GET interface  */
>>>> #define FFA_NOTIF_FLAG_BITMAP_SP        BIT(0, U)
>>>> #define FFA_NOTIF_FLAG_BITMAP_VM        BIT(1, U)
>>>> --
>>>> 2.47.1



 

