
Re: [PATCH v2 2/5] xen/arm: ffa: Introduce VM to VM support



Hi,

On Thu, Mar 20, 2025 at 4:47 PM Bertrand Marquis
<Bertrand.Marquis@xxxxxxx> wrote:
>
> Hi Jens,
>
> Thanks a lot for the review.
>
> > On 20 Mar 2025, at 15:20, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
> >
> > Hi Bertrand,
> >
> > On Mon, Mar 10, 2025 at 3:51 PM Bertrand Marquis
> > <bertrand.marquis@xxxxxxx> wrote:
> >>
> >> Create a CONFIG_FFA_VM_TO_VM parameter to activate FFA communication
> >> between VMs.
> >> When activated, list the VMs in the system with FF-A support in
> >> part_info_get.
> >>
> >> WARNING: There is no filtering for now and all VMs are listed!
> >>
> >> Signed-off-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
> >> ---
> >> Changes in v2:
> >> - Switch ifdef to IS_ENABLED
> >> - dom was not switched to d as requested by Jan because there is already
> >>  a variable d pointing to the current domain and it must not be
> >>  shadowed.
> >> ---
> >> xen/arch/arm/tee/Kconfig        |  11 +++
> >> xen/arch/arm/tee/ffa_partinfo.c | 144 +++++++++++++++++++++++++-------
> >> xen/arch/arm/tee/ffa_private.h  |  12 +++
> >> 3 files changed, 137 insertions(+), 30 deletions(-)
> >>
> >> diff --git a/xen/arch/arm/tee/Kconfig b/xen/arch/arm/tee/Kconfig
> >> index c5b0f88d7522..88a4c4c99154 100644
> >> --- a/xen/arch/arm/tee/Kconfig
> >> +++ b/xen/arch/arm/tee/Kconfig
> >> @@ -28,5 +28,16 @@ config FFA
> >>
> >>          [1] https://developer.arm.com/documentation/den0077/latest
> >>
> >> +config FFA_VM_TO_VM
> >> +    bool "Enable FF-A between VMs (UNSUPPORTED)" if UNSUPPORTED
> >> +    default n
> >> +    depends on FFA
> >> +    help
> >> +      This option enables the use of FF-A between VMs.
> >> +      This is experimental and there is no access control so any
> >> +      guest can communicate with any other guest.
> >> +
> >> +      If unsure, say N.
> >> +
> >> endmenu
> >>
> >> diff --git a/xen/arch/arm/tee/ffa_partinfo.c b/xen/arch/arm/tee/ffa_partinfo.c
> >> index c0510ceb8338..7af1eca2d0c4 100644
> >> --- a/xen/arch/arm/tee/ffa_partinfo.c
> >> +++ b/xen/arch/arm/tee/ffa_partinfo.c
> >> @@ -77,7 +77,23 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
> >>     };
> >>     uint32_t src_size, dst_size;
> >>     void *dst_buf;
> >> -    uint32_t ffa_sp_count = 0;
> >> +    uint32_t ffa_vm_count = 0, ffa_sp_count = 0;
> >> +
> >
> > This function is getting quite large and it's hard to follow the
> > different lock states. How about splitting it into various helper
> > functions?
>
> Yes I agree.
> I will try to find a good way to split this in smaller chunks.
>
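
To be a bit more concrete, I was imagining something along these lines
(just a sketch, all names made up):

    /* SPMC side: fill dst_buf with SP info, returns an FF-A error code */
    static int32_t get_sp_partinfo(const void *uuid, void *dst_buf,
                                   uint32_t dst_size, uint32_t *sp_count);
    /* VM side: fill dst_buf with VM info under the RCU read lock */
    static int32_t get_vm_partinfo(struct domain *d, void *dst_buf,
                                   uint32_t dst_size, uint32_t *vm_count);

with the ffa_rx_buffer_lock handling kept inside the first helper and
the RCU read lock inside the second, so ffa_handle_partition_info_get()
itself only deals with the guest RX buffer and the result registers.
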
> >
> >> +    if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) )
> >> +    {
> >> +        struct domain *dom;
> >> +
> >> +        /* Count the number of VMs with FF-A support */
> >
> > Why do we need this now? Isn't it enough to count them below when we
> > fill in dst_buf?
>
> We need it in 3 places later in the code:
> - to return the number of endpoints in case the flag is set (see final return)
> - to check that there is enough space in the RX buffer
> - to prevent going through the list of domains if none is to be listed anyway

Got it, thanks. The first point is important, the others are
essentially optimizations.
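
If you do keep the up-front count for the count-flag return, it could
go in a small helper, which would also fit the split mentioned above
(untested, name made up, body taken from your patch):

    static uint32_t ffa_vm_count_get(struct domain *d)
    {
        struct domain *dom;
        uint32_t count = 0;

        rcu_read_lock(&domlist_read_lock);
        for_each_domain( dom )
        {
            struct ffa_ctx *vm = dom->arch.tee;

            /* Skip the calling VM and VMs without FF-A support */
            if ( dom != d && vm != NULL && vm->guest_vers != 0 )
                count++;
        }
        rcu_read_unlock(&domlist_read_lock);

        return count;
    }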

>
> >
> >> +        rcu_read_lock(&domlist_read_lock);
> >> +        for_each_domain( dom )
> >> +        {
> >> +            struct ffa_ctx *vm = dom->arch.tee;
> >> +
> >> +            if ( dom != d && vm != NULL && vm->guest_vers != 0 )
> >> +                ffa_vm_count++;
> >> +        }
> >> +        rcu_read_unlock(&domlist_read_lock);
> >> +    }
> >>
> >>     /*
> >>      * If the guest is v1.0, he does not get back the entry size so we must
> >> @@ -127,33 +143,38 @@ void ffa_handle_partition_info_get(struct 
> >> cpu_user_regs *regs)
> >>
> >>     dst_buf = ctx->rx;
> >>
> >> -    if ( !ffa_rx )
> >> +    /* If not supported, we have ffa_sp_count=0 */
> >> +    if ( ffa_fw_supports_fid(FFA_PARTITION_INFO_GET) )
> >>     {
> >> -        ret = FFA_RET_DENIED;
> >> -        goto out_rx_release;
> >> -    }
> >> +        if ( !ffa_rx )
> >> +        {
> >> +            ret = FFA_RET_DENIED;
> >> +            goto out_rx_release;
> >> +        }
> >>
> >> -    spin_lock(&ffa_rx_buffer_lock);
> >> +        spin_lock(&ffa_rx_buffer_lock);
> >>
> >> -    ret = ffa_partition_info_get(uuid, 0, &ffa_sp_count, &src_size);
> >> +        ret = ffa_partition_info_get(uuid, 0, &ffa_sp_count, &src_size);
> >>
> >> -    if ( ret )
> >> -        goto out_rx_hyp_unlock;
> >> +        if ( ret )
> >> +            goto out_rx_hyp_unlock;
> >>
> >> -    /*
> >> -     * ffa_partition_info_get() succeeded so we now own the RX buffer we
> >> -     * share with the SPMC. We must give it back using ffa_hyp_rx_release()
> >> -     * once we've copied the content.
> >> -     */
> >> +        /*
> >> +         * ffa_partition_info_get() succeeded so we now own the RX buffer we
> >> +         * share with the SPMC. We must give it back using ffa_hyp_rx_release()
> >> +         * once we've copied the content.
> >> +         */
> >>
> >> -    /* we cannot have a size smaller than 1.0 structure */
> >> -    if ( src_size < sizeof(struct ffa_partition_info_1_0) )
> >> -    {
> >> -        ret = FFA_RET_NOT_SUPPORTED;
> >> -        goto out_rx_hyp_release;
> >> +        /* we cannot have a size smaller than 1.0 structure */
> >> +        if ( src_size < sizeof(struct ffa_partition_info_1_0) )
> >> +        {
> >> +            ret = FFA_RET_NOT_SUPPORTED;
> >> +            goto out_rx_hyp_release;
> >> +        }
> >>     }
> >>
> >> -    if ( ctx->page_count * FFA_PAGE_SIZE < ffa_sp_count * dst_size )
> >> +    if ( ctx->page_count * FFA_PAGE_SIZE <
> >> +         (ffa_sp_count + ffa_vm_count) * dst_size )
> >>     {
> >>         ret = FFA_RET_NO_MEMORY;
> >>         goto out_rx_hyp_release;
> >> @@ -182,25 +203,88 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
> >>         }
> >>     }
> >>
> >> +    if ( ffa_fw_supports_fid(FFA_PARTITION_INFO_GET) )
> >> +    {
> >> +        ffa_hyp_rx_release();
> >> +        spin_unlock(&ffa_rx_buffer_lock);
> >> +    }
> >> +
> >> +    if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) && ffa_vm_count )
> >> +    {
> >> +        struct domain *dom;
> >> +        uint32_t curr = 0;
> >> +
> >> +        /* add the VM information, if any */
> >> +        rcu_read_lock(&domlist_read_lock);
> >> +        for_each_domain( dom )
> >> +        {
> >> +            struct ffa_ctx *vm = dom->arch.tee;
> >> +
> >> +            /*
> >> +             * We do not add the calling VM to the list, only other VMs
> >> +             * with FF-A support.
> >> +             */
> >> +            if ( dom != d && vm != NULL && vm->guest_vers != 0 )
> >> +            {
> >> +                /*
> >> +                 * We do not have UUID info for VMs so use the 1.0
> >> +                 * structure and set the UUIDs to zero using memset.
> >> +                 */
> >> +                struct ffa_partition_info_1_0 srcvm;
> >> +
> >> +                if ( curr == ffa_vm_count )
> >> +                {
> >> +                    /*
> >> +                     * The number of domains changed since we counted them.
> >> +                     * We can add new ones if there is enough space in the
> >> +                     * destination buffer, so check it or bail out with an
> >> +                     * error.
> >> +                     */
> >
> > Why do we care if the number has changed? If it fits, all is good
> > anyway and we're also updating ffa_vm_count with curr after the loop.
>
> Well, this is exactly the point here: we check whether we have enough
> space to return the data with the new domains and, if not, we return an
> error.
>
> The point is to make sure that if there are more domains than when we
> first counted, we still have enough space; the update at the end handles
> the case where there are actually fewer domains than when we first
> counted.
>
> If the number has changed, we only care because we need to make sure we
> have enough space and because we need to return the right number to the
> caller.
>
> Please tell me how you would like me to change this because I do not
> understand what I should modify here.

I was thinking something like:
        for_each_domain( dom )
        {
            struct ffa_ctx *vm = dom->arch.tee;

            if (dom != d && vm != NULL && vm->guest_vers != 0)
            {
                ...
                ffa_vm_count++;
                if ( ctx->page_count * FFA_PAGE_SIZE <
                     (ffa_sp_count + ffa_vm_count) * dst_size )
                    ... error out

                ... copy data
                dst_buf += dst_size;
            }
        }
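
(Untested, of course. The nice property is that the size check and the
final returned count are both derived from what is actually copied, so
the up-front counting pass is only needed to serve the
FFA_PARTITION_INFO_GET_COUNT_FLAG case.)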

Cheers,
Jens

>
> Cheers
> Bertrand
>
> >
> > Cheers,
> > Jens
> >
> >> +                    ffa_vm_count++;
> >> +                    if ( ctx->page_count * FFA_PAGE_SIZE <
> >> +                         (ffa_sp_count + ffa_vm_count) * dst_size )
> >> +                    {
> >> +                        ret = FFA_RET_NO_MEMORY;
> >> +                        rcu_read_unlock(&domlist_read_lock);
> >> +                        goto out_rx_release;
> >> +                    }
> >> +                }
> >> +
> >> +                srcvm.id = ffa_get_vm_id(dom);
> >> +                srcvm.execution_context = dom->max_vcpus;
> >> +                srcvm.partition_properties = FFA_PART_VM_PROP;
> >> +                if ( is_64bit_domain(dom) )
> >> +                    srcvm.partition_properties |= FFA_PART_PROP_AARCH64_STATE;
> >> +
> >> +                memcpy(dst_buf, &srcvm, MIN(sizeof(srcvm), dst_size));
> >> +
> >> +                if ( dst_size > sizeof(srcvm) )
> >> +                    memset(dst_buf + sizeof(srcvm), 0,
> >> +                           dst_size - sizeof(srcvm));
> >> +
> >> +                dst_buf += dst_size;
> >> +                curr++;
> >> +            }
> >> +        }
> >> +        rcu_read_unlock(&domlist_read_lock);
> >> +
> >> +        /* the number of domains could have decreased since the initial count */
> >> +        ffa_vm_count = curr;
> >> +    }
> >> +
> >> +    goto out;
> >> +
> >> out_rx_hyp_release:
> >>     ffa_hyp_rx_release();
> >> out_rx_hyp_unlock:
> >>     spin_unlock(&ffa_rx_buffer_lock);
> >> out_rx_release:
> >> -    /*
> >> -     * The calling VM RX buffer only contains data to be used by the VM if the
> >> -     * call was successful, in which case the VM has to release the buffer
> >> -     * once it has used the data.
> >> -     * If something went wrong during the call, we have to release the RX
> >> -     * buffer back to the SPMC as the VM will not do it.
> >> -     */
> >> -    if ( ret != FFA_RET_OK )
> >> -        ffa_rx_release(d);
> >> +    ffa_rx_release(d);
> >> out:
> >>     if ( ret )
> >>         ffa_set_regs_error(regs, ret);
> >>     else
> >> -        ffa_set_regs_success(regs, ffa_sp_count, dst_size);
> >> +        ffa_set_regs_success(regs, ffa_sp_count + ffa_vm_count, dst_size);
> >> }
> >>
> >> static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
> >> diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h
> >> index c4cd65538908..bd6877d8c632 100644
> >> --- a/xen/arch/arm/tee/ffa_private.h
> >> +++ b/xen/arch/arm/tee/ffa_private.h
> >> @@ -187,6 +187,18 @@
> >>  */
> >> #define FFA_PARTITION_INFO_GET_COUNT_FLAG BIT(0, U)
> >>
> >> +/*
> >> + * Partition properties we give for a normal world VM:
> >> + * - can send direct messages but not receive them
> >> + * - can handle indirect messages
> >> + * - can receive notifications
> >> + * The 32/64-bit flag is set depending on the VM.
> >> + */
> >> +#define FFA_PART_VM_PROP    (FFA_PART_PROP_DIRECT_REQ_SEND | \
> >> +                             FFA_PART_PROP_INDIRECT_MSGS | \
> >> +                             FFA_PART_PROP_RECV_NOTIF | \
> >> +                             FFA_PART_PROP_IS_PE_ID)
> >> +
> >> /* Flags used in calls to FFA_NOTIFICATION_GET interface  */
> >> #define FFA_NOTIF_FLAG_BITMAP_SP        BIT(0, U)
> >> #define FFA_NOTIF_FLAG_BITMAP_VM        BIT(1, U)
> >> --
> >> 2.47.1
>
>



 

