Re: [PATCH 2/6] xen/arm: ffa: Track hypervisor notifications in a bitmap


  • To: Bertrand Marquis <bertrand.marquis@xxxxxxx>
  • From: Jens Wiklander <jens.wiklander@xxxxxxxxxx>
  • Date: Wed, 22 Apr 2026 11:34:27 +0200
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx, Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>
  • Delivery-date: Wed, 22 Apr 2026 09:34:44 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Bertrand,

On Fri, Apr 17, 2026 at 3:41 PM Bertrand Marquis
<bertrand.marquis@xxxxxxx> wrote:
>
> Hypervisor notifications are currently tracked with a dedicated
> buff_full_pending boolean. That state only represents a single HYP
> notification bit and keeps HYP bitmap handling tied to single-purpose
> bookkeeping.
>
> Replace the boolean with a hypervisor notification bitmap protected by
> notif_lock. INFO_GET reports pending when the bitmap is non-zero, GET
> returns and clears the HYP bitmap under the lock, and RX-buffer-full
> sets FFA_NOTIF_RX_BUFFER_FULL in the bitmap instead of updating
> separate state.
>
> Initialize and clear the bitmap during domain lifecycle handling, and
> use ctx->ffa_id for bitmap create and destroy so the notification state
> stays tied to the cached FF-A endpoint ID.
>
> No functional changes.
>
> Signed-off-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
> ---
>  xen/arch/arm/tee/ffa_notif.c   | 46 ++++++++++++++++++++++++++--------
>  xen/arch/arm/tee/ffa_private.h |  9 +++++--
>  2 files changed, 43 insertions(+), 12 deletions(-)
>
> diff --git a/xen/arch/arm/tee/ffa_notif.c b/xen/arch/arm/tee/ffa_notif.c
> index 07bc5cb3a430..d15119409a25 100644
> --- a/xen/arch/arm/tee/ffa_notif.c
> +++ b/xen/arch/arm/tee/ffa_notif.c
> @@ -94,8 +94,15 @@ void ffa_handle_notification_info_get(struct cpu_user_regs *regs)
>
>      notif_pending = test_and_clear_bool(ctx->notif.secure_pending);
>      if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) )
> +    {
>          notif_pending |= test_and_clear_bool(ctx->notif.vm_pending);
>
> +        spin_lock(&ctx->notif.notif_lock);
> +        if ( ctx->notif.hyp_pending )
> +            notif_pending = true;
> +        spin_unlock(&ctx->notif.notif_lock);

Isn't this a functional change? Before this patch, we didn't consider
ctx->notif.buff_full_pending here. Am I missing something?

> +    }
> +
>      if ( notif_pending )
>      {
>          /* A pending global notification for the guest */
> @@ -174,12 +181,17 @@ void ffa_handle_notification_get(struct cpu_user_regs *regs)
>              w6 = resp.a6;
>      }
>
> -    if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) &&
> -          flags & FFA_NOTIF_FLAG_BITMAP_HYP &&
> -          test_and_clear_bool(ctx->notif.buff_full_pending) )
> +    if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) )
>      {
> -        ACCESS_ONCE(ctx->notif.vm_pending) = false;
> -        w7 = FFA_NOTIF_RX_BUFFER_FULL;
> +        spin_lock(&ctx->notif.notif_lock);
> +
> +        if ( (flags & FFA_NOTIF_FLAG_BITMAP_HYP) && ctx->notif.hyp_pending )
> +        {
> +            w7 = ctx->notif.hyp_pending;
> +            ctx->notif.hyp_pending = 0;
> +        }
> +
> +        spin_unlock(&ctx->notif.notif_lock);
>      }
>
>      ffa_set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, w4, w5, w6, w7);
> @@ -207,12 +219,17 @@ int32_t ffa_handle_notification_set(struct cpu_user_regs *regs)
>  void ffa_raise_rx_buffer_full(struct domain *d)
>  {
>      struct ffa_ctx *ctx = d->arch.tee;
> +    uint32_t prev_bitmap;
>
>      if ( !ctx )
>          return;
>
> -    ACCESS_ONCE(ctx->notif.buff_full_pending) = true;
> -    if ( !test_and_set_bool(ctx->notif.vm_pending) )
> +    spin_lock(&ctx->notif.notif_lock);
> +    prev_bitmap = ctx->notif.hyp_pending;
> +    ctx->notif.hyp_pending |= FFA_NOTIF_RX_BUFFER_FULL;
> +    spin_unlock(&ctx->notif.notif_lock);
> +
> +    if ( !(prev_bitmap & FFA_NOTIF_RX_BUFFER_FULL) )

Do we need to check for FFA_NOTIF_RX_BUFFER_FULL? Isn't !prev_bitmap
more accurate, if any other bit would ever be used in the bitmap?

Cheers,
Jens

>          inject_notif_pending(d);
>  }
>  #endif
> @@ -426,12 +443,15 @@ void ffa_notif_init(void)
>
>  int ffa_notif_domain_init(struct domain *d)
>  {
> +    struct ffa_ctx *ctx = d->arch.tee;
>      int32_t res;
>
> +    spin_lock_init(&ctx->notif.notif_lock);
> +    ctx->notif.hyp_pending = 0;
> +
>      if ( fw_notif_enabled )
>      {
> -
> -        res = ffa_notification_bitmap_create(ffa_get_vm_id(d), d->max_vcpus);
> +        res = ffa_notification_bitmap_create(ctx->ffa_id, d->max_vcpus);
>          if ( res )
>              return -ENOMEM;
>      }
> @@ -441,10 +461,16 @@ int ffa_notif_domain_init(struct domain *d)
>
>  void ffa_notif_domain_destroy(struct domain *d)
>  {
> +    struct ffa_ctx *ctx = d->arch.tee;
> +
> +    spin_lock(&ctx->notif.notif_lock);
> +    ctx->notif.hyp_pending = 0;
> +    spin_unlock(&ctx->notif.notif_lock);
> +
>      /*
>       * Call bitmap_destroy even if bitmap create failed as the SPMC will
>       * return a DENIED error that we will ignore.
>       */
>      if ( fw_notif_enabled )
> -        ffa_notification_bitmap_destroy(ffa_get_vm_id(d));
> +        ffa_notification_bitmap_destroy(ctx->ffa_id);
>  }
> diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h
> index c291f32b56ff..5693772481ed 100644
> --- a/xen/arch/arm/tee/ffa_private.h
> +++ b/xen/arch/arm/tee/ffa_private.h
> @@ -340,9 +340,14 @@ struct ffa_ctx_notif {
>      bool vm_pending;
>
>      /*
> -     * True if domain has buffer full notification pending
> +     * Lock protecting the hypervisor-managed notification state.
>       */
> -    bool buff_full_pending;
> +    spinlock_t notif_lock;
> +
> +    /*
> +     * Bitmap of pending hypervisor notifications (for HYP bitmap queries).
> +     */
> +    uint32_t hyp_pending;
>  };
>
>  struct ffa_ctx {
> --
> 2.53.0
>

