
Re: [PATCH 2/6] xen/arm: ffa: Track hypervisor notifications in a bitmap


  • To: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • From: Jens Wiklander <jens.wiklander@xxxxxxxxxx>
  • Date: Thu, 23 Apr 2026 11:49:11 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>
  • Delivery-date: Thu, 23 Apr 2026 09:49:44 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Bertrand,

On Thu, Apr 23, 2026 at 9:30 AM Bertrand Marquis
<Bertrand.Marquis@xxxxxxx> wrote:
>
> Hi Jens,
>
> Thanks a lot for the review.
>
> > On 22 Apr 2026, at 11:34, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
> >
> > Hi Bertrand,
> >
> > On Fri, Apr 17, 2026 at 3:41 PM Bertrand Marquis
> > <bertrand.marquis@xxxxxxx> wrote:
> >>
> >> Hypervisor notifications are currently tracked with a dedicated
> >> buff_full_pending boolean. That state only represents a single HYP
> >> notification bit and keeps HYP bitmap handling tied to single-purpose
> >> bookkeeping.
> >>
> >> Replace the boolean with a hypervisor notification bitmap protected by
> >> notif_lock. INFO_GET reports pending when the bitmap is non-zero, GET
> >> returns and clears the HYP bitmap under the lock, and RX-buffer-full
> >> sets FFA_NOTIF_RX_BUFFER_FULL in the bitmap instead of updating
> >> separate state.
> >>
> >> Initialize and clear the bitmap during domain lifecycle handling, and
> >> use ctx->ffa_id for bitmap create and destroy so the notification state
> >> stays tied to the cached FF-A endpoint ID.
> >>
> >> No functional changes.
> >>
> >> Signed-off-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
> >> ---
> >> xen/arch/arm/tee/ffa_notif.c   | 46 ++++++++++++++++++++++++++--------
> >> xen/arch/arm/tee/ffa_private.h |  9 +++++--
> >> 2 files changed, 43 insertions(+), 12 deletions(-)
> >>
> >> diff --git a/xen/arch/arm/tee/ffa_notif.c b/xen/arch/arm/tee/ffa_notif.c
> >> index 07bc5cb3a430..d15119409a25 100644
> >> --- a/xen/arch/arm/tee/ffa_notif.c
> >> +++ b/xen/arch/arm/tee/ffa_notif.c
> >> @@ -94,8 +94,15 @@ void ffa_handle_notification_info_get(struct cpu_user_regs *regs)
> >>
> >>     notif_pending = test_and_clear_bool(ctx->notif.secure_pending);
> >>     if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) )
> >> +    {
> >>         notif_pending |= test_and_clear_bool(ctx->notif.vm_pending);
> >>
> >> +        spin_lock(&ctx->notif.notif_lock);
> >> +        if ( ctx->notif.hyp_pending )
> >> +            notif_pending = true;
> >> +        spin_unlock(&ctx->notif.notif_lock);
> >
> > Isn't this a functional change? Before this patch, we didn't consider
> > ctx->notif.buff_full_pending here. Am I missing something?
>
> We did consider it implicitly through vm_pending.
>
> This patch makes that cleaner by using hyp_pending for the Hypervisor
> framework notification itself. Previously, RX-buffer-full was made visible
> indirectly via vm_pending, and FFA_NOTIFICATION_INFO_GET
> cleared that summary state.
>
> As a result, the guest-visible pending indication could be lost before
> the Hypervisor notification was actually retrieved with
> FFA_NOTIFICATION_GET.
>
> With this change, the pending state is tracked in hyp_pending and is only
> cleared when the Hypervisor notifications are retrieved through
> FFA_NOTIFICATION_GET.
>
> I will reword the commit message to make that clearer.

Thanks

>
> >
> >> +    }
> >> +
> >>     if ( notif_pending )
> >>     {
> >>         /* A pending global notification for the guest */
> >> @@ -174,12 +181,17 @@ void ffa_handle_notification_get(struct cpu_user_regs *regs)
> >>             w6 = resp.a6;
> >>     }
> >>
> >> -    if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) &&
> >> -          flags & FFA_NOTIF_FLAG_BITMAP_HYP &&
> >> -          test_and_clear_bool(ctx->notif.buff_full_pending) )
> >> +    if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) )
> >>     {
> >> -        ACCESS_ONCE(ctx->notif.vm_pending) = false;
> >> -        w7 = FFA_NOTIF_RX_BUFFER_FULL;
> >> +        spin_lock(&ctx->notif.notif_lock);
> >> +
> >> +        if ( (flags & FFA_NOTIF_FLAG_BITMAP_HYP) && ctx->notif.hyp_pending )
> >> +        {
> >> +            w7 = ctx->notif.hyp_pending;
> >> +            ctx->notif.hyp_pending = 0;
> >> +        }
> >> +
> >> +        spin_unlock(&ctx->notif.notif_lock);
> >>     }
> >>
> >>     ffa_set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, w4, w5, w6, w7);
> >> @@ -207,12 +219,17 @@ int32_t ffa_handle_notification_set(struct cpu_user_regs *regs)
> >> void ffa_raise_rx_buffer_full(struct domain *d)
> >> {
> >>     struct ffa_ctx *ctx = d->arch.tee;
> >> +    uint32_t prev_bitmap;
> >>
> >>     if ( !ctx )
> >>         return;
> >>
> >> -    ACCESS_ONCE(ctx->notif.buff_full_pending) = true;
> >> -    if ( !test_and_set_bool(ctx->notif.vm_pending) )
> >> +    spin_lock(&ctx->notif.notif_lock);
> >> +    prev_bitmap = ctx->notif.hyp_pending;
> >> +    ctx->notif.hyp_pending |= FFA_NOTIF_RX_BUFFER_FULL;
> >> +    spin_unlock(&ctx->notif.notif_lock);
> >> +
> >> +    if ( !(prev_bitmap & FFA_NOTIF_RX_BUFFER_FULL) )
> >
> > Do we need to check for FFA_NOTIF_RX_BUFFER_FULL? Isn't !prev_bitmap
> > more accurate, if any other bit would ever be used in the bitmap?
>
> I would keep the bit-specific check here, if that is OK with you.
>
> This function is about raising the RX buffer full notification, so I
> think it is clearer to check whether that bit was newly pended rather
> than whether the whole Hypervisor bitmap was previously empty.
>
> Using !prev_bitmap would also make the condition depend on
> unrelated Hypervisor notifications being pending.
>
> Tell me if you are OK with that.

I see your point; it's fine.

Cheers,
Jens

>
> Cheers
> Bertrand
>



 

