Re: [PATCH 2/6] xen/arm: ffa: Track hypervisor notifications in a bitmap
Hi Jens,
> On 23 Apr 2026, at 11:49, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
>
> Hi Bertrand,
>
> On Thu, Apr 23, 2026 at 9:30 AM Bertrand Marquis
> <Bertrand.Marquis@xxxxxxx> wrote:
>>
>> Hi Jens,
>>
>> Thanks a lot for the review.
>>
>>> On 22 Apr 2026, at 11:34, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
>>>
>>> Hi Bertrand,
>>>
>>> On Fri, Apr 17, 2026 at 3:41 PM Bertrand Marquis
>>> <bertrand.marquis@xxxxxxx> wrote:
>>>>
>>>> Hypervisor notifications are currently tracked with a dedicated
>>>> buff_full_pending boolean. That state only represents a single HYP
>>>> notification bit and keeps HYP bitmap handling tied to single-purpose
>>>> bookkeeping.
>>>>
>>>> Replace the boolean with a hypervisor notification bitmap protected by
>>>> notif_lock. INFO_GET reports pending when the bitmap is non-zero, GET
>>>> returns and clears the HYP bitmap under the lock, and RX-buffer-full
>>>> sets FFA_NOTIF_RX_BUFFER_FULL in the bitmap instead of updating
>>>> separate state.
>>>>
>>>> Initialize and clear the bitmap during domain lifecycle handling, and
>>>> use ctx->ffa_id for bitmap create and destroy so the notification state
>>>> stays tied to the cached FF-A endpoint ID.
>>>>
>>>> No functional changes.
>>>>
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
>>>> ---
>>>> xen/arch/arm/tee/ffa_notif.c | 46 ++++++++++++++++++++++++++--------
>>>> xen/arch/arm/tee/ffa_private.h | 9 +++++--
>>>> 2 files changed, 43 insertions(+), 12 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/tee/ffa_notif.c b/xen/arch/arm/tee/ffa_notif.c
>>>> index 07bc5cb3a430..d15119409a25 100644
>>>> --- a/xen/arch/arm/tee/ffa_notif.c
>>>> +++ b/xen/arch/arm/tee/ffa_notif.c
>>>> @@ -94,8 +94,15 @@ void ffa_handle_notification_info_get(struct cpu_user_regs *regs)
>>>>
>>>> notif_pending = test_and_clear_bool(ctx->notif.secure_pending);
>>>> if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) )
>>>> + {
>>>> notif_pending |= test_and_clear_bool(ctx->notif.vm_pending);
>>>>
>>>> + spin_lock(&ctx->notif.notif_lock);
>>>> + if ( ctx->notif.hyp_pending )
>>>> + notif_pending = true;
>>>> + spin_unlock(&ctx->notif.notif_lock);
>>>
>>> Isn't this a functional change? Before this patch, we didn't consider
>>> ctx->notif.buff_full_pending here. Am I missing something?
>>
>> We did consider it implicitly through vm_pending.
>>
>> This patch makes that cleaner by using hyp_pending for the Hypervisor
>> framework notification itself. Previously, RX-buffer-full was made visible
>> indirectly via vm_pending, and FFA_NOTIFICATION_INFO_GET
>> cleared that summary state.
>>
>> As a result, the guest-visible pending indication could be lost before
>> the Hypervisor notification was actually retrieved with
>> FFA_NOTIFICATION_GET.
>>
>> With this change, the pending state is tracked in hyp_pending and is only
>> cleared when the Hypervisor notifications are retrieved through
>> FFA_NOTIFICATION_GET.
>>
>> I will reword the commit message to make that clearer.
>
> Thanks
>
>>
>>>
>>>> + }
>>>> +
>>>> if ( notif_pending )
>>>> {
>>>> /* A pending global notification for the guest */
>>>> @@ -174,12 +181,17 @@ void ffa_handle_notification_get(struct cpu_user_regs *regs)
>>>> w6 = resp.a6;
>>>> }
>>>>
>>>> - if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) &&
>>>> - flags & FFA_NOTIF_FLAG_BITMAP_HYP &&
>>>> - test_and_clear_bool(ctx->notif.buff_full_pending) )
>>>> + if ( IS_ENABLED(CONFIG_FFA_VM_TO_VM) )
>>>> {
>>>> - ACCESS_ONCE(ctx->notif.vm_pending) = false;
>>>> - w7 = FFA_NOTIF_RX_BUFFER_FULL;
>>>> + spin_lock(&ctx->notif.notif_lock);
>>>> +
>>>> +    if ( (flags & FFA_NOTIF_FLAG_BITMAP_HYP) && ctx->notif.hyp_pending )
>>>> + {
>>>> + w7 = ctx->notif.hyp_pending;
>>>> + ctx->notif.hyp_pending = 0;
>>>> + }
>>>> +
>>>> + spin_unlock(&ctx->notif.notif_lock);
>>>> }
>>>>
>>>> ffa_set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, w4, w5, w6, w7);
>>>> @@ -207,12 +219,17 @@ int32_t ffa_handle_notification_set(struct cpu_user_regs *regs)
>>>> void ffa_raise_rx_buffer_full(struct domain *d)
>>>> {
>>>> struct ffa_ctx *ctx = d->arch.tee;
>>>> + uint32_t prev_bitmap;
>>>>
>>>> if ( !ctx )
>>>> return;
>>>>
>>>> - ACCESS_ONCE(ctx->notif.buff_full_pending) = true;
>>>> - if ( !test_and_set_bool(ctx->notif.vm_pending) )
>>>> + spin_lock(&ctx->notif.notif_lock);
>>>> + prev_bitmap = ctx->notif.hyp_pending;
>>>> + ctx->notif.hyp_pending |= FFA_NOTIF_RX_BUFFER_FULL;
>>>> + spin_unlock(&ctx->notif.notif_lock);
>>>> +
>>>> + if ( !(prev_bitmap & FFA_NOTIF_RX_BUFFER_FULL) )
>>>
>>> Do we need to check for FFA_NOTIF_RX_BUFFER_FULL? Isn't !prev_bitmap
>>> more accurate, if any other bit would ever be used in the bitmap?
>>
>> I would keep the bit-specific check here, if that is OK with you.
>>
>> This function is about raising the RX buffer full notification,
>> so I think it is clearer to check whether that bit was newly
>> pended rather than whether the whole Hypervisor bitmap
>> was previously empty.
>>
>> Using !prev_bitmap would also make the condition depend on
>> unrelated Hypervisor notifications being pending.
>>
>> Let me know if you are OK with that.
>
> I see your point, it's fine.
In fact, while working on this I ended up with something a bit more generic
due to your patch 6 findings. In the end I no longer raise depending on
prev_bitmap, but only on whether a notification was already raised, since
what changed no longer depends on hyp_pending.
I am working on v2; it should be clearer then :-)
Cheers
Bertrand
>
> Cheers,
> Jens
>
>>
>> Cheers
>> Bertrand