
Re: [PATCH 2/4] xen/arm: ffa: Cache SP partition info at init


  • To: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • From: Jens Wiklander <jens.wiklander@xxxxxxxxxx>
  • Date: Mon, 2 Mar 2026 11:17:31 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>
  • Delivery-date: Mon, 02 Mar 2026 10:17:53 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Bertrand,

On Mon, Mar 2, 2026 at 10:58 AM Bertrand Marquis
<Bertrand.Marquis@xxxxxxx> wrote:
>
> Hi Jens,
>
> > On 2 Mar 2026, at 10:32, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
> >
> > Hi Bertrand,
> >
> > On Mon, Mar 2, 2026 at 9:51 AM Bertrand Marquis
> > <Bertrand.Marquis@xxxxxxx> wrote:
> >>
> >> Hi Jens,
> >>
> >>> On 27 Feb 2026, at 11:39, Jens Wiklander <jens.wiklander@xxxxxxxxxx> 
> >>> wrote:
> >>>
> >>> Hi Bertrand,
> >>>
> >>> On Wed, Feb 25, 2026 at 11:02 AM Bertrand Marquis
> >>> <bertrand.marquis@xxxxxxx> wrote:
> >>>>
> >>>> FFA_PARTITION_INFO_GET currently queries the SPMC on each call and walks 
> >>>> the
> >>>> RX buffer every time. The SP list is expected to be static, so repeated
> >>>> firmware calls and validation are unnecessary.
> >>>>
> >>>> Cache the SPMC partition-info list at init time, keeping only secure
> >>>> endpoints, and reuse the cached entries for SP count and partition-info
> >>>> responses. Initialize the VM create/destroy subscriber lists from the 
> >>>> cached
> >>>> list and free the cache on init failure.
> >>>>
> >>>> SP partition info now reflects the init-time snapshot and will not 
> >>>> change.
> >>>>
> >>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
> >>>> ---
> >>>> xen/arch/arm/tee/ffa_partinfo.c | 205 +++++++++++++++++++++-----------
> >>>> 1 file changed, 138 insertions(+), 67 deletions(-)
> >>>>
[snip]
> >>>> +static bool ffa_sp_list_cache_init(const void *buf, uint32_t count,
> >>>> +                                   uint32_t fpi_size)
> >>>> +{
> >>>> +    const uint8_t *src = buf;
> >>>> +    uint32_t secure_count = 0;
> >>>> +    uint32_t n, idx = 0;
> >>>> +    bool warned = false;
> >>>> +
> >>>> +    if ( fpi_size < sizeof(struct ffa_partition_info_1_0) ||
> >>>> +         fpi_size >= FFA_PAGE_SIZE )
> >>>> +        return false;
> >>>
> >>> Would it make sense to check that fpi_size is well aligned with struct
> >>> ffa_partition_info_1_0? If it's an odd size, we'll make unaligned
> >>> accesses below with memcpy(). But perhaps that's a bit much. The SPMC
> >>> isn't supposed to provide garbage.
> >>
> >> memcpy() should prevent issues even if accesses are not aligned.
> >> If we had this test, we could not return an error to the SPMC, so we
> >> would have to generate one for the caller. I think it is simpler to
> >> handle the non-aligned case this way, as we do not expect the SPMC to
> >> generate it.
> >> Tell me if you agree.
> >
> > We dereference fpi below, and depending on compiler flags and pointer
> > types, memcpy() might not be safe with unaligned pointers.
> > From 6.3.2.3 Pointers, paragraph 7, in the C standard:
> > "A pointer to an object type may be converted to a pointer to a
> > different object type. If the
> > resulting pointer is not correctly aligned for the referenced type,
> > the behavior is
> > undefined."
> >
> > I've seen past examples where the compiler optimized memcpy() in a way
> > that breaks with unaligned pointers.
> >
> > We don't expect the test above to fail, but if it does we will not use
> > the secure firmware. I think refusing unexpected sizes is even
> > simpler. It should make any eventual errors much easier to find.
>
> In the FF-A spec, the size can grow, which is why there is a size field.
> FF-A expects that we either ignore the extra content or copy it without
> interpreting it.
> I think we should not depend on any particular alignment, so we need to
> make sure we do the copy in a way that is robust to unaligned data.
>
> >
> > So my question above is whether it's worth checking that fpi_size is
> > well-aligned, or if it's so unlikely that we don't need to consider
> > it.
>
> I think we need to find a solution where we handle things properly even
> if the size is not aligned.

I agree, that's the most robust.

> I thought using memcpy would protect against
> that, but maybe we need a stronger solution to ensure that it works
> even if the data is unaligned.

memcpy() can be used, but the pointer values must not pass through a struct
ffa_partition_info_1_0 pointer or similar.

Cheers,
Jens


