
Re: [PATCH 03/12] xen/arm: ffa: Harden shm page parsing


  • To: Jens Wiklander <jens.wiklander@xxxxxxxxxx>
  • From: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Date: Mon, 9 Feb 2026 15:10:29 +0000
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>
  • Delivery-date: Mon, 09 Feb 2026 15:11:56 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 03/12] xen/arm: ffa: Harden shm page parsing

Hi Jens,

> On 9 Feb 2026, at 15:26, Bertrand Marquis <Bertrand.Marquis@xxxxxxx> wrote:
> 
> Hi Jens,
> 
>> On 9 Feb 2026, at 10:31, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
>> 
>> Hi Bertrand,
>> 
>> On Tue, Feb 3, 2026 at 6:38 PM Bertrand Marquis
>> <bertrand.marquis@xxxxxxx> wrote:
>>> 
>>> get_shm_pages() uses unchecked address arithmetic and does not enforce
>>> alignment, so malformed descriptors can cause overflow or slip through
>>> validation. The reclaim path also repeats handle-to-shm-mem conversion
>>> in multiple places, duplicating error handling.
>>> 
>>> Harden page parsing and reclaim handling:
>>> - add ffa_safe_addr_add() and use it to detect address overflows
>>> - enforce alignment checks in get_shm_pages() and return FF-A errors
>>> - introduce ffa_secure_reclaim() and use it for MEM_RECLAIM and teardown
>>> - simplify ffa_mem_share() argument handling and allow max page count
>>> 
>>> Functional impact: invalid or misaligned memory ranges now fail earlier
>>> with proper error codes; behavior for valid descriptors is unchanged.
>>> 
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@xxxxxxx>
>>> ---
>>> xen/arch/arm/tee/ffa_private.h | 11 +++++++
>>> xen/arch/arm/tee/ffa_shm.c     | 57 +++++++++++++++++-----------------
>>> 2 files changed, 40 insertions(+), 28 deletions(-)
>>> 
>>> diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h
>>> index b625f1c72914..58562d8e733c 100644
>>> --- a/xen/arch/arm/tee/ffa_private.h
>>> +++ b/xen/arch/arm/tee/ffa_private.h
>>> @@ -632,4 +632,15 @@ static inline void ffa_uuid_set(struct ffa_uuid *id, uint32_t val0,
>>>    id->val[1] = ((uint64_t)val3 << 32U) | val2;
>>> }
>>> 
>>> +/*
>>> + * Common overflow-safe helper to verify that adding a number of pages to an
>>> + * address will not wrap around.
>>> + */
>>> +static inline bool ffa_safe_addr_add(uint64_t addr, uint64_t pages)
>>> +{
>>> +    uint64_t off = pages * FFA_PAGE_SIZE;
>>> +
>>> +    return (off / FFA_PAGE_SIZE) == pages && addr <= UINT64_MAX - off;
>>> +}
>>> +
>>> #endif /*__FFA_PRIVATE_H__*/
>>> diff --git a/xen/arch/arm/tee/ffa_shm.c b/xen/arch/arm/tee/ffa_shm.c
>>> index 90800e44a86a..4c0b45cde6ee 100644
>>> --- a/xen/arch/arm/tee/ffa_shm.c
>>> +++ b/xen/arch/arm/tee/ffa_shm.c
>>> @@ -96,16 +96,14 @@ struct ffa_shm_mem {
>>>    struct page_info *pages[];
>>> };
>>> 
>>> -static int32_t ffa_mem_share(uint32_t tot_len, uint32_t frag_len,
>>> -                             register_t addr, uint32_t pg_count,
>>> -                             uint64_t *handle)
>>> +static int32_t ffa_mem_share(uint32_t tot_len, uint64_t *handle)
>>> {
>>>    struct arm_smccc_1_2_regs arg = {
>>>        .a0 = FFA_MEM_SHARE_64,
>>>        .a1 = tot_len,
>>> -        .a2 = frag_len,
>>> -        .a3 = addr,
>>> -        .a4 = pg_count,
>>> +        .a2 = tot_len,
>>> +        .a3 = 0,
>>> +        .a4 = 0,
>>>    };
>>>    struct arm_smccc_1_2_regs resp;
>>> 
>>> @@ -131,12 +129,16 @@ static int32_t ffa_mem_share(uint32_t tot_len, uint32_t frag_len,
>>>    }
>>> }
>>> 
>>> -static int32_t ffa_mem_reclaim(uint32_t handle_lo, uint32_t handle_hi,
>>> -                               uint32_t flags)
>>> +static int32_t ffa_secure_reclaim(struct ffa_shm_mem *shm, uint32_t flags)
>> 
>> I agree with moving the uint64_to_regpair() call into this function,
>> but I'm puzzled by the new name. What's secure?
> 
> This is to distinguish it from reclaim for VM-to-VM sharing in the future, as here
> the reclaim is requested from the secure world.
> 
> But in fact, to be consistent, I should also have renamed ffa_mem_share to
> ffa_secure_share.
> 
> Would you be ok with that?

Looking at this a bit more, we usually use spmc rather than secure.

Would you be ok if I rename both of those to the following (a quick sketch of the result is below)?
ffa_spmc_share
ffa_spmc_reclaim
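
To make it concrete, here is a minimal sketch of what the renamed reclaim helper would look like; the body is the same as ffa_secure_reclaim() in this patch, only the name changes (and ffa_mem_share() would likewise become ffa_spmc_share() with its body unchanged):

static int32_t ffa_spmc_reclaim(struct ffa_shm_mem *shm, uint32_t flags)
{
    register_t handle_hi;
    register_t handle_lo;

    if ( !ffa_fw_supports_fid(FFA_MEM_RECLAIM) )
        return FFA_RET_NOT_SUPPORTED;

    /* FFA_MEM_RECLAIM expects the 64-bit handle split across two registers. */
    uint64_to_regpair(&handle_hi, &handle_lo, shm->handle);

    return ffa_simple_call(FFA_MEM_RECLAIM, handle_lo, handle_hi, flags, 0);
}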

Cheers
Bertrand

> 
>> 
>>> {
>>> +    register_t handle_hi;
>>> +    register_t handle_lo;
>>> +
>>>    if ( !ffa_fw_supports_fid(FFA_MEM_RECLAIM) )
>>>        return FFA_RET_NOT_SUPPORTED;
>>> 
>>> +    uint64_to_regpair(&handle_hi, &handle_lo, shm->handle);
>>> +
>>>    return ffa_simple_call(FFA_MEM_RECLAIM, handle_lo, handle_hi, flags, 0);
>>> }
>>> 
>>> @@ -145,7 +147,7 @@ static int32_t ffa_mem_reclaim(uint32_t handle_lo, uint32_t handle_hi,
>>> * this function fails then the caller is still expected to call
>>> * put_shm_pages() as a cleanup.
>>> */
>>> -static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
>>> +static int32_t get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
>>>                         const struct ffa_address_range *range,
>>>                         uint32_t range_count)
>>> {
>>> @@ -156,17 +158,26 @@ static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
>>>    p2m_type_t t;
>>>    uint64_t addr;
>>>    uint64_t page_count;
>>> +    uint64_t gaddr;
>>> 
>>>    for ( n = 0; n < range_count; n++ )
>>>    {
>>>        page_count = ACCESS_ONCE(range[n].page_count);
>>>        addr = ACCESS_ONCE(range[n].address);
>>> +
>>> +        if ( !IS_ALIGNED(addr, FFA_PAGE_SIZE) )
>>> +            return FFA_RET_INVALID_PARAMETERS;
>>> +
>>>        for ( m = 0; m < page_count; m++ )
>>>        {
>>>            if ( pg_idx >= shm->page_count )
>>>                return FFA_RET_INVALID_PARAMETERS;
>>> 
>>> -            gfn = gaddr_to_gfn(addr + m * FFA_PAGE_SIZE);
>>> +            if ( !ffa_safe_addr_add(addr, m) )
>>> +                return FFA_RET_INVALID_PARAMETERS;
>>> +
>>> +            gaddr = addr + m * FFA_PAGE_SIZE;
>>> +            gfn = gaddr_to_gfn(gaddr);
>>>            shm->pages[pg_idx] = get_page_from_gfn(d, gfn_x(gfn), &t,
>>>                                                  P2M_ALLOC);
>>>            if ( !shm->pages[pg_idx] )
>>> @@ -180,7 +191,7 @@ static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
>>> 
>>>    /* The ranges must add up */
>>>    if ( pg_idx < shm->page_count )
>>> -            return FFA_RET_INVALID_PARAMETERS;
>>> +        return FFA_RET_INVALID_PARAMETERS;
>>> 
>>>    return FFA_RET_OK;
>>> }
>>> @@ -198,15 +209,11 @@ static void put_shm_pages(struct ffa_shm_mem *shm)
>>> 
>>> static bool inc_ctx_shm_count(struct domain *d, struct ffa_ctx *ctx)
>>> {
>>> -    bool ret = true;
>>> +    bool ret = false;
>>> 
>>>    spin_lock(&ctx->lock);
>>> 
>>> -    if ( ctx->shm_count >= FFA_MAX_SHM_COUNT )
>>> -    {
>>> -        ret = false;
>>> -    }
>>> -    else
>>> +    if ( ctx->shm_count < FFA_MAX_SHM_COUNT )
>>>    {
>>>        /*
>>>         * If this is the first shm added, increase the domain reference
>>> @@ -217,6 +224,7 @@ static bool inc_ctx_shm_count(struct domain *d, struct ffa_ctx *ctx)
>>>            get_knownalive_domain(d);
>>> 
>>>        ctx->shm_count++;
>>> +        ret = true;
>>>    }
>>> 
>>>    spin_unlock(&ctx->lock);
>>> @@ -251,7 +259,7 @@ static struct ffa_shm_mem *alloc_ffa_shm_mem(struct domain *d,
>>>    struct ffa_ctx *ctx = d->arch.tee;
>>>    struct ffa_shm_mem *shm;
>>> 
>>> -    if ( page_count >= FFA_MAX_SHM_PAGE_COUNT )
>>> +    if ( page_count > FFA_MAX_SHM_PAGE_COUNT )
>>>        return NULL;
>>>    if ( !inc_ctx_shm_count(d, ctx) )
>>>        return NULL;
>>> @@ -367,7 +375,7 @@ static int share_shm(struct ffa_shm_mem *shm)
>>>        init_range(addr_range, pa);
>>>    }
>>> 
>>> -    ret = ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
>> 
>> Please remove frag_len from share_shm() since it's not needed any longer.
> 
> Ack, I will remove it in v2.
> 
> Cheers
> Bertrand
> 
>> 
>> Cheers,
>> Jens
>> 
>>> +    ret = ffa_mem_share(tot_len, &shm->handle);
>>> 
>>> out:
>>>    ffa_rxtx_spmc_tx_release();
>>> @@ -637,8 +645,6 @@ int32_t ffa_handle_mem_reclaim(uint64_t handle, uint32_t flags)
>>>    struct domain *d = current->domain;
>>>    struct ffa_ctx *ctx = d->arch.tee;
>>>    struct ffa_shm_mem *shm;
>>> -    register_t handle_hi;
>>> -    register_t handle_lo;
>>>    int32_t ret;
>>> 
>>>    if ( !ffa_fw_supports_fid(FFA_MEM_RECLAIM) )
>>> @@ -652,8 +658,7 @@ int32_t ffa_handle_mem_reclaim(uint64_t handle, uint32_t flags)
>>>    if ( !shm )
>>>        return FFA_RET_INVALID_PARAMETERS;
>>> 
>>> -    uint64_to_regpair(&handle_hi, &handle_lo, handle);
>>> -    ret = ffa_mem_reclaim(handle_lo, handle_hi, flags);
>>> +    ret = ffa_secure_reclaim(shm, flags);
>>> 
>>>    if ( ret )
>>>    {
>>> @@ -677,11 +682,7 @@ bool ffa_shm_domain_destroy(struct domain *d)
>>> 
>>>    list_for_each_entry_safe(shm, tmp, &ctx->shm_list, list)
>>>    {
>>> -        register_t handle_hi;
>>> -        register_t handle_lo;
>>> -
>>> -        uint64_to_regpair(&handle_hi, &handle_lo, shm->handle);
>>> -        res = ffa_mem_reclaim(handle_lo, handle_hi, 0);
>>> +        res = ffa_secure_reclaim(shm, 0);
>>>        switch ( res ) {
>>>        case FFA_RET_OK:
>>>            printk(XENLOG_G_DEBUG "%pd: ffa: Reclaimed handle %#lx\n",
>>> --
>>> 2.50.1 (Apple Git-155)



 

