Re: [XEN PATCH v8 17/22] xen/arm: ffa: support sharing memory
Hi Julien,
On Thu, Apr 13, 2023 at 10:53 PM Julien Grall <julien@xxxxxxx> wrote:
>
> Hi Jens,
>
> On 13/04/2023 08:14, Jens Wiklander wrote:
> >  static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
> >                                        uint8_t msg)
> >  {
> > @@ -781,6 +862,400 @@ out:
> >               resp.a4 & mask, resp.a5 & mask, resp.a6 & mask, resp.a7 & mask);
> >  }
> >
> > +/*
> > + * Gets all pages and assigns them to the supplied shared memory object. If
> > + * this function fails then the caller is still expected to call
> > + * put_shm_pages() as a cleanup.
> > + */
> > +static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
> > +                         const struct ffa_address_range *range,
> > +                         uint32_t range_count, unsigned int start_page_idx,
> > +                         unsigned int *last_page_idx)
> > +{
> > +    unsigned int pg_idx = start_page_idx;
> > +    gfn_t gfn;
> > +    unsigned int n;
> > +    unsigned int m;
> > +    p2m_type_t t;
> > +    uint64_t addr;
> > +
> > +    for ( n = 0; n < range_count; n++ )
> > +    {
> > +        for ( m = 0; m < range[n].page_count; m++ )
> > +        {
> > +            if ( pg_idx >= shm->page_count )
> > +                return FFA_RET_INVALID_PARAMETERS;
> > +
> > +            addr = read_atomic(&range[n].address);
>
> I am confused by the use of read_atomic() here. Is this part of the
> guest memory? If so, why is page_count not also read atomically?
>
> Also, it looks like you will read the same address atomically on each
> iteration. Shouldn't this be moved just before the loop?
You're right, it is from guest memory and we should use read_atomic()
only once. I'll fix it.
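Something along these lines, reading the address once per range before
the inner loop over the pages (untested sketch, otherwise the same logic
as in the patch):

    for ( n = 0; n < range_count; n++ )
    {
        /* Read the guest-supplied address only once per range */
        addr = read_atomic(&range[n].address);

        for ( m = 0; m < range[n].page_count; m++ )
        {
            if ( pg_idx >= shm->page_count )
                return FFA_RET_INVALID_PARAMETERS;

            gfn = gaddr_to_gfn(addr + m * FFA_PAGE_SIZE);
            shm->pages[pg_idx] = get_page_from_gfn(d, gfn_x(gfn), &t,
                                                   P2M_ALLOC);
            if ( !shm->pages[pg_idx] )
                return FFA_RET_DENIED;
            /* Only normal RAM for now */
            if ( !p2m_is_ram(t) )
                return FFA_RET_DENIED;
            pg_idx++;
        }
    }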
>
> > +            gfn = gaddr_to_gfn(addr + m * FFA_PAGE_SIZE);
> > +            shm->pages[pg_idx] = get_page_from_gfn(d, gfn_x(gfn), &t,
> > +                                                   P2M_ALLOC);
> > +            if ( !shm->pages[pg_idx] )
> > +                return FFA_RET_DENIED;
> > +            /* Only normal RAM for now */
> > +            if ( !p2m_is_ram(t) )
> > +                return FFA_RET_DENIED;
> > +            pg_idx++;
> > +        }
> > +    }
> > +
> > +    *last_page_idx = pg_idx;
> > +
> > +    return FFA_RET_OK;
> > +}
> > +
> > +static void put_shm_pages(struct ffa_shm_mem *shm)
> > +{
> > +    unsigned int n;
> > +
> > +    for ( n = 0; n < shm->page_count && shm->pages[n]; n++ )
> > +    {
> > +        put_page(shm->pages[n]);
> > +        shm->pages[n] = NULL;
> > +    }
> > +}
> > +
> > +static struct ffa_shm_mem *alloc_ffa_shm_mem(struct ffa_ctx *ctx,
> > +                                             unsigned int page_count)
> > +{
> > +    struct ffa_shm_mem *shm;
> > +
> > +    if ( page_count >= FFA_MAX_SHM_PAGE_COUNT ||
> > +         ctx->shm_count >= FFA_MAX_SHM_COUNT )
> > +        return NULL;
> > +
> > +    shm = xzalloc_flex_struct(struct ffa_shm_mem, pages, page_count);
> > +    if ( shm )
> > +    {
> > +        ctx->shm_count++;
> > +        shm->page_count = page_count;
> > +    }
> > +
> > +    return shm;
> > +}
> > +
> > +static void free_ffa_shm_mem(struct ffa_ctx *ctx, struct ffa_shm_mem *shm)
> > +{
> > +    if ( shm ) {
>
> Coding style:
>
> if ( ... )
> {
>
> but I would prefer if we remove one level of indentation and use:
>
> if ( !shm )
> return;
OK, I'll change it, see the sketch below the quoted function.
Thanks,
Jens
>
> > +        ASSERT(ctx->shm_count > 0);
> > +        ctx->shm_count--;
> > +        put_shm_pages(shm);
> > +        xfree(shm);
> > +    }
> > +}
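That is, the reworked function would look something like this (untested
sketch, dropping one level of indentation with an early return):

static void free_ffa_shm_mem(struct ffa_ctx *ctx, struct ffa_shm_mem *shm)
{
    if ( !shm )
        return;

    ASSERT(ctx->shm_count > 0);
    ctx->shm_count--;
    put_shm_pages(shm);
    xfree(shm);
}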
>
> Cheers,
>
> --
> Julien Grall