Re: [Xen-devel] [PATCH v3 07/11] optee: add support for RPC SHM buffers
Julien Grall writes:
> Hi Volodymyr,
>
> On 18/12/2018 21:11, Volodymyr Babchuk wrote:
>> From: Volodymyr Babchuk <vlad.babchuk@xxxxxxxxx>
>>
>> OP-TEE uses the same idea as with command buffers (see the
>> previous commit) to issue RPC requests. The problem is that initially
>> it has no buffer where it can write a request. So the first RPC
>> request it makes is special: it asks the NW to allocate a shared
>> buffer for the other RPC requests. Usually this buffer is allocated
>> only once for every OP-TEE thread and it remains allocated all
>> the time until shutdown.
>
> By shutdown you mean for the OS or the thread?
Shutdown of OP-TEE, actually. But the guest can ask OP-TEE to de-allocate
these buffers, and this is what the Linux driver does when it is unloaded.
So, basically, the Linux driver says "I want to disable RPC buffer caching"
and then OP-TEE issues a number of RPCs to free those buffers.
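For context, the NW-side teardown loop looks roughly like this (just an
illustration, not a quote of the Linux driver; OPTEE_SMC_DISABLE_SHM_CACHE
and OPTEE_SMC_RETURN_* are the real SMC interface constants, while
struct smc_result, invoke_fn() and free_shm() are hypothetical helpers):

    static void disable_rpc_shm_cache(void)
    {
        for ( ;; )
        {
            /* Ask OP-TEE to hand back one cached RPC SHM buffer. */
            struct smc_result res = invoke_fn(OPTEE_SMC_DISABLE_SHM_CACHE);

            if ( res.a0 == OPTEE_SMC_RETURN_ENOTAVAIL )
                break;                        /* Nothing left to free */

            if ( res.a0 == OPTEE_SMC_RETURN_OK )
                /* Buffer reference comes back split across a1/a2. */
                free_shm((uint64_t)res.a1 << 32 | res.a2);

            /* (Busy/retry handling omitted for brevity.) */
        }
    }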
>>
>> The mediator needs to pin this buffer (or buffers) to make sure that
>> the domain can't transfer it to someone else.
>>
>> The life cycle of this buffer is controlled by OP-TEE: it asks the guest
>> to create the buffer and it asks it to free it.
>>
>> Signed-off-by: Volodymyr Babchuk <vlad.babchuk@xxxxxxxxx>
>> ---
>>
>> Changes from v2:
>> - Added a check to ensure that a guest does not return two SHM buffers
>> with the same cookie
>> - Fixed coding style
>> - Storing RPC parameters during RPC return to make sure that the guest
>> will not change them during call continuation
>> xen/arch/arm/tee/optee.c | 140 ++++++++++++++++++++++++++++++++++++++-
>> 1 file changed, 138 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>> index dc90e2ed8e..771148e940 100644
>> --- a/xen/arch/arm/tee/optee.c
>> +++ b/xen/arch/arm/tee/optee.c
>> @@ -30,6 +30,12 @@
>> * OP-TEE spawns a thread for every standard call.
>> */
>> #define MAX_STD_CALLS 16
>> +/*
>> + * Maximal number of pre-allocated SHM buffers. OP-TEE generally asks
>> + * for one SHM buffer per thread, so this also corresponds to OP-TEE
>> + * option CFG_NUM_THREADS
>> + */
>
> Same as patch #6 regarding CFG_NUM_THREADS.
Right now OP-TEE will not allocate more than one buffer per OP-TEE
thread, and I can see no reason why that would change. So, basically, I
could remove MAX_RPC_SHMS altogether and use MAX_STD_CALLS instead. But
then it would not be so obvious why I compare the number of SHM buffers
with the number of std calls. Thus, I think it is good to have a separate
define and comment.
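Something like this, to make the relationship explicit (sketch only, not
necessarily the final wording):

    /*
     * OP-TEE pre-allocates at most one RPC SHM buffer per OP-TEE thread,
     * so this mirrors MAX_STD_CALLS (which in turn mirrors OP-TEE's
     * CFG_NUM_THREADS option). The separate name documents why the two
     * limits coincide.
     */
    #define MAX_RPC_SHMS    MAX_STD_CALLS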
[...]
>> @@ -227,11 +247,90 @@ static void put_std_call(struct optee_domain *ctx,
>>                           struct optee_std_call *call)
>>      spin_unlock(&ctx->lock);
>>  }
>> +static struct shm_rpc *allocate_and_pin_shm_rpc(struct optee_domain *ctx,
>> +                                                paddr_t gaddr,
>
> As I said on v3, I would prefer if you use gfn_t here. This would
> introduce more safety.
Sure, will do.
[...]
>> +    shm_rpc->guest_page = get_page_from_gfn(current->domain,
>> +                                             paddr_to_pfn(gaddr),
>> +                                             NULL,
>> +                                             P2M_ALLOC);
>
> I think it would be wrong to share any page other than p2m_ram_rw with OP-TEE.
>
So it should be like this:

    shm_rpc->guest_page = get_page_from_gfn(current->domain,
                                             paddr_to_pfn(gaddr),
                                             &p2m,
                                             P2M_ALLOC);
    if ( !shm_rpc->guest_page || p2m != p2m_ram_rw )
        goto err;

?
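(And with the gfn_t change from above folded in, the helper would look
roughly like this; gaddr_to_gfn()/gfn_x() are the standard Xen helpers,
while the cookie parameter, the allocation code and the error path are
assumed from the rest of the patch:)

    static struct shm_rpc *allocate_and_pin_shm_rpc(struct optee_domain *ctx,
                                                    gfn_t gfn, uint64_t cookie)
    {
        p2m_type_t p2m;
        ...
        /* Pin the guest page and refuse anything that is not normal RAM. */
        shm_rpc->guest_page = get_page_from_gfn(current->domain, gfn_x(gfn),
                                                &p2m, P2M_ALLOC);
        if ( !shm_rpc->guest_page || p2m != p2m_ram_rw )
            goto err;
        ...
    }

    /* Caller converts the address taken from the RPC registers: */
    shm_rpc = allocate_and_pin_shm_rpc(ctx, gaddr_to_gfn(ptr), cookie);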
[...]
>> +static void free_shm_rpc(struct optee_domain *ctx, uint64_t cookie)
>> +{
>> +    struct shm_rpc *shm_rpc;
>> +    bool found = false;
>> +
>> +    spin_lock(&ctx->lock);
>> +
>> +    list_for_each_entry( shm_rpc, &ctx->shm_rpc_list, list )
>> +    {
>> +        if ( shm_rpc->cookie == cookie )
>> +        {
>> +            found = true;
>> +            list_del(&shm_rpc->list);
>> +            break;
>> +        }
>> +    }
>> +    spin_unlock(&ctx->lock);
>
> I think you are missing an atomic_dec(&ctx->shm_rpc_count) here.
Good catch. Thank you.
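So free_shm_rpc() would drop the counter right where the entry leaves the
list, something like (sketch only; shm_rpc_count is assumed to be the
atomic_t that allocate_and_pin_shm_rpc() increments):

        if ( shm_rpc->cookie == cookie )
        {
            found = true;
            list_del(&shm_rpc->list);
            /* Keep the counter in sync with the list. */
            atomic_dec(&ctx->shm_rpc_count);
            break;
        }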
[...]
>> +static void handle_rpc_func_alloc(struct optee_domain *ctx,
>> + struct cpu_user_regs *regs)
>> +{
>> +    paddr_t ptr = get_user_reg(regs, 1) << 32 | get_user_reg(regs, 2);
>> +
>> +    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
>> +        gprintk(XENLOG_WARNING,
>> +                "Domain returned invalid RPC command buffer\n");
>
> Should not you bail-out in that case? Also, I would turn it to a gdprintk.
OP-TEE does its own checks, and that check will fail there as well. Then
OP-TEE will issue a request to free this SHM.
But you have a point. I need to rework the error path there.
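Probably something along these lines (illustrative only; how exactly the
failure should be reported back to OP-TEE is still to be decided):

    if ( ptr & (OPTEE_MSG_NONCONTIG_PAGE_SIZE - 1) )
    {
        gdprintk(XENLOG_WARNING, "Domain returned invalid RPC command buffer\n");
        /*
         * Do not pin or translate the buffer; OP-TEE's own check will
         * reject it and ask to free the SHM.
         */
        return;
    }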
[...]
>> case OPTEE_SMC_RPC_FUNC_FREE:
>> - /* TODO: Add handling */
>> + {
>> + uint64_t cookie = call->rpc_params[0] << 32 |
>> + (uint32_t)call->rpc_params[1];
>
> The indentation looks weird here.
You are right. How should it look? Would this be okay?

    uint64_t cookie = call->rpc_params[0] << 32 |
                      (uint32_t)call->rpc_params[1];
>> + free_shm_rpc(ctx, cookie);
>> break;
>> + }
>> case OPTEE_SMC_RPC_FUNC_FOREIGN_INTR:
>> break;
>> case OPTEE_SMC_RPC_FUNC_CMD:
>>
>
> Cheers,
--
Best regards,
Volodymyr Babchuk