
Re: [Xen-devel] [RFC 4/4] arm: tee: add basic OP-TEE mediator



Hi Volodymyr,

On 19/10/17 16:33, Volodymyr Babchuk wrote:
On Thu, Oct 19, 2017 at 03:01:28PM +0100, Julien Grall wrote:
My request is to move the set_user_reg(...) calls outside of forward_call(),
so it would be clear that the mediator needs to examine the result values.
Ah, I see. You suggest renaming forward_call() to something like execute_call()
and making it return the result in some other way.

To give you an example:

forward_call(....)
/* No need to sanitize value because... */
set_user_reg(...)
set_user_reg(...)

The caller may not need to examine the results. But at least it is clear
compared to a helper hiding that.

Note that the set_user_reg(...) calls could be in another helper.
Yep. So the new execute_call() does the actual SMC and returns the result in
some structure. If I need to return the result as-is back to the VM, I call
another helper. Right?

That's right.
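For example, the split could look something like this (a rough sketch; the
structure and helper names are only illustrative):

struct optee_result {
    register_t r0, r1, r2, r3;
};

/* Does the actual SMC; fills @res without touching the guest registers. */
static void execute_call(struct cpu_user_regs *regs, struct optee_result *res);

/* Copies the result back to the guest as-is, for the cases where no
 * sanitization is needed. */
static void return_to_guest(struct cpu_user_regs *regs,
                            const struct optee_result *res)
{
    set_user_reg(regs, 0, res->r0);
    set_user_reg(regs, 1, res->r1);
    set_user_reg(regs, 2, res->r2);
    set_user_reg(regs, 3, res->r3);
}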




+
+    return true;
+}
+
+static bool handle_get_shm_config(struct cpu_user_regs *regs)
+{
+    paddr_t shm_start;
+    size_t shm_size;
+    int rc;
+
+    printk("handle_get_shm_config\n");

No plain printk in code accessible by the guest. You should use gprintk or
ratelimit it.
Sorry, this is a debug print. I'll remove it altogether.

+    /* Give all static SHM region to the Dom0 */

s/Dom0/Hardware Domain/
Hm, it looks like Dom0 != hardware domain. At least I see code that replaces
the contents of the hardware_domain variable. If that is possible, then there
will be a problem with the static SHM buffer.

On Arm, Dom0 == Hardware Domain. If a separate Hardware Domain were introduced,
then I would expect OP-TEE to be handled by it and not by Dom0.
Oh, I see. Thank you for the explanation.


It looks like it is better to check for is_domain_direct_mapped(d), as you
mentioned below.

is_domain_direct_mapped(d) != hwdom. Please don't mix the two. The former is
there to guarantee that gfn == mfn. The latter is there to make sure no domain
other than the hardware domain is going to use the shared memory.
Yes, I see. As I said earlier, only a 1:1 mapped domain can use the static SHM
mechanism. So I think I need to use is_domain_direct_mapped(d).

But if you use is_domain_direct_mapped(d) here, what will happen if two
guests ask for shared memory?
Is it possible that there will be two 1:1 mapped domains in Xen? If yes,
then I need to employ both checks: is_domain_direct_mapped(d) &&
is_hardware_domain(d).

At the moment only the hardware domain is mapped 1:1. But I don't want to make that assumption in the code. For instance, I know that one of your colleagues was looking at guests mapped 1:1.
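For reference, the combined check proposed above would be something like this
(a sketch, with d a temporary variable for current->domain):

struct domain *d = current->domain;

/* Static SHM only makes sense for a 1:1 mapped hardware domain. */
if ( !is_domain_direct_mapped(d) || !is_hardware_domain(d) )
    return false;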




But I am not sure what's the point of this check, given that OP-TEE is only
supported for the Hardware Domain and you already have a check for that.
Because I will remove the outer check, but this check will remain. This way,
older OP-TEEs (without virtualization support) will still be accessible
from Dom0/HWDom.

+    if ( current->domain->domain_id != 0 )

Please use is_hardware_domain(current->domain) and not open-code the check.

+        return false;
+
+    forward_call(regs);
+
+    /* Return error back to the guest */
+    if ( get_user_reg(regs, 0) != OPTEE_SMC_RETURN_OK)
+        return true;

This is quite confusing to read; I think it would make sense for
forward_call() to return the error.
Good idea, thanks.
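For instance (a sketch; the exact return convention and errno value are open):

static int forward_call(struct cpu_user_regs *regs)
{
    /* ... issue the SMC and copy the results back to the guest ... */

    /* Let callers test the status without re-reading r0 themselves. */
    return get_user_reg(regs, 0) == OPTEE_SMC_RETURN_OK ? 0 : -EIO;
}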

+
+    shm_start = get_user_reg(regs, 1);
+    shm_size = get_user_reg(regs, 2);
+
+    /* Dom0 is mapped 1:1 */

Please don't make this assumption, or at least add
ASSERT(is_domain_direct_mapped(d));
Thanks. I'll check this at runtime, as I mentioned above.

+    rc = map_regions_p2mt(current->domain, gaddr_to_gfn(shm_start),

Rather than using current->domain everywhere, I would prefer it if you
introduced a temporary variable for the domain.
Okay.

+                          shm_size / PAGE_SIZE,

Please PFN_DOWN(...).

+                          maddr_to_mfn(shm_start),
+                          p2m_ram_rw);

What is this shared memory for? I know this is the hardware domain, so using
p2m_ram_rw would be OK. But I don't think this would be safe unless the TEE
does proper safety checks.
The Linux kernel driver does memremap() in the same place. OP-TEE maps it as
non-secure RAM. This shared memory is used to pass information between the
OP-TEE OS and the OP-TEE client. Which safety checks are you talking about?

Well, does OP-TEE validate the data read from that shared region? But it
seems that you don't plan to give part of the SHM to a guest, so it might be
OK.
OP-TEE certainly validates all data from the NW (Normal World). Also, OP-TEE
is written in such a way that it reads from shared memory only once, to ensure
that the NW cannot change the data after validation. The mediator will do the
same.

What do you mean by the last bit?
Let me cite TEE Internal Core API Specification (Public Release v1.1.1):

"
The fact that Memory References may use memory directly shared with
the client implies that the Trusted Application needs to be especially
careful when handling such data: Even if the client is not allowed to
access the shared memory buffer during an operation on this buffer,
the Trusted OS usually cannot enforce this restriction. A
badly-designed or rogue client may well change the content of the
shared memory buffer at any time, even between two consecutive memory
accesses by the Trusted Application. This means that the Trusted
Application needs to be carefully written to avoid any security
problem if this happens. If values in the buffer are security
critical, the Trusted Application SHOULD always read data only once
from a shared buffer and then validate it. It MUST NOT assume that
data written to the buffer can be read unchanged later on.
"

This requirement is for Trusted Applications, but it is also true for the
TEE OS as a whole, and for TEE mediators as well. I just wanted to say that
I plan to write the mediator in accordance with this.

It makes sense. Thank you for the explanation.
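As an illustration, the read-once pattern in the mediator could look like
this (a sketch; struct shm_desc and validate_desc() are hypothetical names):

struct shm_desc desc;

/* Single read: snapshot the descriptor out of the shared buffer. */
memcpy(&desc, shared_buf, sizeof(desc));

/* Validate only the private copy... */
if ( !validate_desc(&desc) )
    return -EINVAL;

/* ...and from here on use only the copy. Never re-read shared_buf: the
 * guest may change its contents at any time. */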



Also, how will OP-TEE map this region? Cacheable...?
Yes, cacheable, PR, PW, non-secure.



+    if ( rc < 0 )
+    {
+        gprintk(XENLOG_INFO, "OP-TEE: Can't map static shm for Dom0: %d", rc);

gprintk already dumps the domid, so there is no need to say Dom0.
I just wanted to emphasize that we mapped memory for Dom0. Will remove.

gprintk will print d0, so there is no point in saying it a second time...

+        set_user_reg(regs, 0, OPTEE_SMC_RETURN_ENOTAVAIL);
+    }
+
+    return true;
+}
+
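Putting the comments above together, the reworked function might end up
looking roughly like this (a sketch, not the final patch):

static bool handle_get_shm_config(struct cpu_user_regs *regs)
{
    struct domain *d = current->domain;
    paddr_t shm_start;
    size_t shm_size;
    int rc;

    /* Static SHM is only usable by a 1:1 mapped hardware domain. */
    if ( !is_hardware_domain(d) || !is_domain_direct_mapped(d) )
        return false;

    /* forward_call() now returns the OP-TEE status (see above). */
    if ( forward_call(regs) )
        return true; /* the error is already in the guest registers */

    shm_start = get_user_reg(regs, 1);
    shm_size = get_user_reg(regs, 2);

    rc = map_regions_p2mt(d, gaddr_to_gfn(shm_start),
                          PFN_DOWN(shm_size),
                          maddr_to_mfn(shm_start),
                          p2m_ram_rw);
    if ( rc < 0 )
    {
        /* gprintk() already prints the domid. */
        gprintk(XENLOG_INFO, "OP-TEE: can't map static shm: %d\n", rc);
        set_user_reg(regs, 0, OPTEE_SMC_RETURN_ENOTAVAIL);
    }

    return true;
}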
+static bool handle_exchange_capabilities(struct cpu_user_regs *regs)
+{
+        forward_call(regs);
+
+        printk("handle_exchange_capabilities\n");

Same here, no plain printk.
Sorry, this is another debug print. Missed it when formatting the patches.

+        /* Return error back to the guest */
+        if ( get_user_reg(regs, 0) != OPTEE_SMC_RETURN_OK)
+            return true;
+
+        /* Don't allow guests to work without dynamic SHM */

Hmmm? But you don't support guests today. So why are you checking that?
This is an RFC. I will remove these parts of the code in a proper patch series.

I just wanted to make sure that the community is okay with the proposed
approach and to show how minimalistic a mediator can look.
I don't think this is true. You only show how easy it is to let Dom0
access the TEE. And as I said in the cover letter, this is not the
controversial part.
Actually, I wanted to show the approach where the mediator resides right in
Xen. I got valuable input from you. Now I see that I must completely rework
the first patch and, probably, show more comprehensive support from the
OP-TEE side.

The more controversial one is the guest support, which you completely left
aside. I believe this part will not be as minimalistic as you think, because
you need to translate buffer addresses and prevent those buffers from
disappearing under your feet.
Yes. I plan to copy all buffers in which IPAs are presented to another place,
so a DomU will not be able to see PAs during translation. And I plan to pin
all DomU pages holding the data. Also, I'll read from guest pages only once.
I think this will be enough.
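For illustration, the pinning part might look like this (a sketch; the gfn
handling and error paths are simplified):

struct page_info *page;
p2m_type_t t;

/* Take a reference so the page cannot disappear under our feet. */
page = get_page_from_gfn(d, gfn, &t, P2M_ALLOC);
if ( !page )
    return -EINVAL;

if ( t != p2m_ram_rw )
{
    put_page(page);
    return -EINVAL;
}

/* Copy the buffer contents into Xen memory once (read-once), translate
 * IPAs to PAs on the copy, and only then hand it to OP-TEE. Call
 * put_page() once the operation completes. */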

There are probably other problems to fix...
Probably yes...

I think I'll focus on the OP-TEE side right now and come back when there is
more to show.

To clarify my view: I am not against temporary support of OP-TEE for the
hardware domain in Xen. But it does not mean I would be ready to see full
OP-TEE support for guests in Xen.
Hm. What did you mean by the last sentence? Our (here at EPAM) target is full
virtualization support for OP-TEE. If you don't want to see it in Xen,
then what other options do we have?

Sorry, it was not clear enough. I meant that whilst I am happy to see OP-TEE support for the hardware domain in the hypervisor, we still need to discuss the approach for guests.

Cheers,

--
Julien Grall


 

