Re: [Xen-devel] [ARM] Native application design and discussion (I hope)
On 21/04/17 19:35, Volodymyr Babchuk wrote:
> Hi Julien,

Hi Volodymyr,

> On 21 April 2017 at 20:38, Julien Grall <julien.grall@xxxxxxx> wrote:
>>> 3. Some degree of protection. A bug in an EL0 handler will not bring
>>> down the whole hypervisor.
>> If you have a single app handling all the domains using SMC, then you
>> will bring down all those domains. I agree it does not take down the
>> hypervisor, but it will render a part of the platform unusable.
> This will bring down the TEE interface, right. Domains will lose the
> ability to communicate with TEE, but other functionality will remain
> intact.

I don't think so. SMCs are synchronous, so if a vCPU does an SMC call it
will block until the handler has finished. So if the EL0 app crashes, you
will render the vCPU unusable. I guess this could be fixed by a timeout,
but how do you define the timeout? And then, you may have some state
stored in the EL0 app; it will be lost if the app crashes. So how do you
restore the state?

>> In this case, how do you plan to restore the services?
> The obvious solution is to reboot the whole platform. It would be a more
> controlled process than a hypervisor crash. But there are softer ways.
> For example, the SMC app can be restarted. Then, I am almost sure that I
> can ask OP-TEE to abort all opened sessions. After that, requests from
> new domains can be processed as usual, but we can't rely on the state of
> old domains, so they should be gracefully restarted. There will be a
> problem with dom0/domD, though.

What is domD? I guess you mean Driver Domain. If so, the goal of using a
driver domain is to restart it easily if a driver crashes.

>> Also, a bug in the EL0 handler may give the opportunity, in your use
>> case, to get access to the firmware or data from another guest. How
>> will this bring more protection?
>>> On the other hand, a bug in an EL2 handler will give access to the
>>> whole hypervisor.
>> If you handle only one guest per app, then it is very easy to kill that
>> app and domain. It will only harm itself.
> Yes, I agree. It is great for emulators (and can be used in this case).
> But, unfortunately, the TEE handler needs shared state. I can't see how
> to implement an OP-TEE handler without shared knowledge about wait
> queues in all guests. It just came to me that it may be possible to move
> most of this stuff to OP-TEE. Can S-EL1 request a stage-2 table walk for
> a given guest? We can do this in software, anyway. Probably I can
> minimize the TEE handler in the hypervisor and make it almost generic.
> Need to think more about this...

I am not sure how S-EL1 could request a stage-2 table walk: the SMC may be
handled on a different pCPU than the guest vCPU, or the guest vCPU may
even have been descheduled whilst waiting for the answer. I think you
still need the hypervisor to do the IPA -> PA translation for your guest,
and also a potential bounce buffer if the guest buffer spans multiple
pages. You will also need the hypervisor to do basic sanity checks, such
as preventing the buffer from living in a foreign mapping, and to make
sure the page does not disappear under your feet (likely by incrementing
the refcount on the page) while handling the SMC.

Cheers,

--
Julien Grall
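As an aside on the timeout idea discussed above, a minimal sketch of how a
per-request deadline could keep a vCPU from blocking forever on a dead EL0
app might look like the following. The request/wait primitives
(app_send_request(), app_wait_reply(), app_is_alive()) are made-up
stand-ins for whatever the hypervisor would provide, not an existing Xen
API, and the error value returned to the guest is only one possible choice
(SMCCC reserves -1 / 0xFFFFFFFF for an unknown function identifier; a real
mediator might prefer a TEE-specific error code).

```c
/*
 * Illustrative sketch only: app_send_request(), app_wait_reply() and
 * app_is_alive() are hypothetical primitives, not an existing Xen API.
 */
#include <stdint.h>
#include <stdbool.h>

#define SMC_NOT_SUPPORTED  ((int64_t)-1)   /* SMCCC "unknown function" value */

struct smc_request { uint64_t args[7]; };
struct smc_reply   { uint64_t regs[4]; };

int  app_send_request(const struct smc_request *req);           /* queue to EL0 app   */
int  app_wait_reply(struct smc_reply *out, uint64_t timeout_ms); /* 0 = reply received */
bool app_is_alive(void);

/*
 * Forward a guest SMC to the EL0 app, but bound the wait so a crashed or
 * wedged app does not leave the calling vCPU blocked forever.  On failure
 * the guest gets an SMCCC-style error back instead of hanging.
 */
int64_t forward_smc(const struct smc_request *req, struct smc_reply *out,
                    uint64_t timeout_ms)
{
    if ( !app_is_alive() || app_send_request(req) )
        return SMC_NOT_SUPPORTED;

    if ( app_wait_reply(out, timeout_ms) )   /* timed out or app died */
        return SMC_NOT_SUPPORTED;

    return (int64_t)out->regs[0];            /* TEE's own return code */
}
```

The open question from the thread remains: whatever value is chosen for
timeout_ms is arbitrary, and any session state held inside the app is
still lost when it crashes.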
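To make the last point more concrete, here is a rough sketch of the
hypervisor-side buffer handling being described: walk the guest buffer
page by page, translate IPA to a page, reject foreign mappings, take a
reference on each page while it is touched, and copy into a
hypervisor-owned bounce buffer because the guest buffer may span several
non-contiguous pages. The helpers (ipa_to_page(), is_foreign(),
page_get()/page_put(), map_page()/unmap_page()) are hypothetical
placeholders, not Xen's actual API.

```c
/*
 * Illustrative sketch only: the helpers below are hypothetical stand-ins
 * for whatever the hypervisor provides, not Xen's real interfaces.
 * If the physical page itself (rather than a bounce copy) were handed to
 * the firmware, the reference would have to be held until the SMC
 * completes, not just for the duration of the copy.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE   4096UL
#define PAGE_MASK   (~(PAGE_SIZE - 1))

struct page;                               /* opaque page descriptor        */
struct page *ipa_to_page(uint64_t ipa);    /* stage-2 walk: IPA -> page     */
int  is_foreign(struct page *pg);          /* mapped from another domain?   */
void page_get(struct page *pg);            /* take a reference (refcount++) */
void page_put(struct page *pg);            /* drop the reference            */
void *map_page(struct page *pg);           /* temporary EL2 mapping         */
void unmap_page(void *va);

/*
 * Copy a guest buffer, which may span several possibly non-contiguous
 * pages, into a hypervisor-owned bounce buffer, pinning each page while
 * it is accessed so it cannot disappear mid-copy.
 */
int bounce_in(uint64_t guest_ipa, void *bounce, size_t len)
{
    size_t done = 0;

    while ( done < len )
    {
        uint64_t ipa    = guest_ipa + done;
        size_t   offset = ipa & ~PAGE_MASK;
        size_t   chunk  = PAGE_SIZE - offset;
        struct page *pg;
        void *va;

        if ( chunk > len - done )
            chunk = len - done;

        pg = ipa_to_page(ipa);
        if ( !pg || is_foreign(pg) )       /* basic sanity check            */
            return -1;

        page_get(pg);                      /* pin while we touch the page   */
        va = map_page(pg);
        memcpy((uint8_t *)bounce + done, (uint8_t *)va + offset, chunk);
        unmap_page(va);
        page_put(pg);

        done += chunk;
    }

    return 0;
}
```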