Re: [Xen-devel] [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
On 04/17/2018 11:57 PM, Dongwon Kim wrote:
> On Tue, Apr 17, 2018 at 09:59:28AM +0200, Daniel Vetter wrote:
>> On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
>>> Yeah, I definitely agree on the idea of expanding the use case to the
>>> general domain where dma-buf sharing is used. However, what you are
>>> targeting with the proposed changes is identical to the core design of
>>> hyper_dmabuf. On top of these basic functionalities, hyper_dmabuf has
>>> driver-level inter-domain communication, which is needed for dma-buf
>>> remote tracking (no fence forwarding though), event triggering and
>>> event handling, extra metadata exchange, and a hyper_dmabuf_id that
>>> represents grefs (grefs are shared implicitly at the driver level).
>>
>> This really isn't a positive design aspect of hyper_dmabuf imo. The
>> core code in xen-zcopy (ignoring the ioctl side, which will be cleaned
>> up) is very simple & clean. If there's a clear need later on we can
>> extend that. But for now xen-zcopy seems to cover the basic use-case
>> needs, so gets the job done.
>>
>>> Also it is designed with a frontend (common core framework) + backend
>>> (hypervisor-specific communication and memory sharing) structure for
>>> portability. We just can't limit this feature to Xen because we want
>>> to use the same uapis not only for Xen but also for other applicable
>>> hypervisors, like ACRN.
>>
>> See the discussion around udmabuf and the needs for kvm. I think trying
>> to make an ioctl/uapi that works for multiple hypervisors is misguided -
>> it likely won't work. On top of that the 2nd hypervisor you're aiming
>> to support is ACRN. That's not even upstream yet, nor have I seen any
>> patches proposing to land linux support for ACRN. Since it's not
>> upstream, it doesn't really matter for upstream consideration. I'm
>> doubting that ACRN will use the same grant references as Xen, so the
>> same uapi won't work on ACRN as on Xen anyway.

Let me explain how this works in case of the para-virtual display use-case
with xen-zcopy.

1. There are 4 components in the system:
- displif protocol [1]
- xen-front - para-virtual DRM driver running in DomU (guest) VM
- backend - user-space application running in Dom0
- xen-zcopy - DRM (as of now) helper driver running in Dom0

2. All the communication between domains happens between xen-front and the
backend, so it is possible to implement the para-virtual display use-case
without xen-zcopy at all (this is why it is a helper driver), but in that
case memory copying occurs (this is out of scope for this discussion).

3. To better understand the security implications, let's look at the
use-cases we have:

3.1 xen-front exports its dma-buf (dumb) to the backend.
In this case there are no security issues at all, as Dom0 (backend side)
will use DomU's pages (xen-front side) and Dom0 is a trusted domain, so we
assume it won't hurt DomU. Even if DomU dies, nothing bad happens to Dom0.
If DomU misbehaves it can only write to its own pages shared with Dom0,
but cannot go beyond that, e.g. it can't access Dom0's memory.

3.2 Backend exports a dma-buf to xen-front.
In this case Dom0 pages are shared with DomU. As before, DomU can only
write to these pages, not to any other page of Dom0, so it can still be
considered safe. But the following must be considered (highlighted in
xen-front's kernel documentation):
- If the guest domain dies, then pages/grants received from the backend
cannot be claimed back - think of it as memory lost to Dom0 (it won't be
used for any other guest).
- A misbehaving guest may send too many requests to the backend,
exhausting its grant references and memory (consider this from a security
POV).
As the backend runs in the trusted domain we also assume that it is
trusted as well, e.g. it must take measures to prevent DDoS attacks.

4. xen-front/backend/xen-zcopy synchronization

4.1. As I already said in 2), all the inter-VM communication happens
between xen-front and the backend; xen-zcopy is NOT involved in that.
When xen-front wants to destroy a display buffer (dumb/dma-buf) it issues
a XENDISPL_OP_DBUF_DESTROY command (the opposite of
XENDISPL_OP_DBUF_CREATE). This call is synchronous, so xen-front expects
the backend to have freed the buffer pages on return.

4.2. The backend, on XENDISPL_OP_DBUF_DESTROY:
- closes all dumb handles/fd's of the buffer according to [3]
- issues the DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE ioctl to xen-zcopy to make
sure the buffer is freed (think of it as waiting for the dma-buf->release
callback)
- replies to xen-front that the buffer can be destroyed.

This way deletion of the buffer happens synchronously on both the Dom0 and
DomU sides. If DRM_IOCTL_XEN_ZCOPY_DUMB_WAIT_FREE returns with a time-out
error (BTW, the wait time is a parameter of this ioctl), Xen will defer
grant reference removal and will retry later until those are free.

Hope this helps understand how buffers are synchronously deleted in case
of xen-zcopy with a single protocol command.

I think the above logic can also be re-used by the hyper-dmabuf driver
with some additional work:

1. xen-zcopy can be split into 2 parts and extended:
1.1. Xen gntdev driver [4], [5], to allow creating a dma-buf from grefs
and vice versa, and to implement a "wait" ioctl (wait for
dma-buf->release): currently these are DRM_XEN_ZCOPY_DUMB_FROM_REFS,
DRM_XEN_ZCOPY_DUMB_TO_REFS and DRM_XEN_ZCOPY_DUMB_WAIT_FREE
1.2. Xen balloon driver [6], to allow allocating contiguous buffers (not
needed by the current hyper-dmabuf, but a must for xen-zcopy use-cases)
2. Then hyper-dmabuf uses the Xen gntdev driver for Xen-specific dma-buf
alloc/free/wait.
3. hyper-dmabuf uses its own protocol between VMs to communicate buffer
creation/deletion and whatever else is needed (fences?).

To the Xen community: please think of a dma-buf here as a buffer
representation mechanism, e.g. at the end of the day it's just a set of
pages.

Thank you,
Oleksandr

-Daniel

Regards,
DW

On Mon, Apr 16, 2018 at 05:33:46PM +0300, Oleksandr Andrushchenko wrote:
Hello, all!
After discussing the xen-zcopy and hyper-dmabuf [1] approaches it seems
that xen-zcopy can be made to not depend on DRM core any more and become
dma-buf centric (which it in fact is). The DRM code was mostly there for
dma-buf FD import/export with the DRM PRIME UAPI and with DRM use-cases in
mind, but it turns out that if the proposed 2 ioctls
(DRM_XEN_ZCOPY_DUMB_FROM_REFS and DRM_XEN_ZCOPY_DUMB_TO_REFS) are extended
to also provide a file descriptor of the corresponding dma-buf, then the
PRIME stuff in the driver is not needed anymore.

That being said, xen-zcopy can safely be detached from DRM and moved from
drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?). This driver then
becomes a universal way to turn any shared buffer between Dom0/DomD and
DomU(s) into a dma-buf, e.g. one can create a dma-buf from any grant
references, or represent a dma-buf as grant references for export.

This way the driver can be used not only for DRM use-cases, but also for
other use-cases which may require zero-copying between domains. For
example, the use-cases we are about to work on in the near future will use
V4L: we plan to support cameras, codecs etc., and all of these will
benefit greatly from zero-copying. Potentially, even block/net devices may
benefit, but this needs some evaluation.

I would love to hear comments from the authors of hyper-dmabuf and the Xen
community, as well as from DRI-Devel and other interested parties.
Thank you,
Oleksandr

On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:

[1] https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h
[2] https://elixir.bootlin.com/linux/v4.17-rc1/source/include/xen/interface/io/displif.h#L539
[3] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/gpu/drm/drm_prime.c#L39
[4] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/gntdev.c
[5] https://elixir.bootlin.com/linux/v4.17-rc1/source/include/uapi/xen/gntdev.h
[6] https://elixir.bootlin.com/linux/v4.17-rc1/source/drivers/xen/balloon.c

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel