[Xen-devel] VIRTIO - compatibility with different virtualization solutions
Hi,

Below you can find a summary of the work on VIRTIO compatibility with different virtualization solutions. It was done mainly from the Xen point of view, but the results are quite generic and can be applied to a wide spectrum of virtualization platforms.

VIRTIO devices were designed as a set of generic devices for virtual environments. They work without major issues on many currently existing virtualization solutions. However, there is one VIRTIO specification and implementation issue which could hinder VIRTIO device/driver implementation on new or even existing platforms (e.g. Xen). The problem is that the specification uses guest physical addresses as pointers to virtqueues, buffers and other structures. This means that the VIRTIO device controller (hypervisor/host or a special device domain/process) knows the guest physical memory layout and simply maps the required regions as needed. However, this crude mapping mechanism usually assumes that a guest does not impose any access restrictions on its memory. That situation is not desirable, because guests often want to restrict access to their memory as a whole and grant access only to the specific regions needed for device operation.

Fortunately, many hypervisors have more or less advanced memory sharing mechanisms with the relevant access control built in. However, those mechanisms do not use a guest physical address as the shared memory region address/reference, but a unique identifier, which could be called a "handle" here (or anything else which clearly describes the idea). This means that the specification should use the term "handle" instead of guest physical address (in a particular case the handle can still be a guest physical address). This way any virtualization environment could choose the best way to access guest memory without compromising security where needed.

The above-mentioned changes in the specification require some changes in the VIRTIO device and driver implementations. From the perspective of the Linux VIRTIO drivers, the transition from the old scheme to the new one should not be very difficult. The Linux Kernel itself provides the DMA API, which should ease the work on the drivers; hence they should use this API instead of omitting it (a rough sketch of this driver-side idea follows below). Additionally, new IOMMU drivers should be created. Those IOMMU drivers should expose handles to VIRTIO and hide hypervisor-specific details. This way VIRTIO would not depend so strongly on specific hypervisor behavior.

The other part of VIRTIO is the devices, which usually do not have access to the DMA API available in the Linux Kernel, and this may present some challenges during the transition to the new implementation. Similar problems may also appear when implementing drivers in systems which do not have the Linux Kernel DMA API. However, even in those situations it should not be a big enough issue to prevent the transition to handles. The author does not know FreeBSD and Windows well enough to say how to retool the VIRTIO drivers there to use some mechanism to get hypervisor handles, but surely there must be some easy API to plumb this through.
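To make the driver-side idea above a bit more concrete, here is a minimal sketch. It is not part of the original proposal, and "struct my_desc", "post_buffer" and "dev" are placeholder names rather than real virtio structures. It only shows that a driver which goes through the Linux DMA API hands the device whatever handle the platform's DMA/IOMMU layer returns (a dma_addr_t) instead of a raw guest physical address. On Xen such a handle could, for example, encode a grant reference produced by a Xen-specific DMA/IOMMU backend; on bare metal it is simply the bus address.

    /*
     * Minimal sketch only; placeholder descriptor, not the real virtio ring.
     * The point: the driver stores whatever the DMA layer returns, not a
     * guest physical address it computed itself.
     */
    #include <linux/kernel.h>
    #include <linux/types.h>
    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    struct my_desc {                      /* placeholder descriptor */
            __le64 addr;                  /* handle understood by the device */
            __le32 len;
            __le16 flags;
    };

    static int post_buffer(struct device *dev, struct my_desc *desc,
                           void *buf, size_t len)
    {
            dma_addr_t handle;

            /* The DMA layer (and any hypervisor-specific IOMMU driver
             * behind it) decides what the device-visible address looks
             * like. */
            handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, handle))
                    return -ENOMEM;

            desc->addr  = cpu_to_le64(handle);
            desc->len   = cpu_to_le32(len);
            desc->flags = 0;
            return 0;
    }

The benefit of routing everything through the DMA API is that the hypervisor-specific translation lives in a single IOMMU/DMA backend, so individual VIRTIO drivers would not need to know which hypervisor they are running on.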
As can be seen from the above description, the current VIRTIO specification could create implementation challenges in some virtual environments. However, this issue could be solved quite easily by migrating from guest physical addresses, which are used as pointers to virtqueues, buffers and other structures, to handles. This change should not be very difficult to implement. Additionally, it would make VIRTIO less tightly linked to a specific virtual environment. This in turn helps fulfill the "Standard" assumption made in the VIRTIO spec introduction (Virtio makes no assumptions about the environment in which it operates, beyond supporting the bus attaching the device. Virtio devices are implemented over PCI and other buses, and earlier drafts have been implemented on other buses not included in this spec).

Acknowledgments for comments and suggestions: Wei Liu, Ian Pratt, Konrad Rzeszutek Wilk

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel