[Xen-devel] Virtual passthrough I/O for Xen
Hi,

We are working on a project in which we intend to modify Xen to support virtual passthrough I/O. We have the following plan; it would be great to get some feedback and hear ideas on where we can apply it.

Goal:
Share one physical device among multiple guest OSes without relying on a central driver. Every guest OS has its own device driver, and a scheduler added to the hypervisor schedules the device among the guest OSes.

Motivation and related work:
This driver model is more reliable than the driver models currently used in VMMs and hypervisors; Xen's driver-domain model, for example, relies on the integrity of the driver domain itself. The newest hardware, such as SR-IOV devices, can expose multiple virtual functions (VFs) on a single physical device and dedicate one VF to one guest OS. However, if there are more guest OSes than available VFs, there is no good way to share the VFs among them. One possible solution is to dedicate one VF to a driver domain running a backend server; every guest OS without a dedicated VF then uses a frontend driver to talk to the backend and share that VF. This approach is not flexible. With our approach, we can create flexible mappings between VFs and guest OSes: 1:n, 1:1, or n:m.

The VPIO paper [http://www.usenix.org/events/wiov08/tech/full_papers/xia/xia_html/] is the closest to our approach, but ours is more general. It does not require full hardware support such as VT-d or SVM, although it does require an IOMMU to prevent malicious DMA, and it will be compatible with new devices such as SR-IOV.

Design:
We assume that most guest OSes already have a device driver for the device, and we modify that driver to work with our scheduler. In essence, before the device driver starts a control operation such as a DMA transfer, it issues a hypercall to "obtain" the device; if the device is busy, the driver waits for it. In servicing this hypercall, the scheduler (running in the hypervisor) decides whether to allow the access and maintains a context for the guest OS. The scheduler keeps the state (context) of a device (or a VF) and switches it to another guest OS when necessary. We claim that the modification to the device driver is minimal. Another benefit is that only the device driver of a guest OS may block while the rest of the guest OS keeps running; in the VPIO paper, the whole guest OS has to block when the device is unavailable, because the device driver is not modified.

The benefit of our approach compared with other driver models is that there is no central controller of the device. The scheduler is not a full-fledged device driver; it only maintains the minimal context of the device. All guest OSes are equal and independent: if one guest OS is compromised, it may launch a DoS attack against the others, but it cannot steal information from them.

Implementation:
The basic approach is to modify the current pass-through mechanism in Xen. Currently, Xen can dedicate one device to one guest OS (which can be any domain) and supports SR-IOV devices, but it cannot pass one device through to multiple guest OSes at the same time. We will modify the current pass-through mechanism to first allow one device to be passed through to multiple guest OSes, and then add a scheduler in the hypervisor to schedule the device. We will also modify a sample network device driver to cooperate with the hypervisor when accessing the device. A rough sketch of the driver-side change is shown below.
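To make the driver-side change concrete, here is a minimal, compilable sketch of how a guest network driver could wrap a DMA-start path around the proposed hypercalls. The vpio_* calls, the dev_id parameter, and the polling loop are purely illustrative assumptions standing in for hypercalls that do not exist in Xen today (tasks 2 and 3 below); in a real driver they would trap into the hypervisor, and the driver would sleep on a wait queue rather than spin.

/* vpio_sketch.c - illustrative only; the vpio_* calls below are
 * hypothetical placeholders for the hypercalls proposed in this mail. */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

enum vpio_state { VPIO_IDLE = 0, VPIO_BUSY = 1 };

/* Stubs standing in for the proposed hypercalls. In a real guest
 * driver these would be hypercalls into the VPIO scheduler. */
static int vpio_query_state(uint32_t dev_id) { (void)dev_id; return VPIO_IDLE; }
static int vpio_acquire(uint32_t dev_id)     { (void)dev_id; return 0; }
static void vpio_release(uint32_t dev_id)    { (void)dev_id; }

/* Wrap a DMA-start operation so the device is "obtained" from the
 * hypervisor scheduler before the hardware is touched. */
static int driver_start_dma(uint32_t dev_id)
{
    /* Cheap read-only state check first (task 2). */
    while (vpio_query_state(dev_id) == VPIO_BUSY)
        ;   /* a real driver would sleep on a wait queue here */

    if (vpio_acquire(dev_id) != 0)   /* task 3: grab the device */
        return -EBUSY;               /* lost the race; retry later */

    /* ... program descriptor rings and kick the hardware here ... */
    printf("DMA started on device %u\n", (unsigned)dev_id);

    vpio_release(dev_id);            /* let the scheduler switch the
                                        device context to another guest */
    return 0;
}

int main(void)
{
    return driver_start_dma(0) == 0 ? 0 : 1;
}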
Tasks:
1) Understand the implementation of the current Xen pass-through.
2) Add a read-only hypercall that lets the guest device driver read the device state (idle or busy) easily and efficiently.
3) Add a hypercall that lets the guest device driver grab the device.
4) Add a scheduler that maintains the context of the device.
5) Modify the device driver to call the above hypercalls when necessary.

Comments and suggestions would be highly appreciated. Thanks.

Regards,
Sameer

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel