[Xen-devel] HELP required with some ideas
On Sun, Aug 29, 2010 at 7:35 PM, Pasi Kärkkäinen <pasik@xxxxxx> wrote:
Hello,

First of all, we would like to thank you for sparing your time to look at the suggested ideas. Below are links to the papers describing the ideas. We have also gone through the topics you mentioned and summarized our findings below, and we have elaborated two of the ideas further to give a clearer picture of them. Our concern with all of these ideas is whether they are feasible as a 7-8 month project and whether they have already been implemented elsewhere. Any suggestions extending or modifying these ideas would be of great help.

Links to papers:
1) On Disk I/O Scheduling in Virtual Machines: http://sysrun.haifa.il.ibm.com/hrl/wiov2010/papers/kesavan.pdf
2) Network Interface Virtualization: http://www.cs.rice.edu/CS/Architecture/docs/ram-vee09.pdf
3) Asymmetry-aware hypervisors: http://www.cs.sfu.ca/~fedorova/papers/vee04-kazempour.pdf

Regards,
Rohan Malpani
Ammar Ekbote
Paresh Nakhe
Gaurav Jain

************************************************************************

1) dm-ioband (in the context of the first idea: On Disk I/O Scheduling in Virtual Machines)

dm-ioband is an I/O bandwidth controller implemented as a device-mapper driver; it can control bandwidth on a per-partition, per-user, per-process, or per-virtual-machine (such as KVM or Xen) basis. Our suggested idea does not revolve around I/O bandwidth control between VMs; rather, it concerns disk I/O scheduling carried out at different levels in virtualized environments.

Further elaboration of the first idea (On Disk I/O Scheduling in Virtual Machines):

The suggested idea is to introduce a disk I/O scheduling algorithm in the hypervisor that takes into consideration the disk I/O scheduling already performed in the guest VM.
The scenario is as follows: to read or write, the disk head must be positioned at the desired track and at the beginning of the desired sector, and in doing so we incur seek time and rotational delay. For a single disk there will be a number of outstanding I/O requests. If requests are serviced in random order, we get poor performance, so we have to "reorder" these requests (using various algorithms) to minimize seek time by making the head move in an optimized way. The algorithms used for this purpose are:

1) First-in, first-out (FIFO): process requests sequentially, in arrival order.
2) Shortest Service Time First (SSTF): select the disk I/O request that requires the least movement of the disk arm from its current position.
3) SCAN: the arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction, and then the direction is reversed.
4) C-SCAN: restricts scanning to one direction only. When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again.

Now, the problem in virtualized environments is that there are two levels of disk I/O scheduling:
1) at the guest VM level (domU)
2) at the hypervisor level (dom0)

Because of this, if scheduling is carried out in the hypervisor (dom0), then the scheduling carried out at the guest VM level (domU) is of no use. For example, if the guest uses FIFO and dom0 uses Shortest Service Time First, then ultimately only SSTF takes effect and the FIFO ordering in the guest is wasted.

Currently in Xen we have the following pattern:
1) at the guest VM: the noop scheduler, the most basic scheduler, which inserts all incoming I/O requests into a simple, unordered FIFO queue and implements request merging.
2) at dom0: the CFQ scheduler.

The Idea:

So effectively no scheduling is carried out at the guest level in Xen.
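To make the difference between these policies concrete, here is a small illustrative sketch that services the same toy request queue under FIFO, SSTF, SCAN, and C-SCAN and sums the resulting head movement. The track numbers and starting head position are made-up example values, not anything from Xen:

```python
def total_seek(order, start):
    """Sum of head movements when servicing tracks in the given order."""
    dist, pos = 0, start
    for track in order:
        dist += abs(track - pos)
        pos = track
    return dist

def fifo(requests, start):
    return list(requests)                     # service in arrival order

def sstf(requests, start):
    pending, pos, order = list(requests), start, []
    while pending:                            # always pick the nearest track
        nxt = min(pending, key=lambda t: abs(t - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

def scan(requests, start):
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    return up + down                          # sweep up, then reverse

def cscan(requests, start):
    up = sorted(t for t in requests if t >= start)
    low = sorted(t for t in requests if t < start)
    return up + low                           # sweep up, wrap to lowest track

reqs, head = [98, 183, 37, 122, 14, 124, 65, 67], 53
for name, algo in [("FIFO", fifo), ("SSTF", sstf),
                   ("SCAN", scan), ("C-SCAN", cscan)]:
    print(name, total_seek(algo(reqs, head), head))
```

On this particular queue, FIFO moves the head 640 tracks while SSTF needs only 236, which is the gap the reordering algorithms exist to close.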
However, the paper suggests that it is better if the guest VM schedules requests according to its own needs (some VMs may run applications issuing sequential disk I/O, thus favoring shortest seek time, while others may be running applications with randomized access patterns and may need SCAN). Thus it is more useful if the guest VMs carry out the scheduling at their level. In that case it becomes necessary to make the hypervisor aware that scheduling has already been carried out in the guests, so that it does not schedule again at the dom0 level. We therefore want to make modifications so that Xen knows whether scheduling has been carried out at the guest VM level, and applies its own disk I/O scheduling policy accordingly, without disturbing the ordering chosen by the guest.

////////////////////////////////////////////////////////////////////////

2) netchannel2 (in the context of the second idea: Network Interface Virtualization)

We went through netchannel2 and found that what you said was right: a considerable part of the idea is already implemented in netchannel2. We will go through netchannel2 further and see whether anything more can be done in it. Any suggestions regarding what more could be done would be of great help.

////////////////////////////////////////////////////////////////////////

3) NUMA-aware hypervisors (in the context of the third idea: Asymmetry-aware hypervisor)

We went through NUMA-aware hypervisors and found that our suggested idea does not address them. However, we might consider that idea if it has not been done yet.
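Returning to the first idea, the dom0-side behaviour being proposed there can be sketched as follows. This is purely illustrative: the `guest_scheduled` hint is our assumption, since Xen's block ring protocol carries no such field today, and adding some form of it is exactly the modification the idea calls for:

```python
def dom0_dispatch(batch, guest_scheduled, head_pos):
    """Return the order in which dom0 should issue a batch of requests.

    batch           -- track numbers, in the order the guest sent them
    guest_scheduled -- hypothetical hint that the guest already ordered them
    head_pos        -- current disk head position
    """
    if guest_scheduled:
        # Trust the guest's ordering: pass the batch through untouched,
        # so the guest-level scheduling is not undone in dom0.
        return list(batch)
    # Otherwise fall back to a shortest-seek-first reordering in dom0.
    pending, pos, order = list(batch), head_pos, []
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order
```

The design point is only the branch at the top: the per-guest flag lets one VM keep its carefully chosen ordering while another, running the noop scheduler, still benefits from reordering in dom0.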
Further elaboration of the third idea (Asymmetry-aware hypervisor):

To ensure that asymmetric hardware is well utilized, the system must match each thread with the right type of core: e.g., memory-intensive threads with slow cores and compute-intensive threads with fast cores. This can be accomplished by an asymmetry-aware thread scheduler in the guest operating system, where properties of individual threads can be monitored more easily than at the hypervisor level. However, if the hypervisor is not asymmetry-aware, it can thwart the efforts of the asymmetry-aware guest OS scheduler, for instance by consistently mapping the virtual CPU (vCPU) that the guest believes to be fast to a physical core that is actually slow.

This paper focuses on enabling asymmetric-core support in hypervisors. Asymmetric (dissimilar) cores can give better efficiency than multi-core processors with identical cores: one kind of workload can be handled by one group of cores while another kind is handled by the other group. Operating systems have schedulers that can multiplex threads onto the different cores; however, hypervisors currently do not schedule based on the asymmetric nature of the cores.

THE IDEA:

The paper proposes to make the hypervisor aware of this asymmetric nature by mapping the vCPUs (present in the VMs) to physical cores of the same kind, i.e. fast vCPUs are mapped to fast physical cores and slow vCPUs to slow physical cores. With this appropriate mapping, the threads in the VMs are serviced by the appropriate set of cores.
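The mapping step described above can be sketched as follows. The vCPU and core descriptors here are invented for illustration; a real hypervisor would of course work with its own scheduler structures rather than dictionaries:

```python
def map_vcpus(vcpus, cores):
    """Pair fast vCPUs with fast physical cores and slow vCPUs with slow ones.

    vcpus -- {vcpu_id: wants_fast}  (True = guest believes this vCPU is fast)
    cores -- {core_id: is_fast}     (True = physically fast core)
    """
    fast_cores = [c for c, is_fast in cores.items() if is_fast]
    slow_cores = [c for c, is_fast in cores.items() if not is_fast]
    mapping = {}
    for vcpu, wants_fast in vcpus.items():
        pool = fast_cores if wants_fast else slow_cores
        if pool:
            mapping[vcpu] = pool.pop(0)       # matching core type available
        elif fast_cores or slow_cores:
            # Preferred type exhausted: fall back to any remaining core
            # rather than leaving the vCPU unmapped.
            mapping[vcpu] = (fast_cores or slow_cores).pop(0)
    return mapping

# Two fast vCPUs and one slow vCPU on a 2-fast / 2-slow machine.
print(map_vcpus({"v0": True, "v1": True, "v2": False},
                {"c0": True, "c1": True, "c2": False, "c3": False}))
```

This captures only the placement policy; the actual scheduling work (migration cost, load balancing across the two core groups, re-evaluating the mapping over time) is where the real project effort would lie.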
////////////////////////////////////////////////////////////////////////

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel