[Xen-devel] [PATCH 00/20][V5]: PVH xen: version 5 patches...
I've version 5 of my patches for 64bit PVH guests for xen. This is Phase I.
These patches are built on top of git c/s:
4de97462d34f7b74c748ab67600fe2386131b778

Phase I:
  - Establish a baseline of something working. These patches allow dom0
    to be booted in PVH mode, and after that guests can be started in PV,
    PVH, and HVM modes. I also tested booting dom0 in PV mode and starting
    PV, PVH, and HVM guests.

Also, the disk must be specified as phy: in the vm.cfg file (a minimal
config sketch is appended at the end of this mail):

   > losetup /dev/loop1 guest.img
   > vm.cfg file: disk = ['phy:/dev/loop1,xvda,w']

I've not tested anything else. Note: HAP and an IOMMU are required for PVH.

As a result of V3, there were two new action items on the linux side before
it will boot as PVH: 1) MSI-X fixup and 2) load KERNEL_CS right after the
GDT switch.

As a result of V5, a new fixme:
  - MMIO ranges above the highest covered e820 address must be mapped
    for dom0.

The following fixmes exist in the code:
  - Add support for more memory types in arch/x86/hvm/mtrr.c.
  - arch/x86/time.c: support more tsc modes.
  - check_guest_io_breakpoint(): check/add support for IO breakpoints.
  - Implement arch_get_info_guest() for PVH.
  - vmxit_msr_read(): during the AMD port, go through
    hvm_msr_read_intercept() again.
  - Verify that breakpoint matching on emulated instructions works the
    same for PVH guests as for HVM. See instruction_done() and
    check_guest_io_breakpoint().

The following remain to be done for PVH:
  - AMD port.
  - Make posted interrupts available to PVH dom0 (this will be a big win).
  - 32bit support in both linux and xen. Xen changes are tagged
    "32bitfixme".
  - Add support for monitoring guest behavior. See the hvm_memory_event*
    functions in hvm.c.
  - Change xl to support disk modes other than "phy:".
  - Hotplug support.
  - Migration of PVH guests.

Thanks for all the help,
Mukesh
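For reference, a minimal guest config sketch built around the phy: disk
above. The name, kernel path, memory, vcpu, and vif values are placeholders
for illustration only, and pvh=1 is assumed to be the xl config switch for
PVH mode with this series; adjust to your setup.

   > cat vm.cfg
   # Minimal PVH guest config sketch (paths, name, and sizes are
   # placeholders; pvh=1 is assumed to select PVH mode)
   name   = "pvh-guest"
   kernel = "/boot/vmlinuz-pvh"            # 64bit kernel built with PVH support
   memory = 1024
   vcpus  = 2
   pvh    = 1
   # loopback-backed disk, set up earlier with: losetup /dev/loop1 guest.img
   disk   = ['phy:/dev/loop1,xvda,w']
   vif    = ['bridge=xenbr0']              # optional; bridge name is a placeholder

The guest is then started with "xl create vm.cfg" as usual.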