[Xen-ia64-devel] [PATCH 0/23] [RFC] VTi domain save/restore
Hi all.

I've been working on IA64 HVM domain save/restore. Although it isn't complete yet, I'm posting it for review and comment. Please note that so far I have tested the patches only with an idle Linux guest.

The main patches are 16, 21, and 23. Patches 17-19 should go to xen-devel; I'll send them there unless there are comments or objections. Patch 14 is for consistency. Patches 1-10 are random bug fixes and clean-ups which aren't necessarily related to HVM domain save/restore.

Tristan, could you elaborate on the rbs_nat and rbs_rnat members of struct vcpu_guest_context_regs? The comment above them is somewhat vague. I heavily rewrote the RBS-related part of getting/setting the vcpu context; I'm not sure about these fields, and I used only rbs_rnat.

I haven't forgotten the HVM domain dump-core fix. I need to implement HVM vcpu context save before fixing it.

Issues:

- VTi break fault handler
Patch 14 makes the VTi cpu context save consistent. Although I don't see any reason why pUStk/pKStk are set by the VTi break fault handler, please review. People will also want to test it more before it is committed.

- RSE (both PV and HVM domains)
The number of physical stacked general registers (RSE.N_STACKED_PHYS) isn't virtualized; the guest OS obtains it via the PAL_RSE_INFO call. If the CPUs a domain is saved on and restored on don't have the same number, what should happen? The SDM says only that the number is CPU implementation specific, so the easiest way is to refuse the restore. Any thoughts/comments?

- Guest OS specific optimization (both PV and HVM domains)
Xen/IA64 supports guest OS optimizations: the PV guest kernel, or the GFW of an HVM domain, calls the opt_feature hypercall. However, the optimization information is lost across save/restore; it should also be saved and restored. struct opt_feature seems stable enough to serve as a save/restore ABI. Or does someone else want to play with the opt_feature hypercall?

- Timers of virtual devices
The current virtual device implementations in xen (vacpi, viosapic, ...) use the xen internal timer.
However, they aren't stopped and restarted when the guest domain is paused/unpaused. Something should be done here. This isn't ia64 specific; x86 seems to have the same issue.

- Buffered PIO (HVM domain specific)
This isn't addressed yet. I'm going to attack it.

[PATCH 1/23] remove duplicate xc_get/set_hvm_param() definitions
[PATCH 2/23] minor clean up of sync_vcpu_execstate()
[PATCH 3/23] avoid set cr.ivt when context switch if possible
[PATCH 4/23] remove dead code
[PATCH 5/23] clean up: vmx_vcpu_set_rr()
[PATCH 6/23] fix vmx_emul_mov_from_dbr/ibr()
[PATCH 7/23] make vmx_setup_platfrom() return error.
[PATCH 8/23] clean up of ia64 hvm builder
[PATCH 9/23] minor clean up of hypervisor.h
[PATCH 10/23] xencomm: add resumedomain support.
[PATCH 11/23] vti domain save/restore: add show_stack() prototype
[PATCH 12/23] vti domain save/restore: fix stack unwinder
[PATCH 13/23] vti domain save/restore: add unwind directive to break fault handler
[PATCH 14/23] [RFC] vti domain save/restore: clean up vti break fault handler
[PATCH 15/23] vti domain save/restore: make vmx_vcpu_set_rr() accept non-current
[PATCH 16/23] [RFC] vti domain save/restore: implement arch_get/set_info_guest() more
[PATCH 17/23] vti domain save/restore: split hvm/support.h into arch generic/dependent part
[PATCH 18/23] vti domain save/restore: split save.h into arch generic/dependent part
[PATCH 19/23] vti domain save/restore: split save.c into arch generic/dependent part
[PATCH 20/23] vti domain save/restore: linux xencomm; add hvm_set/get_context support
[PATCH 21/23] [RFC] vti domain save/restore: implement hvm_save/load. work in progress.
[PATCH 22/23] vti domain save/restore: libxc: add set/get_hvmcontext support
[PATCH 23/23] [RFC] vti domain save/restore: libxc: implement vti domain save/restore

thanks,

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
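[Editorial note] The "refuse restore" option for the RSE issue above could be sketched roughly as below. This is purely illustrative, not actual Xen code: the struct field, function names, and the stubbed PAL_RSE_INFO value are all assumptions made for this example.

```c
/* Hypothetical sketch: refuse a restore when the saved
 * RSE.N_STACKED_PHYS (obtained via PAL_RSE_INFO at save time)
 * differs from the restore-target CPU's value.
 * All names here are invented for illustration. */
#include <stdio.h>

struct hvm_rse_record {
    unsigned long n_stacked_phys;   /* recorded in the save image (assumed field) */
};

/* On real hardware this would come from a PAL_RSE_INFO call;
 * stubbed with a fixed value here. */
static unsigned long current_n_stacked_phys(void)
{
    return 96;   /* e.g. 96 physical stacked GRs on the target cpu */
}

/* Return 0 if restore may proceed, -1 to refuse it. */
static int check_rse_compat(const struct hvm_rse_record *rec)
{
    unsigned long cur = current_n_stacked_phys();

    if (rec->n_stacked_phys != cur) {
        fprintf(stderr,
                "restore refused: RSE.N_STACKED_PHYS mismatch "
                "(saved %lu, this cpu %lu)\n",
                rec->n_stacked_phys, cur);
        return -1;
    }
    return 0;
}
```

Since the SDM leaves N_STACKED_PHYS implementation specific, a strict equality check like this is the conservative choice; anything smarter would need the count to be virtualized in the PAL_RSE_INFO path.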