Re: [Xen-devel] [PATCH 00/04] Kexec / Kdump: Release 20061122 (xen-unstable-12502)
Hi Ian, [Added Dave Anderson to the CC list]

Thanks for the comments!

On 11/23/06, Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx> wrote:
> Hi Magnus,
>
> On Wed, 2006-11-22 at 16:10 +0900, Magnus Damm wrote:
> > Here comes a new version of the Kexec / Kdump patches for x86 Xen.
> > Not much has changed since the last release, just a minor fix for
> > kdump on x86_64.
> >
> > Patches to make kexec-tools Xen aware have recently been sent to
> > the fastboot list. These patches will be merged into the
> > kexec-tools-testing tree in the near future.
>
> I've taken these patches out for a spin. They look pretty good. I've
> got a couple of comments.
>
> Firstly, the patches break the native kernel compile. You add usages
> of kexec_page_to_pfn() and friends to kernel/kexec.c but only include
> kexec-xen.h #ifdef CONFIG_XEN. I fixed it by removing the ifdef, but
> the preferred way would be to move the native definitions of kexec_*
> into include/asm-i386/kexec.h and make a Xen-specific copy in
> include/asm-i386/mach-xen/asm/kexec.h with the Xen versions patched
> in. Alternatively you could just merge kexec-xen.h into kexec.h.

Sorry about the breakage. I'd like to stay away from duplicating
files, so I think merging kexec-xen.h into kexec.h sounds good. I plan
to add the common non-Xen version of the page macros to
include/linux/kexec.h and the Xen-specific macros to the
per-architecture include/asm/kexec.h. I'll include a fix in the
patchset that I'll send later this week.
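Roughly, the merged header could look like the sketch below. This is
illustrative only -- the exact macro bodies may differ from the actual
patch -- but the idea is that native kexec works in real physical
frames while the Xen side has to fold in a pseudo-physical <-> machine
translation:

/* Sketch of a merged include/asm-i386/kexec.h -- illustrative only. */
#ifdef CONFIG_XEN
/* Xen: the kexec image is assembled in terms of machine frames, so
 * each wrapper folds in a pseudo-physical <-> machine translation. */
#define kexec_page_to_pfn(page)   pfn_to_mfn(page_to_pfn(page))
#define kexec_pfn_to_page(pfn)    pfn_to_page(mfn_to_pfn(pfn))
#define kexec_virt_to_phys(addr)  virt_to_machine(addr)
#define kexec_phys_to_virt(addr)  phys_to_virt(machine_to_phys(addr))
#else
/* Native: pseudo-physical and physical coincide, so the wrappers map
 * straight onto the generic page/pfn helpers. */
#define kexec_page_to_pfn(page)   page_to_pfn(page)
#define kexec_pfn_to_page(pfn)    pfn_to_page(pfn)
#define kexec_virt_to_phys(addr)  virt_to_phys(addr)
#define kexec_phys_to_virt(addr)  phys_to_virt(addr)
#endif

That way kernel/kexec.c can use the kexec_* wrappers unconditionally
and native builds no longer depend on kexec-xen.h being pulled in.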
> My second comment is WRT the ELF notes which you add to the kdump.
> You include a standard PRSTATUS core ELF note per physical CPU, but
> there is some useful physical processor state which is not included
> in this structure -- most importantly CR3.

This structure is used both for regular Linux kdumps and core dumps,
so it felt natural to extend it to the Xen case as well. I do agree
with you, though, that it is strange that only certain registers are
saved while many system-level processor registers go unsaved.

I've discussed this with Dave Anderson, and he needed CR3 to be able
to locate certain mapping tables used for converting between
pseudo-physical and machine addresses.

http://www.mail-archive.com/crash-utility@xxxxxxxxxx/msg00201.html

> Since the amount of physical CPU state which is not already included
> in PRSTATUS is pretty small, I think you could just include the whole
> lot in a Xen-specific note per PCPU. I'd basically include anything
> which is in a Xen panic/oops message but not in PRSTATUS, that's
> CR[0,2,3,4]. Including the debug registers might be handy too.
>
> If there was some standard extended PRSTATUS note format for these
> extra things we could use, that would be even better, but I don't
> know of one (but then I don't really know about these things ;-)).

I'm unfortunately not aware of any standard format either. The current
Xen-specific note is only written once, and it is used to give
system-wide information, i.e. not per-cpu information. So maybe it
makes sense to create a new per-cpu note for system-level register
information?
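To make that concrete, here is one hypothetical shape such a per-PCPU
note could take. The struct name and note type are made up for
illustration, not a defined ABI, and append_elf_note() just follows
the standard ELF note layout (Nhdr, then name and descriptor, each
padded to a 4-byte boundary):

#include <elf.h>
#include <string.h>

#define NT_XEN_CPU_REGS 0x100              /* made-up note type */

/* Hypothetical per-physical-CPU register note: the control and debug
 * registers that appear in a Xen panic message but not in PRSTATUS. */
typedef struct {
	unsigned long cr0, cr2, cr3, cr4;  /* control registers */
	unsigned long dr0, dr1, dr2, dr3;  /* breakpoint addresses */
	unsigned long dr6, dr7;            /* debug status/control */
} xen_cpu_regs_t;

/* Append one ELF note to a buffer; returns the new end of buffer. */
static char *append_elf_note(char *buf, const char *name, int type,
                             const void *desc, size_t descsz)
{
	Elf32_Nhdr *note = (Elf32_Nhdr *)buf;
	size_t namesz = strlen(name) + 1;

	note->n_namesz = namesz;
	note->n_descsz = descsz;
	note->n_type = type;
	buf += sizeof(*note);
	memcpy(buf, name, namesz);
	buf += (namesz + 3) & ~3;          /* pad name to 4 bytes */
	memcpy(buf, desc, descsz);
	buf += (descsz + 3) & ~3;          /* pad desc to 4 bytes */
	return buf;
}

One such note would then be emitted per physical CPU, e.g.
append_elf_note(buf, "Xen", NT_XEN_CPU_REGS, &regs, sizeof(regs)).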
> You also store dom0's pfn_to_mfn_frame_list_list in a Xen-specific
> note. What is that used for? Given a Xen symbol table it should be
> possible to locate the shared info for any domain via the Xen
> mappings and hence find the p2m table that way. m2p is at a known
> virtual address already.

This is because Dave wanted to be able to parse dom0 kernels easily.
I'm not sure if that is still the case with the new xencrash code?
Dave, are you listening?

I thought that pointing out pfn_to_mfn_frame_list_list for dom0 was a
better, more portable way to provide Dave with this info than just
handing out CR3.
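The appeal is that a dump analyzer can translate dom0 pseudo-physical
frames from the raw dump alone, without chasing CR3. A rough sketch of
the three-level walk, where read_machine_page() stands in for whatever
routine the analyzer uses to pull one machine frame out of the dump
(hypothetical helper, error handling elided):

#define PAGE_SIZE 4096UL
#define FPP (PAGE_SIZE / sizeof(unsigned long))   /* frames per page */

/* Hypothetical: read machine frame 'mfn' from the dump into 'buf'. */
extern void read_machine_page(unsigned long mfn, unsigned long *buf);

/* pfn_to_mfn_frame_list_list roots a three-level structure: a page of
 * MFNs of frame-list pages, each of which holds MFNs of the p2m table
 * pages, which finally map pfn -> mfn. */
unsigned long dom0_pfn_to_mfn(unsigned long frame_list_list_mfn,
                              unsigned long pfn)
{
	unsigned long frame_list_list[FPP], frame_list[FPP], p2m[FPP];

	read_machine_page(frame_list_list_mfn, frame_list_list);
	read_machine_page(frame_list_list[pfn / (FPP * FPP)], frame_list);
	read_machine_page(frame_list[(pfn / FPP) % FPP], p2m);

	return p2m[pfn % FPP];
}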
> The contents of the h/v taint bitmap would be another interesting
> thing to include in the Xen note.

This sounds like system-wide information, not per-cpu, right?

Thanks,

/ magnus

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel