Re: [Xen-devel] [PATCH] kexec-tools: Read always one vmcoreinfo file
On Monday, 23 July 2012 14:56:07, Petr Tesarik wrote:
> On Thursday, 5 July 2012 14:16:35, Daniel Kiper wrote:
> > The vmcoreinfo file can exist under /sys/kernel (valid on bare metal
> > only) and/or under /sys/hypervisor (valid when Xen dom0 is running).
> > Read only one of them. This means that only one PT_NOTE will always be
> > created. Remove the extra code for the second PT_NOTE creation.
>
> Hi Daniel,
>
> are you absolutely sure this is the right thing to do? IIUC these two
> VMCOREINFO notes are very different. The one from /sys/kernel/vmcoreinfo
> describes the Dom0 kernel (type 'VMCOREINFO'), while the one from
> /sys/hypervisor describes the Xen hypervisor (type 'VMCOREINFO_XEN'). If
> you keep only the hypervisor note, then e.g. makedumpfile won't be able to
> use a dump level greater than 1, nor will it be able to extract the log
> buffer.
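[The distinction is visible directly in the ELF notes: consumers tell them apart by the name field of the note header. A small walker over a PT_NOTE segment's contents, as an illustrative sketch rather than actual makedumpfile code; the note names "VMCOREINFO" and "VMCOREINFO_XEN" match the attached samples:

#include <elf.h>
#include <stdio.h>
#include <string.h>

/* Walk a buffer holding the contents of a PT_NOTE segment and report
 * which vmcoreinfo notes it contains. Name and descriptor are each
 * padded to a 4-byte boundary. */
static void walk_notes(const char *buf, size_t size)
{
	size_t off = 0;

	while (off + sizeof(Elf64_Nhdr) <= size) {
		const Elf64_Nhdr *nhdr = (const Elf64_Nhdr *)(buf + off);
		const char *name = buf + off + sizeof(*nhdr);

		if (nhdr->n_namesz == sizeof("VMCOREINFO_XEN") &&
		    !strcmp(name, "VMCOREINFO_XEN"))
			printf("hypervisor note, %u desc bytes\n",
			       nhdr->n_descsz);
		else if (nhdr->n_namesz == sizeof("VMCOREINFO") &&
			 !strcmp(name, "VMCOREINFO"))
			printf("Dom0 kernel note, %u desc bytes\n",
			       nhdr->n_descsz);

		off += sizeof(*nhdr) + ((nhdr->n_namesz + 3) & ~3UL)
				     + ((nhdr->n_descsz + 3) & ~3UL);
	}
}
]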
I've just verified this, and I'm confident we have to keep both notes in the
dump file. Simon, please revert Daniel's patch to avoid regressions.
I'm attaching a sample VMCOREINFO_XEN and VMCOREINFO to demonstrate the
difference. Note that the VMCOREINFO_XEN note is actually too big, because Xen
doesn't bother to maintain the correct note size in the note header, so it
always spans a complete page minus sizeof(Elf64_Nhdr)...
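[The mismatch is easy to quantify: the size a note should advertise follows from its header, while the span Xen reports is fixed. A back-of-the-envelope sketch; 4096 as PAGE_SIZE is an assumption for illustration:

#include <elf.h>
#include <stddef.h>

#define ALIGN4(x)	(((x) + 3) & ~(size_t)3)

/* Size the PT_NOTE entry should cover, per the ELF note format:
 * header plus 4-byte-padded name and descriptor. */
static size_t note_size(const Elf64_Nhdr *nhdr)
{
	return sizeof(*nhdr) + ALIGN4(nhdr->n_namesz) + ALIGN4(nhdr->n_descsz);
}

/* ...versus the span Xen actually reports for its vmcoreinfo note:
 * a whole page minus one note header, regardless of content. */
static size_t xen_reported_size(void)
{
	return 4096 - sizeof(Elf64_Nhdr);
}
]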
Regards,
Petr Tesarik
> Petr Tesarik
> SUSE Linux
>
> > Signed-off-by: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
> > ---
> >
> > kexec/crashdump-elf.c | 33 +++++++--------------------------
> > 1 files changed, 7 insertions(+), 26 deletions(-)
> >
> > diff --git a/kexec/crashdump-elf.c b/kexec/crashdump-elf.c
> > index 8d82db9..ec66548 100644
> > --- a/kexec/crashdump-elf.c
> > +++ b/kexec/crashdump-elf.c
> > @@ -40,8 +40,6 @@ int FUNC(struct kexec_info *info,
> >  	uint64_t notes_addr, notes_len;
> >  	uint64_t vmcoreinfo_addr, vmcoreinfo_len;
> >  	int has_vmcoreinfo = 0;
> > -	uint64_t vmcoreinfo_addr_xen, vmcoreinfo_len_xen;
> > -	int has_vmcoreinfo_xen = 0;
> >  	int (*get_note_info)(int cpu, uint64_t *addr, uint64_t *len);
> > 
> >  	if (xen_present())
> > @@ -53,16 +51,14 @@ int FUNC(struct kexec_info *info,
> >  		return -1;
> >  	}
> > 
> > -	if (get_kernel_vmcoreinfo(&vmcoreinfo_addr, &vmcoreinfo_len) == 0) {
> > -		has_vmcoreinfo = 1;
> > -	}
> > -
> > -	if (xen_present() &&
> > -	    get_xen_vmcoreinfo(&vmcoreinfo_addr_xen, &vmcoreinfo_len_xen) == 0) {
> > -		has_vmcoreinfo_xen = 1;
> > -	}
> > +	if (xen_present()) {
> > +		if (!get_xen_vmcoreinfo(&vmcoreinfo_addr, &vmcoreinfo_len))
> > +			has_vmcoreinfo = 1;
> > +	} else
> > +		if (!get_kernel_vmcoreinfo(&vmcoreinfo_addr, &vmcoreinfo_len))
> > +			has_vmcoreinfo = 1;
> > 
> > -	sz = sizeof(EHDR) + (nr_cpus + has_vmcoreinfo + has_vmcoreinfo_xen) * sizeof(PHDR) +
> > +	sz = sizeof(EHDR) + (nr_cpus + has_vmcoreinfo) * sizeof(PHDR) +
> >  		ranges * sizeof(PHDR);
> > 
> >  	/*
> > @@ -179,21 +175,6 @@ int FUNC(struct kexec_info *info,
> >  		dbgprintf_phdr("vmcoreinfo header", phdr);
> >  	}
> > 
> > -	if (has_vmcoreinfo_xen) {
> > -		phdr = (PHDR *) bufp;
> > -		bufp += sizeof(PHDR);
> > -		phdr->p_type = PT_NOTE;
> > -		phdr->p_flags = 0;
> > -		phdr->p_offset = phdr->p_paddr = vmcoreinfo_addr_xen;
> > -		phdr->p_vaddr = 0;
> > -		phdr->p_filesz = phdr->p_memsz = vmcoreinfo_len_xen;
> > -		/* Do we need any alignment of segments? */
> > -		phdr->p_align = 0;
> > -
> > -		(elf->e_phnum)++;
> > -		dbgprintf_phdr("vmcoreinfo_xen header", phdr);
> > -	}
> > -
> >  	/* Setup an PT_LOAD type program header for the region where
> >  	 * Kernel is mapped if elf_info->kern_size is non-zero.
> >  	 */
Attachment: VMCOREINFO_XEN
Attachment: VMCOREINFO