
Re: [Xen-devel] Xen, Linux and EFI.



On Thu, Jul 12, 2012 at 11:39:51AM -0400, Shriram Rajagopalan wrote:
> On Thu, Jul 12, 2012 at 9:31 AM, Konrad Rzeszutek Wilk <
> konrad.wilk@xxxxxxxxxx> wrote:
> 
> > >
> > > I spent a week trying to get Xen to boot with grub2.efi (Ubuntu 12.04).
> > > I ended up getting "Not enough memory to relocate domain 0".
> > >
> > > So I presume that the dom0 kernel EFI support needs to be done for both
> > > cases (grub2.efi and xen.efi)?
> > >
> > >
> > > Additional info:
> > >  Hardware: IBM System X server.
> > >
> > >  It appeared that when booting xen under grub2.efi, xen was picking up an
> > >  e-801 map
> >
> > Huh. E801 is from the ancient days. I am a bit surprised that grub2.efi
> > would manufacture such an ancient map.
> >
> > So just to make sure I am not confused - you ran GRUB2 with the normal
> > hypervisor and Linux kernel. What did your serial output look like?
> >
> >
> Yes.
> I don't have the serial output now. I wiped out the server and installed
> SLES 11 SP2 on it. :(
> 
> 
> > >  instead of the e-820 map that was on the system. I forced the xen code
> > >  to a multiboot e-820 map instead of the native one (based on a forum post
> > >  I saw). That didn't help much.
> >
> > What did the memory map look like when you booted with GRUB2.efi + Linux?
> >
> 
> Sorry. I don't have that info now, since I uninstalled Ubuntu from it.
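
For next time: the firmware-provided map also survives on the running box
under /sys/firmware/memmap (when CONFIG_FIRMWARE_MEMMAP is enabled), so it
can be pulled after the fact and compared against what Xen printed. Something
along these lines would dump it - just a sketch, the program is mine and the
entry layout follows Documentation/ABI/testing/sysfs-firmware-memmap:

/* Sketch: dump the firmware memory map from /sys/firmware/memmap.
 * Needs CONFIG_FIRMWARE_MEMMAP; illustrative only, not from any tree. */
#include <stdio.h>
#include <string.h>

static int read_attr(int idx, const char *name, char *buf, int len)
{
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path), "/sys/firmware/memmap/%d/%s", idx, name);
    f = fopen(path, "r");
    if (!f)
        return -1;
    if (!fgets(buf, len, f))
        buf[0] = '\0';
    fclose(f);
    buf[strcspn(buf, "\n")] = '\0';       /* strip the trailing newline */
    return 0;
}

int main(void)
{
    char start[32] = "", end[32] = "", type[64] = "";
    int i;

    /* Entries are numbered 0, 1, 2, ... until one is missing. */
    for (i = 0; read_attr(i, "start", start, sizeof(start)) == 0; i++) {
        read_attr(i, "end", end, sizeof(end));
        read_attr(i, "type", type, sizeof(type));
        printf("%s - %s  %s\n", start, end, type);
    }
    return 0;
}
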
> 
> 
> 
> > Was the memory map the same or different? I am trying to figure out if
> > the issue is that Xen needs extra code to deal with a GRUB2-manufactured
> > E801 map - and whether the bare-metal kernel already has such logic.
> >
> >
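
Just to make the E801-vs-E820 distinction concrete: with Multiboot v1 the
loader can hand over either the crude mem_lower/mem_upper pair (flags bit 0,
basically what E801 gives you) or a full E820-style map (flags bit 6), and
the kernel or hypervisor has to pick one. Roughly like this - a sketch
following the Multiboot spec, the function name is mine and this is not the
actual Xen setup code:

/* Sketch: choosing between E801-style limits and an E820-style map
 * from a Multiboot v1 info structure.  Layout per the Multiboot spec. */
#include <stdint.h>
#include <stdio.h>

#define MBI_MEMLIMITS  (1u << 0)   /* mem_lower/mem_upper valid (E801-ish) */
#define MBI_MEMMAP     (1u << 6)   /* mmap_addr/mmap_length valid (E820)   */

struct multiboot_info {
    uint32_t flags;
    uint32_t mem_lower;            /* KB of RAM below 1MB */
    uint32_t mem_upper;            /* KB of RAM above 1MB */
    uint32_t boot_device;
    uint32_t cmdline;
    uint32_t mods_count, mods_addr;
    uint32_t syms[4];
    uint32_t mmap_length;
    uint32_t mmap_addr;
};

struct mmap_entry {                /* one E820-style record */
    uint32_t size;                 /* entry size, not counting this field */
    uint64_t addr;
    uint64_t len;
    uint32_t type;                 /* 1 == usable RAM */
} __attribute__((packed));

void scan_memory_info(const struct multiboot_info *mbi)
{
    if (mbi->flags & MBI_MEMMAP) {
        /* Full E820-style map: walk the variable-sized entries. */
        uintptr_t p = mbi->mmap_addr;
        uintptr_t end = mbi->mmap_addr + mbi->mmap_length;

        while (p < end) {
            const struct mmap_entry *e = (const struct mmap_entry *)p;

            printf("%016llx - %016llx type %u\n",
                   (unsigned long long)e->addr,
                   (unsigned long long)(e->addr + e->len), e->type);
            p += e->size + sizeof(e->size);
        }
    } else if (mbi->flags & MBI_MEMLIMITS) {
        /* Only the crude E801-style limits: no reserved/ACPI breakdown,
         * and typically nothing above the first memory hole is reported. */
        printf("lowmem %u KB, highmem %u KB\n",
               mbi->mem_lower, mbi->mem_upper);
    }
}

Which of those two paths the GRUB2 EFI build actually fills in is exactly the
kind of thing that would decide what Xen ends up seeing here.
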
> I do have the memory map from when I forced xen to follow the multiboot-e820
> logic. I posted about this issue on xen-users:
> http://lists.xen.org/archives/html/xen-users/2012-07/msg00034.html
> 
> The hack (patch) is available in the above post. The e820 map that xen saw
> was:
> (XEN) Multiboot-e820 RAM map:
> (XEN)  0000000000000000 - 000000000006c000 (usable)
> (XEN)  000000000006c000 - 000000000006d000 (ACPI NVS)
> (XEN)  000000000006d000 - 000000000009f000 (usable)
> (XEN)  000000000009f000 - 00000000000a0000 (ACPI NVS)
> (XEN)  0000000000100000 - 000000007c11d000 (usable)
> (XEN)  000000007c11d000 - 000000007ec92000 (reserved)
> (XEN)  000000007ec92000 - 000000007f7bf000 (ACPI NVS)
> (XEN)  000000007f7bf000 - 000000007f7ff000 (ACPI data)
> (XEN)  000000007f7ff000 - 000000007f800000 (usable)
> (XEN)  0000000080000000 - 0000000090000000 (reserved)
> (XEN)  00000000fed1c000 - 00000000fed20000 (reserved)
> (XEN)  00000000ff800000 - 0000000100000000 (reserved)
> (XEN)  0000000100000000 - 0000001080000000 (usable)
> (XEN) ACPI Error (tbxfroot-0218): A valid RSDP was not found [20070126]
> 
> The machine booted. But xen saw only one CPU out of 16.
> 
> FYI, I checked the memory map that xen sees now (the xen 4.1.2 SLES11 SP2
> version) and it's the same, except the ACPI errors aren't there. xen
> recognizes all 16 CPUs.

Oh, it might be that the SLES version of the hypervisor has the patches
to deal with EFI, while the one in Ubuntu (or Fedora) does not!

If you were to boot Ubuntu with a SLES hypervisor (yeah, let's
go wild here), it might have worked - at least booted. The
toolstack would probably be very confused.
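
That "A valid RSDP was not found" line is probably the tell-tale: on a
machine booted via EFI the RSDP usually is not sitting in the legacy
EBDA/BIOS area that the old-style scan looks at - it gets handed over
through the EFI system table's configuration table instead. No RSDP means
no MADT, which would explain the one-CPU-out-of-16 behavior. Roughly what
the EFI-aware code has to do (a sketch using the spec GUID; the trimmed
structs and names are mine, not the actual Xen implementation):

/* Sketch: find the ACPI RSDP via the EFI configuration table instead of
 * the legacy EBDA/BIOS-area scan.  GUID per the UEFI/ACPI specs. */
#include <stdint.h>
#include <string.h>

typedef struct {
    uint32_t a;
    uint16_t b, c;
    uint8_t  d[8];
} efi_guid_t;

/* ACPI 2.0+ table GUID: 8868e871-e4f1-11d3-bc22-0080c73c8881 */
static const efi_guid_t ACPI_20_TABLE_GUID = {
    0x8868e871, 0xe4f1, 0x11d3,
    { 0xbc, 0x22, 0x00, 0x80, 0xc7, 0x3c, 0x88, 0x81 }
};

struct efi_config_table {
    efi_guid_t guid;
    void      *table;              /* pointer to the vendor table */
};

struct efi_system_table {
    /* ...header and firmware/console fields omitted for brevity... */
    uint64_t                 nr_tables;
    struct efi_config_table *tables;
};

/* Walk the configuration table; returns the RSDP ("RSD PTR ") or NULL. */
void *find_rsdp(const struct efi_system_table *systab)
{
    uint64_t i;

    for (i = 0; i < systab->nr_tables; i++)
        if (!memcmp(&systab->tables[i].guid, &ACPI_20_TABLE_GUID,
                    sizeof(efi_guid_t)))
            return systab->tables[i].table;
    return NULL;
}

A build without that plumbing falls back to the legacy scan, finds nothing,
and ACPI - and with it the CPU enumeration - is gone, which would also fit
why the EFI-patched SLES build sees all 16 CPUs.
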

> 
> 
> 
> > >
> > > So I ended up booting with a SLES kernel. Not even sure if openSUSE 12.1
> > > will work.
> >
> > With what hypervisor? Same one you used when you tried GRUB2 with Xen
> > earlier?
> > >
> >
> 
> As stated above: SLES11 SP2. The hypervisor is the one that comes packaged
> in the distro, xen 4.1.2_14-0.5.5, with the SLES11 kernel 3.0.13-0.27-xen.

Right, so Jan (SuSE) has been busy working on EFI, so I bet he back-ported
the EFI patches into the SLES Xen 4.1. They might not be in Ubuntu's Xen 4.1 :-(

Though they probably are in xen-unstable.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

