
Re: [Xen-devel] [PATCH v2] x86/dom0: improve PVH initrd and metadata placement


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 4 Mar 2020 11:49:03 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx, Wei Liu <wl@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • Delivery-date: Wed, 04 Mar 2020 10:49:18 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Mar 03, 2020 at 06:47:35PM +0000, Andrew Cooper wrote:
> On 03/03/2020 11:52, Roger Pau Monne wrote:
> > Don't assume there's going to be enough space at the tail of the
> > loaded kernel and instead try to find a suitable memory area where the
> > initrd and metadata can be loaded.
> >
> > Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> 
> I can confirm that this fixes the "failed to boot PVH" on my Rome system.
> 
> Tested-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

Thanks!

> We've still got the excessive-time-to-construct issues to look at, but
> this definitely brings things to a better position.

Well, PV is always going to be faster to construct, since you only
need to allocate memory and create the initial page tables that cover
the kernel, the metadata and optionally the initrd IIRC.

On PVH we need to populate the full p2m, so I think it's safe to say
that PVH build time is always going to be worse than PV. That doesn't
mean we can't make it faster.
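To put that in rough code terms, the expensive part of the PVH build is a
loop of this shape (an illustrative sketch only, simplified from what
pvh_populate_p2m() has to do; populate_one_page() is a made-up stand-in
for the real allocation and physmap plumbing):

    unsigned int i;

    /*
     * Sketch: every page of dom0 RAM needs a p2m entry, so build time
     * scales with dom0 memory, unlike PV's initial page tables, which
     * only cover the kernel, metadata and initrd.
     */
    for ( i = 0; i < d->arch.nr_e820; i++ )
    {
        unsigned long pfn;
        unsigned long start = PFN_UP(d->arch.e820[i].addr);
        unsigned long end = PFN_DOWN(d->arch.e820[i].addr +
                                     d->arch.e820[i].size);

        if ( d->arch.e820[i].type != E820_RAM )
            continue;

        for ( pfn = start; pfn < end; pfn++ )
            populate_one_page(d, pfn); /* hypothetical helper */
    }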
I also have an AMD box that I can play with; how much memory are you
assigning to dom0?
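For anyone trying to reproduce this, dom0's memory is set with the
dom0_mem Xen command-line option, e.g. something like:

    dom0_mem=4096M,max:4096M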

> > diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> > index eded87eaf5..33520ec1bc 100644
> > --- a/xen/arch/x86/hvm/dom0_build.c
> > +++ b/xen/arch/x86/hvm/dom0_build.c
> > @@ -490,6 +490,45 @@ static int __init pvh_populate_p2m(struct domain *d)
> >  #undef MB1_PAGES
> >  }
> >  
> > +static paddr_t find_memory(const struct domain *d, const struct elf_binary *elf,
> > +                           size_t size)
> > +{
> > +    paddr_t kernel_start = (paddr_t)elf->dest_base;
> > +    paddr_t kernel_end = (paddr_t)(elf->dest_base + elf->dest_size);
> > +    unsigned int i;
> > +
> > +    for ( i = 0; i < d->arch.nr_e820; i++ )
> > +    {
> > +        paddr_t start, end = d->arch.e820[i].addr + d->arch.e820[i].size;
> > +
> > +        /* Don't use memory below 1MB, as it could overwrite the BDA/EBDA. */
> 
> The BDA is in mfn 0 so is special for other reasons*.  The EBDA and
> IBFT are the problem data structures.

Sure. Sorry, I didn't add that to the comment; I'll mention the EBDA
and IBFT there.
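For anyone reading along without the full patch, the quoted hunk cuts
off right after that comment; the rest of the loop body can be sketched
along these lines (illustrative only, not necessarily the exact code
that was committed):

    if ( end <= MB(1) || d->arch.e820[i].type != E820_RAM )
        continue;

    start = MAX(d->arch.e820[i].addr, (paddr_t)MB(1));

    /*
     * If the kernel was loaded inside this region, keep whichever
     * chunk around it is larger (simplified: assumes the kernel is
     * fully contained within the region).
     */
    if ( start <= kernel_start && end >= kernel_end )
    {
        if ( kernel_start - start > end - kernel_end )
            end = kernel_start;
        else
            start = kernel_end;
    }

    if ( end - start >= size )
        return start;

with the function returning something like INVALID_PADDR when no region
is large enough.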

> ~Andrew
> 
> [*] Thinking about it, how should a PVH hardware domain reconcile its
> paravirtualised boot with finding itself on a BIOS or EFI system...

I guess the same applies to PV, which also boots using a PV entry path
but still has access to the firmware.

I have to admit I never looked closely at how Linux does that. For
FreeBSD it's fairly easy: at least when booting from BIOS, the kernel
won't try to make any calls into the BIOS, and instead expects the data
it requires to be provided by the loader as part of the metadata blob,
together with the modules &c.
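As a concrete example of that, a FreeBSD/amd64 kernel fishes the BIOS
memory map out of the loader-provided metadata rather than making the
e820 call itself, roughly like this (a simplified sketch of the
preload_* interface; fetch_loader_smap() is a made-up wrapper, not the
verbatim machdep.c code):

    #include <sys/param.h>
    #include <sys/linker.h>        /* preload_search_by_type() & co. */
    #include <machine/metadata.h>  /* MODINFO_METADATA, MODINFOMD_SMAP */
    #include <machine/pc/bios.h>   /* struct bios_smap */

    static struct bios_smap *
    fetch_loader_smap(void)
    {
        caddr_t kmdp;

        /* Locate the preloaded kernel image in the loader's metadata. */
        kmdp = preload_search_by_type("elf kernel");
        if (kmdp == NULL)
            kmdp = preload_search_by_type("elf64 kernel");
        if (kmdp == NULL)
            return (NULL);

        /* The loader already made the BIOS call and stashed the map. */
        return ((struct bios_smap *)preload_search_info(kmdp,
            MODINFO_METADATA | MODINFOMD_SMAP));
    }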

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

