
Re: [Xen-devel] [PATCH v8 05/32] libxc: introduce a domain loader for HVM guest firmware



On Wed, 2015-10-07 at 18:55 +0200, Roger Pau Monne wrote:
> Introduce a very simple (and dummy) domain loader to be used to load the
> firmware (hvmloader) into HVM guests. Since hvmloader is just a 32bit elf
> executable the loader is fairly simple.
> 
> Since the order in which loaders are tested cannot be arranged, prevent the
> current elfloader from trying to boot a kernel that doesn't contain Xen
> ELFNOTES.

I think it relies (probably implicitly, and probably not very well defined)
on the link order.

It is possible to control this (somewhat) because the __init used on the
register_loader functions turns into __attribute__((constructor)), which
takes an (optional) priority argument. You can also (I think) use
__attribute__((init_priority(2000))).
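
For concreteness, something like this (a sketch only; the hvm_loader name
is made up, and GCC reserves constructor priorities up to 100 for the
implementation):

    /* Lower constructor priorities run first, so the HVM firmware
     * loader would register (and hence be probed) before the generic
     * ELF loader, regardless of link order. */
    static void __attribute__((constructor(110))) register_loader_hvm(void)
    {
        xc_dom_register_loader(&hvm_loader);    /* hypothetical name */
    }

    static void __attribute__((constructor(120))) register_loader_elf(void)
    {
        xc_dom_register_loader(&elf_loader);
    }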

All of which is an enormous faff. Having xc_dom_register_loader() take a
priority, or putting one in struct xc_dom_loader, would be less so; a rough
sketch follows.
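
E.g. (illustration only, not the current interface; the priority field and
the sorted insert are assumptions, and first_loader is assumed to be the
existing list head in xc_dom_core.c):

    struct xc_dom_loader {
        const char *name;
        int priority;                  /* lower value = probed first */
        elf_negerrnoval (*probe) (struct xc_dom_image *dom);
        elf_negerrnoval (*parser) (struct xc_dom_image *dom);
        elf_errorstatus (*loader) (struct xc_dom_image *dom);
        struct xc_dom_loader *next;
    };

    void xc_dom_register_loader(struct xc_dom_loader *loader)
    {
        /* Keep the list sorted by priority instead of relying on
         * registration (i.e. constructor/link) order. */
        struct xc_dom_loader **pp = &first_loader;

        while ( *pp != NULL && (*pp)->priority <= loader->priority )
            pp = &(*pp)->next;
        loader->next = *pp;
        *pp = loader;
    }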

Apart from all that, I'm happy with the idea that the elfloader shouldn't be
selected to load things which it cannot work with.

However I'm unsure that the presence or absence of ELF notes is sufficient,
since there is at least the legacy SHT_NOTE section and the __xen_guest
section (see the tail of elf_xen_parse) to consider as well.

It may well be that the code you've got actually covers these cases and
it's just the commentary which doesn't. I think this is probably the case
(since elf_xen_parse calls elf_xen_note_check after handling all those
cases).

There is an additional subtlety with ARM not having notes, but I think your
checks will pass and therefore do the right thing.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> Cc: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> ---
> Changes since v7:
>  - Prevent elfloader from loading kernels that don't have Xen elfnotes.
>    Another solution is to force an order in the way loaders are tested,
>    but that's a more complex solution.
> 
> Changes since v3:
>  - s/__FUNCTION__/__func__/g
>  - Fix style errors in xc_dom_hvmloader.c.
>  - Add Andrew Cooper Reviewed-by.
>  - Add Wei Acked-by.
> ---
>  tools/libxc/Makefile           |   1 +
>  tools/libxc/include/xc_dom.h   |   8 ++
>  tools/libxc/xc_dom_elfloader.c |  22 ++-
>  tools/libxc/xc_dom_hvmloader.c | 313 +++++++++++++++++++++++++++++++++++++++++
>  xen/include/xen/libelf.h       |   1 +
>  5 files changed, 344 insertions(+), 1 deletion(-)
>  create mode 100644 tools/libxc/xc_dom_hvmloader.c
> 
> diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
> index a0f899b..baaadd6 100644
> --- a/tools/libxc/Makefile
> +++ b/tools/libxc/Makefile
> @@ -84,6 +84,7 @@ GUEST_SRCS-y                 += xc_dom_core.c xc_dom_boot.c
>  GUEST_SRCS-y                 += xc_dom_elfloader.c
>  GUEST_SRCS-$(CONFIG_X86)     += xc_dom_bzimageloader.c
>  GUEST_SRCS-$(CONFIG_X86)     += xc_dom_decompress_lz4.c
> +GUEST_SRCS-$(CONFIG_X86)     += xc_dom_hvmloader.c
>  GUEST_SRCS-$(CONFIG_ARM)     += xc_dom_armzimageloader.c
>  GUEST_SRCS-y                 += xc_dom_binloader.c
>  GUEST_SRCS-y                 += xc_dom_compat_linux.c
> diff --git a/tools/libxc/include/xc_dom.h b/tools/libxc/include/xc_dom.h
> index 30fa8c5..88c5c80 100644
> --- a/tools/libxc/include/xc_dom.h
> +++ b/tools/libxc/include/xc_dom.h
> @@ -14,6 +14,7 @@
>   */
>  
>  #include <xen/libelf/libelf.h>
> +#include <xenguest.h>
>  
>  #define INVALID_P2M_ENTRY   ((xen_pfn_t)-1)
>  
> @@ -183,6 +184,13 @@ struct xc_dom_image {
>          XC_DOM_PV_CONTAINER,
>          XC_DOM_HVM_CONTAINER,
>      } container_type;
> +
> +    /* HVM specific fields. */
> +    /* Extra ACPI tables passed to HVMLOADER */
> +    struct xc_hvm_firmware_module acpi_module;
> +
> +    /* Extra SMBIOS structures passed to HVMLOADER */
> +    struct xc_hvm_firmware_module smbios_module;
>  };
>  
>  /* --- pluggable kernel loader ------------------------------------- */
> diff --git a/tools/libxc/xc_dom_elfloader.c b/tools/libxc/xc_dom_elfloader.c
> index 82524c9..36a115e 100644
> --- a/tools/libxc/xc_dom_elfloader.c
> +++ b/tools/libxc/xc_dom_elfloader.c
> @@ -106,7 +106,27 @@ static elf_negerrnoval check_elf_kernel(struct xc_dom_image *dom, bool verbose)
>  
>  static elf_negerrnoval xc_dom_probe_elf_kernel(struct xc_dom_image *dom)
>  {
> -    return check_elf_kernel(dom, 0);
> +    struct elf_binary elf;
> +    int rc;
> +
> +    rc = check_elf_kernel(dom, 0);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    rc = elf_init(&elf, dom->kernel_blob, dom->kernel_size);
> +    if ( rc != 0 )
> +        return rc;
> +
> +    /*
> +     * We need to check that it contains Xen ELFNOTES,
> +     * or else we might be trying to load a plain ELF.
> +     */
> +    elf_parse_binary(&elf);
> +    rc = elf_xen_parse(&elf, &dom->parms);
> +    if ( rc != 0 )
> +        return rc;

Can you confirm that I'm right and there is no need to clean up or free this
elf object?

It's a bit of a shame to now have to parse the ELF twice. How abusive would
it be to declare that when a xc_dom ->probe hook returns success it is
entitled to rely on the contents of dom->loader_private being preserved
until ->parse is called? In turn this would imply that the first successful
probe would be used, rather than e.g. probing everything and then picking a
winner from the successful applicants.

That feels a bit abusive though.

We could give ->probe a void **private_out which is guaranteed to be passed
to the ->parse hook if this loader is selected, something like the sketch
below.
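
Roughly (a sketch of the hook change only, not the existing interface):

    /* probe gets somewhere to park state it has already computed;
     * parser gets it back if (and only if) this loader is selected,
     * so the ELF would not need to be parsed twice. */
    elf_negerrnoval (*probe) (struct xc_dom_image *dom, void **private_out);
    elf_negerrnoval (*parser) (struct xc_dom_image *dom, void *private);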

Perhaps I'm over thinking this.

I've only looked at this aspect, which is new in v8, since others were OK
with the rest in v7. (You were right to drop the acks, though.)

Ian.



 

