
Re: [Xen-devel] [RFC PATCH 1/2] xen: pass kernel initrd to qemu



On Thu, 2014-05-29 at 11:23 +0800, Chunyan Liu wrote:
> [support xen HVM direct kernel boot]

What is this?

> xen side patch: support 'kernel', 'ramdisk', 'root', 'extra'
> in HVM config file, parse config file, pass -kernel, -initrd
> and -append parameters to qemu.

Is this a completely new feature or is this adding parity with a xend
feature?

> [config example]
> kernel="/mnt/vmlinuz-3.0.13-0.27-default"
> ramdisk="/mnt/initrd-3.0.13-0.27-default"
> root="/dev/hda2"
> extra="console=tty0 console=ttyS0"
> disk=[ 'file:/mnt/images/bjz_04_sles11_sp2/disk0.raw,hda,w', ]
> 
> Signed-off-by: Chunyan Liu <cyliu@xxxxxxxx>
> ---
>  tools/libxl/libxl_dm.c      | 20 ++++++++++++++++++++
>  tools/libxl/libxl_types.idl |  3 +++
>  tools/libxl/xl_cmdimpl.c    | 33 +++++++++++++++++++++++++++------
>  3 files changed, 50 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> index 51ab2bf..133bb56 100644
> --- a/tools/libxl/libxl_dm.c
> +++ b/tools/libxl/libxl_dm.c
> @@ -196,6 +196,16 @@ static char ** libxl__build_device_model_args_old(libxl__gc *gc,

Does this work with old qemu+rombios?

I question whether we should be adding this sort of new feature here
anyway.

>          int nr_set_cpus = 0;
>          char *s;
>  
> +        if (b_info->u.hvm.kernel) {
> +            flexarray_vappend(dm_args, "-kernel", b_info->u.hvm.kernel, NULL);
> +        }

libxl coding style would be to omit the {} around a single-statement if body.
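In that style the check would read something like the fragment below (a sketch of the same quoted hunk, not a tested change):

```c
/* Single-statement body, so no braces per libxl style. */
if (b_info->u.hvm.kernel)
    flexarray_vappend(dm_args, "-kernel", b_info->u.hvm.kernel, NULL);
```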

> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 52f1aa9..b8b973b 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -336,6 +336,9 @@ libxl_domain_build_info = Struct("domain_build_info",[
>      ("event_channels",   uint32),
>      ("u", KeyedUnion(None, libxl_domain_type, "type",
>                  [("hvm", Struct(None, [("firmware",         string),
> +                                       ("kernel", string),
> +                                       ("cmdline", string),
> +                                       ("ramdisk", string),

Alignment.
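That is, the new fields would line up with the neighbouring entries, roughly (whitespace only, no semantic change):

```
                [("hvm", Struct(None, [("firmware",         string),
                                       ("kernel",           string),
                                       ("cmdline",          string),
                                       ("ramdisk",          string),
```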

>                                         ("bios",             libxl_bios_type),
>                                         ("pae",              libxl_defbool),
>                                         ("apic",             libxl_defbool),
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 5195914..efd2474 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c

> +        xlu_cfg_get_string (config, "root", &root, 0);
> +        xlu_cfg_get_string (config, "extra", &extra, 0);
> +
> +        if (root) {
> +            if (asprintf(&cmdline, "root=%s %s", root, extra) == -1)
> +                cmdline = NULL;
> +        } else {
> +            cmdline = strdup(extra);
> +        }
> +
> +        if ((root || extra) && !cmdline) {
> +            fprintf(stderr, "Failed to allocate memory for cmdline\n");
> +            exit(1);
> +        }

This all seems to be duplicating pv code, please refactor into a common
helper or something.

> +
> +        b_info->u.hvm.cmdline = cmdline;
> +        xlu_cfg_replace_string (config, "ramdisk", &b_info->u.hvm.ramdisk, 0);
>  
>          xlu_cfg_replace_string (config, "firmware_override",
>                                  &b_info->u.hvm.firmware, 0);
> @@ -1061,9 +1085,6 @@ static void parse_config_data(const char *config_source,
>          break;
>      case LIBXL_DOMAIN_TYPE_PV:
>      {
> -        char *cmdline = NULL;
> -        const char *root = NULL, *extra = "";
> -
>          xlu_cfg_replace_string (config, "kernel", &b_info->u.pv.kernel, 0);
>  
>          xlu_cfg_get_string (config, "root", &root, 0);



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel