
Re: [Xen-devel] [OSSTEST Nested PATCH v9 7/9] Add new script to customize nested test configuration



On Sat, 2015-05-02 at 14:28 +0800, longtao.pang wrote:
> +# Since Ian Campbell's v5_patch[04,05,06] has a bug whereby the
> +# 'preseed/late_command' is unavailable, add the line below to make
> +# the HTTP URL entry available in sources.list for the L1 hvm guest.
> +# Once the bug is fixed, this line of code can be removed.
> +target_cmd_root($l1, "sed -i 's/^deb *cdrom/#&/g' /etc/apt/sources.list");

This should be fixed in my v6 posting of that series.
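For anyone carrying the workaround until v6 lands: it just comments out any
'deb cdrom' entries. A standalone sketch of what that sed expression does,
against a throwaway file with hypothetical contents (not a real L1 guest's
sources.list):

```shell
# Sketch: exercise the sed expression from the patch on a throwaway
# file (hypothetical contents, not a real /etc/apt/sources.list).
tmp=$(mktemp)
printf '%s\n' \
    'deb cdrom:[Debian GNU/Linux 8 _Jessie_]/ jessie main' \
    'deb http://deb.debian.org/debian jessie main' > "$tmp"
# '&' in the replacement re-inserts the whole matched text after the '#',
# so matching lines are commented out rather than deleted.
sed -i 's/^deb *cdrom/#&/g' "$tmp"
cat "$tmp"
```

Only the cdrom line gets a leading '#'; the HTTP entry is untouched.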

> +target_cmd_root($l1, "update-rc.d osstest-confirm-booted start 99 2 .");
> +
> +# Inside L0, create a storage lv, attach this lv to L1, and then inside
> +# L1 use that storage for installing L2
> +my $vgname = $l1->{Vg};
> +my $guest_storage_lv_name = "${l1_ident}_guest_storage";
> +my $guest_storage_lv_size = guest_var($l1,'guest_storage_size',undef);
> +my $guest_storage_lvdev = "/dev/${vgname}/${guest_storage_lv_name}";

Perhaps:
        die "guest_storage_lv_size" unless $guest_storage_lv_size;
?
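The point of the guard is to turn a silently-empty runvar into an immediate,
clearly-named failure instead of a confusing lvcreate error later. A shell
sketch of the same idea (function and variable names are illustrative, not
the osstest API):

```shell
# Illustrative shell analogue of the suggested Perl guard: fail with
# the runvar's name if guest_storage_size was never set for this flight.
check_size() {
    [ -n "$1" ] || { echo "guest_storage_lv_size" >&2; return 1; }
}
guest_storage_lv_size=""   # simulate a flight with the runvar unset
if check_size "$guest_storage_lv_size"; then
    echo "size ok"
else
    echo "would die here"
fi
```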

> +
> +target_cmd_root($l0, <<END);
> +    lvremove -f $guest_storage_lvdev ||:
> +    lvcreate -L ${guest_storage_lv_size}M -n $guest_storage_lv_name $vgname
> +    dd if=/dev/zero of=$guest_storage_lvdev count=10

I have some ideas/plans on how to refactor things so we can use
prepareguest_part_lvmdisk for secondary disks too, but this is OK as it
is for now IMHO.

> +    xl block-attach $l1->{Name} ${guest_storage_lvdev},raw,sdb,rw

This relies on/assumes the toolstack is xl (sorry for not spotting this
before).

Really it should use a new block_attach method on toolstack($ho) (i.e.
Osstest/Toolstack/*.pm) so it can work with libvirt etc.

However I think this hardcoding is tolerable for now (you've encoded
that it is xl only in make-flight already), but please could you add
near the top of this script:

        die "toolstack $r{toolstack}" unless $r{toolstack} eq "xl";
        
so it fails in an immediately obvious way if someone tries this with
another toolstack.

Ian: Is that OK with you?
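To make the intended failure mode concrete, here is a shell sketch of that
early check ($toolstack standing in for osstest's $r{toolstack}; this is not
the actual osstest code):

```shell
# Illustrative shell version of the suggested guard: refuse to run with
# anything other than xl, printing the offending toolstack name.
require_xl() {
    [ "$1" = "xl" ] || { echo "toolstack $1" >&2; return 1; }
}
toolstack=xl            # in osstest this would come from the runvars
require_xl "$toolstack" && echo "toolstack ok"
```

With e.g. libvirt the script dies immediately with "toolstack libvirt",
which is much easier to diagnose than a failing xl block-attach.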

> +END
> +# There is probably a bug here: if we use 'xvdx' as the block 'dev',
> +# then after installing xen and booting into the xen kernel the disk is
> +# not recognized by the OS. Also, if we use 'sde' instead of 'sdb' as
> +# the block 'dev', after installing xen and booting into the xen kernel
> +# the disk is recognized as 'sdb' not 'sde', so we have to use 'sdb' here.
> +
> +target_install_packages_norec($l1, qw(lvm2 rsync genisoimage));

Please can you move this up, just after the osstest-confirm-booted
stuff, so that it is not mixed in with the handling of the
additional disk.

> +# After block-attaching the lvdev to L1, we need to reboot L1 to make
> +# the attach take effect. I think this is a kernel bug.

I've just tried this and it worked for me, i.e. /dev/xvdb appears
immediately after xl block-attach with no reboot required. Perhaps this
was an artefact of a bug in some previous iteration of the series?

I would replace this comment, and the "This is probably a bug here" and
"Inside L0, create storage lv" ones above with a single (long) comment
before all of this disk stuff which explains:

        We need to attach an extra disk to the L1 guest to be used as L2
        guest storage.
        
        When running in a nested HVM environment the L1 domain is acting
        as both a guest to L0 and a host to L2 guests and therefore
        potentially sees connections to two independent xenstore
        instances, one provided by the L0 host and one which is provided
        by the L1 instance of xenstore.
        
        Unfortunately the kernel is not capable of dealing with this and
        is only able to cope with a single xenstore connection. Since
        the L1 toolstack and L2 guests absolutely require xenstore to
        function we therefore cannot use the L0 xenstore and therefore
        cannot use PV devices (xvdX etc) in the L1 guest and must use
        emulated devices (sdX etc).
        
        However at the moment we have not yet rebooted L1 into Xen and
        so it does have PV devices available and sdb actually appears as
        xvdb. We could disable the Xen platform device and use emulated
        devices for the install phase too but that would be needlessly
        slow.

It's a bit long (to say the least) but I think it is worth explaining,
since this limitation of nestedhvm isn't obvious without thinking about
it a bit...

> +target_reboot($l1);
> +# Create a vg named ${l1_gn}-disk in the L1 guest
> +target_cmd_root($l1, "pvcreate /dev/xvdb && vgcreate ${l1_gn}-disk /dev/xvdb");



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
