Re: [Xen-devel] [OSSTEST Nested PATCH v8 4/7] Changes on test step of Debian hvm guest install
On Mon, 2015-04-13 at 17:19 -0400, longtao.pang wrote:
> 1. Increase the disk size to accommodate the nested test requirements.
> 2. Since the 'Debian-xxx-.iso' image will be stored in the rootfs of the
> L1 guest, more disk capacity is needed; increase the root partition
> size in the preseed generation.
> 3. In the L1 installation context, assign more memory to it, since it
> acts as a nested hypervisor.
> 4. Comment out the CDROM entry in sources.list so that the HTTP URL
> entry is usable for the L1 hvm guest.
> 5. Enable the nestedhvm feature in ExtraConfig for the nested job.
>
> Signed-off-by: longtao.pang <longtaox.pang@xxxxxxxxx>
> ---
> Changes in v8:
> 1. Use a conventional way to comment out the CDROM entry in sources.list.
> 2. Enable the nestedhvm feature only for the nested job.
> 3. Set the nested disk and memory sizes to be driven from runvars in
> 'ts-debian-hvm-install'.
> ---
> ts-debian-hvm-install | 9 ++++++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/ts-debian-hvm-install b/ts-debian-hvm-install
> index cfd5144..13009d0 100755
> --- a/ts-debian-hvm-install
> +++ b/ts-debian-hvm-install
> @@ -36,7 +36,7 @@ our $ho= selecthost($whhost);
>
> # guest memory size will be set based on host free memory, see below
> our $ram_mb;
> -our $disk_mb= 10000;
> +our $disk_mb= $r{'nested_disk'} ? $r{'nested_disk'} : 10000;
This shouldn't hardcode 'nested' here; use the guest ident instead.
('nested' is the wrong name now anyway, I think, since it is 'nestedl1'
in this iteration, which is why we avoid such hardcoding.)
So you should arrange to do this somewhere that $gho is in scope and use
guest_var($gho, 'disk', 10000). $gho is in scope in prep(), where
$disk_mb is used, and AFAICT it is only used in that function, so I
think you can move this to a local variable.
Perhaps also consider a more descriptive runvar name, like disksize?
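To illustrate the guest_var() pattern being suggested, here is a minimal runnable sketch. It stubs guest_var() and the runvars hash %r (the real ones live in osstest's Osstest::TestSupport), and the runvar name 'nestedl1_disk' and its value 20000 are hypothetical:

```perl
use strict;
use warnings;

# Hypothetical runvars and guest handle, for demonstration only.
our %r = ( 'nestedl1_disk' => 20000 );
my $gho = { Guest => 'nestedl1' };

# Minimal stand-in for osstest's guest_var($gho, $varname, $default):
# look up "<guest ident>_<varname>" in the runvars, else use the default.
sub guest_var {
    my ($gho, $var, $default) = @_;
    my $val = $r{"$gho->{Guest}_$var"};
    return defined($val) ? $val : $default;
}

sub prep_disk_mb {
    my ($gho) = @_;
    # Local to prep(), as suggested: no hardcoded 'nested' ident.
    my $disk_mb = guest_var($gho, 'disk', 10000);
    return $disk_mb;
}

print prep_disk_mb($gho), "\n";   # 20000 with the stub runvars above
```

With no 'nestedl1_disk' runvar set, the same call falls back to 10000, so non-nested jobs keep the current behaviour.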
> our $guesthost= "$gn.guest.osstest";
> our $gho;
> @@ -60,7 +60,7 @@ d-i partman-auto/expert_recipe string \\
> use_filesystem{ } filesystem{ vfat } \\
> mountpoint{ /boot/efi } \\
> . \\
> - 5000 50 5000 ext4 \\
> + 10000 50 10000 ext4 \\
> method{ format } format{ } \\
> use_filesystem{ } filesystem{ ext4 } \\
> mountpoint{ / } \\
> @@ -75,6 +75,7 @@ d-i preseed/late_command string \\
> in-target mkdir -p /boot/efi/EFI/boot; \\
> in-target cp /boot/efi/EFI/debian/grubx64.efi
> /boot/efi/EFI/boot/bootx64.efi ;\\
> in-target mkdir -p /root/.ssh; \\
> + in-target sed -i 's/^deb *cdrom/#&/g' /etc/apt/sources.list; \\
> in-target sh -c "echo -e '$authkeys'> /root/.ssh/authorized_keys";
> END
> return $preseed_file;
> @@ -149,9 +150,11 @@ sub prep () {
> target_putfilecontents_root_stash($ho, 10, preseed(),
> $preseed_file_path);
>
> + my $nestedhvm_feature = $gn eq 'nestedl1' ? 'nestedhvm=1' : '';
This should be based on guest_var($gho, "enable_nestedhvm", undef), not
on $gn eq "nestedl1".
Also I think this would be better done as:
my $extra_config = '';
$extra_config .= "nestedhvm=1\n"
    if guest_var($gho, "enable_nestedhvm", 'false') eq 'true';
Then use $extra_config below. This will make it easier to add more such
things in the future.
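The accumulation pattern reads roughly as in this self-contained sketch (guest_var() is stubbed as a stand-in for osstest's helper; the runvar name 'nestedl1_enable_nestedhvm' is illustrative). Note the double quotes around "nestedhvm=1\n", so the \n becomes a real newline in the generated config:

```perl
use strict;
use warnings;

# Hypothetical runvars and guest handle, for demonstration only.
our %r = ( 'nestedl1_enable_nestedhvm' => 'true' );
my $gho = { Guest => 'nestedl1' };

# Minimal stand-in for osstest's guest_var($gho, $varname, $default).
sub guest_var {
    my ($gho, $var, $default) = @_;
    my $val = $r{"$gho->{Guest}_$var"};
    return defined($val) ? $val : $default;
}

my $extra_config = '';
$extra_config .= "nestedhvm=1\n"
    if guest_var($gho, 'enable_nestedhvm', 'false') eq 'true';
# Further options can be appended the same way in future, e.g.:
# $extra_config .= "someoption=1\n" if guest_var($gho, 'someoption', 0);

print $extra_config;
```

Jobs that never set the enable_nestedhvm runvar get an empty $extra_config and are unaffected.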
> more_prepareguest_hvm($ho,$gho, $ram_mb, $disk_mb,
> OnReboot => 'preserve',
> Bios => $r{bios},
> + ExtraConfig => $nestedhvm_feature,
> PostImageHook => sub {
> my $cmds = iso_copy_content_from_image($gho, $newiso);
> $cmds .= prepare_initrd($initrddir,$newiso,$preseed_file_path);
> @@ -174,7 +177,7 @@ my $ram_lots = 5000;
> if ($host_freemem_mb > $ram_lots * 2 + $ram_minslop) {
> $ram_mb = $ram_lots;
> } else {
> - $ram_mb = 768;
> + $ram_mb = $r{"${gn}_memory"} ? $r{"${gn}_memory"} : 768;
guest_var() again, and I think this can be local to prep() too.
> }
> logm("Host has $host_freemem_mb MB free memory, setting guest memory size to
> $ram_mb MB");
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel