
Re: [Xen-devel] [PATCH V2 2/3] xen: eliminate scalability issues from initrd handling



On 17/09/14 05:12, Juergen Gross wrote:
> Mapping the initrd into the initial mapping results in size restrictions
> which native kernels don't have. The kernel doesn't really need the
> initrd to be mapped, so use the infrastructure available in Xen to avoid
> the mapping and hence the restriction.
> 
> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
> ---
>  arch/x86/xen/enlighten.c | 11 +++++++++--
>  arch/x86/xen/xen-head.S  |  1 +
>  2 files changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index c0cb11f..8fd075f 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1519,6 +1519,7 @@ static void __init xen_pvh_early_guest_init(void)
>  asmlinkage __visible void __init xen_start_kernel(void)
>  {
>       struct physdev_set_iopl set_iopl;
> +     unsigned long initrd_start = 0;
>       int rc;
>  
>       if (!xen_start_info)
> @@ -1667,10 +1668,16 @@ asmlinkage __visible void __init xen_start_kernel(void)
>       new_cpu_data.x86_capability[0] = cpuid_edx(1);
>  #endif
>  
> +     if (xen_start_info->mod_start)
> +             initrd_start = __pa(xen_start_info->mod_start);
> +#ifdef CONFIG_BLK_DEV_INITRD
> +     if (xen_start_info->flags & SIF_MOD_START_PFN)
> +             initrd_start = PFN_PHYS(xen_start_info->mod_start);
> +#endif

Why the #ifdef?

I'll fix this up to be:

   if (xen_start_info->mod_start) {
       if (xen_start_info->flags & SIF_MOD_START_PFN)
           initrd_start = PFN_PHYS(xen_start_info->mod_start);
       else
           initrd_start = __pa(xen_start_info->mod_start);
   }
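
For context, SIF_MOD_START_PFN tells the guest that mod_start holds a PFN
rather than a virtual address in the initial mapping, so the two branches
just pick the right conversion to a physical address. A minimal sketch of
how the combined check would sit in xen_start_kernel() (the boot_params
lines below are my assumption about the existing code nearby in
enlighten.c, they are not part of the quoted hunk):

   /* mod_start is a PFN if SIF_MOD_START_PFN is set (initrd left out of
    * the initial mapping), otherwise a virtual address that is mapped. */
   if (xen_start_info->mod_start) {
       if (xen_start_info->flags & SIF_MOD_START_PFN)
           initrd_start = PFN_PHYS(xen_start_info->mod_start);
       else
           initrd_start = __pa(xen_start_info->mod_start);
   }

   /* Hand the initrd location on to the generic boot code. */
   boot_params.hdr.ramdisk_image = initrd_start;
   boot_params.hdr.ramdisk_size  = xen_start_info->mod_len;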

Unless you object.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

