Re: [Xen-devel] [PATCH 2/2] xen/arm: support compressed kernels
On 12/08/15 16:03, Ian Campbell wrote:
> On Wed, 2015-08-12 at 15:47 +0100, Stefano Stabellini wrote:
>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
>> CC: julien.grall@xxxxxxxxxx
>> CC: ian.campbell@xxxxxxxxxx
>> ---
>>  xen/arch/arm/kernel.c           | 36 ++++++++++++++++++++++++++++++++++++
>>  xen/common/Makefile             |  2 +-
>>  xen/include/asm-arm/byteorder.h |  2 ++
>>  3 files changed, 39 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
>> index f641b12..ca50cdd 100644
>> --- a/xen/arch/arm/kernel.c
>> +++ b/xen/arch/arm/kernel.c
>> @@ -13,6 +13,8 @@
>> #include <asm/byteorder.h>
>> #include <asm/setup.h>
>> #include <xen/libfdt/libfdt.h>
>> +#include <xen/decompress.h>
>> +#include <xen/vmap.h>
>>
>> #include "kernel.h"
>>
>> @@ -310,6 +312,38 @@ static int kernel_zimage64_probe(struct kernel_info *info,
>>
>>     return 0;
>> }
>> +
>> +static int kernel_zimage64_compressed_probe(struct kernel_info *info,
>> +                                            paddr_t addr, paddr_t size)
>> +{
>> +    char *output, *input;
>> +    unsigned char magic[2];
>> +    int rc;
>> +    unsigned kernel_order_in;
>> +    unsigned kernel_order_out;
>> +    paddr_t output_size;
>> +
>> +    copy_from_paddr(magic, addr, sizeof(magic));
>> +
>> +    if (!((magic[0] == 0x1f) && ((magic[1] == 0x8b) || (magic[1] == 0x9e))))
>> +        return -EINVAL;
>
> This is an open coded check_gzip. I think you could call that function on
> magic?
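>
> Something like this, perhaps (untested sketch; I'm going from memory
> on the helper's exact name and signature, a buffer pointer plus a
> length, so it may need adjusting against the real prototype):
>
>     copy_from_paddr(magic, addr, sizeof(magic));
>
>     if ( !check_gzip(magic, sizeof(magic)) )
>         return -EINVAL;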
>
>> +
>> +    kernel_order_in = get_order_from_bytes(size);
>> +    input = (char *)ioremap_cache(addr, size);
>
> I don't think you need to cast this, do you? It's a void * already.
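>
> i.e. just:
>
>     input = ioremap_cache(addr, size);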
>
>> +
>> +    output_size = output_length(input, size);
>> +    kernel_order_out = get_order_from_bytes(output_size);
>> +    output = (char *)alloc_xenheap_pages(kernel_order_out, 0);
>
> Likewise.
>
> Where is this buffer freed?
>
> When I said IRL we recover the kernel memory I meant the thing in the boot
> modules list. You might be able to get away with flipping the boot module
> over to this, but that would have ordering constraints which I didn't
> check, it'll probably get subtle fast.
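>
> (At minimum I'd expect a matching free_xenheap_pages(output,
> kernel_order_out) once the decompressed image has been consumed;
> where that call can safely live is exactly the ordering question.)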
>
>> +
>> +    rc = decompress(input, size, output);
>> +    clean_dcache_va_range(output, output_size);
>> +    iounmap(input);
>> +
>> +    if (rc != 0)
>> +        return rc;
>> +
>> +    return kernel_zimage64_probe(info, virt_to_maddr(output), output_size);
>
>> +}
>> #endif
>>
>> /*
>> @@ -466,6 +500,8 @@ int kernel_probe(struct kernel_info *info)
>> #ifdef CONFIG_ARM_64
>>     rc = kernel_zimage64_probe(info, start, size);
>>     if (rc < 0)
>> +        rc = kernel_zimage64_compressed_probe(info, start, size);
>
> I don't see a reason not to support compressed 32 bit kernels too. All it
> would take would be to try and uncompress the buffer first before falling
> through to the various probe routines, instead of chaining a probe into the
> decompressor.
>
> Probably the easiest way to solve this and the buffer allocation issue
> above would be to always either copy or decompress the original kernel into
> a buffer and then change all the probe function to use a virtual address
> instead of an maddr (which might have tricky cache interactions since the
> mapping still exists).
>
>> +    if (rc < 0)
>> #endif
>>         rc = kernel_uimage_probe(info, start, size);
>>     if (rc < 0)
>> diff --git a/xen/common/Makefile b/xen/common/Makefile
>> index 0a4d4fa..a8aefc6 100644
>> --- a/xen/common/Makefile
>> +++ b/xen/common/Makefile
>> @@ -56,7 +56,7 @@ obj-y += vsprintf.o
>> obj-y += wait.o
>> obj-y += xmalloc_tlsf.o
>>
>> -obj-bin-$(CONFIG_X86) += $(foreach n,decompress gunzip bunzip2 unxz unlzma unlzo unlz4 earlycpio,$(n).init.o)
>> +obj-bin-y += $(foreach n,decompress gunzip bunzip2 unxz unlzma unlzo unlz4 earlycpio,$(n).init.o)
>
> I don't think we need/want earlycpio support on ARM (not yet anyway).
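>
> i.e. keep earlycpio x86-only:
>
>     obj-bin-y += $(foreach n,decompress gunzip bunzip2 unxz unlzma unlzo unlz4,$(n).init.o)
>     obj-bin-$(CONFIG_X86) += earlycpio.init.o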
>
>>
>> obj-$(perfc) += perfc.o
>> obj-$(crash_debug) += gdbstub.o
>> diff --git a/xen/include/asm-arm/byteorder.h b/xen/include/asm-arm/byteorder.h
>> index 9c712c4..3b7feda 100644
>> --- a/xen/include/asm-arm/byteorder.h
>> +++ b/xen/include/asm-arm/byteorder.h
>> @@ -5,6 +5,8 @@
>>
>> #include <xen/byteorder/little_endian.h>
>>
>> +#define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
>
> While CONFIG_HAVE_UNALIGNED_ACCESS might be true on arm64 it may not be the
> case that it is efficient. Also I'm not sure about arm32 at all.
ARM32 has alignment checking enabled, so any unaligned access would
result in a data abort.

FWIW, on ARM64 the alignment trap has been disabled because the mem*
primitives rely on the hardware handling misalignment; see commit
58bbe7d71239db508c30099bf7b6db7c458f3336 "xen: arm64: disable
alignment traps".

IIRC, unaligned accesses on ARM processors tend to be slow; I remember
reading an article about it a couple of years ago.
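
To illustrate what the define changes for the common decompressors (a
sketch of the idea, not code from the tree): with
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS a word can be loaded from an
arbitrary offset in one go, otherwise it has to be assembled byte by
byte, which is the only safe option on arm32 while alignment checking
(SCTLR.A) is enabled:

    static inline uint32_t load_le32(const unsigned char *p)
    {
    #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
        uint32_t v;

        /* The compiler is free to emit a single (unaligned) load. */
        memcpy(&v, p, sizeof(v));
        return v; /* little-endian host, as on ARM here */
    #else
        /* Byte-wise assembly: never faults, whatever SCTLR.A says. */
        return p[0] | (p[1] << 8) |
               ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    #endif
    }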
Regards,
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel