Re: [Xen-devel] [PATCH 1 of 5] xentrace: fix t_info_pages calculation for the default case
You need c/s 23074

On 24/03/2011 12:09, "Christoph Egger" <Christoph.Egger@xxxxxxx> wrote:
>
> This patch does not compile for me. There's no <xen/pfn.h>.
>
> Christoph
>
>
> On 03/23/11 18:54, Olaf Hering wrote:
>> # HG changeset patch
>> # User Olaf Hering <olaf@xxxxxxxxx>
>> # Date 1300900084 -3600
>> # Node ID 14ac28e4656d0c235c5edf119426b1bcf3bf33d4
>> # Parent  8e1c737b2c44249dd1c0e4e1b8978d5d35020226
>> xentrace: fix t_info_pages calculation for the default case
>>
>> The default tracebuffer size of 32 pages was not tested with the
>> previous patch. As a result, t_info_pages becomes zero and
>> alloc_xenheap_pages() fails. Catch this case and allocate at least
>> one page.
>>
>> Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>
>>
>> diff -r 8e1c737b2c44 -r 14ac28e4656d xen/common/trace.c
>> --- a/xen/common/trace.c	Wed Mar 23 15:24:19 2011 +0000
>> +++ b/xen/common/trace.c	Wed Mar 23 18:08:04 2011 +0100
>> @@ -29,6 +29,7 @@
>>  #include <xen/init.h>
>>  #include <xen/mm.h>
>>  #include <xen/percpu.h>
>> +#include <xen/pfn.h>
>>  #include <xen/cpu.h>
>>  #include <asm/atomic.h>
>>  #include <public/sysctl.h>
>> @@ -109,6 +110,7 @@
>>  {
>>      struct t_buf dummy;
>>      typeof(dummy.prod) size;
>> +    unsigned int t_info_bytes;
>>
>>      /* force maximum value for an unsigned type */
>>      size = -1;
>> @@ -122,11 +124,9 @@
>>          pages = size;
>>      }
>>
>> -    t_info_pages = num_online_cpus() * pages + t_info_first_offset;
>> -    t_info_pages *= sizeof(uint32_t);
>> -    t_info_pages /= PAGE_SIZE;
>> -    if ( t_info_pages % PAGE_SIZE )
>> -        t_info_pages++;
>> +    t_info_bytes = num_online_cpus() * pages + t_info_first_offset;
>> +    t_info_bytes *= sizeof(uint32_t);
>> +    t_info_pages = PFN_UP(t_info_bytes);
>>      return pages;
>>  }

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel