
Re: [Xen-devel] [PATCH for-xen-4.5] console: increase initial conring size

>>> On 03.12.14 at 16:37, <daniel.kiper@xxxxxxxxxx> wrote:
> On Tue, Dec 02, 2014 at 03:58:53PM +0000, Jan Beulich wrote:
>> >>> Daniel Kiper <daniel.kiper@xxxxxxxxxx> 12/02/14 3:58 PM >>>
>> >In general the initial conring size is sufficient. However, if the
>> >log level is increased on platforms with e.g. a huge number of
>> >memory regions (I have an IBM System x3550 M2 with 8 GiB RAM which
>> >has more than 200 entries in its EFI memory map), then some of the
>> >earlier messages in the console ring are overwritten. This can
>> >hinder deeper analysis in case of issues. Sadly the conring_size
>> >argument does not help, because the new console buffer is allocated
>> >late on the heap; it is not possible to allocate a larger ring
>> >earlier. So in this situation the initial conring size should be
>> >increased. My experiments showed that even on not so big machines
>> >more than 26 KiB of free space are needed for the initial messages.
>> >In theory we could increase the conring buffer to 32 KiB. However,
>> >I think that value could be too small for huge machines with a
>> >large number of ACPI tables and EFI memory regions. Hence, this
>> >patch increases the initial conring size to 64 KiB.
>> >
>> >Signed-off-by: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
>> I think it was made clear before that just saying "for-xen-4.5" without
>> any further rationale is insufficient. Please explain what makes this so
>> important a change that it needs to go in now.
> What is not clear in the commit message? It describes a bug, how to fix
> it, and why in that way. Do you need anything else?

I didn't say the commit message is unclear. Instead I told you that
after a --- separator I would have expected justification for the
request to include this in 4.5. After all, what you call a bug here
is imo merely a lack of functionality.


Xen-devel mailing list