
Re: [Xen-devel] patches pending acks (or naks)

>>> On 11.12.14 at 20:41, <konrad.wilk@xxxxxxxxxx> wrote:
> On Thu, Dec 11, 2014 at 03:38:39PM +0000, Jan Beulich wrote:
>> >>> On 11.12.14 at 16:18, <konrad.wilk@xxxxxxxxxx> wrote:
>> > A proper fix would be to automatically adjust based on memmap and CPU but 
>> > that could be too complex.
>> The problem isn't the complexity, but the chicken-and-egg problem
>> as far as CPU count is concerned. The memory map size would be
>> known early enough.
> Let me explain my thought process:
>  - There are existing knobs that can be used to change this (conring_size=X)
>  - But we would like an awesome release where those knobs don't have to
>    be used.
>  - The patch you posted could be reworked to fully address memmap and CPU.

Not really, unless we separate parsing and printing of the ACPI
tables. Again, the CPU count is known only after that parsing.

>  - I don't know what your time constraints are for said patch and you
>    might not have the energy to rework it.
>  - I don't want to put pressure on you or Daniel on having to write new
>    patches - especially as folks are going on vacation soon.
> Hence as a fallback I believe that Daniel's patch should go in and
> Jan's patch can go in too. However if Jan (or somebody else) wants to
> rework the v2 (or a new one) to address the memmap issue then that
> patch would go in - instead of Daniel's patch or v2 of Jan's.

Which memmap issue? You confirmed in your reply that you understand
that the memmap gets printed late enough for the change in v2 to
take effect. Plus those are info-level messages, and hence don't get
printed at all by default. And if somebody alters the log levels, (s)he
can surely be expected to also adjust the ring size. (The log level
aspect is actually another argument against Daniel's patch.)


Xen-devel mailing list