Re: [Xen-devel] [PATCH v5 3/4] xen: refactor debugtrace data
On 05.09.2019 16:36, Juergen Gross wrote:
> On 05.09.19 14:46, Juergen Gross wrote:
>> On 05.09.19 14:37, Jan Beulich wrote:
>>> On 05.09.2019 14:27, Juergen Gross wrote:
>>>> On 05.09.19 14:22, Jan Beulich wrote:
>>>>> On 05.09.2019 14:12, Juergen Gross wrote:
>>>>>> On 05.09.19 14:01, Jan Beulich wrote:
>>>>>>> On 05.09.2019 13:39, Juergen Gross wrote:
>>>>>>>> As a preparation for per-cpu buffers, do a little refactoring of the
>>>>>>>> debugtrace data: put the needed buffer admin data into the buffer, as
>>>>>>>> it will be needed for each buffer. In order not to limit the buffer
>>>>>>>> size, switch the related fields from unsigned int to unsigned long,
>>>>>>>> as on huge machines with RAM in the TB range it might be interesting
>>>>>>>> to support buffers >4GB.
>>>>>>>
>>>>>>> Just as a further remark in this regard:
>>>>>>>
>>>>>>>>  void debugtrace_printk(const char *fmt, ...)
>>>>>>>>  {
>>>>>>>>      static char buf[DEBUG_TRACE_ENTRY_SIZE];
>>>>>>>> -    static unsigned int count, last_count, last_prd;
>>>>>>>> +    static unsigned int count, last_count;
>>>>>>>
>>>>>>> How long do we think it will take until their wrapping becomes
>>>>>>> an issue with such huge buffers?
>>>>>>
>>>>>> Count wrapping will not result in any misbehavior of tracing. With
>>>>>> per-cpu buffers it might result in ambiguity regarding sorting the
>>>>>> entries, but I guess chances are rather low of this being a real
>>>>>> issue.
>>>>>>
>>>>>> BTW: wrapping of count is not related to buffer size, but to the
>>>>>> amount of trace data written.
>>>>>
>>>>> Sure, but the chance of ambiguity increases with larger buffer sizes.
>>>>
>>>> Well, better safe than sorry. Switching to unsigned long will rarely
>>>> hurt, so I'm going to do just that. The only downside will be some
>>>> waste of buffer space for very long running traces with huge amounts
>>>> of entries.
>>>
>>> Hmm, true. Maybe we could get someone else's opinion on this - anyone?
>>
>> TBH, I wouldn't be concerned about the buffer space. In case there are
>> really billions of entries, the difference between a 10-digit count
>> value and maybe a 15-digit one (and that is already a massive amount)
>> isn't that large. The average printed size of count with about
>> 4 billion entries is 9.75 digits (and not just 5 :-) ).
>
> Another option would be to let count wrap at e.g. 4 billion (or even
> 1 million) and add a wrap count. Each buffer struct would contain the
> wrap count of the last entry, and when hitting a higher wrap count a
> new entry like ("wrap %u->%u", old_wrap, new_wrap) would be printed.
> This saves buffer space without losing information.

This sounds quite neat.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel