Re: [Xen-devel] [PATCH] x86/vvmx: Fix nested virt on VMCS-Shadow capable hardware
On 05/08/2019 13:52, Jan Beulich wrote:
> On 30.07.2019 16:42, Andrew Cooper wrote:
>> c/s e9986b0dd "x86/vvmx: Simplify per-CPU memory allocations" had the wrong
>> indirection on its pointer check in nvmx_cpu_up_prepare(), causing the
>> VMCS-shadowing buffer never to be allocated.  Fix it.
>>
>> This in turn results in a massive quantity of logspam, as every virtual
>> vmentry/exit hits both gdprintk()s in the *_bulk() functions.
>
> The "in turn" here applies to the original bug (which gets fixed here)
> aiui,

Correct.

> i.e. there isn't any log spam with the fix in place anymore, is
> there?

Incorrect, because...

> If so, ...
>
>> Switch these to using printk_once().  The size of the buffer is chosen at
>> compile time, so complaining about it repeatedly is of no benefit.
>
> ... I'm not sure I'd agree with this move: Why would it be of interest
> only the first time that we (would have) overrun the buffer?

... we will either never overrun it, or overrun it on every virtual
vmentry/exit.

> After all
> it's not only the compile time choice of buffer size that matters here,
> but also the runtime aspect of what value "n" has got passed into the
> functions.

The few choices of "n" are fixed at compile time as well, which is why...

> If this is on the assumption that we'd want to know merely
> of the fact, not how often it occurs, then I'd think this ought to
> remain a debugging printk().

... this still ends up as a completely unusable system when the problem
occurs.

>
>> Finally, drop the runtime NULL pointer checks.  It is not terribly
>> appropriate to be repeatedly checking infrastructure which is set up from
>> start-of-day, and in this case, actually hid the above bug.
>
> I don't see how the repeated checking would have hidden any bug:

Without this check, Xen would have crashed with a NULL dereference on the
original change, and highlighted the fact that the change was totally
broken.  I didn't spot the issue because it was tested with a release
build, which is another reason why the replacement printk() is deliberately
not debug-time-only.

> Due
> to the lack of the extra indirection the pointer would have remained
> NULL, and hence the log message would have appeared (as also
> mentioned above) _until_ you had fixed the indirection mistake.  (This
> isn't to mean I'm against dropping the check, I'd just like to
> understand the why.)
>
>> @@ -922,11 +922,10 @@ static void vvmcs_to_shadow_bulk(struct vcpu *v, unsigned int n,
>>      if ( !cpu_has_vmx_vmcs_shadowing )
>>          goto fallback;
>>
>> -    if ( !value || n > VMCS_BUF_SIZE )
>> +    if ( n > VMCS_BUF_SIZE )
>>      {
>> -        gdprintk(XENLOG_DEBUG, "vmcs sync fall back to non-bulk mode, "
>> -                 "buffer: %p, buffer size: %d, fields number: %d.\n",
>> -                 value, VMCS_BUF_SIZE, n);
>> +        printk_once(XENLOG_ERR "%pv VMCS sync too many fields %u\n",
>> +                    v, n);
>>          goto fallback;
>>      }
>>
>> @@ -962,11 +961,10 @@ static void shadow_to_vvmcs_bulk(struct vcpu *v, unsigned int n,
>>      if ( !cpu_has_vmx_vmcs_shadowing )
>>          goto fallback;
>>
>> -    if ( !value || n > VMCS_BUF_SIZE )
>> +    if ( n > VMCS_BUF_SIZE )
>>      {
>> -        gdprintk(XENLOG_DEBUG, "vmcs sync fall back to non-bulk mode, "
>> -                 "buffer: %p, buffer size: %d, fields number: %d.\n",
>> -                 value, VMCS_BUF_SIZE, n);
>> +        printk_once(XENLOG_ERR "%pv VMCS sync too many fields %u\n",
>> +                    v, n);
>
> Would you mind taking the opportunity and also disambiguate the two
> log messages, so that from observing one it is clear which instance
> it was that got triggered?

I'm really not sure it matters.
The use of these functions is symmetric, so in practice you'll get both
printk()s in very quick succession.  Also, the required fix is the same
whichever one fires first.

I'm also not convinced that any of this logic is going to survive a
concerted effort to get nested-virt usable.

~Andrew
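Below is a minimal, self-contained sketch of the two behaviours discussed in
this thread: an indirection mistake of the kind described for
nvmx_cpu_up_prepare(), which leaves the buffer unallocated, and a
printk_once()-style guard in place of a per-invocation debug message in the
bulk-sync fallback path.  This is plain C, not the Xen sources; the
identifiers, the buffer size, and the exact shape of the checks are
illustrative assumptions.

/* Sketch only: stand-ins for the Xen code under discussion. */
#include <stdio.h>
#include <stdlib.h>

#define VMCS_BUF_SIZE 100            /* illustrative compile-time bound */

static unsigned long *vvmcs_buf;     /* stands in for the per-CPU buffer */

/* Wrong indirection: tests the address of the slot (never NULL) instead of
 * its contents, so the allocation is unreachable and the buffer silently
 * stays NULL.  A compiler will typically warn that the condition is always
 * true. */
static int cpu_up_prepare_buggy(void)
{
    unsigned long **slot = &vvmcs_buf;

    if ( slot )
        return 0;                    /* "already allocated" - never allocates */

    *slot = calloc(VMCS_BUF_SIZE, sizeof(**slot));
    return *slot ? 0 : -1;
}

/* Correct indirection: test what the slot holds, allocate when empty. */
static int cpu_up_prepare_fixed(void)
{
    unsigned long **slot = &vvmcs_buf;

    if ( !*slot )
        *slot = calloc(VMCS_BUF_SIZE, sizeof(**slot));

    return *slot ? 0 : -1;
}

/* Stand-in for the *_bulk() fallback check: the pre-fix NULL test plus a
 * printk_once()-style guard, so an unallocated buffer is reported once
 * rather than on every virtual vmentry/exit. */
static void sync_bulk(unsigned int n)
{
    static int warned;

    if ( !vvmcs_buf || n > VMCS_BUF_SIZE )
    {
        if ( !warned++ )
            fprintf(stderr, "VMCS sync fallback, n=%u\n", n);
        return;                      /* per-field, non-bulk path would run here */
    }

    /* bulk copy through vvmcs_buf would run here */
}

int main(void)
{
    cpu_up_prepare_buggy();          /* leaves vvmcs_buf NULL */
    sync_bulk(16);                   /* falls back; warns exactly once */
    sync_bulk(16);                   /* still falls back, now silently */

    cpu_up_prepare_fixed();          /* actually allocates the buffer */
    sync_bulk(16);                   /* bulk path taken */

    free(vvmcs_buf);
    return 0;
}

The once-only guard mirrors the point made above: both the bound and the
callers' values of "n" are fixed at compile time, so the message either
never fires or fires on every sync, and repeating it adds nothing.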