Re: [PATCH 2/2] x86/hap: Resolve mm-lock order violations when forking VMs with nested p2m
On 06.01.2021 16:29, Tamas K Lengyel wrote:
> On Wed, Jan 6, 2021 at 7:03 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>> On 04.01.2021 18:41, Tamas K Lengyel wrote:
>>> @@ -1226,6 +1224,15 @@ int __mem_sharing_unshare_page(struct domain *d,
>>>          return 0;
>>>      }
>>>
>>> +    /* lock nested p2ms to avoid lock-order violation */
>>
>> Would you mind mentioning here the other side of the possible
>> violation, to aid the reader?
>
> You mean what the nested p2m locks would conflict with? I think in the
> context of mem_sharing it's clear that the only thing it can conflict
> with is the mem_sharing mm lock.

I don't think it's all that obvious. It wouldn't have been to me, at
least, without also having this change's description at hand.

>>> +    if ( unlikely(nestedhvm_enabled(d)) )
>>> +    {
>>> +        int i;
>>
>> unsigned int please (also further down), no matter that there may
>> be other similar examples of (bad) use of plain int.
>
> IMHO this is the type of change request that makes absolutely 0
> difference at the end.

(see below, applies here as well)

>>> +        for ( i = 0; i < MAX_NESTEDP2M; i++ )
>>> +            p2m_lock(d->arch.nested_p2m[i]);
>>
>> From a brief scan, this is the first instance of acquiring all
>> nested p2m locks in one go. Ordering these by index is perhaps
>> fine, but I think this wants spelling out in e.g. mm-locks.h. Of
>> course the question is if you really need to go this far, i.e.
>> whether really all of the locks need holding. This is even more
>> so with p2m_flush_table_locked() not really looking to be a
>> quick operation, when there have been many pages accumulated for
>> it. I.e. the overall lock holding time may turn out even more
>> excessive this way than it apparently already is.
>
> I agree this is not ideal but it gets things working without Xen
> crashing. I would prefer if we could get rid of the mm lock ordering
> altogether in this context.

How would this do any good? You'd then be at risk of actually hitting
a lock order violation. These are often quite hard to debug.

> We already hold the host p2m lock and the
> sharing lock, that ought to suffice.

I don't see how holding any locks can prevent lock order violations
when further ones get acquired. I also didn't think the nested p2m
locks were redundant with the host one.

>>> --- a/xen/arch/x86/mm/p2m.c
>>> +++ b/xen/arch/x86/mm/p2m.c
>>> @@ -1598,8 +1598,17 @@ void
>>>  p2m_flush_nestedp2m(struct domain *d)
>>>  {
>>>      int i;
>>> +    struct p2m_domain *p2m;
>>> +
>>>      for ( i = 0; i < MAX_NESTEDP2M; i++ )
>>> -        p2m_flush_table(d->arch.nested_p2m[i]);
>>> +    {
>>> +        p2m = d->arch.nested_p2m[i];
>>
>> Please move the declaration here, making this the variable's
>> initializer (unless line length constraints make the latter
>> undesirable).
>
> I really don't get what difference this would make.

Both the choice of (generally) inappropriate types (further up) and
the placement of declarations (here), as well as other style
violations, can set bad precedents even if in a specific case it may
not matter much. So yes, it may be good enough here, but it would
violate our desire to
- use unsigned types when a variable will hold only non-negative
  values (which in the general case may improve generated code, in
  particular on x86-64),
- limit the scopes of variables as much as possible, to more easily
  spot inappropriate uses (like bypassing initialization).

This code here actually demonstrates such a bad precedent, using
plain int for the loop induction variable. While I can't be anywhere
near sure, there's a certain chance you actually took it and copied
it to __mem_sharing_unshare_page(). The chance of such happening is
what we'd like to reduce over time.

Jan
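For readers without the patch or mm-locks.h open, here is a minimal,
self-contained sketch of the ordering discipline being discussed. It
uses plain pthreads and made-up names (nested_lock, lock_all_nested,
the MAX_NESTEDP2M stand-in); none of this is Xen's actual locking
code, which enforces its mm lock ordering through mm-locks.h. The idea
is only that every path needing several locks of the same class takes
them in one fixed order, ascending array index here, so that one path
can never hold index 3 while waiting for index 1 at the same time as
another path holds index 1 while waiting for index 3.

#include <pthread.h>

#define MAX_NESTEDP2M 10                     /* stand-in for Xen's constant */

static pthread_mutex_t nested_lock[MAX_NESTEDP2M];

static void init_nested_locks(void)
{
    for ( unsigned int i = 0; i < MAX_NESTEDP2M; i++ )
        pthread_mutex_init(&nested_lock[i], NULL);
}

/* Acquire every nested lock, lowest index first. */
static void lock_all_nested(void)
{
    for ( unsigned int i = 0; i < MAX_NESTEDP2M; i++ )
        pthread_mutex_lock(&nested_lock[i]);
}

/* Release in the reverse order of acquisition. */
static void unlock_all_nested(void)
{
    for ( unsigned int i = MAX_NESTEDP2M; i-- > 0; )
        pthread_mutex_unlock(&nested_lock[i]);
}

int main(void)
{
    init_nested_locks();
    lock_all_nested();
    /* ... work that needs all nested tables quiesced ... */
    unlock_all_nested();
    return 0;
}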
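On the two style points at the end (unsigned loop counters, and
declarations that double as initializers in the narrowest scope), a
toy, compilable rendering of the shape of the quoted
p2m_flush_nestedp2m() hunk may make the intent concrete. The struct
p2m type, flush_table() and flush_all_nested() below are hypothetical
stand-ins, not Xen's real definitions.

#include <stdio.h>

#define MAX_NESTEDP2M 10                 /* stand-in for Xen's constant */

struct p2m { unsigned int idx; };        /* toy stand-in for struct p2m_domain */

static struct p2m nested[MAX_NESTEDP2M];

static void flush_table(struct p2m *p2m)
{
    printf("flushing nested p2m %u\n", p2m->idx);
}

/* The loop counter only ever holds non-negative values, so it is
 * unsigned int; the per-iteration pointer is declared where it is
 * initialized, limiting its scope to the loop body. */
static void flush_all_nested(void)
{
    unsigned int i;

    for ( i = 0; i < MAX_NESTEDP2M; i++ )
    {
        struct p2m *p2m = &nested[i];    /* declaration doubles as initializer */

        flush_table(p2m);
    }
}

int main(void)
{
    for ( unsigned int i = 0; i < MAX_NESTEDP2M; i++ )
        nested[i].idx = i;

    flush_all_nested();
    return 0;
}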