Re: [Xen-devel] [PATCH] x86: fix domain cleanup
On 28/10/08 08:22, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:

>>>> "Jan Beulich" <jbeulich@xxxxxxxxxx> 27.10.08 12:36 >>>
>> The preemptable page type handling changes modified free_page_type()
>> behavior without adjusting the call site in relinquish_memory(): any
>> type reference left pending when leaving hypercall handlers is
>> associated with a page reference, and when successful free_page_type()
>> decrements the type refcount - hence relinquish_memory() must now also
>> drop the page reference.
>
> After some more thinking, especially in the context of another bug (see
> below), I'm not certain anymore this is the right thing to do, even though
> the explanation continues to seem plausible to me. One part of my concern
> is that, when the issue this patch attempted to fix is seen, the code path
> gets entered only because of DOMAIN_DESTRUCT_AVOID_RECURSION, not
> because of either of the conditions mentioned in the preceding comment.
> Hence I'm worried that especially for circular 'linear page table' references
> this may be wrong (but I don't really know how to properly distinguish this
> case from the DOMAIN_DESTRUCT_AVOID_RECURSION leftovers).

Oh I see. Well it's certainly bogus for circular references. Let's say A refers
to B and vice versa. Then free_page_type(A) will cause put_page_and_type(B) ->
free_page_type(B) -> put_page_and_type(A) [this last call will note that A's
PGT_validated is clear and hence will not re-enter free_page_type() for A]. So
the put_page() you added is not the correct fix for whatever issue you've been
seeing.

Raising the reference count on PGT_validated is certainly a possibility...
That won't make the put_page() in the circular-reference destructor correct
though. :-)

 -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
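[Editorial note: below is a minimal, self-contained C sketch of the circular-reference teardown Keir describes, not the actual Xen implementation. The struct layout, the 'refers_to' field, the refcount initial values and the simplistic recursion are illustrative assumptions; only the names free_page_type(), put_page_and_type() and PGT_validated correspond to real Xen identifiers.]

#include <stdbool.h>
#include <stdio.h>

struct page {
    const char *name;
    int count;              /* general reference count                 */
    int type_count;         /* type reference count                    */
    bool validated;         /* stands in for the PGT_validated flag    */
    struct page *refers_to; /* page table this one holds a type ref on */
};

static void put_page_and_type(struct page *pg);

/* Devalidate a page table: clear 'validated' and release the type and
 * general references it held on whatever it pointed at. */
static void free_page_type(struct page *pg)
{
    pg->validated = false;
    if (pg->refers_to) {
        struct page *target = pg->refers_to;
        pg->refers_to = NULL;
        put_page_and_type(target);
    }
    printf("free_page_type(%s) done\n", pg->name);
}

static void put_page_and_type(struct page *pg)
{
    /* Only a 1->0 type-count transition with 'validated' still set
     * devalidates the page; the inner put_page_and_type(A) finds the
     * flag already clear and does not re-enter free_page_type(A). */
    if (--pg->type_count == 0 && pg->validated)
        free_page_type(pg);
    --pg->count;
    printf("put_page_and_type(%s): count=%d type_count=%d\n",
           pg->name, pg->count, pg->type_count);
}

int main(void)
{
    /* Two page tables referring to each other: A -> B and B -> A,
     * each holding one type ref and one general ref on the other. */
    struct page A = { "A", 2, 1, true, NULL };
    struct page B = { "B", 2, 1, true, NULL };
    A.refers_to = &B;
    B.refers_to = &A;

    /* Forcibly devalidate A, roughly as relinquish_memory() would.
     * The recursion already drops one general reference on each page,
     * so an extra put_page(&A) here would drop one ref too many. */
    free_page_type(&A);

    printf("final: A count=%d, B count=%d\n", A.count, B.count);
    return 0;
}

[In this toy model, running the program shows each page's general count dropping exactly once during the recursive teardown, which is the point of Keir's objection: adding an unconditional put_page() in relinquish_memory() would release the circularly-referenced page a second time.]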