
Re: [Xen-devel] error in xen/arch/x86/mm.c:get_page during migration



>>> On 28.02.13 at 12:56, Tim Deegan <tim@xxxxxxx> wrote:
> At 09:52 -0500 on 25 Feb (1361785966), Andres Lagar-Cavilla wrote:
>> There seems to be something else amiss though. Unless I am parsing
>> this incorrectly, taf == PGT_writable | PGT_pae_xen_l2? And caf == PAT
>> | PCD? Looks like a very unlikely combination
> 
> By my reading, 
> 
> taf = 0x7400000000000001 = typecount 1, PGT_writable_page | PGT_validated
> caf = 0x0180000000000000 = refcount 0, PGC_state_free

Right.
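
For anyone following along, here is a minimal standalone sketch of that
decoding. The constants use the PG_shift()/PG_mask() bit layout from
xen/include/asm-x86/mm.h as I remember it for 64-bit; they are reproduced
here for illustration, not lifted from a build, so double-check against
the tree:

#include <stdio.h>

/* Bit layout as in xen/include/asm-x86/mm.h (x86-64), from memory. */
#define BITS_PER_LONG     64
#define PG_shift(idx)     (BITS_PER_LONG - (idx))
#define PG_mask(x, idx)   ((unsigned long)(x) << PG_shift(idx))

/* type_info: type in the top nibble, flag bits below, count at the bottom. */
#define PGT_writable_page PG_mask(7, 4)
#define PGT_type_mask     PG_mask(15, 4)
#define PGT_validated     PG_mask(1, 6)
#define PGT_count_mask    ((1UL << PG_shift(9)) - 1)

/* count_info: 2-bit page state, count at the bottom. */
#define PGC_state         PG_mask(3, 9)
#define PGC_state_free    PG_mask(3, 9)
#define PGC_count_mask    ((1UL << PG_shift(9)) - 1)

int main(void)
{
    unsigned long taf = 0x7400000000000001UL;
    unsigned long caf = 0x0180000000000000UL;

    printf("taf: typecount %lu, writable %d, validated %d\n",
           taf & PGT_count_mask,
           (taf & PGT_type_mask) == PGT_writable_page,
           !!(taf & PGT_validated));
    printf("caf: refcount %lu, state_free %d\n",
           caf & PGC_count_mask,
           (caf & PGC_state) == PGC_state_free);
    return 0;
}

which prints typecount 1 / refcount 0 with exactly the bits Tim named.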

> iow this is a free page but somehow has ended up with a typecount (which
> explains why the get_page() failed).  And presumably this is one of the
> various get_page[_and_type](page, dom_cow) calls in mem_sharing.c.
> 
> Since free_domheap_pages() has a BUG_ON(typecount != 0), it seems like
> something's gone badly off the rails here. 
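
To spell out the invariant being violated there: a type reference is only
ever taken on top of a general reference, so typecount > 0 should imply
refcount > 0 at all times. A toy model of just that shape (not Xen code):

#include <assert.h>

struct toy_page { unsigned long count, type_count; };

static void toy_get_page(struct toy_page *p)
{
    p->count++;                    /* general ref */
}

static void toy_get_page_and_type(struct toy_page *p)
{
    toy_get_page(p);               /* general ref taken first... */
    p->type_count++;               /* ...then the type ref on top */
}

static void toy_put_page_and_type(struct toy_page *p)
{
    p->type_count--;
    p->count--;
}

static void toy_free_page(struct toy_page *p)
{
    assert(p->type_count == 0);    /* free_domheap_pages()'s BUG_ON */
    assert(p->count == 0);
}

int main(void)
{
    struct toy_page pg = { 0, 0 };
    toy_get_page_and_type(&pg);    /* count 1, type_count 1: consistent */
    toy_put_page_and_type(&pg);    /* count 0, type_count 0: freeable   */
    toy_free_page(&pg);
    return 0;
}

The crashed state (refcount 0, typecount 1) is unreachable through these
paths, so something must have written the fields directly or freed the
page while the type reference was still held.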
> 
> One place I can see that tinkers with typecount without holding a
> ref is share_xen_page_with_guest(), which sets exactly this typecount,
> but then calls page_set_owner(page, d).
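
i.e., paraphrasing that function from xen/arch/x86/mm.c from memory
(check the tree for the literal code):

    /* Paraphrased, not the exact source: */
    page->u.inuse.type_info  = readonly ? PGT_none : PGT_writable_page;
    page->u.inuse.type_info |= PGT_validated | 1;  /* exactly taf's type/count */
    page_set_owner(page, d);

which produces exactly the type_info seen in the crash, but with a real
owner rather than DOM_COW.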
> 
> There's some hairy code in __gnttab_map_grant_ref() too, but I _think_
> it can't end up taking typecounts without refcounts.
> 
> __acquire_grant_for_copy() looks pretty hairy too, in particular this:
>         (void)page_get_owner_and_reference(*page);
>  but presumably the matching put_page() would have crashed if that was
> the problem.  Does anyone understand the grant code well enough to get
> into that?
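
The hazard I'd see with that pattern, for what it's worth (my reading,
not verified against the tree):

    /*
     * page_get_owner_and_reference(*page) takes no reference when it
     * fails, so discarding its return value and doing an unconditional
     * put_page(*page) later would underflow count_info rather than
     * merely leak a reference.
     */

But, as Tim says, such an underflow ought to have tripped the matching
put_page() first.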

The problem is that the domain reported in the message is DOM_COW
according to Olaf, yet he's not knowingly using page sharing. But
that fact pretty much excluded the grant table and the other
"usual" code paths for me (and I never looked at the sharing code,
so I hoped one of you two would).

Jan

