
Re: [Xen-API] mm.c:807:d0 Error getting mfn be3a9 (pfn ffffffffffffffff) from L1 entry 80000000be3a9a25 for l1e_owner=0, pg_owner=6



Ok, thanks for the information.

But the question is: how can we kill this domain?
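
For reference, this is roughly how I am trying to kill it - a minimal
sketch against the xen.lowlevel.xc bindings; the domain id and the exact
domain_shutdown arguments are my assumptions (reason 0 = poweroff):

import xen.lowlevel.xc

xc = xen.lowlevel.xc.xc()
domid = 6  # the stray domain from the hypervisor log quoted below

# Neither call actually gets rid of the domain.
xc.domain_shutdown(domid, 0)   # assumption: (domid, reason), reason 0 = poweroff
xc.domain_destroy(domid)

# The domain is still listed, flagged as dying.
for info in xc.domain_getinfo(domid, 1):
    print("dom %(domid)s dying=%(dying)s paused=%(paused)s shutdown=%(shutdown)s" % info)

# And the hypervisor console keeps repeating the grant-table errors.
print(xc.readconsolering())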

Right now it is just a test machine, but I have already hit this problem
once on a production server - the so-called 'deadbeef-dead-beef' problem:

xapi sees the stray (unkillable) domain, changes its uuid to deadbeef and
refuses to join the pool.

The main problem is with migration: because domains cannot be killed
(usually, once one domain gets into the 'd' state, others become
unkillable too), it becomes impossible to evacuate the host by migration.

I still don't have enough data to propose a proper solution, but copying
the domain's data, unplugging all of its devices, putting it into the
paused state and forgetting it (instead of destroying it) may be some
sort of workaround in this situation.

Something like 'xe emergency-host-evacuate'....
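
To make the idea concrete, here is a very rough sketch of what such a
helper could try with the stock XenAPI Python bindings; the host URL,
credentials and VM uuid are placeholders, and the final 'forget' step has
no existing API call, so this is an outline only, not an existing xapi
feature:

import XenAPI

def emergency_park(session, vm_uuid):
    vm = session.xenapi.VM.get_by_uuid(vm_uuid)
    # Unplug block and network devices so the dom0 backends drop their
    # references to the guest's memory.
    for vbd in session.xenapi.VM.get_VBDs(vm):
        if session.xenapi.VBD.get_currently_attached(vbd):
            session.xenapi.VBD.unplug(vbd)
    for vif in session.xenapi.VM.get_VIFs(vm):
        if session.xenapi.VIF.get_currently_attached(vif):
            session.xenapi.VIF.unplug(vif)
    # Park the domain instead of fighting its destruction.
    session.xenapi.VM.pause(vm)
    # 'Forgetting' the record (the step proposed above) has no API call
    # today; that part would need support in xapi itself.

session = XenAPI.Session("https://xcp-host.example")     # placeholder host
session.xenapi.login_with_password("root", "password")   # placeholder credentials
try:
    emergency_park(session, "00000000-0000-0000-0000-000000000000")  # placeholder uuid
finally:
    session.xenapi.session.logout()

The point is only the order of operations: get the backends to release
their references first, then leave the domain parked so the rest of the
pool can carry on.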



On Fri, 05/08/2011 at 14:48 +0100, Dave Scott wrote:
> I'm not sure if that's fixed. I think what's happening is that the domain 
> cannot be removed because some of its memory is mapped to another domain, 
> probably dom0 where the storage backend driver is a bit confused. It's a bit 
> like having a zombie process hang around.
> 
> Cheers,
> Dave
> 
> On Aug 5, 2011, at 2:12 PM, "George Shuklin" <george.shuklin@xxxxxxxxx> wrote:
> 
> > I know XCP 0.5 is completely outdated, but I still got a funny error on
> > my test server - I broke the storage (manually removed the base copy of a
> > snapshotted VDI) and got a domain with status -b-s-d- in xl list - it
> > cannot be killed with xc.domain_destroy or xc.domain_shutdown.
> > 
> > And in xc.readconsolering() I saw:
> > ...
> > (XEN) grant_table.c:1408:d0 dest domain 6 dying
> > (XEN) grant_table.c:1408:d0 dest domain 6 dying
> > (XEN) grant_table.c:1408:d0 dest domain 6 dying
> > (XEN) printk: 9 messages suppressed.
> > (XEN) mm.c:807:d0 Error getting mfn be3a9 (pfn ffffffffffffffff) from L1
> > entry 80000000be3a9a25 for l1e_owner=0, pg_owner=6
> > 
> > Is this bug fixed in XCP 1.0+, or is it still there?



_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api


 

