
Re: [Xen-devel] Error in xen-unstable



On Thu, 2011-07-14 at 13:08 +0100, Roger Pau Monné wrote:
> Hello,
> 
> Thanks both for the help. I was able to boot PV machines using the old
> toolstack (xm), but xl still shows the same odd behaviour. I'm able to
> boot a PV machine only once: after I shut it down I have to reboot the
> system, I think because xenstore is not properly cleaned up when the
> machine is shut down. Here is my xenstore-ls before launching any
> domU:
> 
> /tool = ""   (n0)
> /tool/xenstored = ""   (n0)
> [...]
> /vm/00000000-0000-0000-0000-000000000000/name = "Domain-0"   (n0)
> [...]
> /vm/00000000-0000-0000-0000-000000000000-1/name = "Domain-0"   (n0)
> [...]
> /vm/00000000-0000-0000-0000-000000000000-2/name = "Domain-0"   (n0)
> [...]
> /vm/00000000-0000-0000-0000-000000000000-17/name = "Domain-0"   (n0)

The presence of all these entries for dom0 is a bit odd. I doubt it is
related to your issue but it'd be worth clearing them out, or perhaps
arranging for your xenstore tdb to be nuked on boot.
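A minimal sketch of that boot-time cleanup, assuming the Linux default
db path (the NetBSD location, if different, is something you'd need to
check):

```shell
# Remove the on-disk xenstore database before xenstored starts, so no
# stale /vm entries from previous boots survive a reboot.
# /var/lib/xenstored/tdb is the Linux default path; adjust for NetBSD.
XENSTORED_DB="${XENSTORED_DB:-/var/lib/xenstored/tdb}"
if [ -f "$XENSTORED_DB" ]; then
    rm -f "$XENSTORED_DB"
fi
```

This would go in whatever starts xenstored at boot, before the daemon is
launched.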

> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51 = ""   (n0,r2)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51/uuid =
> "f96bf0e3-19ae-e011-ae46-1803730a9a51"   (n0,r2)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51/name = "debian"   (n0,r2)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51/pool_name = "Pool-0"   (n0,r2)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51/image = ""   (n0,r2)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51/image/ostype = "linux"   (n0,r2)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51/image/kernel = ""   (n0,r2)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51/image/ramdisk = ""   (n0,r2)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51/image/cmdline = ""   (n0,r2)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51/start_time = "1310648706.88"   
> (n0,r2)

Seems to be domain 2.

> [...]
> /vm/00000000-0000-0000-0000-000000000000-18/name = "Domain-0"   (n0)

Another dom0 entry.

> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1 = ""   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/on_xend_stop = "ignore"   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/pool_name = "Pool-0"   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/shadow_memory = "0"   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/uuid =
> "f96bf0e3-19ae-e011-ae46-1803730a9a51"   (n0,r2)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/on_reboot = "restart"   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/image = "(linux (kernel '')
> (superpages 0) (nomigrate 0) (tsc_mode 0))"   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/image/ostype = "linux"   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/image/kernel = ""   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/image/cmdline = ""   (n0,r2)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/image/ramdisk = ""   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/on_poweroff = "destroy"   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/bootloader_args = ""   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/on_xend_start = "ignore"   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/on_crash = "restart"   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/xend = ""   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/xend/restart_count = "0"   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/vcpus = "20"   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/vcpu_avail = "1048575"   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/bootloader = ""   (n0)
> /vm/f96bf0e3-19ae-e011-ae46-1803730a9a51-1/name = "Domain-Unnamed"   (n0)

Apparently yet another dom2!

> /vm/ebb38b4b-ad25-ba77-a594-dc0309cad702 = ""   (n0)
> /vm/ebb38b4b-ad25-ba77-a594-dc0309cad702/image = ""   (n0)
> /vm/ebb38b4b-ad25-ba77-a594-dc0309cad702/image/ostype = "linux"   (n0)
> /vm/ebb38b4b-ad25-ba77-a594-dc0309cad702/image/kernel =
> "/var/run/xend/boot/boot_kernel.YHU2Yq"   (n0)
> /vm/ebb38b4b-ad25-ba77-a594-dc0309cad702/image/cmdline =
> "root=/dev/xvda2 ro root=/dev/xvda2 ro "   (n0,r3)
> /vm/ebb38b4b-ad25-ba77-a594-dc0309cad702/image/ramdisk =
> "/var/run/xend/boot/boot_ramdisk.5_DepX"   (n0)

A dom3... I think a good clean-out of xenstore on boot would be wise (or
would at least remove some of the wood so we can see the trees!)

[...]
> /vm/00000000-0000-0000-0000-000000000000-19/name = "Domain-0"   (n0)

This appears to be the current actual dom0 entry.

[...]
> /local/domain = ""   (n0)
> /local/domain/0 = ""   (r0)
> /local/domain/0/vm = "/vm/00000000-0000-0000-0000-000000000000-19"   (r0)
> /local/domain/0/device = ""   (n0)
> /local/domain/0/control = ""   (n0)
> /local/domain/0/control/platform-feature-multiprocessor-suspend = "1"   (n0)
> /local/domain/0/error = ""   (n0)
> /local/domain/0/memory = ""   (n0)
> /local/domain/0/memory/target = "524288"   (n0)
> /local/domain/0/guest = ""   (n0)
> /local/domain/0/hvmpv = ""   (n0)
> /local/domain/0/data = ""   (n0)
> /local/domain/0/description = ""   (r0)
> /local/domain/0/console = ""   (r0)
> /local/domain/0/console/limit = "1048576"   (r0)
> /local/domain/0/console/type = "xenconsoled"   (r0)
> /local/domain/0/domid = "0"   (r0)
> /local/domain/0/cpu = ""   (r0)
> /local/domain/0/cpu/0 = ""   (r0)
> /local/domain/0/cpu/0/availability = "online"   (r0)
> /local/domain/0/name = "Domain-0"   (r0)

All seems sane enough.

> When I launch the domu I see the following messages in the console:
> 
> xbd backend: attach device vnd0d (size 20971520) for domain 1
> xbd backend: attach device vnd1d (size 262144) for domain 1
> mapping kernel into physical memory
> about to get started...
> 
> And xenstore reports the following components:
> [...]

All seems sane enough.

> The domu works correctly, and when I shut it down, I see the following
> messages in the console:
> 
> xbd backend: detach device vnd1d for domain 1
> xbd backend: detach device vnd0d for domain 1
> Jul 14 15:32:03 loki xenbackendd[771]: Failed to read 
> /local/domain/0/backend/vbd/1/51714/state (No such file or directory)
> xenbus: can't get state for backend/vbd/1/51714 (2)
> xenbus: can't get state for backend/vbd/1/51713 (2)

These messages are odd, in the light of:
[...]
> /local/domain/0/backend/vbd/1/51714/state = "5"   (r0)
> [...]
> /local/domain/0/backend/vbd/1/51713/state = "5"   (r0)

> Note that devices 51714 and 51713 are still listed in the dom0, and
> when I try to start the domu again, I don't see anything in the
> console, and xm create freezes.

I imagine that a simple xenstore-read can read those nodes just fine
throughout?

If you manually xenstore-rm the backend directories does it unwedge
itself? (obviously not a solution but might provide an interesting data
point).

>  I'm trying to debug this, but since
> I'm new to xen, it's kind of complicated. If someone could point me in
> the right direction I will try to do my best to fix this. Anyway, I
> will keep on working on this as long as I have time to do so.

FWIW I don't see anything like this on a Linux dom0, but that doesn't
necessarily rule out some sort of generic issue. I'm not all that
familiar with how a NetBSD dom0 is set up (for example, I'm not sure
what role xenbackendd plays).

I think clearing out your xenstored db (/var/lib/xenstored/tdb on Linux,
not sure about NetBSD) and rebooting would be worthwhile, just to rule
out any corruption. Note that on Linux we remove this file in the
initscript (apparently since 21552:de101fc39fc3, when the xencommons
stuff landed), so doing the same on NetBSD most probably makes sense
too.

One other thing which might be useful to try would be to switch to the
ocaml xenstored because it will log various xenstore interactions
in /var/log/xenstored-access.log. It's a bit of a long shot but might
help spot what's going on (or it might magically fix the problem :-D).
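If you do switch, something like this would pull the relevant entries
out of the access log around a domU shutdown (log path as above; the
grep pattern is just a guess at what's useful here):

```shell
# Show the most recent vbd backend accesses recorded by oxenstored.
# Falls back to a message if the log doesn't exist yet.
LOG=/var/log/xenstored-access.log
if [ -r "$LOG" ]; then
    grep 'backend/vbd' "$LOG" | tail -n 20
else
    echo "no access log yet at $LOG"
fi
```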

It might also be worth checking the patches in NetBSD's ports, in case
there is an interesting fix in there...

Maybe you posted it before but can you repost your guest config?

You are having a different issue with xl, is that right? xend is not
exactly maintained these days.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

