Re: [Xen-users] VFS: Unable to mount root fs on unknown-block(0,0)
On Sat, 2006-12-16 at 15:35 -0500, Bo wrote:
> I have compiled a kernel from the sources in xen-3.0.3_0-src.tgz. I
> generated an initrd via mkinitrd /boot/initrd-2.6.16.29-xen.img
> 2.6.16.29-xen (after doing a depmod).
>
> I did a make world/make install from the source directory, per the
> instructions in the README. I extracted the files from the initrd file
> to see what the init script looked like. It has a line inserting the
> ext3.ko module.
>
> The last few lines from the output:
>
> (XEN) Xen trace buffers: disabled
> (XEN) Xen is relinquishing VGA console.
> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch
> input to Xen).
> Kernel panic - not syncing: VFS: Unable to mount root fs on
> unknown-block(0,0)
> (XEN) Domain 0 crashed: 'noreboot' set - not rebooting.

Bo, is it possible (with your hardware) to boot without an initrd?

I'd look at the initrd itself. Notice that the panic says unknown-block(0,0),
even though you obviously passed the correct root as a kernel parameter,
which is correct at 1,1.

My *guess*, and this is only a guess, is that you could try swapping the
order of ro and root= in your boot config. I don't work much with Fedora,
but it looks like "ro" is being passed as the real root fs, and a default
value of 0,0 is being handed to pivot_root, because obviously no block
device named 'ro' exists. In other words, the order of the arguments could
be the issue; your initrd is "dumb". I'd try switching them, then try
booting without the initrd.

The chain-loading process of pivoting to the real root FS is breaking
somewhere between vmlinuz and your initrd. Argument order should not be an
issue, but that doesn't mean it isn't one :)

Again, just a guess, but a simple one to try. Hope this helps.

Best,
-Tim

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
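[Editor's note: for illustration only, a sketch of what the reordered GRUB (legacy) entry might look like, with root= placed before ro on the kernel command line. The kernel version and initrd path are taken from the thread; the Xen hypervisor path, disk designation (hd0,0), and root device /dev/hda2 are assumptions, not values from the original post.]

```
title Xen 3.0.3 / 2.6.16.29-xen
    root (hd0,0)
    kernel /boot/xen-3.0.3.gz
    module /boot/vmlinuz-2.6.16.29-xen root=/dev/hda2 ro
    module /boot/initrd-2.6.16.29-xen.img
```

If the initrd's init script parses the command line positionally rather than by keyword, having ro appear where it expects the root device could plausibly produce the unknown-block(0,0) default described above.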