
Re: [win-pv-devel] How to diagnose windows domU hang on boot?



> -----Original Message-----
> From: firemeteor.guo@xxxxxxxxx [mailto:firemeteor.guo@xxxxxxxxx] On
> Behalf Of G.R.
> Sent: 03 February 2017 18:14
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [win-pv-devel] How to diagnose windows domU hang on boot?
> 
> Finally I was able to uninstall the GPL PV driver by following the
> steps on this page:
> https://lime-technology.com/wiki/index.php/UnRAID_Manual_6#Converting_VMs_from_Xen_to_KVM
> I thought the info was obsolete but it just worked!
> 
> However, the new NIC driver is not working properly.
> Is there anything wrong that you can tell from the attached qemu log?

Hi,

Very much so...

GNTTAB: MAP XENMAPSPACE_grant_table[31] @ 00000000.f0020000
XENBUS|GnttabExpand: added references [00003e00 - 00003fff]
XENBUS|RangeSetPop: fail1 (c000009a)
XENBUS|GnttabExpand: fail1 (c000009a)
XENBUS|GnttabEntryCtor: fail1 (c000009a)
XENBUS|CacheCreateObject: fail2
XENBUS|CacheCreateObject: fail1 (c000009a)

It looks to me like you must have multi-page rings for your storage, because
XENVBD has grabbed the entire grant table before XENVIF gets a look-in
(c000009a is STATUS_INSUFFICIENT_RESOURCES).
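For context, the arithmetic behind the exhaustion (assuming v1 grant entries, which are 8 bytes each, so 512 entries per 4 KiB grant-table frame) works out like this:

```shell
# Values taken from the log above; assumes v1 grant entries (8 bytes each,
# i.e. 512 entries per 4 KiB grant-table frame).
frames=32                              # XENMAPSPACE_grant_table[31] => frames 0..31
entries=$(( frames * 4096 / 8 ))       # total grant references available
printf 'total refs: 0x%x (%d)\n' "$entries" "$entries"
# GnttabExpand added [00003e00 - 00003fff], i.e. the final 512 references,
# so the table is now full and the next RangeSetPop fails with c000009a
# (STATUS_INSUFFICIENT_RESOURCES).
```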

This is illustrated by the following:

XENVIF|PdoSetFriendlyName: Xen PV Network Device #0
XENVIF|__MacSetPermanentAddress: attr/vif/0: 00:18:3E:51:48:4C
XENVIF|__MacSetCurrentAddress: attr/vif/0: 00:18:3E:51:48:4C
XENVIF|FrontendSetNumQueues: device/vif/0: 4
XENVIF|FrontendSetSplit: device/vif/0: TRUE
XENBUS|RangeSetPop: fail1 (c000009a)
XENBUS|GnttabExpand: fail1 (c000009a)
XENBUS|GnttabEntryCtor: fail1 (c000009a)
XENBUS|CacheCreateObject: fail2
XENBUS|CacheCreateObject: fail1 (c000009a)
XENBUS|GnttabPermitForeignAccess: fail1 (c000009a)
XENVIF|__ReceiverRingConnect: fail8
XENVIF|__ReceiverRingConnect: fail7
XENVIF|__ReceiverRingConnect: fail6
XENVIF|__ReceiverRingConnect: fail5
XENVIF|__ReceiverRingConnect: fail4
XENVIF|__ReceiverRingConnect: fail3
XENVIF|__ReceiverRingConnect: fail2
XENVIF|__ReceiverRingConnect: fail1 (c000009a)
XENVIF|ReceiverConnect: fail6
XENVIF|ReceiverConnect: fail5
XENVIF|ReceiverConnect: fail4
XENVIF|ReceiverConnect: fail3
XENVIF|ReceiverConnect: fail2
XENVIF|ReceiverConnect: fail1 (c000009a)
XENVIF|FrontendConnect: fail5
XENVIF|FrontendConnect: fail4
XENVIF|FrontendConnect: fail3
XENVIF|FrontendConnect: fail2
XENVIF|FrontendConnect: fail1 (c000009a)
XENVIF|__PdoD3ToD0: fail1 (c0000001)
XENVIF|PdoD3ToD0: fail2
XENVIF|PdoD3ToD0: fail1 (c0000001)
XENVIF|PdoStartDevice: fail9
XENVIF|PdoStartDevice: fail6
XENVIF|PdoStartDevice: fail5
XENVIF|PdoStartDevice: fail4
XENVIF|PdoStartDevice: fail3
XENVIF|PdoStartDevice: fail2
XENVIF|PdoStartDevice: fail1 (c0000001)

...which means that XENVIF cannot even allocate grant references for the 4 
shared pages it needs to hook up the receive-side queues.

That was from qemu-ruibox_new.log. As for the other log, I can't see anything 
wrong other than that it seems to have stopped once it attached to storage... 
which probably suggests your backend is not working properly.

So, I suggest you stick with dom0 backends but limit your storage to a 
single-page ring. I can't remember the exact blkback module parameter you need 
to set to do that, but it shouldn't be hard to find.
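For what it's worth, on recent Linux dom0 kernels the blkback ring size is usually exposed as a module parameter. The parameter and sysfs names below are assumptions from memory, not something confirmed here, so verify them with modinfo on your dom0 before relying on them:

```shell
# Sketch only -- parameter/path names are assumptions; check your kernel.
modinfo xen-blkback | grep -i ring        # list any ring-related parameters
# Show the current value, if the parameter exists on this kernel:
cat /sys/module/xen_blkback/parameters/max_ring_page_order
# A single-page ring is page order 0; make it persistent via modprobe.d:
echo "options xen-blkback max_ring_page_order=0" > /etc/modprobe.d/xen-blkback.conf
```

New domUs started after the module is reloaded with that option should then negotiate a single-page blkif ring, leaving grant references free for XENVIF.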

  Cheers,

    Paul

> 
> Anyway, I'm getting a booting system this time.
> So I decided to give it a try with the driver-domain setup, as that was
> the initial goal.
> Unfortunately the new driver does NOT make it any better and I still
> get stuck at the boot screen, just as with the GPLPV driver.
> I'm also attaching the qemu log here (the one with 'driver_domain' suffix).
> 
> The driver domain (FreeBSD 10 based) lacks some features that appear
> in dom0:
> XENVBD|FrontendReadFeatures:Target[0] : Features: INDIRECT
> XENVBD|FrontendReadFeatures:Target[0] : INDIRECT 100
> But other than that, I couldn't find anything obviously wrong.
> 
>   XENVBD|PdoSetDevicePnpState:Target[0] : PNP Present to Started
>                        //<< this is the last line seen for the driver-domain config
>   XENVBD|PdoDispatchPnp:Target[0] : 07:QUERY_DEVICE_RELATIONS -> c00000bb
>   XENVBD|FrontendWriteUsage:Target[0] : DUMP NOT_HIBER NOT_PAGE
>   XENVBD|FdoDispatchPnp:14:QUERY_PNP_DEVICE_STATE -> c00000bb
> 
> Any idea what the situation is?
> 
> 
> On Fri, Jan 20, 2017 at 10:23 PM, Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> wrote:
> > I think uninstalling in safe mode is perfectly reasonable, but I don’t know
> > if there is any ‘magic’ to doing so. Are you following instructions as to
> > how to go about removing the drivers fully? (Remember that Windows itself
> > doesn’t really handle driver un-installation… it’s not something Microsoft
> > really supports). Whatever you do, you need to get to a situation where you
> > can boot the VM normally and verify that both storage and network are using
> > emulated devices before installing the 8.1 PV drivers.
> >
> >
> >
> > For comparison perhaps you could try installing the 8.1 drivers in a fresh
> > VM to verify there are no problems with your environment.
> >
> >
_______________________________________________
win-pv-devel mailing list
win-pv-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/cgi-bin/mailman/listinfo/win-pv-devel
