I’m trying to do nested Xen to give some of my colleagues a play area to work in, and it seems to work – but not quite…
I can build the nested environment (using OVM – I have to step back to OVM 3.2.2 to allow me to do PCI passthrough), but as soon as I start up a VM with a physical device passed through, the first layer loses all connectivity to the SAN.
Setup:
· 2 UCS B200 M2 blades, each configured with 9 vHBAs.
· Linux machines running LIO-ORG’s and SCST’s FCAL target mode (trying both, to decide which to use going forward)
The first vHBA is passed through to the “bare metal” OVM 3.2.2, and the rest are managed through xen-pciback. I then install another OVM 3.2.4 instance in an HVM with 1 vHBA passed through.
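For anyone wanting to reproduce the setup, handing a vHBA to xen-pciback looks roughly like this – the PCI address 0000:0b:00.1 is a placeholder; substitute the real BDF from `lspci | grep -i fibre`:

```shell
# Sketch only -- 0000:0b:00.1 is a placeholder PCI address.
modprobe xen-pciback

# Detach the function from the fnic driver in dom0...
echo 0000:0b:00.1 > /sys/bus/pci/drivers/fnic/unbind

# ...and register it with pciback so it becomes assignable to a guest.
echo 0000:0b:00.1 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:0b:00.1 > /sys/bus/pci/drivers/pciback/bind
```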
Within the nested OVS (what OVM calls the VM hosts), I can configure access to disks – and I’m getting up to 130MB/s when I do a ‘dd’ – on any of the LUNs (actually I get up to 130MB/s on the SCST LUNs and 50MB/s on the LIO-ORG ones, but they are different hardware).
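For reference, the throughput figures come from a plain sequential read along these lines (the multipath device name is illustrative):

```shell
# Hypothetical device name -- use the LUN's /dev/mapper entry.
# iflag=direct bypasses the page cache, so the number reflects the
# SAN path rather than host RAM.
dd if=/dev/mapper/mpatha of=/dev/null bs=1M count=1024 iflag=direct
```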
When I start up the nested PVM (since the HVM Xen can’t run an HVM guest), Oracle Linux 6 gets as far as ‘Detecting hardware’ and hangs – and an Ubuntu Xen install attaches to blkback and then provides a countdown as it waits for the hardware to settle/become available.
At this point, I start to receive messages from the HVM layer about multipath seeing paths down, and 120s later the OVM instance is rebooted due to OCFS2 heartbeat issues.
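In case it helps anyone diagnose the same thing, the path failures and the self-fence timing can be inspected with the usual tools (paths and values below are illustrative):

```shell
# Show which paths multipathd currently considers failed.
multipath -ll

# The OCFS2 self-fence interval is driven by the o2cb heartbeat
# threshold; status reports the active value, which is configured
# in /etc/sysconfig/o2cb.
service o2cb status
grep O2CB_HEARTBEAT_THRESHOLD /etc/sysconfig/o2cb
```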
I’m hoping, this weekend, to use a more recent Xen install rather than the one bundled with OVM, to take the Oracle-specific parts out of the picture – but I wondered if anyone had experienced this (or a similar) problem with the fnic driver before?
Thanks,
Graham