[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Still struggling to understand Xen
Xen seems to be different to most other forms of virtualisation in the way it presents hardware to the guest. For so-called HVM guests I understand everything: Hypervisor in conjunction with dom0 provides disk and network devices on PCI busses that can be viewed, enumerated with standard off-the-shelf Linux drivers and tools. This is all good. My confusion kicks in when the subject of PV drivers comes up. >From what I understand (not clearly documented anywhere that I could find) the hypervisor/dom0 combination somehow switches mode in response to something the DomU guest does. What exactly? Don't know. But by the time you've booted using the HVM hardware it seems the door is shut, and any attempt to load front-end drivers will then result in 'device not found' messages or whatever. That is, assuming my kernel is configured correctly. So this is presumably why most guests 'connect' to the PV back-end in the initrd. I couldn't really understand if it's the loading of the conventional SCSI driver, or the detection of a SCSI device, or the opening of a conventional SCSI device to mount as root that shuts the above 'door'. Unfortunately there isn't much documentation about kernel configurations for Xen and what documentation I found seemed to be out of date. It's also unclear to me if the back-end drivers in a typical dom0 that you might get from for example XCP-ng, or XenServer, or even AWS can somehow be incompatible with the latest and greatest domU Linux kernels. Is there some kind of interface versioning or are all versions forward and backward compatible? I've been through pretty much all drivers related to Xen, compiled them into my kernel and selected /dev/xvda1 device on boot, but it's still not working for me, the Xen 'hardware' is not being detected, so would appreciate any guidance you can offer. Regards, Mark.
|
Lists.xenproject.org is hosted with RackSpace, monitoring our |