Re: [Xen-devel] [PATCH RESEND] tools/libxl: add support for emulated NVMe drives
Paul Durrant writes ("RE: [PATCH RESEND] tools/libxl: add support for emulated NVMe drives"):
> > I guess that was with xapi rather than libxl?
>
> Nope. It was libxl.
That's weird. Did you specify it as xvda in the config file?
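(For context: a minimal xl disk stanza that names the virtual device xvda might look like the following, using the positional target,format,vdev,access syntax; the backing path is illustrative only.)

```
disk = [ '/dev/vg0/guest-volume,raw,xvda,rw' ]
```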
> Windows PV drivers treat hd*, sd* and xvd* numbering in the same way... they
> just parse the disk number out and use that as the target number of the
> synthetic SCSI bus exposed to Windows.
What if there's an hda /and/ an xvda?
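(For reference, here is a sketch of how a guest can parse the disk number out of a virtual device number, based on the encodings described in Xen's docs/misc/vbd-interface.txt; the function name and return convention are mine. It also illustrates why the question above arises: hda, sda and xvda all decode to disk 0 on their respective schemes, so a driver that keys only on the disk number would see them collide.)

```python
def decode_vdev(devnum):
    """Decode a Xen virtual block device number into (scheme, disk, partition),
    following the encodings in docs/misc/vbd-interface.txt."""
    if devnum & (1 << 28):                    # extended xvd encoding
        return ("xvd", (devnum >> 8) & 0xFFFFF, devnum & 0xFF)
    major = devnum >> 8
    if major == 202:                          # classic xvd (disks 0..15)
        return ("xvd", (devnum >> 4) & 0xF, devnum & 0xF)
    if major == 8:                            # sd, emulated SCSI (disks 0..15)
        return ("sd", (devnum >> 4) & 0xF, devnum & 0xF)
    if major == 3:                            # hd, IDE primary (hda/hdb)
        return ("hd", (devnum >> 6) & 0x3, devnum & 0x3F)
    if major == 22:                           # hd, IDE secondary (hdc/hdd)
        return ("hd", 2 + ((devnum >> 6) & 0x3), devnum & 0x3F)
    raise ValueError("unrecognised device number: %#x" % devnum)
```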
> > Allocating a new numbering scheme might involve changing Linux guests
> > too. (I haven't experimented with what happens if one specifies a
> > reserved number.)
>
> Yes, that's a point. IIRC the doc does say that guests should ignore numbers
> they don't understand... but who knows if this is actually the case.
>
> Given that there's no booting from NVMe at the moment, even HVM linux will
> only ever see the PV device since the emulated device will be unplugged early
> in boot and PV drivers are 'in box' in Linux. Windows is really the concern,
> where PV drivers are installed after the OS has seen the emulated device and
> thus the PV device needs to appear with the same 'identity' as far as the
> storage stack is concerned. I'm pretty sure this worked when I tried it a few
> months back using xvd* numbering (while coming up with the QEMU patch) but
> I'll check again.
I guess I'm trying to look forward to a "real" use case, which is
presumably booting from emulated NVMe?
If it's just for testing we might not care about a low limit on the
number of devices, or the precise unplug behaviour. Or we might
tolerate having such tests require special configuration.
Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel