
Re: [Xen-devel] [PATCH v1 14/21] Ovmf/Xen: allow non-PCI usage of XenBusDxe



On 01/26/15 10:46, Ard Biesheuvel wrote:

> So it would be sufficient to install the XENIO_PROTOCOL on the
> existing ControllerHandle containing the EFI_PCI_IO_PROTOCOL?

Yes.

Because there would be only one PCI (b,d,f) that would qualify (you'd
write up the Supported() check appropriately), there'd be only one
instance of XENIO_PROTOCOL created in the system as well.
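
Just to make the idea concrete, here is a rough sketch of such a
Supported() function -- the vendor/device IDs (0x5853:0x0001, i.e. the
Xen platform PCI device) are my assumption, and the BAR1 sanity check
you mentioned is omitted:

  #include <Uefi.h>
  #include <IndustryStandard/Pci.h>
  #include <Library/UefiBootServicesTableLib.h>
  #include <Protocol/DriverBinding.h>
  #include <Protocol/PciIo.h>

  //
  // Sketch only: bind nothing but the single qualifying PCI (b,d,f).
  //
  STATIC
  EFI_STATUS
  EFIAPI
  XenIoPciDeviceBindingSupported (
    IN EFI_DRIVER_BINDING_PROTOCOL  *This,
    IN EFI_HANDLE                   ControllerHandle,
    IN EFI_DEVICE_PATH_PROTOCOL     *RemainingDevicePath
    )
  {
    EFI_STATUS           Status;
    EFI_PCI_IO_PROTOCOL  *PciIo;
    PCI_TYPE00           Pci;

    Status = gBS->OpenProtocol (ControllerHandle, &gEfiPciIoProtocolGuid,
                    (VOID **)&PciIo, This->DriverBindingHandle,
                    ControllerHandle, EFI_OPEN_PROTOCOL_BY_DRIVER);
    if (EFI_ERROR (Status)) {
      return Status;
    }

    //
    // Read the config header and accept only the Xen platform device.
    //
    Status = PciIo->Pci.Read (PciIo, EfiPciIoWidthUint32, 0,
                          sizeof (Pci) / sizeof (UINT32), &Pci);
    if (!EFI_ERROR (Status) &&
        (Pci.Hdr.VendorId != 0x5853 || Pci.Hdr.DeviceId != 0x0001)) {
      Status = EFI_UNSUPPORTED;
    }

    gBS->CloseProtocol (ControllerHandle, &gEfiPciIoProtocolGuid,
           This->DriverBindingHandle, ControllerHandle);
    return Status;
  }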

> The
> recursion in the UEFI driver model would take care that the
> subordinate bus and devices are discovered as well?

Right. For example, when connecting all drivers to all devices, or when
connecting the device handle coming out of "XenIoMmioLib" recursively,
or when connecting the PCI (b,d,f) in question recursively, XenBusDxe
would report support for the XENIO protocol instance and would then be
connected to it, producing a new handle with a XENBUS_PROTOCOL instance
on it for each child. (In fact XENBUS_PROTOCOL should have been called
XEN_DEVICE_PROTOCOL as I understand it, but that's water under the
bridge.) Then the individual device drivers like XenPvBlkDxe would
report support for some of those, and so on.
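
A sketch of how that chain reaction gets kicked off -- "XenIoHandle"
below is just a placeholder for whichever handle carries the XENIO
protocol instance (or for the PCI (b,d,f) handle itself):

  #include <Uefi.h>
  #include <Library/UefiBootServicesTableLib.h>

  //
  // Sketch only: connect the handle recursively and let the UEFI driver
  // model walk the rest of the chain (XenBusDxe, then XenPvBlkDxe etc).
  //
  EFI_STATUS
  ConnectXenStack (
    IN EFI_HANDLE  XenIoHandle
    )
  {
    return gBS->ConnectController (
                  XenIoHandle,
                  NULL,   // consider all driver binding instances
                  NULL,   // no RemainingDevicePath restriction
                  TRUE    // recurse into the child handles
                  );
  }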

> Well, the point here is that the Xen PCI device is only used as a
> vehicle to convey the grant table address, nothing else. After reading
> the grant table address from BAR1, no other calls into the
> EFI_PCI_IO_PROTOCOL are made at all.

I see. It should still build from the bottom up. In the PCI case, your
"chain-reaction" is ignited by PCI enumeration, and you should key off
the appropriate PCI (b,d,f) -- see above. In the MMIO case, you do the
DTB-based enumeration yourself, and start it all by calling the library
function on the appropriate register block (or grant table base
address), which then produces a new handle with XENIO_PROTOCOL on it.
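
For example (the "XenIoMmioInstall" name and signature below are just my
guess at what such a library entry point could look like, not the
series' final API):

  #include <Uefi.h>
  #include <Library/UefiBootServicesTableLib.h>

  //
  // Hypothetical prototype for the "XenIoMmioLib" entry point.
  //
  EFI_STATUS
  EFIAPI
  XenIoMmioInstall (
    IN OUT EFI_HANDLE            *Handle,
    IN     EFI_PHYSICAL_ADDRESS  GrantTableAddress
    );

  //
  // Sketch of the DTB-driven caller (e.g. VirtFdtDxe): after parsing the
  // grant table base from the device tree node, create a fresh handle
  // with XENIO_PROTOCOL on it, then connect it recursively to ignite the
  // same chain as in the PCI case.
  //
  EFI_STATUS
  RegisterXenIoFromDtb (
    IN EFI_PHYSICAL_ADDRESS  GrantTableBase
    )
  {
    EFI_HANDLE  XenIoHandle;
    EFI_STATUS  Status;

    XenIoHandle = NULL;
    Status = XenIoMmioInstall (&XenIoHandle, GrantTableBase);
    if (EFI_ERROR (Status)) {
      return Status;
    }
    return gBS->ConnectController (XenIoHandle, NULL, NULL, TRUE);
  }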

It's okay that the hypercall mechanism is orthogonal, but that's an
implementation detail. It should still be hidden behind the
XENIO_PROTOCOL interface in my opinion. You can express the
orthogonality inside the implementation of XENIO_PROTOCOL by delegating
the hypercalls to a library instance (there's a sketch right after the
list below). In theory you could have four providers of XENIO_PROTOCOL:
- a standalone driver that binds PciIo and uses the Intel hypercall
  style.
- a standalone driver that binds PciIo and uses the ARM hypercall style.
  This difference would be controlled by the library class to library
  instance resolution in the DSC file(s).
- A library that takes the grant table base address as an input
  parameter, and uses the Intel hypercall style.
- A library that takes the grant table base address as an input
  parameter, and uses the ARM hypercall style. (Obviously not
  explicitly, but by delegating hypercalls to whatever library instance
  the hypercall library class is resolved to.)
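
To make the delegation concrete -- everything here is illustrative, the
XenHypercall2() prototype just mimics the spirit of a XenHypercallLib
style library class, and the protocol member is hypothetical:

  #include <Uefi.h>

  //
  // Assumed library class entry point: the caller never learns whether
  // the instance linked in at build time issues Intel-style or ARM-style
  // hypercalls.
  //
  INTN
  EFIAPI
  XenHypercall2 (
    IN     UINTN  HypercallID,
    IN OUT INTN   Arg1,
    IN OUT INTN   Arg2
    );

  //
  // Hypothetical XENIO_PROTOCOL with the hypercall hidden behind it (the
  // first diagram further down): consumers go through This->Hypercall2()
  // and the DSC's library class resolution picks the mechanism.
  //
  typedef struct _XENIO_PROTOCOL XENIO_PROTOCOL;

  typedef
  INTN
  (EFIAPI *XENIO_HYPERCALL2)(
    IN     XENIO_PROTOCOL  *This,
    IN     UINTN           HypercallID,
    IN OUT INTN            Arg1,
    IN OUT INTN            Arg2
    );

  struct _XENIO_PROTOCOL {
    XENIO_HYPERCALL2        Hypercall2;
    EFI_PHYSICAL_ADDRESS    GrantTableAddress;
  };

  INTN
  EFIAPI
  XenIoHypercall2 (
    IN     XENIO_PROTOCOL  *This,
    IN     UINTN           HypercallID,
    IN OUT INTN            Arg1,
    IN OUT INTN            Arg2
    )
  {
    //
    // Mechanism-agnostic: just forward to the library class.
    //
    return XenHypercall2 (HypercallID, Arg1, Arg2);
  }

  STATIC XENIO_PROTOCOL  mXenIo = {
    XenIoHypercall2,
    0                   // GrantTableAddress, filled in at install time
  };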

Ad absurdum, you could use "the library that takes the grant table
address as an input parameter" *inside* your PciIo-based driver. Then,
VirtFdtDxe would scan the DTB and call the main library function
directly, whereas your *thin* PciIo-based, standalone driver would
interrogate the PCI device that it recognizes, and delegate the main
work to the library. For this, the library would have to be able to
take a device handle from the caller (this would be the PCI device
handle in the PCI case, and a fresh handle with just a DevicePath on it
in the other case), and install XenIo on that.
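
In code, the shared library body could be as small as this (names are
placeholders again, and I'm assuming a XENIO_PROTOCOL that carries at
least the grant table address):

  #include <Uefi.h>
  #include <Library/MemoryAllocationLib.h>
  #include <Library/UefiBootServicesTableLib.h>

  typedef struct {
    EFI_PHYSICAL_ADDRESS  GrantTableAddress;  // minimal form, for the sketch
  } XENIO_PROTOCOL;

  extern EFI_GUID  gXenIoProtocolGuid;        // assumed GUID symbol name

  //
  // The caller passes in a handle: the PCI device handle in the PCI
  // case, or a freshly created handle that only carries a device path
  // in the DTB case. If *Handle is NULL,
  // InstallMultipleProtocolInterfaces() creates a brand new handle.
  //
  EFI_STATUS
  EFIAPI
  XenIoInstallOnHandle (
    IN OUT EFI_HANDLE            *Handle,
    IN     EFI_PHYSICAL_ADDRESS  GrantTableAddress
    )
  {
    XENIO_PROTOCOL  *XenIo;
    EFI_STATUS      Status;

    XenIo = AllocateZeroPool (sizeof *XenIo);
    if (XenIo == NULL) {
      return EFI_OUT_OF_RESOURCES;
    }
    XenIo->GrantTableAddress = GrantTableAddress;

    Status = gBS->InstallMultipleProtocolInterfaces (
                    Handle,
                    &gXenIoProtocolGuid, XenIo,
                    NULL
                    );
    if (EFI_ERROR (Status)) {
      FreePool (XenIo);
    }
    return Status;
  }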

Whether this is feasible should become very clear when you get that far
in coding, I think.

> Yes. I agree I need to rework that patch, but the existing
> XenHypercallLib works pretty well, I think.

Absolutely, it makes sense.

> I will proceed and split off XenIoPciDxe from XenBusDxe, which
> installs my existing XENIO_PROTOCOL on the existing ControllerHandle
> if its EFI_PCI_IO_PROTOCOL produces the correct vid/pid and BAR1 mem
> type.

Well... if you don't want to make the hypercall stuff part of the
XENIO_PROTOCOL interface, I can accept that too. My proposal above was:

                    XenBusDxe
                       |
                     XenIo
                       |
                      / \
                     /   \
          PCI vs. DTB     ARM vs. Intel hypercall Lib Instance

(I.e., XenBusDxe would depend on one abstraction (== XenIo), and XenIo
would depend on two independent abstractions, each of which would have
two possible implementations.)
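
In DSC terms, the hypercall leg of that diagram boils down to one
library class resolution line per firmware platform -- the INF paths and
even the class name below are placeholders, not the series' actual
layout:

  [LibraryClasses.X64]
    XenHypercallLib|OvmfPkg/Library/XenHypercallLib/XenHypercallIntel.inf

  [LibraryClasses.ARM, LibraryClasses.AARCH64]
    XenHypercallLib|OvmfPkg/Library/XenHypercallLib/XenHypercallArm.inf

Whichever instance the class resolves to is linked into every module
that lists the class, so the choice is made exactly once per build.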

But I guess the following also makes sense:

                    XenBusDxe
                   /         \
               XenIo          ARM vs. Intel hypercall Lib Instance
                 |
            PCI vs. DTB

The second architecture would keep XenIo much thinner, and it's
certainly sufficient to have only one hypercall method built into the
entire firmware -- you could decide that question at build time with a
"global" library class resolution.

Please use --stat=150 in the next version, and spoon-feed us the code in
baby-bite-sized patches! :)

Thanks
Laszlo
