
Re: [Xen-devel] [edk2] [RFC] Introducing Xen PV block driver to OVMF



Hello Anthony,

On 05/20/14 17:11, Anthony PERARD wrote:
> Hi all,
> 
> I've been working on writing a driver to use a Xen PV disk directly from
> OVMF.  So far, I have a working prototype, a guest can boot from a PV
> device.  The code can be found on a git repo:
> git://xenbits.xen.org/people/aperard/ovmf.git  pv-disk
> (http://xenbits.xen.org/gitweb/?p=people/aperard/ovmf.git;a=summary)
> 
> (I don't want to send the patches to the mailing list yet, since there
> is a lot of dead code, and it is probably hard to read because of the
> absence of coding style.)

I probably won't find the time to review the series (it is huge), but I
have a few suggestions, if I may:

- I used to co-maintain the RHEL-5 Xen hypervisor, the RHEL-5 Xen dom0
  and domU kernels, and the RHEL-6 domU kernel. My educated plea is
  that Xen developers write (or reference) a concise natural language
  specification, not unlike the virtio specification (probably smaller,
  if possible), for the subset of the Xen infrastructure that is
  exercised in the patchset.

- Adhering to the edk2 coding style is very much recommended, but
  first, please remove all dead code, unused definitions and
  declarations from the headers etc. The series will be very hard to
  review even without wading through dead code.

> I would like your comment on the current design choice.
> 
> What I have:
>   - Ovmf/XenbusDxe:
>   This driver is the first to start; it binds based on the presence of
>   a PCI device, initializes everything needed to access a PV device,
>   and discovers every PV device.

So it is a UEFI bus (or hybrid) driver, producing child handles (with
instances of the xenbus protocol), isn't it?
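
If it is, then I imagine the usual edk2 bus driver pattern applies. Here
is a rough sketch of what I mean; the protocol GUID, the context
structure, and the function names are all made up by me, not taken from
your series:

#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Library/MemoryAllocationLib.h>

//
// Hypothetical names, for this sketch only.
//
extern EFI_GUID gXenbusProtocolGuid;

typedef struct {
  UINT32  Signature;   // stand-in; the real protocol members go here
} XENBUS_PROTOCOL;

typedef struct {
  XENBUS_PROTOCOL  XenbusIo;
  EFI_HANDLE       Handle;
} XENBUS_CHILD;

//
// Called from DriverBindingStart(), once per PV device found in Xenstore.
//
STATIC
EFI_STATUS
XenbusCreateChild (
  VOID
  )
{
  EFI_STATUS    Status;
  XENBUS_CHILD  *Child;

  Child = AllocateZeroPool (sizeof *Child);
  if (Child == NULL) {
    return EFI_OUT_OF_RESOURCES;
  }

  //
  // Install the xenbus protocol on a fresh child handle. A complete bus
  // driver would also install a device path here, and open PciIo on the
  // parent handle with BY_CHILD_CONTROLLER to record the link.
  //
  Child->Handle = NULL;
  Status = gBS->InstallMultipleProtocolInterfaces (
                  &Child->Handle,
                  &gXenbusProtocolGuid, &Child->XenbusIo,
                  NULL
                  );
  if (EFI_ERROR (Status)) {
    FreePool (Child);
  }
  return Status;
}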

>   It initializes the Grant Table and Xenstore.
>   It provides XenbusProtocol to PV drivers.
>   There are some hypercalls, written in assembly, which target GCC for
>   now.
> 
>   It consumes PciIoProtocol.
>   It provides XenbusProtocol.
> 
>   - Ovmf/XenPvBlkDxe:
>   This is a block driver that uses XenbusProtocol and provides BlockIO.
>   Everything the driver needs goes through XenbusProtocol: Xenstore
>   reads/writes, event channel initialization, and Grant Table access.

This layering seems sane to me.
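
Just to make sure we mean the same thing: I'd expect XenPvBlkDxe to talk
exclusively to this interface, never to PciIo directly. Something along
these lines, where every member function name is hypothetical (I haven't
checked your tree, so the real protocol surely differs):

#include <Uefi.h>

typedef struct _XENBUS_PROTOCOL XENBUS_PROTOCOL;

typedef
EFI_STATUS
(EFIAPI *XENBUS_XS_READ) (
  IN  XENBUS_PROTOCOL  *This,
  IN  CONST CHAR8      *Node,    // Xenstore key, e.g. "backend-id"
  OUT CHAR8            **Value   // allocated by the callee, freed by caller
  );

typedef
EFI_STATUS
(EFIAPI *XENBUS_GRANT_ACCESS) (
  IN  XENBUS_PROTOCOL  *This,
  IN  UINT16           BackendDomId,  // domain being granted access
  IN  UINTN            Frame,         // guest frame number to share
  IN  BOOLEAN          ReadOnly,
  OUT UINT32           *GrantRef
  );

typedef
EFI_STATUS
(EFIAPI *XENBUS_EVENT_CHANNEL_NOTIFY) (
  IN XENBUS_PROTOCOL  *This,
  IN UINT32           Port
  );

struct _XENBUS_PROTOCOL {
  XENBUS_XS_READ               XsRead;
  XENBUS_GRANT_ACCESS          GrantAccess;
  XENBUS_EVENT_CHANNEL_NOTIFY  EventChannelNotify;
  // plus XsWrite, grant revocation, event channel allocation, and so on
};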

>   - OvmfPkg/Include/IndustryStandard/Xen:
>   This contains all the public headers from the Xen Project. There are
>   a few modifications to use functions provided by OVMF instead of the
>   POSIX ones.
> 
> 
> What are all these?
>   - Xenstore: a key/value database.
>   - Xenbus: a kind of bus built on top of Xenstore; it is used to
>     discover PV devices.
>   - Event Channel: provides event notification between the two sides of
>     a PV device, the frontend and the backend. This is normally an
>     interrupt.
>   - Grant Tables: used to grant another guest access to parts of a
>     guest's memory.
> 
> 
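
Thanks, this glossary helps. For the grant table part, the mechanism as I
remember it from the Xen public headers is small enough to sketch here
(types renamed to the edk2 style you ask about below; the function and
variable names are mine):

#include <Uefi.h>
#include <Library/BaseLib.h>

#define GTF_PERMIT_ACCESS  1   // activates a v1 grant entry

typedef struct {
  UINT16  Flags;     // GTF_* flags
  UINT16  DomainId;  // the domain being granted access (the backend)
  UINT32  Frame;     // guest frame number being shared
} GRANT_ENTRY_V1;

VOID
GrantAccess (
  IN GRANT_ENTRY_V1  *GrantTable,    // the shared grant table, pre-mapped
  IN UINT32          Ref,            // a free entry index
  IN UINT16          BackendDomId,
  IN UINTN           Frame
  )
{
  GrantTable[Ref].DomainId = BackendDomId;
  GrantTable[Ref].Frame    = (UINT32)Frame;
  //
  // Flags must be written last, after a barrier, so the hypervisor never
  // observes a half-initialized entry.
  //
  MemoryFence ();
  GrantTable[Ref].Flags = GTF_PERMIT_ACCESS;
}

Writing that up in a short spec document, as suggested above, would save
reviewers a trip through the Xen headers.
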
> Do I need to apply the coding style to the public includes that come
> from Xen (which are stored in Include/IndustryStandard)? I mean changing
> all uint32_t to UINT32, changing every struct name, ...

I think that's recommended. It's the perfect incentive for you to cut
stuff to the bare minimum, and to document the rest. (You'll feel an
urge to explain whatever you keep and rename.)

> I will, however, comply with the coding style in Ovmf/XenbusDxe and
> XenPvBlkDxe.
> 
> I'm not sure what I should do with the assembly code: is writing it for
> GCC enough, or should I provide both formats (.S + .asm)?

AFAIK current practice is to provide all 32-bit and 64-bit assembly as
both .S and .asm, and let the build system choose (based on the build
host platform).
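
For the GCC side specifically, one option worth considering is GCC
extended asm embedded in C, which would shrink the .S file to almost
nothing. A sketch for X64 HVM follows; it assumes the hypercall page has
already been set up (by writing its address to the MSR advertised in the
Xen CPUID leaves), and all the names are mine:

//
// Each hypercall stub occupies 32 bytes in the hypercall page. On
// x86-64, Xen takes the arguments in rdi, rsi, rdx, r10, r8 and returns
// the result in rax, clobbering rcx and r11.
//
STATIC VOID  *mHyperPage;   // populated during driver initialization

STATIC
INTN
XenHypercall2 (
  IN UINTN  HypercallNum,
  IN UINTN  Arg1,
  IN UINTN  Arg2
  )
{
  INTN  Result;

  register UINTN  Rdi asm ("rdi") = Arg1;
  register UINTN  Rsi asm ("rsi") = Arg2;

  asm volatile (
    "call *%[Entry]"
    : "=a" (Result), "+r" (Rdi), "+r" (Rsi)
    : [Entry] "rm" ((UINT8 *)mHyperPage + HypercallNum * 32)
    : "memory", "rcx", "r11"
  );
  return Result;
}

That still leaves MSVC out in the cold, of course, which is why the
.S + .asm pair remains the safe answer.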

Of course the above is just my opinion; I won't complain if you
disagree. The "hard" requirements will (should :)) come from Jordan.

Thanks!
Laszlo


 

