
Re: [Xen-devel] [DOC v7] PV Calls protocol design



On Fri, 20 Jan 2017, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 10, 2017 at 04:13:26PM -0800, Stefano Stabellini wrote:
> > Upon request from Konrad, I am attaching the output of pahole on the C
> > structs defined by PVCalls. As you can see, alignments and sizes of all
> 
> Thank you!
> > fields are the same, except for the padding at the end of many request
> > structs (they need to be a multiple of 8 bytes on 64bit). However, struct
> 
> Would it make sense to add
> 
> #ifdef CONFIG_X86_32
> uint8_t[X] _pad;
> #endif
> 
> so it will nicely fill it out?

I added them for each struct in the union too.
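For example, for the listen request it becomes (just a sketch, the exact
pad width differs per struct):

    struct xen_pvcalls_listen {
            uint64_t id;
            uint32_t backlog;
    #ifdef CONFIG_X86_32
            uint8_t _pad[4]; /* sizeof() == 16 on both 32 and 64 bit */
    #endif
    };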


> > xen_pvcalls_dummy dummy ensures that the overall size of struct
> > xen_pvcalls_request is the same on both 32 and 64.
> 
> Yes indeed. Thank you!
> 
> > > # PV Calls Protocol version 1
> > > 
> > > ## Glossary
> > > 
> > > The following is a list of terms and definitions used in the Xen
> > > community. If you are a Xen contributor you can skip this section.
> > > 
> > > * PV
> > > 
> > >   Short for paravirtualized.
> > > 
> > > * Dom0
> > > 
> > >   First virtual machine that boots. In most configurations Dom0 is
> > >   privileged and has control over hardware devices, such as network
> > >   cards, graphic cards, etc.
> > > 
> > > * DomU
> > > 
> > >   Regular unprivileged Xen virtual machine.
> > > 
> > > * Domain
> > > 
> > >   A Xen virtual machine. Dom0 and all DomUs are separate Xen
> > >   domains.
> > > 
> > > * Guest
> > > 
> > >   Same as domain: a Xen virtual machine.
> > > 
> > > * Frontend
> > > 
> > >   Each DomU has one or more paravirtualized frontend drivers to access
> > >   disks, network, console, graphics, etc. The presence of PV devices is
> > >   advertised on XenStore, a cross domain key-value database. Frontends
> > >   are similar in intent to the virtio drivers in Linux.
> > > 
> > > * Backend
> > > 
> > >   A Xen paravirtualized backend typically runs in Dom0 and is used to
> > >   export disks, network, console, graphics, etc., to DomUs. Backends can
> > >   live both in kernel space and in userspace. For example xen-blkback
> > >   lives under drivers/block in the Linux kernel and xen_disk lives under
> > >   hw/block in QEMU. Paravirtualized backends are similar in intent to
> > >   virtio device emulators.
> > > 
> > > * VMX and SVM
> > >   
> > >   On Intel processors, VMX is the CPU flag for VT-x, hardware
> > >   virtualization support. It corresponds to SVM on AMD processors.
> > > 
> > > 
> > > 
> > > ## Rationale
> > > 
> > > PV Calls is a paravirtualized protocol that allows the implementation of
> > > a set of POSIX functions in a different domain. The PV Calls frontend
> > > sends POSIX function calls to the backend, which implements them and
> > > returns a value to the frontend.
> 
> "returns a value to the frontend and acts on the function call."

OK


> > > 
> > > This version of the document covers networking function calls, such as
> > > connect, accept, bind, release, listen, poll, recvmsg and sendmsg; but
> > > the protocol is meant to be easily extended to cover different sets of
> > > calls. Unimplemented commands return ENOTSUP.
> > > 
> > > PV Calls provide the following benefits:
> > > * full visibility of the guest behavior on the backend domain, allowing
> > >   for inexpensive filtering and manipulation of any guest calls
> > > * excellent performance
> > > 
> > > Specifically, PV Calls for networking offer these advantages:
> > > * guest networking works out of the box with VPNs, wireless networks and
> > >   any other complex configurations on the host
> > > * guest services listen on ports bound directly to the backend domain IP
> > >   addresses
> > > * localhost becomes a secure namespace for inter-VMs communications
> > >
> 
> [Not sure I understand that but it probably is explained in detail later on?]
> 
> Does it make sense to define what 'namespace' is? I mean I know what it is
> in the context of C++ but I presume with cgroups and such it is different?

I reworded it to:

* localhost becomes a secure host-wide network for inter-VM
  communications


> > > 
> > > ## Design
> > > 
> > > ### Why Xen?
> > > 
> > > PV Calls are part of an effort to create a secure runtime environment
> > > for containers (OCI images to be precise). PV Calls are based on Xen,
> 
> OCI = Open Containers Initiative?

That's right


> Perhaps mention that in glossary or just spell it out here, like:
> 
> s/OCI images/Open Containers Initiative, aka OCI/

OK


> > > although porting them to other hypervisors is possible. Xen was chosen
> > > because of its security and isolation properties and because it supports
> > > PV guests, a type of virtual machine that does not require hardware
> > > virtualization extensions (VMX on Intel processors and SVM on AMD
> > > processors). This is important because PV Calls is meant for containers
> > > and containers are often run on top of public cloud instances, which do
> > > not support nested VMX (or SVM) as of today (late 2016). Xen PV guests
> 
> s/late 2016/early 2017/

OK


> > > are lightweight, minimalist, and do not require machine emulation: all
> > > properties that make them a good fit for the project.
> 
> s/the project/this project/

OK


> > > 
> > > ### Xenstore
> > > 
> > > The frontend and the backend connect via [xenstore] to
> > > exchange information. The toolstack creates front and back nodes with
> > > state [XenbusStateInitialising]. The protocol node name
> 
> s/state/state of/

OK


> > > is **pvcalls**.  There can only be one PV Calls frontend per domain.
> > > 
> > > #### Frontend XenBus Nodes
> > > 
> > > port
> > >      Values:         <uint32_t>
> > > 
> > >      The identifier of the Xen event channel used to signal activity
> > >      in the ring buffer.
> 
> ring buffer or command ring?

Command ring


> Or ring buffers?
> 
> You seem to have:
> 
>                 1:N              1:N
>   command ring ------> data ring ---> ring buffer

It's actually:

                 1:N                 1:1
   command ring ------> indexes page ---> ring buffer


> And the data ring creates 1 event channel for N ring buffers when we have
> a 'connect' call. But that is done outside XenBus. In theory you could
> have 500 extra connections?

The command ring can spawn any number of indexes pages which are used to
setup the ring buffers. I'll start using indexes page throughout the doc
for clarity.


> Anyhow this 'port' has to be for the command ring?

Port is the event channel of the command ring
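For example, the frontend nodes of a PV Calls frontend in domain 1 could
look like this (hypothetical values; the paths just follow the usual
xenbus layout):

    /local/domain/1/device/pvcalls/0/port = "17"
    /local/domain/1/device/pvcalls/0/ring-ref = "8"
    /local/domain/1/device/pvcalls/0/state = "3"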


>     
> > > 
> > > ring-ref
> > >      Values:         <uint32_t>
> > > 
> > >      The Xen grant reference granting permission for the backend to map
> > >      the sole page in a single page sized ring buffer. Later on this
> 
> ring buffer? Or command ring?
> 
> > >      ring will be referred to commands ring.
> 
> s/Later on this ring will be to commands ring/This ring will be known as 
> commands ring./
> 
> But why? Why not just call it 'command ring' from the start?

Sure, I can do that.


> > > 
> > > #### Backend XenBus Nodes
> > > 
> > > version
> > >      Values:         <uint32_t>
> > > 
> > >      Protocol version supported by the backend. Currently the value is
> > >      1.
> > > 
> > > max-page-order
> > >      Values:         <uint32_t>
> > > 
> > >      The maximum supported size of a memory allocation in units of
> > >      lb(machine pages), e.g. 0 == 1 page,  1 = 2 pages, 2 == 4 pages,
> 
> lb? Isn't that like the 'pound-mass' abbreviation?
> 
> I think you mean log2 or logarithm. Could you just use that please?

OK


> > >      etc.
> > > 
> > > function-calls
> > >      Values:         <uint32_t>
> > > 
> > >      Value "0" means that no calls are supported.
> > >      Value "1" means that socket, connect, release, bind, listen, accept
> > >      and poll are supported.
> > > 
> > > #### State Machine
> > > 
> > > Initialization:
> > > 
> > >     *Front*                               *Back*
> > >     XenbusStateInitialising               XenbusStateInitialising
> > >     - Query virtual device                - Query backend device
> > >       properties.                           identification data.
> > >     - Setup OS device instance.           - Publish backend features
> > >     - Allocate and initialize the           and transport parameters
> > >       request ring.                                      |
> > >     - Publish transport parameters                       |
> > >       that will be in effect during                      V
> > >       this connection.                            XenbusStateInitWait
> > >                  |
> > >                  |
> > >                  V
> > >        XenbusStateInitialised
> > > 
> > >                                           - Query frontend transport
> > >                                             parameters.
> > >                                           - Connect to the request ring
> > >                                             and event channel.
> > >                                                          |
> > >                                                          |
> > >                                                          V
> > >                                                  XenbusStateConnected
> > > 
> > >      - Query backend device properties.
> > >      - Finalize OS virtual device
> > >        instance.
> > >                  |
> > >                  |
> > >                  V
> > >         XenbusStateConnected
> > > 
> > > Once frontend and backend are connected, they have a shared page, which
> > > is used to exchange messages over a ring, and an event channel,
> > > which is used to send notifications.
> > > 
> > > Shutdown:
> > > 
> > >     *Front*                            *Back*
> > >     XenbusStateConnected               XenbusStateConnected
> > >                 |
> > >                 |
> > >                 V
> > >        XenbusStateClosing
> > > 
> > >                                        - Unmap grants
> > >                                        - Unbind evtchns
> > >                                                  |
> > >                                                  |
> > >                                                  V
> > >                                          XenbusStateClosing
> > > 
> > >     - Unbind evtchns
> > >     - Free rings
> > >     - Free data structures
> > >                |
> > >                |
> > >                V
> > >        XenbusStateClosed
> > > 
> > >                                        - Free remaining data structures
> > >                                                  |
> > >                                                  |
> > >                                                  V
> > >                                          XenbusStateClosed
> > > 
> > > 
> > > ### Commands Ring
> > > 
> > > The shared ring is used by the frontend to forward POSIX function calls
> > > to the backend. We shall refer to this ring as **commands ring** to
> > > distinguish it from other rings which can be created later in the
> > > lifecycle of the protocol (see [Data ring]). The grant reference for the
> > > shared page for this ring is shared on xenstore (see [Frontend XenBus
> > > Nodes]). The ring format is defined using the familiar
> > > `DEFINE_RING_TYPES` macro (`xen/include/public/io/ring.h`).  Frontend
> > > requests are allocated on the ring using the `RING_GET_REQUEST` macro.
> > > The list of commands below is in calling order.
> > > 
> > > The format is defined as follows:
> > >     
> > >     #define PVCALLS_SOCKET         0
> > >     #define PVCALLS_CONNECT        1
> > >     #define PVCALLS_RELEASE        2
> > >     #define PVCALLS_BIND           3
> > >     #define PVCALLS_LISTEN         4
> > >     #define PVCALLS_ACCEPT         5
> > >     #define PVCALLS_POLL           6
> > > 
> > >     struct xen_pvcalls_request {
> > >           uint32_t req_id; /* private to guest, echoed in response */
> > >           uint32_t cmd;    /* command to execute */
> > >           union {
> > >                   struct xen_pvcalls_socket {
> > >                           uint64_t id;
> > >                           uint32_t domain;
> > >                           uint32_t type;
> > >                           uint32_t protocol;
> > >                   } socket;
> > >                   struct xen_pvcalls_connect {
> > >                           uint64_t id;
> > >                           uint8_t addr[28];
> > >                           uint32_t len;
> > >                           uint32_t flags;
> > >                           grant_ref_t ref;
> > >                           uint32_t evtchn;
> > >                   } connect;
> > >                   struct xen_pvcalls_release {
> > >                           uint64_t id;
> > >                           uint8_t reuse;
> > >                   } release;
> > >                   struct xen_pvcalls_bind {
> > >                           uint64_t id;
> > >                           uint8_t addr[28];
> > >                           uint32_t len;
> > >                   } bind;
> > >                   struct xen_pvcalls_listen {
> > >                           uint64_t id;
> > >                           uint32_t backlog;
> > >                   } listen;
> > >                   struct xen_pvcalls_accept {
> > >                           uint64_t id;
> > >                           uint64_t id_new;
> > >                           grant_ref_t ref;
> > >                           uint32_t evtchn;
> > >                   } accept;
> > >                   struct xen_pvcalls_poll {
> > >                           uint64_t id;
> > >                   } poll;
> > >                   /* dummy member to force sizeof(struct xen_pvcalls_request)
> > >                      to match across archs */
> > >                   struct xen_pvcalls_dummy {
> > >                           uint8_t dummy[56];
> > >                   } dummy;
> > >           } u;
> > >     };
> > > 
> > > The first two fields are common for every command. Their binary layout
> > > is:
> > > 
> > >     0       4       8
> > >     +-------+-------+
> > >     |req_id |  cmd  |
> > >     +-------+-------+
> > > 
> > > - **req_id** is generated by the frontend and is a cookie used to
> > >   identify one specific request/response pair of commands. Not to be
> > >   confused with the command **id** fields, which are used to identify a
> > >   socket across multiple commands, see [Socket].
> > > - **cmd** is the command requested by the frontend:
> > > 
> > >     - `PVCALLS_SOCKET`:  0
> > >     - `PVCALLS_CONNECT`: 1
> > >     - `PVCALLS_RELEASE`: 2
> > >     - `PVCALLS_BIND`:    3
> > >     - `PVCALLS_LISTEN`:  4
> > >     - `PVCALLS_ACCEPT`:  5
> > >     - `PVCALLS_POLL`:    6
> > > 
> > > Both fields are echoed back by the backend. See [Socket families and
> > > address format] for the format of the **addr** field of connect and
> > > bind. The maximum size of command specific arguments is 56 bytes. Any
> > > future command that requires more than that will need a bump of the
> > > **version** of the protocol.
> 
> Would it make sense to say that the protocol should be backwards compatible?
> 
> As if you bump to 2 then you can still support 1?
> 
> Or should there be a negotiation with frontend and backend on the version?
> Similar to how the other values can be negotiated?

Given that we have feature flags for protocol extensions, I think that
we'll only need to increase the version of the protocol if we introduce
backward incompatible changes. For example, increasing the size of
struct xen_pvcalls_request is a backward incompatible change.

I think you are right: it makes sense for a backend to be able to
support multiple protocol versions. I'll add that to the next version.
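Roughly like this on the frontend side (only a sketch of the idea;
xenbus_read_u32/xenbus_write_u32 and PVCALLS_FRONT_MAX_VERSION are made-up
placeholder names):

    /* pick the highest protocol version supported by both sides */
    uint32_t backend_max = xenbus_read_u32(backend_path, "version");
    uint32_t version = backend_max < PVCALLS_FRONT_MAX_VERSION ?
                       backend_max : PVCALLS_FRONT_MAX_VERSION;
    xenbus_write_u32(frontend_path, "version", version);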


> > > Similarly to other Xen ring based protocols, after writing a request to
> > > the ring, the frontend calls `RING_PUSH_REQUESTS_AND_CHECK_NOTIFY` and
> > > issues an event channel notification when a notification is required.
> > > 
> > > Backend responses are allocated on the ring using the
> > > `RING_GET_RESPONSE` macro.
> > > The format is the following:
> > > 
> > >     struct xen_pvcalls_response {
> > >         uint32_t req_id;
> > >         uint32_t cmd;
> > >         int32_t ret;
> > >         uint32_t pad;
> > >         union {
> > >                   struct _xen_pvcalls_socket {
> > >                           uint64_t id;
> > >                   } socket;
> > >                   struct _xen_pvcalls_connect {
> > >                           uint64_t id;
> > >                   } connect;
> > >                   struct _xen_pvcalls_release {
> > >                           uint64_t id;
> > >                   } release;
> > >                   struct _xen_pvcalls_bind {
> > >                           uint64_t id;
> > >                   } bind;
> > >                   struct _xen_pvcalls_listen {
> > >                           uint64_t id;
> > >                   } listen;
> > >                   struct _xen_pvcalls_accept {
> > >                           uint64_t id;
> > >                   } accept;
> > >                   struct _xen_pvcalls_poll {
> > >                           uint64_t id;
> > >                   } poll;
> > >                   struct _xen_pvcalls_dummy {
> > >                           uint8_t dummy[8];
> > >                   } dummy;
> > >           } u;
> > >     };
> > > 
> > > The first four fields are common for every response. Their binary layout
> > > is:
> > > 
> > >     0       4       8       12      16
> > >     +-------+-------+-------+-------+
> > >     |req_id |  cmd  |  ret  |  pad  |
> > >     +-------+-------+-------+-------+
> > > 
> > > - **req_id**: echoed back from request
> > > - **cmd**: echoed back from request
> > > - **ret**: return value, identifies success (0) or failure (see error
> > >   numbers below). If the **cmd** is not supported by the backend, ret is
> > >   ENOTSUP.
> 
> Uh, where are those error numbers?
> 
> Or perhaps  you meant, see "[Error numbers] in further sections." ?

I made the change


> > > - **pad**: padding
> > > 
> > > After calling `RING_PUSH_RESPONSES_AND_CHECK_NOTIFY`, the backend checks
> > > whether it needs to notify the frontend and does so via event channel.
> > > 
> > > A description of each command, with its additional request and response
> > > fields, follows.
> > > 
> > > 
> > > #### Socket
> > > 
> > > The **socket** operation corresponds to the POSIX [socket][socket]
> > > function. It creates a new socket of the specified family, type and
> > > protocol. **id** is freely chosen by the frontend and references this
> > > specific socket from this point forward. See "Socket families and
> > > address format" below.
> > > 
> > > Request fields:
> > > 
> > > - **cmd** value: 0
> > > - additional fields:
> > >   - **id**: generated by the frontend, it identifies the new socket
> > >   - **domain**: the communication domain
> > >   - **type**: the socket type
> > >   - **protocol**: the particular protocol to be used with the socket,
> > >     usually 0
> > > 
> > > Request binary layout:
> > > 
> > >     8       12      16      20     24       28
> > >     +-------+-------+-------+-------+-------+
> > >     |       id      |domain | type  |protoco|
> > >     +-------+-------+-------+-------+-------+
> > > 
> > > Response additional fields:
> > > 
> > > - **id**: echoed back from request
> > > 
> > > Response binary layout:
> > > 
> > >     16       20       24
> > >     +-------+--------+
> > >     |       id       |
> > >     +-------+--------+
> > > 
> > > Return value:
> > > 
> > >   - 0 on success
> > >   - See the [POSIX socket function][socket] for error names; the
> > >     corresponding error numbers are specified later in this document.
> 
> Or perhaps  you meant, see "[Error numbers] in further sections." ?

OK


> > > #### Connect
> > > 
> > > The **connect** operation corresponds to the POSIX [connect][connect]
> > > function. It connects a previously created socket (identified by **id**)
> > > to the specified address.
> > > 
> > > The connect operation creates a new shared ring, which we'll call **data
> > > ring**. The [Data ring] is used to send and receive data from the
> > > socket. The connect operation passes two additional parameters:
> > > **evtchn** and **ref**. **evtchn** is the port number of a new event
> > > channel which will be used for notifications of activity on the data
> > > ring. **ref** is the grant reference of a page which contains shared
> > > indices that point to the write and read locations in the ring buffers.
> 
> which ring buffer? Perhaps say:
> 
> s/in the ring buffers/of the ring buffers (to be setup by this function)/ ?
> 
> Or are ring buffers data rings?

I introduced the usage of "indexes page":

  **ref** is the grant reference of the **indexes page**: a page which
  contains shared indexes that point to the write and read locations in
  the data ring. The **indexes page** also contains the full array of
  grant references for the data ring.


> No wait, they are not.
> 
> Perhaps this:
> 
> +---------------------------+                 Data ring:(pvcalls_data_intf)
> | Command ring:             |                 +----------------------+       ring buffers
> | @0: xen_pvcalls_connect:  |                 |@0 pvcalls_data_intf: |       +------------------+
> | @44: ref  +-------------------------------->+@76: ring_order = 1   |       | @0->2048: in     |
> |                           |                 |@80: ref[0]+------------------>                  |
> +---------------------------+                 |@84: ref[1]+          |       | @2048->4095: out |
>                                               |           |          |       +------------------+
>                                               |           |          |
>                                               +----------------------+
>                                                           v (ring buffers)
>                                                   +-------+-----------+
>                                                   |  @0->2048: in     |
>                                                   |                   |
>                                                   |  @2049->4095: out |
>                                                   +-------------------+
> 
> 
> ? That is what this looks like right?

Not quite: the ring_order = 1 pages are supposed to be mapped contiguously:

  +---------------------------+                 Indexes page
  | Command ring:             |                 +----------------------+
  | @0: xen_pvcalls_connect:  |                 |@0 pvcalls_data_intf: |
  | @44: ref  +-------------------------------->+@76: ring_order = 1   |
  |                           |                 |@80: ref[0]+          |
  +---------------------------+                 |@84: ref[1]+          |
                                                |           |          |
                                                |           |          |
                                                +----------------------+
                                                            |
                                                            v (ring buffers)
                                                    +-------+-----------+
                                                    |  @0->4098: in     |
                                                    |  ref[0]           |
                                                    |-------------------|
                                                    |  @4099->8196: out |
                                                    |  ref[1]           |
                                                    +-------------------+
 
I added this diagram to the doc.
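In code, the backend side of **connect** would then do something like the
following (only a sketch: map_one_grant and map_grants_contiguous are
placeholders for the OS-specific grant mapping primitives):

    struct pvcalls_data_intf *intf;
    char *ring, *in, *out;
    size_t half;

    /* map the indexes page, shared via the ref field of the request */
    intf = map_one_grant(front_domid, req->u.connect.ref);
    /* map the (1 << ring_order) data grants contiguously */
    ring = map_grants_contiguous(front_domid, intf->ref,
                                 1 << intf->ring_order);
    half = ((1 << intf->ring_order) << PAGE_SHIFT) / 2;
    in  = ring;         /* first half of the data ring  */
    out = ring + half;  /* second half of the data ring */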


> > > **ref** also contains the full array of grant references for the ring
> > > buffers. When the frontend issues a **connect** command, the backend:
> 
> OK, so ring buffers != data rings.
> 
> Perhaps explain what a ring buffer is beforehand?

I removed the usage of "ring buffers" in the doc. I was trying to convey
that a data ring is composed of two buffers: the in and out buffers. They
are independent, and each of them has its own indexes. I'll simply say
"data ring" from now on, and in the data ring format description it will
become clear that each data ring actually has two circular buffers.


> Like [The connect operation creates a new shared ring, which we will call
> **data ring**, it also creates secondary rings known as **ring buffers**
> which will have the actual data.]
> ?
> [edit: s/**ring buffers**/**data ring interfaces**/ ] ?
> 
> > > 
> > > - finds its own internal socket corresponding to **id**
> > > - connects the socket to **addr**
> > > - maps the grant reference **ref**, the shared page contains the data
> > >   ring interface (`struct pvcalls_data_intf`)
> 
> So later in the spec you call the 'ring buffer' as 'data ring interface'.
> But not always (See **Data ring**)
> 
> Perhaps we always should call it 'data ring interface' ?

I used indexes page


> > > - maps all the grant references listed in `struct pvcalls_data_intf` and
> > >   uses them as shared memory for the ring buffers
> > > - bind the **evtchn**
> > > - replies to the frontend
> > > 
> > > The [Data ring] format will be described in the following section. The
> 
> .. and [Ring buffers]
>
> > > data ring is unmapped and freed upon issuing a **release** command on
> > > the active socket identified by **id**.
> 
> .. or state change (Frontend changing state, etc) ?

OK


> > > 
> > > Request fields:
> > > 
> > > - **cmd** value: 1
> > > - additional fields:
> > >   - **id**: identifies the socket
> > >   - **addr**: address to connect to, see the address format section for
> > >     more
> 
> 
> s/address format/[Socket families and address format]/

OK


> > >     information
> > >   - **len**: address length
> > >   - **flags**: flags for the connection, reserved for future usage
> > >   - **ref**: grant reference of the page containing `struct
> > >     pvcalls_data_intf`
> > >   - **evtchn**: port number of the evtchn to signal activity on the
> > >     data ring
> > > 
> > > Request binary layout:
> > > 
> > >     8       12      16      20      24      28      32      36      40      44
> > >     +-------+-------+-------+-------+-------+-------+-------+-------+-------+
> > >     |       id      |                            addr                       |
> > >     +-------+-------+-------+-------+-------+-------+-------+-------+-------+
> > >     | len   | flags |  ref  |evtchn |
> > >     +-------+-------+-------+-------+
> > > 
> > > Response additional fields:
> > > 
> > > - **id**: echoed back from request
> > > 
> > > Response binary layout:
> > > 
> > >     16      20      24
> > >     +-------+-------+
> > >     |       id      |
> > >     +-------+-------+
> > > 
> > > Return value:
> > > 
> > >   - 0 on success
> > >   - See the [POSIX connect function][connect] for error names; the
> > >     corresponding error numbers are specified later in this document.
> 
> Or perhaps  you meant, see "[Error numbers] in further sections." ?

Yep


> > > 
> > > #### Release
> > > 
> > > The **release** operation closes an existing active or passive socket.
> > > 
> > > When a release command is issued on a passive socket, the backend
> > > releases it and frees its internal mappings. When a release command is
> > > issued for an active socket, the data ring is also unmapped and freed:
> 
> s/data ring/data ring and buffer rings/?

Right


> > > - frontend sends release command for an active socket
> > > - backend releases the socket
> > > - backend unmaps the data ring buffers
> > > - backend unmaps the data ring interface
> 
> What is 'data ring interface', ah! ring buffer! [mentioned in 'connect' 
> section]
> 
> Aren't these two above the same thing then?
> 
> But if not, then s/interface/interfaces/

I used:
- backend unmaps the data ring
- backend unmaps the indexes page


> > > - backend unbinds the evtchn
> 
> s/evtchn/event channel/

OK


> > > - backend replies to frontend
> 
> ..with an **ret** value.

OK


> > > - frontend frees ring and unbinds evtchn
> 
> s/ring/data ring/
> s/evtchn/event channel/

OK


> > > 
> > > Request fields:
> > > 
> > > - **cmd** value: 2
> > > - additional fields:
> > >   - **id**: identifies the socket
> > >   - **reuse**: an optimization hint for the backend. The field is
> > >     ignored for passive sockets. When set to 1, the frontend lets the
> > >     backend know that it will reuse exactly the same set of grant pages
> > >     (interface and data ring) and evtchn when creating one of the next
> 
> interface.. that is the 'ring buffer'! ?
> 
> So 'data ring interface and data ring' or
> 'ring buffer and data ring' ?

Sorry for the confusion. The interface is the indexes page, and the data
ring is the two in and out arrays. I changed it to "(indexes page and data
ring)".


> 
> > >     active sockets. The backend can take advantage of it by delaying
> > >     unmapping grants and unbinding the evtchn. The backend is free to
> > >     ignore the hint. Reused data rings are found by **ref**, the grant
> > >     reference of the page containing the indices.
> > > 
> > > Request binary layout:
> > > 
> > >     8       12      16    17
> > >     +-------+-------+-----+
> > >     |       id      |reuse|
> > >     +-------+-------+-----+
> > > 
> > > Response additional fields:
> > > 
> > > - **id**: echoed back from request
> > > 
> > > Response binary layout:
> > > 
> > >     16      20      24
> > >     +-------+-------+
> > >     |       id      |
> > >     +-------+-------+
> > > 
> > > Return value:
> > > 
> > >   - 0 on success
> > >   - See the [POSIX shutdown function][shutdown] for error names; the
> > >     corresponding error numbers are specified later in this document.
> 
> 
> Or perhaps  you meant, see "[Error numbers] in further sections." ?

Yep


> > > 
> > > #### Bind
> > > 
> > > The **bind** operation corresponds to the POSIX [bind][bind] function.
> > > It assigns the address passed as parameter to a previously created
> > > socket, identified by **id**. **Bind**, **listen** and **accept** are
> > > the three operations required to have fully working passive sockets and
> > > should be issued in this order.
> 
> s/this/that/

OK


> > > 
> > > Request fields:
> > > 
> > > - **cmd** value: 3
> > > - additional fields:
> > >   - **id**: identifies the socket
> > >   - **addr**: address to bind to, see the address format section for
> > >     more
> 
> s/address format/[Socket families and address format]/

OK


> > >     information
> > >   - **len**: address length
> > > 
> > > Request binary layout:
> > > 
> > >     8       12      16      20      24      28      32      36      40      44
> > >     +-------+-------+-------+-------+-------+-------+-------+-------+-------+
> > >     |       id      |                            addr                       |
> > >     +-------+-------+-------+-------+-------+-------+-------+-------+-------+
> > >     |  len  |
> > >     +-------+
> > > 
> > > Response additional fields:
> > > 
> > > - **id**: echoed back from request
> > > 
> > > Response binary layout:
> > > 
> > >     16      20      24
> > >     +-------+-------+
> > >     |       id      |
> > >     +-------+-------+
> > > 
> > > Return value:
> > > 
> > >   - 0 on success
> > >   - See the [POSIX bind function][bind] for error names; the
> > >     corresponding error numbers are specified later in this document.
>
>

Done :-)


> > > 
> > > 
> > > #### Listen
> > > 
> > > The **listen** operation marks the socket as a passive socket. It
> > > corresponds to the [POSIX listen function][listen].
> > > 
> > > Request fields:
> > > 
> > > - **cmd** value: 4
> > > - additional fields:
> > >   - **id**: identifies the socket
> > >   - **backlog**: the maximum length to which the queue of pending
> > >     connections may grow
> 
> In bytes?
> May want to be specific on that.

Nope, it is the number of sockets in queue
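It has the same meaning as the backlog argument of the POSIX listen
function: for example, backlog = 5 allows at most five not-yet-accepted
connections to be queued.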


> > > 
> > > Request binary layout:
> > > 
> > >     8       12      16      20
> > >     +-------+-------+-------+
> > >     |       id      |backlog|
> > >     +-------+-------+-------+
> > > 
> > > Response additional fields:
> > > 
> > > - **id**: echoed back from request
> > > 
> > > Response binary layout:
> > > 
> > >     16      20      24
> > >     +-------+-------+
> > >     |       id      |
> > >     +-------+-------+
> > > 
> > > Return value:
> > >   - 0 on success
> > >   - See the [POSIX listen function][listen] for error names; the
> > >     corresponding error numbers are specified later in this document.
> 
> 
> Or perhaps  you meant, see "[Error numbers] in further sections." ?
> > > 
> > > 
> > > #### Accept
> > > 
> > > The **accept** operation extracts the first connection request on the
> > > queue of pending connections for the listening socket identified by
> > > **id** and creates a new connected socket. The id of the new socket is
> > > also chosen by the frontend and passed as an additional field of the
> > > accept request struct (**id_new**). See the [POSIX accept
> > > function][accept] as reference.
> > > 
> > > Similarly to the **connect** operation, **accept** creates a new [Data
> > > ring]. The data ring is used to send and receive data from the socket.
> 
> s/ring]/ring] and N data ring buffers./

OK


> > > The **accept** operation passes two additional parameters: **evtchn**
> > > and **ref**. **evtchn** is the port number of a new event channel which
> > > will be used for notifications of activity on the data ring. **ref** is
> > > the grant reference of a page which contains shared indices that point
> > > to the write and read locations in the ring buffers. **ref** also
> > > contains the full array of grant references for the ring buffers.
> > > 
> > > The backend will reply to the request only when a new connection is
> > > successfully accepted, i.e. the backend does not return EAGAIN or
> > > EWOULDBLOCK.
> > > 
> > > Example workflow:
> > > 
> > > - frontend issues an **accept** request
> > > - backend waits for a connection to be available on the socket
> > > - a new connection becomes available
> > > - backend accepts the new connection
> > > - backend creates an internal mapping from **id_new** to the new socket
> > > - backend maps the grant reference **ref**, the shared page contains the
> > >   data ring interface (`struct pvcalls_data_intf`)
> > > - backend maps all the grant references listed in `struct
> > >   pvcalls_data_intf` and uses them as shared memory for the new data
> > >   ring
> 
> s/data ring/ring buffer/
> 
> or
> s/data ring/data ring interface/

I wrote:
 
- backend maps all the grant references listed in `struct
  pvcalls_data_intf` and uses them as shared memory for the new data
  ring **in** and **out** arrays


> > > - backend binds the **evtchn**
> 
> s/the/to the/

OK


> > > - backend replies to the frontend
> 
> with an **ret** value.

OK


> > > 
> > > Request fields:
> > > 
> > > - **cmd** value: 5
> > > - additional fields:
> > >   - **id**: id of listening socket
> > >   - **id_new**: id of the new socket
> > >   - **ref**: grant reference of the data ring interface (`struct
> > >     pvcalls_data_intf`)
> > >   - **evtchn**: port number of the evtchn to signal activity on the
> > >     data ring
> 
> .. and ring buffers.

In this document:

  data ring == in + out arrays == ring buffers

so I would keep the wording as is.


> > > 
> > > Request binary layout:
> > > 
> > >     8       12      16      20      24      28      32
> > >     +-------+-------+-------+-------+-------+-------+
> > >     |       id      |    id_new     |  ref  |evtchn |
> > >     +-------+-------+-------+-------+-------+-------+
> > > 
> > > Response additional fields:
> > > 
> > > - **id**: id of the listening socket, echoed back from request
> > > 
> > > Response binary layout:
> > > 
> > >     16      20      24
> > >     +-------+-------+
> > >     |       id      |
> > >     +-------+-------+
> > > 
> > > Return value:
> > > 
> > >   - 0 on success
> > >   - See the [POSIX accept function][accept] for error names; the
> > >     corresponding error numbers are specified later in this document.
> 
> 
> Or perhaps  you meant, see "[Error numbers] in further sections." ?

Yes


> > > 
> > > 
> > > #### Poll
> > > 
> > > In this version of the protocol, the **poll** operation is only valid
> > > for passive sockets. For active sockets, the frontend should look at the
> > > state of the data ring. When a new connection is available in the queue
> 
> By state you mean the in_prod ? If so perhaps 'look at the producer/consumer
> indices in the data ring' ?

Right. I wrote:

  For active sockets, the frontend should look at the
  indexes on the **indexes page**.
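For instance, checking whether an active socket has data to read becomes
(a sketch against the `struct pvcalls_data_intf` layout quoted below):

    /* data is available when the backend has produced more than the
     * frontend has consumed, or an error is pending */
    static inline int pvcalls_front_readable(struct pvcalls_data_intf *intf)
    {
            return intf->in_prod != intf->in_cons || intf->in_error != 0;
    }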


> > > of the passive socket, the backend generates a response and notifies the
> > > frontend.
> > > 
> > > Request fields:
> > > 
> > > - **cmd** value: 6
> > > - additional fields:
> > >   - **id**: identifies the listening socket
> > > 
> > > Request binary layout:
> > > 
> > >     8       12      16
> > >     +-------+-------+
> > >     |       id      |
> > >     +-------+-------+
> > > 
> > > 
> > > Response additional fields:
> > > 
> > > - **id**: echoed back from request
> > > 
> > > Response binary layout:
> > > 
> > >     16       20       24
> > >     +--------+--------+
> > >     |        id       |
> > >     +--------+--------+
> > > 
> > > Return value:
> > > 
> > >   - 0 on success
> > >   - See the [POSIX poll function][poll] for error names; the
> > >     corresponding error numbers are specified later in this document.
> 
> Guess what I am going to ask here :-)

Eh :-)


> > > 
> > > #### Error numbers
> > > 
> > > The numbers corresponding to the error names specified by POSIX are:
> > > 
> > >     [EPERM]         -1
> > >     [ENOENT]        -2
> > >     [ESRCH]         -3
> > >     [EINTR]         -4
> > >     [EIO]           -5
> > >     [ENXIO]         -6
> > >     [E2BIG]         -7
> > >     [ENOEXEC]       -8
> > >     [EBADF]         -9
> > >     [ECHILD]        -10
> > >     [EAGAIN]        -11
> > >     [EWOULDBLOCK]   -11
> > >     [ENOMEM]        -12
> > >     [EACCES]        -13
> > >     [EFAULT]        -14
> > >     [EBUSY]         -16
> > >     [EEXIST]        -17
> > >     [EXDEV]         -18
> > >     [ENODEV]        -19
> > >     [EISDIR]        -21
> > >     [EINVAL]        -22
> > >     [ENFILE]        -23
> > >     [EMFILE]        -24
> > >     [ENOSPC]        -28
> > >     [EROFS]         -30
> > >     [EMLINK]        -31
> > >     [EDOM]          -33
> > >     [ERANGE]        -34
> > >     [EDEADLK]       -35
> > >     [EDEADLOCK]     -35
> > >     [ENAMETOOLONG]  -36
> > >     [ENOLCK]        -37
> > >     [ENOTEMPTY]     -39
> > >     [ENOSYS]        -38
> > >     [ENODATA]       -61
> > >     [ETIME]         -62
> > >     [EBADMSG]       -74
> > >     [EOVERFLOW]     -75
> > >     [EILSEQ]        -84
> > >     [ERESTART]      -85
> > >     [ENOTSOCK]      -88
> > >     [EOPNOTSUPP]    -95
> > >     [EAFNOSUPPORT]  -97
> > >     [EADDRINUSE]    -98
> > >     [EADDRNOTAVAIL] -99
> > >     [ENOBUFS]       -105
> > >     [EISCONN]       -106
> > >     [ENOTCONN]      -107
> > >     [ETIMEDOUT]     -110
> > >     [ENOTSUP]       -524
> > > 
> > > #### Socket families and address format
> > > 
> > > The following definitions and explicit sizes, together with POSIX
> > > [sys/socket.h][address] and [netinet/in.h][in] define socket families and
> > > address format. Please be aware that only the **domain** `AF_INET`,
> > > **type** `SOCK_STREAM` and **protocol** `0` are supported by this
> > > version of the
> > > spec, others return ENOTSUP.
> 
> s/spec,/specification,/

OK


> > > 
> > >     #define AF_UNSPEC   0
> > >     #define AF_UNIX     1   /* Unix domain sockets      */
> > >     #define AF_LOCAL    1   /* POSIX name for AF_UNIX   */
> > >     #define AF_INET     2   /* Internet IP Protocol     */
> > >     #define AF_INET6    10  /* IP version 6         */
> > > 
> > >     #define SOCK_STREAM 1
> > >     #define SOCK_DGRAM  2
> > >     #define SOCK_RAW    3
> > > 
> > >     /* generic address format */
> > >     struct sockaddr {
> > >         uint16_t sa_family_t;
> > >         char sa_data[26];
> > >     };
> > > 
> > >     struct in_addr {
> > >         uint32_t s_addr;
> > >     };
> > > 
> > >     /* AF_INET address format */
> > >     struct sockaddr_in {
> > >         uint16_t         sa_family_t;
> > >         uint16_t         sin_port;
> > >         struct in_addr   sin_addr;
> > >         char             sin_zero[20];
> > >     };
> > > 
> > > 
> > > ### Data ring
> > > 
> > > Data rings are used for sending and receiving data over a connected
> > > socket. They
> > > are created upon a successful **accept** or **connect** command.
> > > The **sendmsg** and **recvmsg** calls are implemented by sending data and
> > > receiving data from data rings.
> 
> s/data rings/secondary data rings (and updating the primary data ring 
> indices)/?
> > > 
> > > A data ring is composed of two pieces: the interface and the **in**
> > > and **out** buffers. The interface, represented by `struct
> > > pvcalls_ring_intf` is shared first and resides on the page whose grant
> > > reference is passed by **accept** and **connect** as parameter.
> > > `struct pvcalls_ring_intf` contains the list of grant
> 
> s/parameter./parameter (see **ref** in those sections)/
> 
> > > references which constitute the **in** and **out** data buffers.
> 
> Perhaps add:
> 
> (see **ref[]** below for order and count)

I changed this entire introduction to:

  Data rings are used for sending and receiving data over a connected
  socket. They are created upon a successful **accept** or **connect**
  command.
  The **sendmsg** and **recvmsg** calls are implemented by sending data and
  receiving data from a data ring.
  
  Firstly, the **indexes page** is shared by a **connect** or **accept**
  command, see **ref** parameter in their sections. The content of the
  **indexes page** is represented by `struct pvcalls_ring_intf`, see
  below. The structure contains the list of grant references which
  constitute the **in** and **out** buffers of the data ring, see ref[]
  below. The backend maps the grant references contiguously. Of the
  resulting shared memory, the first half is dedicated to the **in** array
  and the second half to the **out** array. They are used as circular
  buffers for transferring data, and, together, they are the data ring.


> > > 
> > > #### Data ring interface
> > > 
> > >     struct pvcalls_data_intf {
> > >           PVCALLS_RING_IDX in_cons, in_prod;
> > >           int32_t in_error;
> > > 
> > >           uint8_t pad[52];
> > > 
> > >           PVCALLS_RING_IDX out_cons, out_prod;
> > >           int32_t out_error;
> > > 
> > >           uint32_t ring_order;
> 
> Why not add some extra padding here ? You did it for the
> in_cons, in_prod, so you could do it here as well?
> 
>               uint8_t pad2[48];

There isn't much benefit to this: indexes are read all the time,
while refs are only read once. They are not on the hot path.
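(The existing pad[52] is what matters there: 4 + 4 + 4 + 52 = 64 bytes, so
the in and out index pairs already fall on separate cache lines on CPUs
with 64-byte cache lines.)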


 
> > >           grant_ref_t ref[];
> > >     };
> > > 
> > >     /* not actually C compliant (ring_order changes from socket to
> > >        socket) */
> > >     struct pvcalls_data {
> > >         char in[((1<<ring_order)<<PAGE_SHIFT)/2];
> > >         char out[((1<<ring_order)<<PAGE_SHIFT)/2];
> > >     };
> > > 
> > > - **ring_order**
> > >   It represents the order of the data ring. The following list of grant
> > >   references is of `(1 << ring_order)` elements. It cannot be greater than
> > >   **max-dataring-page-order**, as specified by the backend on XenBus.
> 
> That 'max-dataring-page-order' is not anywhere in this spec?

It should be max-page-order


> Would it make sense to say that it must be at minimum one?

yes


> > > - **ref[]**
> > >   The list of grant references which will contain the actual data. They
> > >   are mapped contiguously in virtual memory. The first half of the pages
> > >   is the
> > >   **in** array, the second half is the **out** array. The array must
> > >   have a power of two number of elements.
> 
> .. and be up to 'ring_order'.

For clarity I wrote:

  The arrays must have a power of two size. Together, their size is `(1 <<
  ring_order) * PAGE_SIZE`.
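  (For example, ring_order = 1 means two granted pages, so **in** and
  **out** are 4096 bytes each.)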


> 
> > > - **in** is an array used as circular buffer
> > >   It contains data read from the socket. The producer is the backend, the
> > >   consumer is the frontend.
> > > - **out** is an array used as circular buffer
> > >   It contains data to be written to the socket. The producer is the
> > >   frontend, the consumer is the backend.
> > > - **in_cons** and **in_prod**
> > >   Consumer and producer indices for data read from the socket. They
> > >   keep track of how much data has already been consumed by the frontend
> > >   from the **in** array. **in_prod** is increased by the backend, after
> > >   writing data to **in**. **in_cons** is increased by the frontend,
> > >   after reading data from **in**.
> > > - **out_cons**, **out_prod**
> > >   Consumer and producer indices for the data to be written to the
> > >   socket. They keep track of how much data has been written by the
> > >   frontend to **out** and how much data has already been consumed by the
> > >   backend. **out_prod** is increased by the frontend, after writing data
> > >   to **out**. **out_cons** is increased by the backend, after reading
> > >   data from **out**.
> > > - **in_error** and **out_error**
> > >   They signal errors when reading from the socket (**in_error**) or when
> > >   writing to the socket (**out_error**). 0 means no errors. When an
> > >   error occurs, no further read or write operations are performed on the
> > >   socket. In the case of an orderly socket shutdown (i.e. read returns
> > >   0) **in_error** is set to ENOTCONN. **in_error** and **out_error** are
> > >   never set to EAGAIN or EWOULDBLOCK.
> 
> 
> How come? What is so special about EAGAIN ?

It is returned when there is nothing to read from a non-blocking socket.
Here it doesn't make sense: the data is written to the ring as soon as
it is available.


> > > The binary layout of `struct pvcalls_data_intf` follows:
> > > 
> > >     0         4         8         12           64        68        72        76
> > >     +---------+---------+---------+-----//-----+---------+---------+---------+
> > >     | in_cons | in_prod |in_error |  padding   |out_cons |out_prod |out_error|
> > >     +---------+---------+---------+-----//-----+---------+---------+---------+
> > > 
> > >     76        80        84        88      4092      4096
> > >     +---------+---------+---------+----//---+---------+
> > >     |ring_orde|  ref[0] |  ref[1] |         |  ref[N] |
> > >     +---------+---------+---------+----//---+---------+
> > > 
> > > **N.B** For one page, N is maximum 1004 ((4096-80)/4), but given that
> > > N needs to be a power of two, actually max N is 512.
> 
> Would it make sense to also say what the 'ring_order' would be for
> N being 512?
> 
> (9 right?)

That's right
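(1 << 9 == 512 grant refs, which map 512 * 4096 = 2MB of shared memory:
1MB for **in** and 1MB for **out**.)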


> > > The binary layout of the ring buffers follows:
> 
> Would it make sense to have:
> 
> ### Ring buffer (secondary data ring)
> 
> or 
> 
> ### Interface ring buffer
> 
> ?
> 
> Just to connect to the earlier sections that talk about it (connect
> for example)

Sure


> > > 
> > >     0         ((1<<ring_order)<<PAGE_SHIFT)/2       ((1<<ring_order)<<PAGE_SHIFT)
> > >     +------------//-------------+------------//-------------+
> > >     |            in             |           out             |
> > >     +------------//-------------+------------//-------------+
> > > 
> > > #### Workflow
> > > 
> > > The **in** and **out** arrays are used as circular buffers:
> > >     
> > >     0                   sizeof(array) == ((1<<ring_order)<<PAGE_SHIFT)/2
> > >     +-----------------------------------+
> > >     |to consume|    free    |to consume |
> > >     +-----------------------------------+
> > >                ^            ^
> > >                prod         cons
> > > 
> > >     0                               sizeof(array)
> > >     +-----------------------------------+
> > >     |  free    | to consume |   free    |
> > >     +-----------------------------------+
> > >                ^            ^
> > >                cons         prod
> > > 
> > > The following function is provided to calculate how many bytes are
> > > currently left unconsumed in an array:
> > > 
> > >     #define _MASK_PVCALLS_IDX(idx, ring_size) ((idx) & (ring_size-1))
> > > 
> > >     static inline PVCALLS_RING_IDX
> > >     pvcalls_ring_unconsumed(PVCALLS_RING_IDX prod,
> > >                   PVCALLS_RING_IDX cons,
> > >                   PVCALLS_RING_IDX ring_size)
> > >     {
> > >           PVCALLS_RING_IDX size;
> > >     
> > >           if (prod == cons)
> > >                   return 0;
> > >     
> > >           prod = _MASK_PVCALLS_IDX(prod, ring_size);
> > >           cons = _MASK_PVCALLS_IDX(cons, ring_size);
> > >     
> > >           if (prod == cons)
> > >                   return ring_size;
> > >     
> > >           if (prod > cons)
> > >                   size = prod - cons;
> > >           else {
> > >                   size = ring_size - cons;
> > >                   size += prod;
> > >           }
> > >           return size;
> > >     }
> > > 
> > > The producer (the backend for **in**, the frontend for **out**) writes
> > > to the array in the following way:
> > > 
> > > - read *cons*, *prod*, *error* from shared memory
> 
> Would it make sense to prefix those with '[in|out]' as appropriate?
> 
> > > - memory barrier
> > > - return on *error*
> 
> Ditto
> > > - write to array at position *prod* up to *cons*, wrapping around the circular
> 
> Ditto :-)
> > >   buffer when necessary
> > > - memory barrier
> > > - increase *prod*
> 
> And here as well.
> > > - notify the other end via evtchn
> > > 
> > > The consumer (the backend for **out**, the frontend for **in**) reads
> > > from the array in the following way:
> > > 
> > > - read *prod*, *cons*, *error* from shared memory
> 
> Again, same question?

I'll rename all these instances, prepending [in|out]_
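To illustrate, the frontend send path could then look like this (only a
sketch: it assumes the struct definitions in this document, Linux-style
mb()/wmb() barriers, and a placeholder notify() for the event channel
kick; pvcalls_ring_unconsumed and _MASK_PVCALLS_IDX are the ones quoted
above):

    static ssize_t pvcalls_write_out(struct pvcalls_data_intf *intf,
                                     char *out, int evtchn,
                                     const char *buf, size_t len)
    {
            PVCALLS_RING_IDX cons, prod, masked;
            PVCALLS_RING_IDX ring_size =
                    ((1 << intf->ring_order) << PAGE_SHIFT) / 2;
            int32_t error;
            size_t avail;

            /* read out_cons, out_prod, out_error from shared memory */
            cons = intf->out_cons;
            prod = intf->out_prod;
            error = intf->out_error;
            mb();                 /* memory barrier */
            if (error)
                    return error; /* return on out_error */

            /* write only as many bytes as free in the buffer up to cons */
            avail = ring_size - pvcalls_ring_unconsumed(prod, cons, ring_size);
            if (len > avail)
                    len = avail;
            /* write at position out_prod, wrapping around when needed */
            masked = _MASK_PVCALLS_IDX(prod, ring_size);
            if (len <= (size_t)(ring_size - masked)) {
                    memcpy(out + masked, buf, len);
            } else {
                    memcpy(out + masked, buf, ring_size - masked);
                    memcpy(out, buf + (ring_size - masked),
                           len - (ring_size - masked));
            }
            wmb();                /* make data visible before the index */
            intf->out_prod = prod + len; /* increase out_prod */
            notify(evtchn);       /* assumed notification helper */
            return len;
    }

The consumer side mirrors it with the in_* fields.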


> > > - memory barrier
> > > - return on *error*
> > > - read from array at position *cons* up to *prod*, wrapping around the
> > >   circular buffer when necessary
> > > - memory barrier
> > > - increase *cons*
> > > - notify the other end via evtchn
> > > 
> > > The producer takes care of writing only as many bytes as available in
> > > the buffer up to *cons*. The consumer takes care of reading only as
> > > many bytes as available in the buffer up to *prod*. *error* is set by
> > > the backend when an error occurs writing or reading from the socket.
> 
> This implies you can only have one outstanding "burst of data" at a time.
> 
> That is since the _cons and _prod are variable size (They are just
> byte indices) you can't batch?

Why not? The backend could skip a couple of event notifications from the
frontend and process the whole thing afterward. It could happen
naturally if the backend is busy. Wouldn't that count as batching?


> But you could have in parallel multiple connections which utilize
> different ring buffers.

This is true


> Would it make sense to mention this somewhere here?
>
> Lastly should you have a section that talks about other non-UNIX OSes?

What's there to say? Any OS can use PVCalls as long as it conforms to
the protocol, whether it is UNIX-like or not.


> > > 
> > > [xenstore]: http://xenbits.xen.org/docs/unstable/misc/xenstore.txt
> > > [XenbusStateInitialising]: http://xenbits.xen.org/docs/unstable/hypercall/x86_64/include,public,io,xenbus.h.html
> > > [address]: http://pubs.opengroup.org/onlinepubs/7908799/xns/syssocket.h.html
> > > [in]: http://pubs.opengroup.org/onlinepubs/000095399/basedefs/netinet/in.h.html
> > > [socket]: http://pubs.opengroup.org/onlinepubs/009695399/functions/socket.html
> > > [connect]: http://pubs.opengroup.org/onlinepubs/7908799/xns/connect.html
> > > [shutdown]: http://pubs.opengroup.org/onlinepubs/7908799/xns/shutdown.html
> > > [bind]: http://pubs.opengroup.org/onlinepubs/7908799/xns/bind.html
> > > [listen]: http://pubs.opengroup.org/onlinepubs/7908799/xns/listen.html
> > > [accept]: http://pubs.opengroup.org/onlinepubs/7908799/xns/accept.html
> > > [poll]: http://pubs.opengroup.org/onlinepubs/7908799/xsh/poll.html
> 
> Thank you!
> 
