
Re: [Xen-devel] [PATCH 2/3] public/io/netif.h: document control ring and toeplitz hashing



On 23/12/2015 10:06, Paul Durrant wrote:
> +#define NETIF_CTRL_RING_SIZE 1024
> +
> +struct netif_ctrl_ring {
> +     RING_IDX req_cons;
> +     RING_IDX req_prod;
> +     RING_IDX rsp_cons;
> +     RING_IDX rsp_prod;
> +     uint8_t req[NETIF_CTRL_RING_SIZE];
> +     uint8_t rsp[NETIF_CTRL_RING_SIZE];

To avoid making the same mistake as the xenstore ring, this at the very
minimum needs a defined reset protocol.  It should also at least have a
version number (currently expected to be zero) which is used to
delineate the use of the remaining space in the page.
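
For illustration, something along these lines (the version field is my
suggestion, not part of the patch; 0 would be the only defined value for
now and would conveniently match a zeroed page):

    #define NETIF_CTRL_RING_SIZE 1024

    struct netif_ctrl_ring {
        uint32_t version;   /* expected to be 0 for now; governs the
                               layout of the rest of the page */
        RING_IDX req_cons;
        RING_IDX req_prod;
        RING_IDX rsp_cons;
        RING_IDX rsp_prod;
        uint8_t req[NETIF_CTRL_RING_SIZE];
        uint8_t rsp[NETIF_CTRL_RING_SIZE];
    };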

> +};
> +
> +struct xen_netif_ctrl_msg_hdr {
> +     uint16_t type;
> +     uint16_t len;

These don't match your documentation above.  Should they be uint32_t?
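
i.e. something like this, assuming the uint32_t widths in the
documentation above are what was intended:

    struct xen_netif_ctrl_msg_hdr {
        uint32_t type;
        uint32_t len;
    };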

> +};
> +
> +#define NETIF_CTRL_MSG_ACK                  1
> +#define NETIF_CTRL_MSG_GET_TOEPLITZ_FLAGS   2
> +#define NETIF_CTRL_MSG_SET_TOEPLITZ_FLAGS   3
> +#define NETIF_CTRL_MSG_SET_TOEPLITZ_KEY     4
> +#define NETIF_CTRL_MSG_SET_TOEPLITZ_MAPPING 5

What about 0?  Again learning from the xenstore case, can we explicitly
define 0 as an invalid value, so that a page of zeroes doesn't appear to
be a valid sequence of messages?
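
e.g. (the INVALID name is only a suggestion):

    /* 0 is deliberately reserved: a zero-filled page must never parse
     * as a valid sequence of messages. */
    #define NETIF_CTRL_MSG_INVALID              0
    #define NETIF_CTRL_MSG_ACK                  1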

> +
> +/* Control messages: */
> +
> +/*
> + * NETIF_CTRL_MSG_ACK:
> + *
> + * This is the only valid type of message sent by the backend to the
> + * frontend. It carries a payload of the following format:
> + *
> + *    0     1     2     3     4     5     6     7  octet
> + * +-----+-----+-----+-----+-----+-----+-----+-----+

Can I recommend that all ack packets contain the control type they are
responding to?  In the normal case it shouldn't be needed, but if the
frontend and backend ever get out of sync, it will make debugging far
easier.
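
As a sketch (the field name is mine, not from the patch), the ack
payload could simply start by echoing the request:

    struct xen_netif_ctrl_msg_ack {
        uint32_t type;   /* echoes the type of the request being acked */
        /* ... rest of the ack payload ... */
    };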

~Andrew
