Re: [Minios-devel] [UNIKRAFT PATCH v3 02/12] lib/uknetdev: Add alignment size for packet buffers allocations
On 8/12/20 2:34 PM, Costin Lupu wrote:
Hi Sharan,
Please see inline.

On 8/12/20 3:04 PM, Sharan Santhanam wrote:
Hello Costin,

On 8/11/20 12:40 PM, Costin Lupu wrote:
Hi Sharan,
Please see inline.

On 8/11/20 11:40 AM, Sharan Santhanam wrote:
Hello Costin,
Thank you for the work. Please find the comments inline.
Thanks & Regards
Sharan

On 3/3/20 3:13 PM, Costin Lupu wrote:

Yes, in this case. The other option might be to move `uk_netdev_queue_info` as a struct into `netdev_info`, with two different fields, `rxq_info` and `txq_info`, and fetch the information only once when configuring the respective queue. This also raises the question of where tx_headroom and rx_headroom should be defined. Since that changes the scope of the patch series, we can split it out as a separate patch series. For the current problem, I would use the `nb_align` field from `uk_netdev_queue_info`, since the queue properties would remain either way.

In the netdev API, both the rx_one and tx_one operations happen at the queue level and not at the device level, since we select the queue on which to send/receive the packet from the netdevice. The netdev API has taken multiqueue into account. The receive part in LWIP also happens at the queue level, as all the callbacks are set per queue configuration, while the tx part is still handled at the netdevice level, where we don't use the queue information yet.

`netbuf_alloc_helper_init` does not have any dependency on a netdevice, except that we pass the netdev_info to it and collapse the netdev_info into a flat structure for that interface, which is a valid method for lwip to handle its memory allocation. A suggestion might be to fetch the device and queue information as part of `netbuf_alloc_helper_init`. But the needs of LWIP need not cause a change to netdev, as it is a specific use case. The other use case of running DPDK on Unikraft does not have these requirements. I would keep the netdev API flexible enough and handle the specific use case at a higher level than netdev.

Thanks & Regards
Sharan

 };

 /**
diff --git a/plat/drivers/virtio/virtio_net.c b/plat/drivers/virtio/virtio_net.c
index efc2cb71..9f1873c5 100644
--- a/plat/drivers/virtio/virtio_net.c
+++ b/plat/drivers/virtio/virtio_net.c
@@ -1048,8 +1048,10 @@ static void virtio_net_info_get(struct uk_netdev *dev,
 	dev_info->max_rx_queues = vndev->max_vqueue_pairs;
 	dev_info->max_tx_queues = vndev->max_vqueue_pairs;
+	dev_info->max_mtu = vndev->max_mtu;
 	dev_info->nb_encap_tx = sizeof(struct virtio_net_hdr_padded);
 	dev_info->nb_encap_rx = sizeof(struct virtio_net_hdr_padded);
+	dev_info->align = sizeof(void *); /* word size alignment */
 }

 static int virtio_net_start(struct uk_netdev *n)
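[Editor's note: to make the alignment handling concrete, below is a minimal sketch of how a consumer (for example the lwip glue) could honor the `align` field that this series adds to `struct uk_netdev_info`, as seen in the quoted virtio-net hunk. The helper name `pktbuf_memalign()` and the fallback policy are illustrative assumptions, not part of the patch; `uk_netdev_info_get()` and `uk_memalign()` are existing uknetdev/ukalloc calls.]

#include <uk/alloc.h>
#include <uk/netdev.h>

/* Hypothetical helper (not part of this series): allocate a packet buffer
 * that honors the alignment a driver advertises through the `align' field
 * added to struct uk_netdev_info by this patch. The quoted virtio-net hunk
 * reports word-size alignment there.
 */
static void *pktbuf_memalign(struct uk_alloc *a, struct uk_netdev *dev,
			     size_t len)
{
	struct uk_netdev_info dev_info;

	/* Query the device capabilities, including the advertised alignment */
	uk_netdev_info_get(dev, &dev_info);

	/* Fall back to pointer-size alignment if the driver reports none */
	if (dev_info.align == 0)
		dev_info.align = sizeof(void *);

	return uk_memalign(a, dev_info.align, len);
}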