
Re: [Xen-devel] Re: [kvm-devel] [PATCH RFC 1/3] virtio infrastructure



On Sun, 2007-06-03 at 14:39 +0300, Avi Kivity wrote:
> Rusty Russell wrote:
> > Hmm... Perhaps I should move the used arrays into the "struct
> > virtio_device" and guarantee that the id (returned by add_*buf) is an
> > index into that.  Then we can trivially add a corresponding bit array. 
> 
> That may force the virtio backend to do things it doesn't want to.

Well, I have two implementations, and both ids already fit this model, so
I have some confidence.  I just need to add set_bit/clear_bit to the
read/write one, and use sync_clear_bit in the descriptor-based one
(which I suspect will actually get a little cleaner).

So I think this constraint is a reasonable thing to add anyway.
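Roughly, the shape I'm picturing is something like this (field names are
illustrative only, not a final interface):

    struct virtio_device {
        /* ...existing fields... */
        unsigned int max_id;    /* ids returned by add_*buf() are < max_id */
        unsigned long *used;    /* bit n set once the buffer with id n is used */
    };

A driver which wants per-buffer state can then just allocate an array of
max_id entries and index it by the id, instead of chaining buffers on a
list the way the block driver does now.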

> > This also makes the "id" easier to use from the driver side: if we add a
> > "max_id" field it can size its own arrays if it wants to.  Gets rid of
> > the linked-list in the block driver...
> >   
> 
> How about
> 
>     struct virtio_completion {
>         unsigned long length;
>         void *data;
>     };
> 
>     int virtio_complete(struct virtio_completion *completions, int nr);
> 
> where 'data' is an opaque pointer passed by the device during add_*buf()?

Other than the fact that the driver will look less like a normal driver,
it's actually harder to write the net driver if that complete call can
happen off an interrupt: we can't free skbs in interrupt context, which
is why the used-outbuf cleanup is currently done (inefficiently, but
that's orthogonal) in the xmit routine.
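To make that concrete, the cleanup currently looks something like the
sketch below (the helper name, the virtnet_info fields and the id-indexed
skb array are illustrative, assuming the id/max_id scheme above, not the
actual RFC code):

    /* Sketch: called from the xmit routine, never from the interrupt
     * handler, since we can't free skbs in interrupt context. */
    static void free_used_outbufs(struct virtnet_info *vi)
    {
        unsigned int id;

        /* get_used_outbuf() stands in for however we learn which
         * ids the host has consumed. */
        while (get_used_outbuf(vi, &id))
            kfree_skb(vi->outbuf_skbs[id]);
    }

start_xmit calls this before queueing the new skb, which is where the
inefficiency comes from.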

I'll code up the modifications now and check if I'm right about the
impact on the code.  I might have to come up with something else...

Cheers,
Rusty.

