[Xen-devel] Re: [kvm-devel] [PATCH RFC 3/3] virtio infrastructure: example block driver
Rusty Russell wrote:
> Now my lack of block-layer knowledge is showing.  I would have
> thought that if we want to do things like ionice(1) to work, we
> have to do some guest scheduling or pass that information down to
> the host.
Yeah, that would only work on the host: one can use ionice to set the
I/O niceness of the entire guest. Individual processes inside the
guest are opaque to the host, and thus opaque to its I/O scheduler.
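
For illustration, a minimal host-side sketch of that per-guest knob,
equivalent to running ionice against the guest's qemu process. The
set_guest_ioprio() helper is hypothetical, but the ioprio_set syscall
and its constants are the real Linux interface:

        #include <sys/syscall.h>
        #include <sys/types.h>
        #include <unistd.h>

        /* From include/linux/ioprio.h */
        #define IOPRIO_CLASS_SHIFT  13
        #define IOPRIO_CLASS_BE     2
        #define IOPRIO_WHO_PROCESS  1
        #define IOPRIO_PRIO_VALUE(class, data) \
                (((class) << IOPRIO_CLASS_SHIFT) | (data))

        /* Hypothetical helper: same effect as
         * "ionice -c2 -n<level> -p <guest_pid>". glibc has no wrapper
         * for ioprio_set, so we go through syscall(2). */
        static int set_guest_ioprio(pid_t guest_pid, int level)
        {
                return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS,
                               guest_pid,
                               IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE,
                                                 level));
        }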
> > It seems preferable to do that in the host, especially when
> > requests of multiple guests end up on the same physical media
> > (shared access, or partitioned).
> What's the overhead in doing both?
With regard to compute power needed, almost none. The penalty is
latency, not overhead: a small request may sit on the request queue,
waiting for other work to arrive, until the queue gets unplugged.
That penalty is compensated by a good chance that more requests get
merged during the wait. If we use this method in both host and guest,
we pay the penalty twice with no added benefit.
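
The plug/unplug mechanism being traded off here can be sketched with
the blk_start_plug()/blk_finish_plug() interface of later kernels (an
assumption; kernels of this era plugged per queue rather than per
task, but the merging behaviour is the same):

        #include <linux/bio.h>
        #include <linux/blkdev.h>

        /* Sketch: while the plug is held, submitted bios are held
         * back in the block layer where adjacent ones can be merged;
         * blk_finish_plug() unplugs and dispatches the merged
         * requests to the driver. */
        static void submit_merged_batch(struct bio **bios, int n)
        {
                struct blk_plug plug;
                int i;

                blk_start_plug(&plug);       /* hold requests back */
                for (i = 0; i < n; i++)
                        submit_bio(bios[i]); /* merge candidates */
                blk_finish_plug(&plug);      /* dispatch the batch */
        }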
On the other hand, if we choose to hook into q->make_request_fn, we
end up doing far more hypercalls: bios do not get merged on the guest
side, so we must do a hypercall per bio in this scenario, or we end
up adding latency again. In contrast, when we are called at
do_request() time, we can submit the entire content of the request
queue with a single hypercall.
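
A hedged sketch of that second variant, draining the queue and
kicking the host once per batch. blk_fetch_request() is the later
kernels' helper for pulling merged requests off the queue, and
vblk_add_to_ring()/vblk_notify_host() are hypothetical stand-ins for
the shared-ring code:

        #include <linux/blkdev.h>

        /* Sketch only: everything the elevator has queued (and
         * possibly merged) goes into a shared ring, then one
         * hypercall covers the whole batch instead of one per bio. */
        static void vblk_do_request(struct request_queue *q)
        {
                struct request *req;

                while ((req = blk_fetch_request(q)) != NULL)
                        vblk_add_to_ring(req); /* no notification yet */

                vblk_notify_host();    /* single hypercall per batch */
        }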
A third way out of that situation is to do queueing between guest and
host: on the first bio, the guest does a hypercall. When the next bio
arrives, the guest sees that the host has not finished processing the
queue yet and pushes another buffer without notifying. We implemented
this as well, with the result that our host stack was quick enough to
practically always process a bio before the guest had the chance to
submit another one. Performance was a nightmare, so we stopped
pursuing that idea.
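
The suppression logic itself is cheap, which is why the win depends
entirely on timing. A minimal guest-side sketch, in the spirit of
virtio's later notification-suppression flags (the ring layout, the
host_busy flag, and hypercall_notify() are all assumptions, not the
code discussed above):

        /* Sketch: the guest publishes a buffer and only issues the
         * hypercall when the host is not already draining the ring.
         * All names here are hypothetical. */
        struct shared_ring {
                unsigned int avail_idx; /* written by the guest */
                unsigned int host_busy; /* set while host processes */
                /* ... buffer descriptors ... */
        };

        static void publish_buffer(struct shared_ring *ring)
        {
                /* caller has already filled in the descriptor */
                wmb();          /* descriptor visible before index */
                ring->avail_idx++;
                mb();           /* index visible before host check */
                if (!ring->host_busy)
                        hypercall_notify(); /* only kick idle host */
        }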
so long,
Carsten