Thanks for the comments. Frankly, I’m
guessing the bulk of the time in the COM port IO is VMEXIT time, and that saving
a qemu round-trip would be a marginal effect.**
As for the read’s flushing writes,
this happens automatically as a result of how the buffered_io page works (and
assuming one sticks to this design for IO buffering). If dir == IOREQ_READ
then the attempt to buffer the IO request will fail. Thus, hvm_send_assist_req
is invoked. When qemu catches the “notify” event of the READ
it first dispatches *all* of the
buffered io requests before dispatching the READ. Thus ordering is preserved and inb
operations are synchronous from the vcpu's point of view.
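A minimal sketch of that dispatch path may make the ordering argument concrete. The names are modelled on the Xen functions mentioned above (hvm_buffered_io_intercept, hvm_send_assist_req), but the bodies are illustrative assumptions, not the actual hypervisor code:

```c
/* Illustrative sketch only -- names follow Xen's buffered-io path,
 * but the bodies are simplified assumptions, not real Xen code. */
#include <stdio.h>

#define IOREQ_WRITE 0
#define IOREQ_READ  1

typedef struct {
    int dir;             /* IOREQ_READ or IOREQ_WRITE */
    unsigned long addr;  /* port or mmio address */
    unsigned long data;
} ioreq_t;

/* Try to place the request on the shared buffered_io page.
 * Reads are never buffered, so this fails for dir == IOREQ_READ. */
static int buffered_io_intercept(ioreq_t *p)
{
    if (p->dir == IOREQ_READ)
        return 0;
    /* ... append the write to the buffered_io page ring ... */
    return 1;
}

/* Synchronous path: notify qemu, which drains *all* buffered writes
 * before servicing this request, preserving ordering. */
static void send_assist_req(ioreq_t *p)
{
    printf("assist req: dir=%d addr=%#lx\n", p->dir, p->addr);
}

void handle_pio(ioreq_t *p)
{
    if (!buffered_io_intercept(p))
        send_assist_req(p);  /* reads always take the synchronous path */
}
```

Because a read always falls through to send_assist_req, and qemu flushes the buffer before handling it, the guest can never observe a read that bypasses earlier buffered writes.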
As for controlling outbound FIFO depth,
adding a per-range “max_depth” test to the “queue is full”
test already in use for mmio buffering would be straightforward.
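As a rough sketch of what that extension might look like (the io_range struct, "max_depth", and "queued" fields are my assumptions, not existing Xen structures):

```c
/* Hypothetical sketch of a per-range depth limit layered on the
 * existing global "queue is full" test. Not actual Xen code. */
struct io_range {
    unsigned long start, end; /* port/mmio range covered */
    unsigned int max_depth;   /* 0 = unlimited */
    unsigned int queued;      /* writes currently buffered for this range */
};

/* Refuse to buffer when the global ring is full OR this
 * particular range has hit its own outstanding-write limit. */
static int queue_full(const struct io_range *r,
                      unsigned int ring_used, unsigned int ring_size)
{
    if (ring_used >= ring_size)
        return 1;   /* existing global test */
    if (r->max_depth && r->queued >= r->max_depth)
        return 1;   /* new per-range test */
    return 0;
}
```

A request that fails this test would simply fall back to the synchronous hvm_send_assist_req path, bounding the FIFO depth per device range.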
The interrupt issues are more
concerning. A one-byte write “window” at 3F8 doesn’t
seem to have this issue (cf. ftp://ftp.phil.uni-sb.de/pub/staff/chris/The_Serial_Port).
But I agree that proxy device models are
not desirable when not performance critical. Regardless, they wouldn’t be
supported directly through a simple “hvm_buffered_io_intercept” call.
This would be better suited to the approach used in hvm_mmio_intercept to
do the lapic emulation.
** For those interested, I’m looking
at the performance of using Windbg for Guest domain debug, and the time to do
the serial port based initialization of a kernel debug session. Starting a
WinDBG session on a Windows guest OS takes several minutes. Any suggestions to
optimize that process would be gladly entertained.
From: Keir Fraser
Sent: Saturday, July 21, 2007 4:09
To: Trolle Selander; Zulauf, John
Subject: Re: [Xen-devel] Buffered IO for IO?
Yes, it strikes me that this cannot
be done safely without providing a set of ‘proxy device models’ in
the hypervisor that know when it is safe to buffer and when the buffers must be
flushed, according to native hardware behaviour.
On 21/7/07 11:59, "Trolle Selander" <trolle.selander@xxxxxxxxx> wrote:
Safety would depend on how the emulated device works. For
serial ports in particular, it's definitely not safe, since depending on the
model of UART emulated, and the settings of the UART control registers, every
outb may result in a serial interrupt and UART register changes that will have
to be processed before any further io can be done.
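To illustrate the point about per-outb interrupts: on an 8250/16550-style UART, IER bit 1 enables the Transmitter-Holding-Register-Empty interrupt, so a single write to THR can immediately raise an interrupt the guest must service before further IO. A minimal sketch (not qemu's actual UART emulation):

```c
/* Illustrative sketch of 8250/16550 THR-write behavior; not
 * qemu's device model. IER bit 1 is the THR-empty interrupt
 * enable per the standard 16550 register layout. */
#define IER_THRI 0x02   /* THR-empty interrupt enable */

struct uart {
    unsigned char ier;   /* interrupt enable register */
    int irq_pending;     /* interrupt line state */
};

void uart_write_thr(struct uart *u, unsigned char ch)
{
    (void)ch;  /* ... transmit the byte ... */
    if (u->ier & IER_THRI)
        u->irq_pending = 1;  /* must be delivered promptly --
                                buffering this outb would delay it */
}
```

If the outb that triggered the interrupt were sitting in a buffer, the guest's interrupt handler and register state would diverge from native behavior, which is exactly why blind buffering is unsafe here.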
It's possible that there might be some performance to be gained by
"upgrading" the emulated UART to a 16550A or better, and doing
buffered IO for the FIFO. Earlier this year I was experimenting with a patch
that made the qemu-dm serial emulation into a 16550A with FIFO, but though the
patch did fix some compatibility issues with software that assumed a 16550A
UART in the HVM guest I'm working with, serial performance actually got
noticeably _worse_, so I never bothered submitting it. Implementing the FIFO
with buffered IO would possibly make it work better, but I don't see how it
could be done without moving at least part of the serial device model into the
hypervisor, which just strikes me as more trouble than it's worth.
On 7/21/07, Keir Fraser wrote:
On 20/7/07 22:33, "Zulauf, John" <john.zulauf@xxxxxxxxx> wrote:
> Has anyone experimented with adding Buffered IO support for
> instructions? Currently, the buffered io page is only used for mmio
> writes (and then only to vga space). It seems quite straight-forward
Is it safe to buffer, and hence arbitrarily delay, any I/O port write?
Xen-devel mailing list