
Re: [Xen-devel] [V0 PATCH 3/6] AMD-PVH: call hvm_emulate_one instead of handle_mmio

On Mon, 25 Aug 2014 08:10:38 +0100
"Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> >>> On 22.08.14 at 20:52, <mukesh.rathor@xxxxxxxxxx> wrote:
> > On Fri, 22 Aug 2014 10:50:01 +0100
> > "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> >> Also - how come at least the use of the function in VMX's
> >> EXIT_REASON_IO_INSTRUCTION handling is no problem for PVH,
> >> but SVM's VMEXIT_IOIO one is?
> > 
> > Yup, missed that one. That would need to be addressed. 
> > 
> > I guess the first step would be to do a non-pvh patch to fix calling
> > handle_mmio for non-mmio purposes.  Since, it applies to both
> > vmx/svm, perhaps an hvm function. Let me do that first, and then
> > pvh can piggyback on that.
> Problem being that INS and OUTS can very well address MMIO
> on the memory side of the operation (while right now
> hvmemul_rep_{ins,outs}() fail such operations, this merely means
> they'd get emulated one by one instead of accelerated as multiple
> ops in one go).
> Also looking at handle_mmio() once again - it being just a relatively
> thin wrapper around hvm_emulate_one(), can you remind me again
> what in this small function it was that breaks on PVH? It would seem

handle_mmio -> hvmemul_do_io -> hvm_mmio_intercept(); the last one is
fatal because not all handlers are safe for PVH. We had discussed creating
a pvh_mmio_handlers[] table parallel to hvm_mmio_handlers[], which for now
would be empty but could be expanded in the future. hvm_mmio_intercept()
would then choose one table or the other depending on is_pvh. Alternatively,
I could just skip hvm_mmio_handlers[] for PVH in hvm_mmio_intercept(), and
pvh_mmio_handlers[] could be added in the future. Either way works.

With the above change, we can remove the !pvh assert from handle_mmio
and leave the svm/vmx code unchanged (so it will keep calling handle_mmio).
This would greatly help the AMD domU patches, since time seems to be running
out.

