
Re: [Xen-devel] [V0 PATCH 3/6] AMD-PVH: call hvm_emulate_one instead of handle_mmio



On Tue, 26 Aug 2014 08:17:06 +0100
"Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> >>> On 26.08.14 at 03:53, <mukesh.rathor@xxxxxxxxxx> wrote:
> > On Mon, 25 Aug 2014 08:10:38 +0100
> > "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> > 
> >> >>> On 22.08.14 at 20:52, <mukesh.rathor@xxxxxxxxxx> wrote:
> >> > On Fri, 22 Aug 2014 10:50:01 +0100
> >> > "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> >> >> Also - how come at least the use of the function in VMX's
> >> >> EXIT_REASON_IO_INSTRUCTION handling is no problem for PVH,
> >> >> but SVM's VMEXIT_IOIO one is?
> >> > 
> >> > Yup, missed that one. That would need to be addressed. 
> >> > 
> >> > I guess the first step would be to do a non-pvh patch to fix
> >> > calling handle_mmio for non-mmio purposes.  Since, it applies to
> >> > both vmx/svm, perhaps an hvm function. Let me do that first, and
> >> > then pvh can piggyback on that.
> >> 
> >> Problem being that INS and OUTS can very well address MMIO
> >> on the memory side of the operation (while right now
> >> hvmemul_rep_{ins,outs}() fail such operations, this merely means
> >> they'd get emulated one by one instead of accelerated as multiple
> >> ops in one go).
> >> 
> >> Also looking at handle_mmio() once again - it being just a
> >> relatively thin wrapper around hvm_emulate_one(), can you remind
> >> me again what in this small function it was that breaks on PVH? It
> >> would seem
> > 
> > handle_mmio -> hvmemul_do_io -> hvm_mmio_intercept(), last one is
> > fatal as not all handlers are safe for pvh.
> 
> But you see - that is exactly my point: Avoiding handle_mmio() alone
> doesn't buy you anything, as hvmemul_do_io() isn't being called
> directly from that function, but indirectly via the actors specified
> in hvm_emulate_ops. Which gets me back to suggesting that you need
> a different struct x86_emulate_ops instance to deal with these non-
> MMIO emulation needs.

Hmm... going that route seems to duplicate a lot of the code in
hvm_emulate_one(), but I'm not suggesting that's a roadblock.
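
To be sure I follow, you mean something like this (rough sketch only; the
pvhemul_* names are made up, and each handler would wind up looking much
like its hvmemul_* counterpart, minus the hvm_mmio_intercept() fallback):

    /* Sketch of option #1 -- hypothetical names throughout. */
    static int pvhemul_read(
        enum x86_segment seg,
        unsigned long offset,
        void *p_data,
        unsigned int bytes,
        struct x86_emulate_ctxt *ctxt)
    {
        /* Would mirror hvmemul_read(), except anything that is not
         * plain RAM fails here instead of being handed to
         * hvm_mmio_intercept(), which is the part that is unsafe for
         * PVH.  Stubbed in this sketch. */
        return X86EMUL_UNHANDLEABLE;
    }

    static const struct x86_emulate_ops pvh_emulate_ops = {
        .read = pvhemul_read,
        /* .insn_fetch, .write, .read_io, .write_io, etc. filled in
         * similarly; no MMIO fallbacks wired up anywhere. */
    };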

FWIW, looking at the code again, the only call to handle_mmio() that was
unsafe for PVH, i.e. the one that resulted in a call to
hvm_mmio_intercept(), has already been taken care of:

hvm_hap_nested_page_fault():
    if ( (p2mt == p2m_mmio_dm) ||
         (access_w && (p2mt == p2m_ram_ro)) )
    {
        put_gfn(p2m->domain, gfn);
        rc = 0;

        if ( unlikely(is_pvh_vcpu(v)) )   <---- bail before handle_mmio() for PVH
            goto out;
        if ( !handle_mmio() )
.....

So, at present, the only call to handle_mmio() for PVH should be the vmx
INS/OUTS intercept, which would end up in hvm_portio_intercept(), not
hvm_mmio_intercept(). That means handle_mmio() is now safe for PVH, and we
could even remove the ASSERT !is_pvh_guest.
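
For reference, the vmx intercept in question looks like this (quoting from
memory, so modulo details):

    case EXIT_REASON_IO_INSTRUCTION:
        ...
        if ( exit_qualification & 0x10 )
        {
            /* INS, OUTS: string IO, needs the full emulator. */
            if ( !handle_mmio() )
                hvm_inject_hw_exception(TRAP_gp_fault, 0);
        }
        else
        {
            /* Plain IN/OUT goes via handle_pio(). */
            ...
        }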

In conclusion, 3 options:
  1. Create a different struct x86_emulate_ops instance.
        - Cleaner, but the code would be very similar to hvm_emulate_one().

  2. Remove the !PVH ASSERT in handle_mmio() now that we know the only path
     for PVH is IO instructions. However, conceptually there is no MMIO
     emulation for PVH, so it is nice to make that statement with the
     ASSERT.

  3. Replace the call to handle_mmio() in the EXIT_REASON_IO_INSTRUCTION
     intercept with hvm_emulate_one(). This path can never result in a call
     to hvm_mmio_intercept(), hence it is safe to do.

If you still want to go with #1, I'll have to punt it to Boris when he's
back, and we should downgrade AMD PVH support for 4.5 to a "maybe"; also,
realise that vmx INS/OUTS will remain broken on debug builds because of the
ASSERT in handle_mmio(). Otherwise, I can whip up a new version of the
patches; my preference would be #3. I could create a new function, say
hvm_emulate_io_intercept(), which would call hvm_emulate_one(), so one
could easily change it to do #1 in the future.
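
Roughly (a sketch only; error handling abbreviated, and the name is just a
placeholder):

    /* Sketch of option #3: essentially handle_mmio() minus the PVH ASSERT. */
    int hvm_emulate_io_intercept(void)
    {
        struct hvm_emulate_ctxt ctxt;
        int rc;

        hvm_emulate_prepare(&ctxt, guest_cpu_user_regs());
        rc = hvm_emulate_one(&ctxt);

        switch ( rc )
        {
        case X86EMUL_UNHANDLEABLE:
            return 0;
        case X86EMUL_EXCEPTION:
            if ( ctxt.exn_pending )
                hvm_inject_hw_exception(ctxt.exn_vector,
                                        ctxt.exn_error_code);
            break;
        default:
            break;
        }

        hvm_emulate_writeback(&ctxt);
        return 1;
    }

The vmx/svm INS/OUTS intercepts would then call this instead of
handle_mmio(), and doing #1 later would just mean swapping the ops table
underneath it.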

thanks,
Mukesh
