
Re: [Xen-devel] [V0 PATCH 3/6] AMD-PVH: call hvm_emulate_one instead of handle_mmio



>>> On 28.08.14 at 03:07, <mukesh.rathor@xxxxxxxxxx> wrote:
> On Tue, 26 Aug 2014 08:17:06 +0100
> "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>> >>> On 26.08.14 at 03:53, <mukesh.rathor@xxxxxxxxxx> wrote:
>> > On Mon, 25 Aug 2014 08:10:38 +0100
>> > "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>> >> Also looking at handle_mmio() once again - it being just a
>> >> relatively thin wrapper around hvm_emulate_one(), can you remind
>> >> me again what in this small function it was that breaks on PVH? It
>> >> would seem
>> > 
>> > handle_mmio -> hvmemul_do_io -> hvm_mmio_intercept(), last one is
>> > fatal as not all handlers are safe for pvh.
>> 
>> But you see - that is exactly my point: Avoiding handle_mmio() alone
>> doesn't buy you anything, as hvmemul_do_io() isn't being called
>> directly from that function, but indirectly via the actors specified
>> in hvm_emulate_ops. Which gets me back to suggesting that you need
>> a different struct x86_emulate_ops instance to deal with these non-
>> MMIO emulation needs.
> 
> Hmm... going that route seems to duplicate a lot of the code in
> hvm_emulate_one, but I'm not suggesting that's a roadblock.

Are you also at least casually looking at what changes other people
propose on the list? One of the recent series already introduces a
second such instance (in order to be able to discard writes), and
does so without duplicating much code.
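
Concretely, the shape of such a second instance would be roughly the
following (a simplified, standalone sketch with made-up names, not the
actual Xen code): the table mirrors the full one, but the actors that
could end up in MMIO emulation simply refuse the request:

/* Illustrative standalone sketch (simplified, made-up names, not the
 * actual Xen code).  The second table has the same shape as the full
 * one, but the actors that could end up in MMIO emulation simply fail. */
#include <stdio.h>

#define X86EMUL_OKAY          0
#define X86EMUL_UNHANDLEABLE  1

struct emul_ops {
    int (*read)(unsigned long addr, void *buf, unsigned int len);
};

/* Full actor: on a RAM-copy failure it would fall back to MMIO
 * emulation (elided here). */
static int read_full(unsigned long addr, void *buf, unsigned int len)
{
    (void)addr; (void)buf; (void)len;
    return X86EMUL_OKAY;
}

/* Restricted actor: anything that is not plain RAM is refused, so a
 * request routed through this table can never reach an MMIO intercept. */
static int read_no_mmio(unsigned long addr, void *buf, unsigned int len)
{
    (void)addr; (void)buf; (void)len;
    return X86EMUL_UNHANDLEABLE;
}

static const struct emul_ops emul_ops_full    = { .read = read_full };
static const struct emul_ops emul_ops_no_mmio = { .read = read_no_mmio };

int main(void)
{
    unsigned char b;

    /* The caller merely picks the table; everything else stays shared. */
    printf("%d %d\n", emul_ops_full.read(0, &b, 1),
           emul_ops_no_mmio.read(0, &b, 1));
    return 0;
}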

> So, at present, the only call to handle_mmio for PVH should be the vmx
> INS/OUTS intercept, which would go to hvm_portio_intercept, not
> hvm_mmio_intercept. Which means handle_mmio is safe for pvh now, and we
> could even remove the ASSERT !is_pvh_guest.

In this argument you leave aside that all the code paths leading
to hvm_mmio_intercept() are only known to be dead now; a live
code path reaching them could be introduced at any time without
anyone explicitly noticing (until it possibly gets found to be a security
issue). I'm afraid you continue to think in the "just make it work" way
rather than the "make it maintainable code" one, yet the latter is
required for getting PVH out of the experimental corner.

> In conclusion, 3 options:
>   1. Create a different struct x86_emulate_ops. 
>         - Cleaner, but code very similar to hvm_emulate_one

As per above - there likely is a way to do this without much code
duplication.
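
One way to do it (again only a sketch with invented names, not
necessarily what that series actually looks like) is to keep a single
internal helper taking the ops pointer, so each public entry point is
just a thin wrapper:

/* Sketch only: invented names, heavily simplified, not Xen source. */
#include <stdio.h>

struct emul_ctxt { int dummy; /* decode state, register snapshot, ... */ };

struct emul_ops {
    int (*read)(unsigned long addr, void *buf, unsigned int len);
};

static int stub_read(unsigned long addr, void *buf, unsigned int len)
{
    (void)addr; (void)buf; (void)len;
    return 0;
}

/* Two tables; in real code only the MMIO-capable actors would differ. */
static const struct emul_ops emul_ops_full    = { .read = stub_read };
static const struct emul_ops emul_ops_no_mmio = { .read = stub_read };

/* The single copy of the fetch/decode/dispatch body. */
static int _emulate_one(struct emul_ctxt *ctxt, const struct emul_ops *ops)
{
    unsigned char insn;

    (void)ctxt;
    /* ... real code would fetch, decode and loop here ... */
    return ops->read(0 /* guest address */, &insn, sizeof(insn));
}

/* Thin wrappers: no duplicated emulation loop, just a different table. */
static int emulate_one(struct emul_ctxt *ctxt)
{
    return _emulate_one(ctxt, &emul_ops_full);
}

static int emulate_one_no_mmio(struct emul_ctxt *ctxt)
{
    return _emulate_one(ctxt, &emul_ops_no_mmio);
}

int main(void)
{
    struct emul_ctxt ctxt = { 0 };

    printf("%d %d\n", emulate_one(&ctxt), emulate_one_no_mmio(&ctxt));
    return 0;
}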

>   2. Remove the !PVH ASSERT in handle_mmio now that we know the only path for
>      PVH is IO instructions. However, conceptually there's no mmio emul for pvh,
>      so it's nice to make that statement with the ASSERT.

Indeed, removing such an ASSERT() is at least premature; I'm in no
way opposed to keeping it indefinitely.
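
For what it's worth, the guard is valuable precisely because it turns
any such future live path into an immediate, visible failure; a trivial
standalone illustration (not the actual handle_mmio() code):

/* Trivial illustration only, not the actual Xen handle_mmio(). */
#include <assert.h>
#include <stdbool.h>

struct vcpu { bool is_pvh; };

static int handle_mmio_example(const struct vcpu *curr)
{
    /* Documents the invariant "no MMIO emulation for PVH" and makes any
     * future live path to this point fail immediately in debug builds. */
    assert(!curr->is_pvh);
    /* ... MMIO decode and dispatch would follow ... */
    return 0;
}

int main(void)
{
    struct vcpu v = { .is_pvh = false };

    return handle_mmio_example(&v);
}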

>   3. Replace call to handle_mmio in EXIT_REASON_IO_INSTRUCTION intercept with 
>      hvm_emulate_one. This path will never result in call to 
>      hvm_mmio_intercept(), hence safe to do.

This is simply not true - apparently you're only considering the path
hvmemul_rep_{ins,outs}() -> hvmemul_do_pio(). But as said before,
if the memory side of these instructions happens to be MMIO (don't
forget that you want to support pass-through at some point), the
former of these will bail before reaching hvmemul_do_pio(), making
the emulator retry via the hvmemul_{read,write}() paths, which do
have calls to hvmemul_do_mmio(). Since that is exactly what happens
when hvm_copy_{from,to}_guest_virt() return HVMCOPY_bad_gfn_to_mfn,
I actually think this code path is reachable even without
pass-through support (by the guest accessing some non-RAM
region). Yet again, even if - by going through all the code paths and
checking that is_pvh_...() checks prevent this from happening today -
this can't happen now, it is _still_ a latent issue that could surface
at any time, which we ought to avoid.
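
To make that fallback concrete, here is a heavily simplified standalone
sketch (made-up names, not the actual Xen code): once the memory operand
does not translate to plain RAM, the rep-string fast path bails out and
the generic emulator retries through the per-element actors, which is
where MMIO emulation lives:

/* Standalone sketch only: made-up names, heavily simplified, not the
 * actual Xen code. */
#include <stdio.h>

#define EMUL_OKAY   0
#define EMUL_RETRY  1

enum copy_result { COPY_OK, COPY_BAD_GFN_TO_MFN };

/* Stand-in for the guest-virtual copy: pretend the operand is not RAM. */
static enum copy_result copy_from_guest(unsigned long va, void *buf,
                                        unsigned int len)
{
    (void)va; (void)buf; (void)len;
    return COPY_BAD_GFN_TO_MFN;          /* e.g. the operand maps MMIO */
}

/* Fast path for OUTS: only usable if the source is ordinary RAM. */
static int rep_outs(unsigned long src_va, unsigned int bytes)
{
    unsigned char buf[16];

    if ( copy_from_guest(src_va, buf, bytes) != COPY_OK )
        return EMUL_RETRY;               /* fall back to the slow path */
    /* ... hand buf to the port I/O side here ... */
    return EMUL_OKAY;
}

/* Per-element slow path: this is where an MMIO source would be emulated. */
static int read_element(unsigned long src_va, unsigned int bytes)
{
    printf("slow path for %#lx (%u bytes): MMIO emulation would run here\n",
           src_va, bytes);
    return EMUL_OKAY;
}

int main(void)
{
    /* The emulator's retry after the fast path bails lands in the
     * per-element actor. */
    if ( rep_outs(0xfeb00000UL, 4) == EMUL_RETRY )
        read_element(0xfeb00000UL, 4);
    return 0;
}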

Jan

