
Re: [Xen-devel] [PATCH 6/6] x86/hvm: Implement hvmemul_write() using real mappings

On 03/07/17 17:06, Jan Beulich wrote:
>>>> On 03.07.17 at 17:07, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 22/06/17 10:06, Jan Beulich wrote:
>>>> +    {
>>>> +        ASSERT_UNREACHABLE();
>>>> +        goto unhandleable;
>>>> +    }
>>>> +
>>>> +    do {
>>>> +        enum hvm_translation_result res;
>>>> +        struct page_info *page;
>>>> +        pagefault_info_t pfinfo;
>>>> +        p2m_type_t p2mt;
>>>> +
>>>> +        res = hvm_translate_get_page(curr, frame << PAGE_SHIFT, true, pfec,
>>>> +                                     &pfinfo, &page, NULL, &p2mt);
>>>> +
>>>> +        switch ( res )
>>>> +        {
>>>> +        case HVMTRANS_okay:
>>>> +            break;
>>>> +
>>>> +        case HVMTRANS_bad_linear_to_gfn:
>>>> +            x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
>>>> +            err = ERR_PTR(~(long)X86EMUL_EXCEPTION);
>>>> +            goto out;
>>>> +
>>>> +        case HVMTRANS_bad_gfn_to_mfn:
>>>> +            err = NULL;
>>>> +            goto out;
>>>> +
>>>> +        case HVMTRANS_gfn_paged_out:
>>>> +        case HVMTRANS_gfn_shared:
>>>> +            err = ERR_PTR(~(long)X86EMUL_RETRY);
>>>> +            goto out;
>>>> +
>>>> +        default:
>>>> +            goto unhandleable;
>>>> +        }
>>>> +
>>>> +        /* Error checking.  Confirm that the current slot is clean. */
>>>> +        ASSERT(mfn_x(*mfn) == 0);
>>> Wouldn't this better be done first thing in the loop?
>> IMO it's clearer to keep it next to the assignment, but yes, it could
>> in principle move to the top of the loop.
>>> And wouldn't the value better be INVALID_MFN?
>> The backing array is zeroed by hvm_emulate_init_once(), so relying on 0
>> here is more convenient.
>> Furthermore, INVALID_MFN is used lower down to poison unused slots, so
>> initialising the whole array to INVALID_MFN reduces the effectiveness of
>> the checks.
> Well, further down I had implicitly asked whether some of the
> checks wouldn't better go away (as they can't be carried out
> with some of the redundant unmap inputs dropped).
>>>> +        *mfn++ = _mfn(page_to_mfn(page));
>>>> +        frame++;
>>>> +
>>>> +        if ( p2m_is_discard_write(p2mt) )
>>>> +        {
>>>> +            err = ERR_PTR(~(long)X86EMUL_OKAY);
>>>> +            goto out;
>>> If one page is discard-write and the other isn't, this will end up
>>> being wrong.
>> Straddled accesses are always a grey area, and discard-write is an extra
>> special case which only exists inside Xen.  Discard-write means that the
>> guest knows that it shouldn't write there at all.
> Is it the case that the guest knows? Iirc this type had been
> introduced for introspection tool use.

This is for Intel GVT-g (c/s 1c02cce0ed).

Introspection would go very wrong if selective writes had no effect.

>> Doing nothing (by logically extending the discard-write restriction over
>> the entire region) is the least bad option here, IMO.
> Okay, that's a reasonable argument. I'd suggest saying so
> explicitly in a comment, though.

I will add a comment.

>>>> +static void hvmemul_unmap_linear_addr(
>>>> +    void *mapping, unsigned long linear, unsigned int bytes,
>>> Both vunmap() and unmap_domain_page() take pointers to const, so
>>> please use const on the pointer here too.
>> The meaning of const void *p in C is "this function does not modify the
>> content pointed to by p".
>> Both vunmap() and unmap_domain_page() mutate the content being pointed
>> to, so should not take const pointers.
> We've had this discussion before, and I continue to take a
> different position: When you free a memory block, you implicitly
> declare its contents undefined.

In C's view, the object explicitly becomes undefined, as its lifetime
has come to an end, but this is a well defined operation.

> An undefined modification to undefined contents still yields undefined.

The precondition to this statement is false.

Before the unmap() call, the object and its contents are well defined.

By using a function with a const pointer, the contract of the function
says the object doesn't change, and therefore, its content doesn't change.

Therefore, the object and its contents are still well defined after the
unmap() call.

This final statement is obviously false, because we blew away the
pagetables under the mapping.  The contradiction exists because of the
use of const pointer in the first place.

>  So no actual change.
> Furthermore it is not a given that a freeing routine actually
> touches the handed memory block at all - that's an internal
> implementation detail of the allocator.
> Plus - not having the parameters const means you can't
> allocate and initialize something in one go, and then store or
> pass around a pointer to it which is const-qualified to make
> clear the contents of that memory block aren't supposed to be
> further modified (until de-allocation).

I don't see this as a bad thing.  You can't do it with malloc()/free().

>>>> +    struct hvm_emulate_ctxt *hvmemul_ctxt)
>>> There are upsides and downsides to requiring the caller to pass in the
>>> same values as to map(): You can do more correctness checking
>>> here, but you also risk the caller using the wrong values (perhaps
>>> because of a meanwhile updated local variable). While I don't
>>> outright object to this approach, personally I'd prefer minimal
>>> inputs here, and the code deriving everything from hvmemul_ctxt.
>> I'm not sure exactly how we might wish to extend this logic.  Are we
>> ever going to want more than one active mapping at once (perhaps rep
>> movs emulation across two ram regions)?
> I could imagine even more than two regions - from an abstract
> perspective it may be possible / helpful to have a mapping of
> each piece of memory a single instruction accesses, along the
> lines of minimal number of architecturally guaranteed TLB
> entries on ia64 in order to execute any possible x86 insn.

A further problem I've just realised is the use of multiple ->write()
calls for a single instruction in x86_emulate().  The preceding writes
obviously can't get backed out in the case of the second write taking a
fault, which will cause the emulator to exhibit non-architectural
behaviour similar to what this patch is trying to fix.

>> The other reason is that in the release builds, even if we stored the
>> pointer in hvmemul_ctxt, we still can't determine which unmapping
>> function to use without linear and size.
> I don't understand - all you pass on is "mapping". And whether to
> unmap a single or two pages could be told from hvmemul_ctxt->mfn[1]
> (not) being INVALID_MFN.

That isn't safe outside of the debug builds.

You could in principle use mfn[1] != 0 in release builds, if it weren't
for the fact that hvmemul_ctxt could be used for multiple ->write()
calls.  In the case of a straddled write followed by a non-straddled
write, the second unmap() would evaluate incorrectly.

>>>> --- a/xen/include/asm-x86/hvm/emulate.h
>>>> +++ b/xen/include/asm-x86/hvm/emulate.h
>>>> @@ -37,6 +37,13 @@ struct hvm_emulate_ctxt {
>>>>      unsigned long seg_reg_accessed;
>>>>      unsigned long seg_reg_dirty;
>>>> +    /*
>>>> +     * MFNs behind temporary mappings in the write callback.  The length is
>>>> +     * arbitrary, and can be increased if writes longer than PAGE_SIZE are
>>>> +     * needed.
>>>> +     */
>>>> +    mfn_t mfn[2];
>>> Mind being precise in the comment, saying "PAGE_SIZE+1"?
>> While that is strictly true, it is not the behaviour which the map()
>> function takes.  I don't think it is worth the overhead of fixing that
>> boundary condition for one extra byte, at which point the documentation
>> should match the implementation.
> I don't understand your reply - with today's map()
> implementation, any PAGE_SIZE+1 range can be mapped, with
> the extremes being one mapping just one byte and the other
> mapping an entire page. But perhaps I'm simply not understanding
> what map() behavior you're thinking of.

Oh so it can.  I'd forgotten that I'd changed the implementation.


Xen-devel mailing list


