Re: [Xen-devel] [PATCH v6] x86/hvm: Implement hvmemul_write() using real mappings
>>> On 27.09.17 at 14:39, <aisaila@xxxxxxxxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -498,6 +498,170 @@ static int hvmemul_do_mmio_addr(paddr_t mmio_gpa,
> }
>
> +/*
> + * Map the frame(s) covering an individual linear access, for writeable
> + * access. May return NULL for MMIO, or ERR_PTR(~X86EMUL_*) for other errors
> + * including ERR_PTR(~X86EMUL_OKAY) for write-discard mappings.
> + *
> + * In debug builds, map() checks that each slot in hvmemul_ctxt->mfn[] is
> + * clean before use, and poisons unused slots with INVALID_MFN.
> + */
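For context (not part of the quoted patch), a sketch of how a caller such as
hvmemul_write() might consume this convention, assuming the usual
IS_ERR()/PTR_ERR() helpers and the existing MMIO fallback; names like addr
and p_data are the caller's:

    mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt);
    if ( IS_ERR(mapping) )
        return ~PTR_ERR(mapping);          /* recover the X86EMUL_* code */
    if ( !mapping )                        /* NULL: take the MMIO path */
        return hvmemul_linear_mmio_write(addr, bytes, p_data, pfec,
                                         hvmemul_ctxt, 0);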
> +static void *hvmemul_map_linear_addr(
> +    unsigned long linear, unsigned int bytes, uint32_t pfec,
> +    struct hvm_emulate_ctxt *hvmemul_ctxt)
> +{
> +    struct vcpu *curr = current;
> +    void *err, *mapping;
> +
> +    /* First and final gfns which need mapping. */
> +    unsigned long frame = linear >> PAGE_SHIFT, first = frame;
> +    unsigned long final = (linear + bytes - !!bytes) >> PAGE_SHIFT;
> +    unsigned long pages = final - first + 1;
This isn't right for the 64-bit wrapping case afaics.
> +    if ( hvmemul_ctxt->ctxt.addr_size < 64 )
> +    {
> +        frame = (uint32_t)frame;
> +        final = (uint32_t)final;
> +        pages = (uint32_t)pages;
> +    }
And this isn't right either, as here and above 12 bits of the linear
address were shifted out already.
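What I'd expect instead is something along these lines (untested sketch):
since frame and final have already been shifted right by PAGE_SHIFT, the
truncation mask needs the same shift applied, e.g.

    if ( hvmemul_ctxt->ctxt.addr_size < 64 )
    {
        /* Mask of valid frame numbers for this address width. */
        const unsigned long mask =
            (~0UL >> (64 - hvmemul_ctxt->ctxt.addr_size)) >> PAGE_SHIFT;

        frame &= mask;
        final &= mask;
        /* pages then needs recalculating, minding a wrapped range. */
    }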
> +    /*
> +     * mfn points to the next free slot. All used slots have a page reference
> +     * held on them.
> +     */
> +    mfn_t *mfn = &hvmemul_ctxt->mfn[0];
In C89, declarations can't follow statements.
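I.e. (a minimal, self-contained illustration of the rule, not patch code):

    void example(void)
    {
        int a = 0;      /* declaration at the start of the block: fine */

        a += 1;         /* statement */
        /* int b = a;      C89 error: declaration after a statement */
        {
            int b = a;  /* fine again: new block, declaration comes first */

            (void)b;
        }
    }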
> +    /*
> +     * The caller has no legitimate reason for trying a zero-byte write, but
> +     * final is calculated to fail safe in release builds.
> +     *
> +     * The maximum write size depends on the number of adjacent mfns[] which
> +     * can be vmap()'d, accounting for possible misalignment within the region.
> +     * The higher level emulation callers are responsible for ensuring that
> +     * mfns[] is large enough for the requested write size.
> +     */
> +    if ( bytes == 0 ||
> +         pages > ARRAY_SIZE(hvmemul_ctxt->mfn) )
> +    {
> +        ASSERT_UNREACHABLE();
> +        goto unhandleable;
> +    }
> +
> +    do {
> +        enum hvm_translation_result res;
> +        struct page_info *page;
> +        pagefault_info_t pfinfo;
> +        p2m_type_t p2mt;
> +
> +        /* Error checking. Confirm that the current slot is clean. */
> +        ASSERT(mfn_x(*mfn) == 0);
> +
> +        res = hvm_translate_get_page(curr, frame << PAGE_SHIFT, true, pfec,
> +                                     &pfinfo, &page, NULL, &p2mt);
> +
> +        switch ( res )
> +        {
> +        case HVMTRANS_okay:
> +            break;
> +
> +        case HVMTRANS_bad_linear_to_gfn:
> +            x86_emul_pagefault(pfinfo.ec, pfinfo.linear, &hvmemul_ctxt->ctxt);
> +            err = ERR_PTR(~X86EMUL_EXCEPTION);
> +            goto out;
> +
> +        case HVMTRANS_bad_gfn_to_mfn:
> +            err = NULL;
> +            goto out;
> +
> +        case HVMTRANS_gfn_paged_out:
> +        case HVMTRANS_gfn_shared:
> +            err = ERR_PTR(~X86EMUL_RETRY);
> +            goto out;
> +
> +        default:
> +            goto unhandleable;
> +        }
> +
> +        *mfn++ = _mfn(page_to_mfn(page));
> +
> +        if ( p2m_is_discard_write(p2mt) )
> +        {
> +            err = ERR_PTR(~X86EMUL_OKAY);
> +            goto out;
> +        }
> +
> +    } while ( ++frame < final );
With wrapping in mind, as Andrew did say on v5, this can't be <
anymore. Also see the additional explanation I gave for what
this needs to become. Same for hvmemul_unmap_linear_addr()
then (to state the obvious).
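I.e. something along these lines (untested sketch, exact shape up to you),
iterating over a frame count rather than comparing possibly-wrapped frame
numbers:

    unsigned int i;

    for ( i = 0; i < pages; i++ )
    {
        unsigned long addr = i ? (linear + (i << PAGE_SHIFT)) & PAGE_MASK
                               : linear;

        /* translate and map addr, as the loop body does today */
    }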
Jan