
Re: [Xen-devel] [PATCH] x86/PCI: Intercept Dom0 MMCFG from dom0s in HVM containers



>>> On 15.12.15 at 22:02, <boris.ostrovsky@xxxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1778,6 +1778,28 @@ int hvm_emulate_one_no_write(
>      return _hvm_emulate_one(hvmemul_ctxt, &hvm_emulate_ops_no_write);
>  }
>  
> +static const struct x86_emulate_ops hvm_emulate_ops_mmio = {
> +    .read       = mmio_ro_emulated_read,
> +    .insn_fetch = hvmemul_insn_fetch,
> +    .write      = mmio_intercept_write,
> +};

Blank line missing here, but a probably better alternative would be to
move this into the function below.
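Something along these lines (just a sketch, reusing the initializer
from the patch unchanged, merely scoped inside the new function):

    int hvm_emulate_one_mmio(unsigned int seg, unsigned int bdf,
                             unsigned long gla)
    {
        static const struct x86_emulate_ops hvm_emulate_ops_mmio = {
            .read       = mmio_ro_emulated_read,
            .insn_fetch = hvmemul_insn_fetch,
            .write      = mmio_intercept_write,
        };
        ...
    }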

> +int hvm_emulate_one_mmio(unsigned seg, unsigned bdf, unsigned long gla)

Name and function parameters don't match up (also for e.g. the
changed struct mmio_ro_emulate_ctxt). Do you mean e.g.
hvm_emulate_one_mmcfg()? (The function naming in x86/mm.c
actually uses "mmio" because it serves a wider purpose than
just MMCFG write interception. Which makes me wonder whether
we don't actually need that other part for PVH too, in which case
the naming would again be fine.)

> +{
> +    struct mmio_ro_emulate_ctxt mmio_ro_ctxt = {
> +        .cr2 = gla,
> +        .seg = seg,
> +        .bdf = bdf
> +    };
> +    struct hvm_emulate_ctxt ctxt = { .ctxt.data = &mmio_ro_ctxt };

hvm_mem_access_emulate_one() so far is the only example pre-
initializing ctxt before calling hvm_emulate_prepare(), and I don't
think it actually needs this. I think the proper place to set .data is
after the call to hvm_emulate_prepare().
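I.e. something like this (sketch only; assuming hvm_emulate_prepare()
doesn't itself depend on .data being set, which afaict it doesn't):

    struct hvm_emulate_ctxt ctxt;
    int rc;

    hvm_emulate_prepare(&ctxt, guest_cpu_user_regs());
    ctxt.ctxt.data = &mmio_ro_ctxt;
    rc = _hvm_emulate_one(&ctxt, &hvm_emulate_ops_mmio);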

> +    int rc;
> +
> +    hvm_emulate_prepare(&ctxt, guest_cpu_user_regs());
> +    rc = _hvm_emulate_one(&ctxt, &hvm_emulate_ops_mmio);
> +    hvm_emulate_writeback(&ctxt);

Is this correct to be done unconditionally (i.e. regardless of rc)?

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3116,6 +3116,21 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned 
> long gla,
>          goto out_put_gfn;
>      }
>  
> +    if ( (p2mt == p2m_mmio_direct) && is_hardware_domain(currd) )

The PV condition includes a PFEC_page_present check, and I think
an equivalent should be added here too.
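I.e. extend the condition using the npfec argument this function
already takes (sketch only; assuming .present / .write_access carry
the same qualification that PFEC_page_present does on the PV side):

    if ( (p2mt == p2m_mmio_direct) && is_hardware_domain(currd) &&
         npfec.write_access && npfec.present )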

> +    {
> +        unsigned int seg, bdf;
> +        const unsigned long *ro_map;
> +
> +        if ( pci_mmcfg_decode(mfn_x(mfn), &seg, &bdf) &&
> +             ((ro_map = pci_get_ro_map(seg)) == NULL ||
> +              !test_bit(bdf, ro_map)) )
> +            if (hvm_emulate_one_mmio(seg, bdf, gla) == X86EMUL_OKAY)

Two immediately adjacent if()-s should be folded into one.
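I.e. (sketch only, keeping the patch's expressions as they are):

    if ( pci_mmcfg_decode(mfn_x(mfn), &seg, &bdf) &&
         ((ro_map = pci_get_ro_map(seg)) == NULL ||
          !test_bit(bdf, ro_map)) &&
         hvm_emulate_one_mmio(seg, bdf, gla) == X86EMUL_OKAY )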

> @@ -5266,8 +5260,8 @@ static int mmio_ro_emulated_write(
>      unsigned int bytes,
>      struct x86_emulate_ctxt *ctxt)
>  {
> -    struct mmio_ro_emulate_ctxt *mmio_ro_ctxt =
> -        container_of(ctxt, struct mmio_ro_emulate_ctxt, ctxt);
> +    struct mmio_ro_emulate_ctxt *mmio_ro_ctxt = 
> +        (struct mmio_ro_emulate_ctxt *)ctxt->data;

Pointless cast.
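I.e. simply

    struct mmio_ro_emulate_ctxt *mmio_ro_ctxt = ctxt->data;

since the field is a void pointer, so no conversion is needed.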

> --- a/xen/arch/x86/x86_emulate/x86_emulate.h
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.h
> @@ -430,6 +430,9 @@ struct x86_emulate_ctxt
>          } flags;
>          uint8_t byte;
>      } retire;
> +
> +    /* Caller data that can be used by x86_emulate_ops */

There seems to be more missing in this comment than just the full
stop: x86_emulate_ops is a structure and hence can't use any
data.
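Perhaps something along these lines (wording suggestion only; I'm
assuming the field in question is the void * used via ctxt->data
further up):

    /* Caller-supplied data, available to the x86_emulate_ops hooks. */
    void *data;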

> --- a/xen/include/asm-x86/hvm/emulate.h
> +++ b/xen/include/asm-x86/hvm/emulate.h
> @@ -57,6 +57,10 @@ void hvm_emulate_writeback(
>  struct segment_register *hvmemul_get_seg_reg(
>      enum x86_segment seg,
>      struct hvm_emulate_ctxt *hvmemul_ctxt);
> +int hvm_emulate_one_mmio(
> +    unsigned seg,
> +    unsigned bdf,

Please don't omit the "int" here (or elsewhere, unless the
surrounding code suggests that omitting it is common practice in the
given piece of code).
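I.e. (keeping the parameter names from the patch):

    int hvm_emulate_one_mmio(
        unsigned int seg,
        unsigned int bdf,
        unsigned long gla);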

Jan

