
Re: [Xen-devel] [V9 3/3] Differentiate IO/mem resources tracked by ioreq server



>>> On 15.12.15 at 03:05, <shuai.ruan@xxxxxxxxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -935,6 +935,9 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s,
>          rangeset_destroy(s->range[i]);
>  }
>  
> +static const char *io_range_name[ NR_IO_RANGE_TYPES ] =

const, i.e. the array of pointers should itself be const as well:
static const char *const io_range_name[...].

> +                                {"port", "mmio", "pci", "wp-ed memory"};

As brief as possible, but still understandable - e.g. "wp-mem"?

> @@ -2593,6 +2597,16 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>          type = (p->type == IOREQ_TYPE_PIO) ?
>                  HVMOP_IO_RANGE_PORT : HVMOP_IO_RANGE_MEMORY;
>          addr = p->addr;
> +        if ( type == HVMOP_IO_RANGE_MEMORY )
> +        {
> +             ram_page = get_page_from_gfn(d, p->addr >> PAGE_SHIFT,
> +                                          &p2mt, P2M_UNSHARE);
> +             if ( p2mt == p2m_mmio_write_dm )
> +                 type = HVMOP_IO_RANGE_WP_MEM;
> +
> +             if ( ram_page )
> +                 put_page(ram_page);
> +        }

You evaluate the page's current type here, but what if it subsequently
changes? I don't think it is appropriate to leave the hypervisor at
the mercy of the device model here.

> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -48,8 +48,8 @@ struct hvm_ioreq_vcpu {
>      bool_t           pending;
>  };
>  
> -#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_PCI + 1)
> -#define MAX_NR_IO_RANGES  256
> +#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_WP_MEM + 1)
> +#define MAX_NR_IO_RANGES  8192

I'm sure I've objected before to this universal bumping of the limit:
Even if I were to withdraw my objection to the higher limit on the
new kind of tracked resource, I would continue to object to all
other resources getting their limits bumped too.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel