
Re: [Xen-devel] [PATCH v9 06/11] x86/hvm/ioreq: add a new mappable resource type...



>>> On 06.10.17 at 14:25, <paul.durrant@xxxxxxxxxx> wrote:
> @@ -288,6 +301,61 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
>      return rc;
>  }
>  
> +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> +{
> +    struct domain *currd = current->domain;
> +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> +
> +    if ( iorp->page )
> +    {
> +        /*
> +         * If a guest frame has already been mapped (which may happen
> +         * on demand if hvm_get_ioreq_server_info() is called), then
> +         * allocating a page is not permitted.
> +         */
> +        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
> +            return -EPERM;
> +
> +        return 0;
> +    }
> +
> +    /*
> +     * Allocated IOREQ server pages are assigned to the emulating
> +     * domain, not the target domain. This is because the emulator is
> +     * likely to be destroyed after the target domain has been torn
> +     * down, and we must use MEMF_no_refcount otherwise page allocation
> +     * could fail if the emulating domain has already reached its
> +     * maximum allocation.
> +     */
> +    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);

Whichever domain you assign the page to, you need to prevent
it from becoming usable as e.g. a page table or descriptor table
page. IOW I think you're missing a get_page_type(...,
PGT_writable_page) here, with the put_page() on the free path
below then needing to become put_page_and_type().
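
I.e. something along these lines (an untested sketch; the error
handling shown is my guess at what the allocation path would need):

    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
    if ( !iorp->page )
        return -ENOMEM;

    /*
     * Pin the writable type so the page can't subsequently be
     * promoted to e.g. a page table or descriptor table page.
     */
    if ( !get_page_type(iorp->page, PGT_writable_page) )
    {
        free_domheap_page(iorp->page);
        iorp->page = NULL;
        return -ENOMEM;
    }

with the free path then using put_page_and_type() in place of the
plain put_page().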

> @@ -784,6 +885,45 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
>      return rc;
>  }
>  
> +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
> +                               unsigned int idx, mfn_t *mfn)
> +{
> +    struct hvm_ioreq_server *s;
> +    int rc;
> +
> +    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +
> +    if ( id == DEFAULT_IOSERVID )
> +        return -EOPNOTSUPP;
> +
> +    s = get_ioreq_server(d, id);
> +
> +    ASSERT(!IS_DEFAULT(s));
> +
> +    rc = hvm_ioreq_server_alloc_pages(s);
> +    if ( rc )
> +        goto out;
> +
> +    if ( idx == 0 )

switch() ?
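
I.e. (sketch only - the case bodies are my guess, since the hunk is
cut off right here):

    switch ( idx )
    {
    case 0: /* bufioreq page */
        *mfn = _mfn(page_to_mfn(s->bufioreq.page));
        rc = 0;
        break;

    case 1: /* ioreq page */
        *mfn = _mfn(page_to_mfn(s->ioreq.page));
        rc = 0;
        break;

    default:
        rc = -EINVAL;
        break;
    }

which also handles out-of-range idx in one obvious place.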

> @@ -3866,6 +3867,27 @@ int xenmem_add_to_physmap_one(
>      return rc;
>  }
>  
> +int xenmem_acquire_ioreq_server(struct domain *d, unsigned int id,
> +                                unsigned long frame,
> +                                unsigned long nr_frames,
> +                                unsigned long mfn_list[])
> +{
> +    unsigned int i;
> +
> +    for ( i = 0; i < nr_frames; i++ )
> +    {
> +        mfn_t mfn;
> +        int rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);

Coming back to the question of the size of the "frame" interface
structure field, note how you silently truncate the upper 32 bits
here.
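
If the idx parameter is to stay "unsigned int", an explicit check
ahead of the loop would at least make the truncation impossible to
hit silently, e.g. (sketch, slightly conservative at the boundary):

    /* hvm_get_ioreq_server_frame() takes an unsigned int index. */
    if ( frame > UINT_MAX || nr_frames > UINT_MAX - frame )
        return -EINVAL;

The alternative of course being to widen the parameter itself.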

> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -609,15 +609,26 @@ struct xen_mem_acquire_resource {
>      domid_t domid;
>      /* IN - the type of resource */
>      uint16_t type;
> +
> +#define XENMEM_resource_ioreq_server 0
> +
>      /*
>       * IN - a type-specific resource identifier, which must be zero
>       *      unless stated otherwise.
> +     *
> +     * type == XENMEM_resource_ioreq_server -> id == ioreq server id
>       */
>      uint32_t id;
>      /* IN - number of (4K) frames of the resource to be mapped */
>      uint32_t nr_frames;
>      uint32_t pad;
> -    /* IN - the index of the initial frame to be mapped */
> +    /* IN - the index of the initial frame to be mapped
> +     *
> +     * type == XENMEM_resource_ioreq_server -> frame == 0 -> bufioreq
> +     *                                                       page
> +     *                                         frame == 1 -> ioreq
> +     *                                                       page
> +     */

Long comment or not, I think you want to introduce constants
for these two numbers.
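
E.g. (naming purely illustrative)

#define XENMEM_resource_ioreq_server_frame_bufioreq 0
#define XENMEM_resource_ioreq_server_frame_ioreq    1

which the comment could then refer to instead of raw numbers.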

Jan

