Re: [Xen-devel] [PATCH v11 06/11] x86/hvm/ioreq: add a new mappable resource type...
>>> On 12.10.17 at 18:25, <paul.durrant@xxxxxxxxxx> wrote:
> ... XENMEM_resource_ioreq_server
>
> This patch adds support for a new resource type that can be mapped using
> the XENMEM_acquire_resource memory op.
>
> If an emulator makes use of this resource type then, instead of mapping
> gfns, the IOREQ server will allocate pages from the heap. These pages
> will never be present in the P2M of the guest at any point and so are
> not vulnerable to any direct attack by the guest. They are only ever
> accessible by Xen and any domain that has mapping privilege over the
> guest (which may or may not be limited to the domain running the emulator).
>
> NOTE: Use of the new resource type is not compatible with use of
> XEN_DMOP_get_ioreq_server_info unless the XEN_DMOP_no_gfns flag is
> set.
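
For reference, a minimal sketch of how an emulator-side caller might use the new
resource type, assuming the xen_mem_acquire_resource layout added by the earlier
XENMEM_acquire_resource patch in this series; the field names other than frame,
and the caller-side names (domid, ioservid, ioreq_gfn), are assumptions here
rather than something this patch defines:

    /* Sketch only: map the synchronous ioreq page of IOREQ server 'ioservid'
     * belonging to guest 'domid'.  Error handling is omitted. */
    xen_pfn_t frame_list[1];   /* HVM caller: GFN to map at, supplied on entry */
    struct xen_mem_acquire_resource xmar = {
        .domid = domid,                                /* target guest */
        .type = XENMEM_resource_ioreq_server,
        .id = ioservid,                                /* IOREQ server id */
        .frame = XENMEM_resource_ioreq_server_frame_ioreq(0),
        .nr_frames = 1,
    };
    int rc;

    frame_list[0] = ioreq_gfn;             /* hypothetical GFN chosen by the emulator */
    set_xen_guest_handle(xmar.frame_list, frame_list);
    rc = HYPERVISOR_memory_op(XENMEM_acquire_resource, &xmar);

(A PV caller would instead have frame_list populated with MFNs on return.)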
>
> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> Acked-by: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
> Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
Can these tags validly have been retained, given the changes since they were given?
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -281,6 +294,69 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
>      return rc;
>  }
>
> +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> +{
> +    struct domain *currd = current->domain;
> +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> +
> +    if ( iorp->page )
> +    {
> +        /*
> +         * If a guest frame has already been mapped (which may happen
> +         * on demand if hvm_get_ioreq_server_info() is called), then
> +         * allocating a page is not permitted.
> +         */
> +        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
> +            return -EPERM;
> +
> +        return 0;
> +    }
> +
> +    /*
> +     * Allocated IOREQ server pages are assigned to the emulating
> +     * domain, not the target domain. This is because the emulator is
> +     * likely to be destroyed after the target domain has been torn
> +     * down, and we must use MEMF_no_refcount otherwise page allocation
> +     * could fail if the emulating domain has already reached its
> +     * maximum allocation.
> +     */
> +    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
> +    if ( !iorp->page )
> +        return -ENOMEM;
> +
> +    if ( !get_page_type(iorp->page, PGT_writable_page) )
> +    {
ASSERT_UNREACHABLE() ?
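I.e. presumably something along these lines (the error handling below is not
visible in the quoted hunk, so its shape and return value are only assumed here):

    if ( !get_page_type(iorp->page, PGT_writable_page) )
    {
        /*
         * Taking a writable type reference on a page just allocated from
         * the domheap is not expected to fail, so treat this path as a
         * logic error rather than a runtime condition.
         */
        ASSERT_UNREACHABLE();
        /* ... error handling as in the original patch, elided above ... */
        return -ENOMEM;
    }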
> @@ -777,6 +886,51 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
>      return rc;
>  }
>
> +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
> +                               unsigned long idx, mfn_t *mfn)
> +{
> +    struct hvm_ioreq_server *s;
> +    int rc;
> +
> +    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +
> +    if ( id == DEFAULT_IOSERVID )
> +        return -EOPNOTSUPP;
> +
> +    s = get_ioreq_server(d, id);
> +
> +    ASSERT(!IS_DEFAULT(s));
> +
> +    rc = hvm_ioreq_server_alloc_pages(s);
> +    if ( rc )
> +        goto out;
> +
> +    switch ( idx )
> +    {
> +    case XENMEM_resource_ioreq_server_frame_bufioreq:
> +        rc = -ENOENT;
> +        if ( !HANDLE_BUFIOREQ(s) )
> +            goto out;
> +
> +        *mfn = _mfn(page_to_mfn(s->bufioreq.page));
> +        rc = 0;
> +        break;
How about
        if ( HANDLE_BUFIOREQ(s) )
            *mfn = _mfn(page_to_mfn(s->bufioreq.page));
        else
            rc = -ENOENT;
        break;
?
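With that folded in, the whole switch might read as below; the remaining cases
are not part of the quoted hunk, so their shape here is an assumption:

    switch ( idx )
    {
    case XENMEM_resource_ioreq_server_frame_bufioreq:
        if ( HANDLE_BUFIOREQ(s) )
            *mfn = _mfn(page_to_mfn(s->bufioreq.page));
        else
            rc = -ENOENT;
        break;

    case XENMEM_resource_ioreq_server_frame_ioreq(0):
        /* rc is already 0 here, hvm_ioreq_server_alloc_pages() having succeeded. */
        *mfn = _mfn(page_to_mfn(s->ioreq.page));
        break;

    default:
        rc = -EINVAL;
        break;
    }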
> +int xenmem_acquire_ioreq_server(struct domain *d, unsigned int id,
> +                                unsigned long frame,
> +                                unsigned long nr_frames,
> +                                unsigned long mfn_list[])
> +{
> +    unsigned int i;
This now doesn't match up with the upper bound's type.
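I.e. something like the below, with the induction variable typed to match
nr_frames; the loop body is not part of the quoted hunk and is only sketched
here from hvm_get_ioreq_server_frame() above:

    unsigned long i;

    for ( i = 0; i < nr_frames; i++ )
    {
        mfn_t mfn;
        int rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);

        if ( rc )
            return rc;

        mfn_list[i] = mfn_x(mfn);
    }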
> @@ -629,6 +634,10 @@ struct xen_mem_acquire_resource {
>       * is optional if nr_frames is 0.
>       */
>      uint64_aligned_t frame;
> +
> +#define XENMEM_resource_ioreq_server_frame_bufioreq 0
> +#define XENMEM_resource_ioreq_server_frame_ioreq(n_) (1 + (n_))
I don't see what you need the trailing underscore for. This is
normally only needed on local variables defined in (gcc extended)
macros, which we generally can't use in a public header anyway.
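I.e. presumably just

    #define XENMEM_resource_ioreq_server_frame_ioreq(n) (1 + (n))

would do.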
Jan