
Re: [Xen-devel] [PATCH v7 1/8] public / x86: Introduce __HYPERCALL_dm_op...



>>> On 23.01.17 at 14:59, <paul.durrant@xxxxxxxxxx> wrote:
> +static bool copy_buf_from_guest(xen_dm_op_buf_t bufs[],
> +                                unsigned int nr_bufs, void *dst,
> +                                unsigned int idx, size_t dst_size)
> +{
> +    if ( dst_size != bufs[idx].size )
> +        return false;
> +
> +    return !copy_from_guest(dst, bufs[idx].h, dst_size);
> +}
> +
> +static bool copy_buf_to_guest(xen_dm_op_buf_t bufs[],
> +                              unsigned int nr_bufs, unsigned int idx,
> +                              void *src, size_t src_size)
> +{
> +    if ( bufs[idx].size != src_size )
> +        return false;
> +
> +    return !copy_to_guest(bufs[idx].h, src, bufs[idx].size);
> +}

What are the "nr_bufs" parameters good for in both of these?
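
If they are meant to allow bounds checking of the index, I'd expect
something along these lines (purely illustrative, not a request for
this exact code); otherwise they could simply be dropped:

    static bool copy_buf_from_guest(xen_dm_op_buf_t bufs[],
                                    unsigned int nr_bufs, void *dst,
                                    unsigned int idx, size_t dst_size)
    {
        /* Hypothetical bounds check actually making use of nr_bufs. */
        if ( idx >= nr_bufs )
            return false;

        if ( dst_size != bufs[idx].size )
            return false;

        return !copy_from_guest(dst, bufs[idx].h, dst_size);
    }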

> +static int dm_op(domid_t domid,
> +                 unsigned int nr_bufs,
> +                 xen_dm_op_buf_t bufs[])
> +{
> +    struct domain *d;
> +    struct xen_dm_op op;
> +    bool const_op = true;
> +    long rc;
> +
> +    rc = rcu_lock_remote_domain_by_id(domid, &d);
> +    if ( rc )
> +        return rc;
> +
> +    if ( !has_hvm_container_domain(d) )
> +        goto out;
> +
> +    rc = xsm_dm_op(XSM_DM_PRIV, d);
> +    if ( rc )
> +        goto out;
> +
> +    if ( !copy_buf_from_guest(bufs, nr_bufs, &op, 0, sizeof(op)) )

I'm afraid my request to have an exact size check in the copy
functions was going a little too far for this particular instance: just
consider what would happen for a tool stack built with only this one
patch in place, but run against a hypervisor with at least one more
applied. The hypervisor's struct xen_dm_op would then be larger than
the buffer the older tool stack supplies, so the exact size check would
make the copy (and hence every dm_op call) fail. We can of course keep
things the way they are here, but then we'll need a placeholder added
to the structure right away (as is e.g. the case for domctl/sysctl).
Every sub-structure should then be checked to not violate that
constraint by a BUILD_BUG_ON() in its respective function (I'd prefer
that over a single verification of the entire structure/union, as
per-function checks would more clearly pinpoint a possible offender).
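
To illustrate what I have in mind (all names and the placeholder size
below are made up, not a concrete proposal): the public structure would
gain a size-fixing placeholder, e.g.

    struct xen_dm_op {
        uint32_t op;
        uint32_t pad;
        union {
            /* Sub-op structures added by later patches go here. */
            uint8_t placeholder[120]; /* fixes the overall size */
        } u;
    };

and each sub-op implementation would then verify its structure still
fits, e.g.

    static int dm_op_example(struct domain *d,
                             struct xen_dm_op_example *data)
    {
        /* Refuse to build if the sub-structure outgrows the union. */
        BUILD_BUG_ON(sizeof(*data) >
                     sizeof(((struct xen_dm_op *)NULL)->u));

        /* ... actual handling of the sub-op ... */
        return 0;
    }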

> +    {
> +        rc = -EFAULT;
> +        goto out;
> +    }
> +
> +    switch ( op.op )
> +    {
> +    default:
> +        rc = -EOPNOTSUPP;
> +        break;
> +    }
> +
> +    if ( !rc &&
> +         !const_op &&
> +         !copy_buf_to_guest(bufs, nr_bufs, 0, &op, sizeof(op)) )
> +        rc = -EFAULT;
> +
> + out:
> +    rcu_unlock_domain(d);
> +
> +    return rc;
> +}
> +
> +#define MAX_NR_BUFS 1
> +
> +int compat_dm_op(domid_t domid,
> +                 unsigned int nr_bufs,
> +                 COMPAT_HANDLE_PARAM(compat_dm_op_buf_t) bufs)
> +{
> +    struct xen_dm_op_buf nat[MAX_NR_BUFS];
> +    unsigned int i;
> +
> +    if ( nr_bufs > MAX_NR_BUFS )
> +        return -EINVAL;

E2BIG (to make it distinguishable)?

> +long do_dm_op(domid_t domid,
> +              unsigned int nr_bufs,
> +              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
> +{
> +    struct xen_dm_op_buf nat[MAX_NR_BUFS];
> +
> +    if ( nr_bufs > MAX_NR_BUFS )
> +        return -EINVAL;

Same here then.
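
I.e. in both instances simply

    if ( nr_bufs > MAX_NR_BUFS )
        return -E2BIG; /* distinguishable from the other -EINVAL paths */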

> --- a/xen/include/xlat.lst
> +++ b/xen/include/xlat.lst
> @@ -56,6 +56,7 @@
>  ?    grant_entry_header              grant_table.h
>  ?    grant_entry_v2                  grant_table.h
>  ?    gnttab_swap_grant_ref           grant_table.h
> +!    dm_op_buf                       hvm/dm_op.h
>  ?    vcpu_hvm_context                hvm/hvm_vcpu.h
>  ?    vcpu_hvm_x86_32                 hvm/hvm_vcpu.h
>  ?    vcpu_hvm_x86_64                 hvm/hvm_vcpu.h

While for this patch the addition here is sufficient, subsequent
patches should add their sub-structures here with a leading ?,
and you'd then need to invoke the resulting CHECK_* macros
somewhere. I don't think we should leave those structures
unchecked.
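
For instance (using a sub-op from a later patch in the series purely as
an example), an entry like

    ?    dm_op_create_ioreq_server       hvm/dm_op.h

would result in a CHECK_dm_op_create_ioreq_server macro being
generated, which then needs invoking somewhere in the hypervisor source
(presumably dm.c), i.e.

    CHECK_dm_op_create_ioreq_server;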

Jan

