Re: [Xen-devel] [PATCH v2 5/8] tmem: Move TMEM_CONTROL subop of tmem hypercall to sysctl.



>>> On 27.08.15 at 13:02, <konrad.wilk@xxxxxxxxxx> wrote:
> @@ -2666,9 +2662,9 @@ long do_tmem_op(tmem_cli_op_t uops)
>      /* Acquire write lock for all commands at first */
>      write_lock(&tmem_rwlock);
>  
> -    if ( op.cmd == TMEM_CONTROL )
> +    if ( op.cmd == TMEM_CONTROL_MOVED )
>      {
> -        rc = do_tmem_control(&op);
> +        rc = -ENOSYS;

As in other similar cases I'd prefer -EOPNOTSUPP here, to prevent
callers from concluding that the tmem hypercall as a whole is
unimplemented.
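
I.e. something along these lines (just a sketch, keeping the shape of
the hunk above):

    if ( op.cmd == TMEM_CONTROL_MOVED )
    {
        /* The subop moved to XEN_SYSCTL_tmem_op; report "operation not
         * supported" rather than "hypercall not implemented". */
        rc = -EOPNOTSUPP;
    }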

> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -710,6 +710,48 @@ struct xen_sysctl_psr_cat_op {
>  typedef struct xen_sysctl_psr_cat_op xen_sysctl_psr_cat_op_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_sysctl_psr_cat_op_t);
>  
> +#define XEN_SYSCTL_TMEM_OP_ALL_CLIENTS 0xFFFFU
> +
> +#define XEN_SYSCTL_TMEM_OP_THAW                   0
> +#define XEN_SYSCTL_TMEM_OP_FREEZE                 1
> +#define XEN_SYSCTL_TMEM_OP_FLUSH                  2
> +#define XEN_SYSCTL_TMEM_OP_DESTROY                3
> +#define XEN_SYSCTL_TMEM_OP_LIST                   4
> +#define XEN_SYSCTL_TMEM_OP_SET_WEIGHT             5
> +#define XEN_SYSCTL_TMEM_OP_SET_CAP                6
> +#define XEN_SYSCTL_TMEM_OP_SET_COMPRESS           7
> +#define XEN_SYSCTL_TMEM_OP_QUERY_FREEABLE_MB      8
> +#define XEN_SYSCTL_TMEM_OP_SAVE_BEGIN             10
> +#define XEN_SYSCTL_TMEM_OP_SAVE_GET_VERSION       11
> +#define XEN_SYSCTL_TMEM_OP_SAVE_GET_MAXPOOLS      12
> +#define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_WEIGHT 13
> +#define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_CAP    14
> +#define XEN_SYSCTL_TMEM_OP_SAVE_GET_CLIENT_FLAGS  15
> +#define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_FLAGS    16
> +#define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_NPAGES   17
> +#define XEN_SYSCTL_TMEM_OP_SAVE_GET_POOL_UUID     18
> +#define XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_PAGE     19
> +#define XEN_SYSCTL_TMEM_OP_SAVE_GET_NEXT_INV      20
> +#define XEN_SYSCTL_TMEM_OP_SAVE_END               21
> +#define XEN_SYSCTL_TMEM_OP_RESTORE_BEGIN          30
> +#define XEN_SYSCTL_TMEM_OP_RESTORE_PUT_PAGE       32
> +#define XEN_SYSCTL_TMEM_OP_RESTORE_FLUSH_PAGE     33

Perhaps better to have all these in tmem.h, to not clutter this
header?
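
I.e. something like this in tmem.h (sketch only; whether you also want
the usual tools-only guard there is a separate question):

    #if defined(__XEN__) || defined(__XEN_TOOLS__)
    /* Control/migration subops, used via XEN_SYSCTL_tmem_op only. */
    #define XEN_SYSCTL_TMEM_OP_ALL_CLIENTS 0xFFFFU
    #define XEN_SYSCTL_TMEM_OP_THAW                   0
    #define XEN_SYSCTL_TMEM_OP_FREEZE                 1
    /* ... remaining XEN_SYSCTL_TMEM_OP_* values as in the hunk above ... */
    #endif /* __XEN__ || __XEN_TOOLS__ */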

> +struct xen_sysctl_tmem_op {
> +    uint32_t cmd;       /* IN: XEN_SYSCTL_TMEM_OP_* . */
> +    int32_t pool_id;    /* IN: 0 by default unless _SAVE_*, RESTORE_* .*/
> +    uint32_t cli_id;    /* IN: client id, 0 for XEN_SYSCTL_TMEM_QUERY_FREEABLE_MB
> +                           for all others can be the domain id or
> +                           XEN_SYSCTL_TMEM_OP_ALL_CLIENTS for all. */
> +    uint32_t arg1;      /* IN: If not applicable to command use 0. */
> +    uint32_t arg2;      /* IN: If not applicable to command use 0. */
> +    uint8_t  pad[4];    /* Padding so structure is the same under 32 and 64. */

uint32_t please. And despite this being an (easily changeable) sysctl,
verifying that it's zero on input would be nice.
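
I.e. something like (sketch; the exact handler-side location of the
check is only illustrative):

    /* Public header: */
    uint32_t pad;       /* IN: must be zero; keeps the layout identical on 32 and 64 bit. */

    /* Handler side - reject non-zero input: */
    if ( op->pad )
        return -EINVAL;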

> --- a/xen/include/public/tmem.h
> +++ b/xen/include/public/tmem.h
> @@ -33,7 +33,7 @@
>  #define TMEM_SPEC_VERSION          1
>  
>  /* Commands to HYPERVISOR_tmem_op() */
> -#define TMEM_CONTROL               0
> +#define TMEM_CONTROL_MOVED         0

Perhaps say where it moved in a brief comment?
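
E.g. something like:

    /* Control operations moved to the sysctl hypercall,
     * see XEN_SYSCTL_tmem_op in public/sysctl.h. */
    #define TMEM_CONTROL_MOVED         0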

> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -761,6 +761,9 @@ static int flask_sysctl(int cmd)
>      case XEN_SYSCTL_tbuf_op:
>          return domain_has_xen(current->domain, XEN__TBUFCONTROL);
>  
> +    case XEN_SYSCTL_tmem_op:
> +        return domain_has_xen(current->domain, XEN__TMEM_CONTROL);
> +
>      case XEN_SYSCTL_sched_id:
>          return domain_has_xen(current->domain, XEN__GETSCHEDULER);

Hmm, these cases appear to be roughly sorted numerically, i.e.
yours would normally go at the end.
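
I.e. something like (surrounding cases elided, placement only
indicative):

    /* ... existing XEN_SYSCTL_* cases, roughly in numerical order ... */

    case XEN_SYSCTL_tmem_op:
        return domain_has_xen(current->domain, XEN__TMEM_CONTROL);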

Despite the comments, I see no reason for this (once adjusted where
needed) not to go in for 4.6.

Jan

