Re: [Xen-devel] [PATCH v2 for Xen 4.6 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
On Tue, May 26, 2015 at 1:05 AM, Chong Li <lichong659@xxxxxxxxx> wrote:
> Add two hypercalls (XEN_DOMCTL_SCHEDOP_getvcpuinfo/putvcpuinfo) to get/set
> a domain's per-VCPU parameters. The hypercalls are handled by a newly
> added hook (.adjust_vcpu) in the scheduler interface.
>
> Add a new data structure (struct xen_domctl_scheduler_vcpu_op) for
> transferring data between toolstack and hypervisor.
>
> Signed-off-by: Chong Li <chong.li@xxxxxxxxx>
> Signed-off-by: Meng Xu <mengxu@xxxxxxxxxxxxx>
> Signed-off-by: Sisu Xi <xisisu@xxxxxxxxx>
One comment that I didn't see addressed already...
> +static int
> +rt_vcpu_cntl(
> +    const struct scheduler *ops,
> +    struct domain *d,
> +    struct xen_domctl_scheduler_vcpu_op *op)
> +{
> +    struct rt_private *prv = rt_priv(ops);
> +    struct rt_dom * const sdom = rt_dom(d);
> +    struct rt_vcpu *svc;
> +    struct list_head *iter;
> +    unsigned long flags;
> +    int rc = 0;
> +    xen_domctl_sched_rtds_params_t local_sched;
> +    unsigned int vcpuid;
> +    unsigned int i;
> +
> +    switch ( op->cmd )
> +    {
> +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
> +        spin_lock_irqsave(&prv->lock, flags);
> +        list_for_each( iter, &sdom->vcpu )
> +        {
> +            svc = list_entry(iter, struct rt_vcpu, sdom_elem);
> +            vcpuid = svc->vcpu->vcpu_id;
> +
> +            local_sched.budget = svc->budget / MICROSECS(1);
> +            local_sched.period = svc->period / MICROSECS(1);
> +            if ( copy_to_guest_offset(op->u.rtds.vcpus, vcpuid,
> +                                      &local_sched, 1) )
> +            {
> +                spin_unlock_irqrestore(&prv->lock, flags);
> +                return -EFAULT;
> +            }
> +            hypercall_preempt_check();
> +        }
> +        spin_unlock_irqrestore(&prv->lock, flags);
> +        break;
> +    case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
> +        spin_lock_irqsave(&prv->lock, flags);
> +        for ( i = 0; i < op->u.rtds.nr_vcpus; i++ )
> +        {
> +            if ( copy_from_guest_offset(&local_sched,
> +                                        op->u.rtds.vcpus, i, 1) )
> +            {
> +                spin_unlock_irqrestore(&prv->lock, flags);
> +                return -EFAULT;
> +            }
> +            if ( local_sched.period <= 0 || local_sched.budget <= 0 )
> +            {
> +                spin_unlock_irqrestore(&prv->lock, flags);
> +                return -EINVAL;
> +            }
> +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
> +            svc->period = MICROSECS(local_sched.period);
> +            svc->budget = MICROSECS(local_sched.budget);
> +            hypercall_preempt_check();
> +        }
It looks like the interface here is asymmetric. That is, on
"putvcpuinfo" you assume the guest passes an array of nr_vcpus
elements and you read the vcpuid from each element; but on
"getvcpuinfo" you assume the array has one slot per vcpu of the
guest, indexed by vcpu_id, and you copy every vcpu's data into it.

I think it would make more sense for the interface to work the same
way in both directions -- so either always pass an array covering all
vcpus of the guest in and out, or have an array that can specify a
subset of vcpus in and out.
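To make the second option concrete, here is a rough userspace sketch of
what "same layout both ways" could look like: each array element carries
its own vcpuid, and both get and put walk only the entries the caller
supplied. The element type, helper names, and toy per-vcpu state below
are all hypothetical stand-ins for xen_domctl_sched_rtds_params_t and
struct rt_vcpu, not the actual patch code:

```c
#include <stdint.h>

/* Hypothetical element type: every entry names its vcpu explicitly,
 * so the same array layout works for both get and put. */
typedef struct {
    uint32_t vcpuid;
    uint64_t period;   /* in microseconds */
    uint64_t budget;   /* in microseconds */
} vcpu_params_t;

/* Toy per-vcpu scheduler state standing in for struct rt_vcpu. */
typedef struct {
    uint64_t period;
    uint64_t budget;
} toy_vcpu;

/* Symmetric "get": fill exactly the entries the caller asked about,
 * keyed by the vcpuid already present in each element. */
static int get_vcpu_params(const toy_vcpu *vcpus, unsigned int nr_vcpus,
                           vcpu_params_t *arr, unsigned int nr)
{
    for ( unsigned int i = 0; i < nr; i++ )
    {
        if ( arr[i].vcpuid >= nr_vcpus )
            return -1;                      /* -EINVAL in the real thing */
        arr[i].period = vcpus[arr[i].vcpuid].period;
        arr[i].budget = vcpus[arr[i].vcpuid].budget;
    }
    return 0;
}

/* Symmetric "put": same element type, same subset convention. */
static int put_vcpu_params(toy_vcpu *vcpus, unsigned int nr_vcpus,
                           const vcpu_params_t *arr, unsigned int nr)
{
    for ( unsigned int i = 0; i < nr; i++ )
    {
        if ( arr[i].vcpuid >= nr_vcpus ||
             arr[i].period == 0 || arr[i].budget == 0 )
            return -1;
        vcpus[arr[i].vcpuid].period = arr[i].period;
        vcpus[arr[i].vcpuid].budget = arr[i].budget;
    }
    return 0;
}
```

That way the toolstack can query or set a single vcpu without
allocating a full-size array, and the hypervisor-side validation is
identical for both directions.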
Thoughts?
-George
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel