Re: [Xen-devel] [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler
>>> On 07.03.16 at 17:28, <lichong659@xxxxxxxxx> wrote:
> On Mon, Mar 7, 2016 at 6:59 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>> On 06.03.16 at 18:55, <lichong659@xxxxxxxxx> wrote:
>>> switch ( op->cmd )
>>> {
>>> - case XEN_DOMCTL_SCHEDOP_getinfo:
>>> - if ( d->max_vcpus > 0 )
>>> - {
>>> - spin_lock_irqsave(&prv->lock, flags);
>>> - svc = rt_vcpu(d->vcpu[0]);
>>> - op->u.rtds.period = svc->period / MICROSECS(1);
>>> - op->u.rtds.budget = svc->budget / MICROSECS(1);
>>> - spin_unlock_irqrestore(&prv->lock, flags);
>>> - }
>>> - else
>>> - {
>>> - /* If we don't have vcpus yet, let's just return the defaults. */
>>> - op->u.rtds.period = RTDS_DEFAULT_PERIOD;
>>> - op->u.rtds.budget = RTDS_DEFAULT_BUDGET;
>>> - }
>>> + case XEN_DOMCTL_SCHEDOP_getinfo: /* return the default parameters */
>>> + spin_lock_irqsave(&prv->lock, flags);
>>> + op->u.rtds.period = RTDS_DEFAULT_PERIOD / MICROSECS(1);
>>> + op->u.rtds.budget = RTDS_DEFAULT_BUDGET / MICROSECS(1);
>>> + spin_unlock_irqrestore(&prv->lock, flags);
>>> break;
>>
>> This alters the values returned when d->max_vcpus == 0 - while
>> this looks to be intentional, I think calling out such a bug fix in the
>> description is a must.
>
> Based on previous discussion, XEN_DOMCTL_SCHEDOP_getinfo only returns
> the default parameters, no matter whether any vCPU has been created
> yet or not. But I can absolutely explain this in the description.
That wasn't the point of the comment. Instead the change (fix) to
divide by MICROSECS(1) is what otherwise would go in silently.
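For reference, a minimal sketch of the units question; the macro and the
default values below are assumed from sched_rt.c / time.h of this era, not
taken from the quoted hunks:

    /* Assumed definitions (xen/include/xen/time.h and sched_rt.c): */
    #define MICROSECS(_us)       ((s_time_t)((_us) * 1000ULL)) /* us -> internal time units */
    #define RTDS_DEFAULT_PERIOD  (MICROSECS(10000))            /* assumed: 10 ms */
    #define RTDS_DEFAULT_BUDGET  (MICROSECS(4000))             /* assumed:  4 ms */

    /* Old d->max_vcpus == 0 path returned the raw internal value:          */
    /*     op->u.rtds.period = RTDS_DEFAULT_PERIOD;               -> 10000000 */
    /* The new code divides, so the caller sees microseconds as documented:  */
    /*     op->u.rtds.period = RTDS_DEFAULT_PERIOD / MICROSECS(1); -> 10000   */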
>>> @@ -1163,6 +1173,96 @@ rt_dom_cntl(
>>> }
>>> spin_unlock_irqrestore(&prv->lock, flags);
>>> break;
>>> + case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
>>> + if ( guest_handle_is_null(op->u.v.vcpus) )
>>> + {
>>> + rc = -EINVAL;
>>
>> Perhaps rather -EFAULT? But then again - what is this check good for
>> (considering that it doesn't cover other obviously bad handle values)?
>
> Dario suggested this in the last post, because vcpus is a handle and
> needs to be validated.
Well, as said - the handle being non-null doesn't make it a valid
handle. Any validation can be left to copy_{to,from}_guest*()
unless you mean to give a null handle some special meaning.
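A minimal sketch of the pattern being suggested, with names taken from the
quoted patch and the surrounding context assumed; the copy helper itself
fails on a bad (including null) handle, so no separate check is needed
unless a null handle is to carry a special meaning:

    if ( copy_from_guest_offset(&local_sched, op->u.v.vcpus, index, 1) )
    {
        rc = -EFAULT;   /* covers a null handle as well as any other bad address */
        break;
    }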
>>> + {
>>> + rc = -EINVAL;
>>> + break;
>>> + }
>>> +
>>> + spin_lock_irqsave(&prv->lock, flags);
>>> + svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
>>> + local_sched.s.rtds.budget = svc->budget / MICROSECS(1);
>>> + local_sched.s.rtds.period = svc->period / MICROSECS(1);
>>> + spin_unlock_irqrestore(&prv->lock, flags);
>>> +
>>> + if ( __copy_to_guest_offset(op->u.v.vcpus, index,
>>> + &local_sched, 1) )
>>> + {
>>> + rc = -EFAULT;
>>> + break;
>>> + }
>>> + if ( (++index > 0x3f) && hypercall_preempt_check() )
>>> + break;
>>
>> So how is the caller going to be able to reliably read all vCPU-s'
>> information for a guest with more than 64 vCPU-s?
>
> In libxc, we re-issue the hypercall if the current one is preempted.
And with the current code - how does libxc know? (And anyway,
this should only be a last resort, if the hypervisor can't by itself
arrange for a continuation. If done this way, having a code
comment referring to the required caller behavior would seem to
be an absolute must.)
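As an illustration of the caller behaviour being referred to, a hedged
libxc-side sketch; xc_rtds_vcpu_get_chunk() is a hypothetical wrapper around
the domctl, and the resume convention (the hypervisor reporting how far it
got) follows the discussion above rather than any final interface:

    uint32_t done = 0;

    while ( done < nr_vcpus )
    {
        uint32_t got = nr_vcpus - done;

        /* Hypothetical helper issuing XEN_DOMCTL_SCHEDOP_getvcpuinfo for
         * vcpus[done .. nr_vcpus-1]; on return 'got' holds how many
         * entries were actually filled before a preemption point. */
        rc = xc_rtds_vcpu_get_chunk(xch, domid, &vcpus[done], &got);
        if ( rc )
            return rc;

        done += got;   /* re-issue from where the hypervisor stopped */
    }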
>>> + }
>>> +
>>> + if ( !rc && (op->u.v.nr_vcpus != index) )
>>> + op->u.v.nr_vcpus = index;
>>
>> I don't think the right side of the && is really necessary / useful.
>
> The right side is to check whether the vcpus array is fully processed.
> When it is true and no error occurs (rc == 0), we
> update op->u.v.nr_vcpus, which is returned to libxc and helps the xc
> function figure out how many unprocessed vcpus should
> be taken care of in the next hypercall.
Just consider what the contents of op->u.v.nr_vcpus is after
this piece of code was executed, once with the full conditional,
and another time with the right side of the && omitted.
Jan
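Spelling out the comparison being hinted at (editorial sketch, assuming
rc == 0 on this path):

    if ( !rc && (op->u.v.nr_vcpus != index) )   /* form as posted           */
        op->u.v.nr_vcpus = index;

    if ( !rc )                                  /* right side of && dropped */
        op->u.v.nr_vcpus = index;

    /* When nr_vcpus already equals index the assignment is a no-op, so
     * both forms leave op->u.v.nr_vcpus == index whenever rc == 0; the
     * extra comparison changes nothing observable. */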