
Re: [Xen-devel] [PATCH v5 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler



On Tue, Feb 9, 2016 at 12:17 PM, Dario Faggioli
<dario.faggioli@xxxxxxxxxx> wrote:
> On Thu, 2016-02-04 at 16:50 -0600, Chong Li wrote:
>> Add XEN_DOMCTL_SCHEDOP_getvcpuinfo and _putvcpuinfo hypercalls
>> to independently get and set the scheduling parameters of each
>> vCPU of a domain
>>
>> Signed-off-by: Chong Li <chong.li@xxxxxxxxx>
>> Signed-off-by: Meng Xu <mengxu@xxxxxxxxxxxxx>
>> Signed-off-by: Sisu Xi <xisisu@xxxxxxxxx>
>>
>> ---
>> Changes on PATCH v4:
>> 1) Add uint32_t vcpu_index to struct xen_domctl_scheduler_op.
>> When processing XEN_DOMCTL_SCHEDOP_get/putvcpuinfo, we call
>> hypercall_preempt_check() to see whether the current hypercall has
>> run for too long. If we decide to preempt the current hypercall, we record
>> the index of the most recently finished vcpu into the vcpu_index of
>> struct xen_domctl_scheduler_op. So when we resume the hypercall after
>> preemption, we start processing from the position specified by
>> vcpu_index, and do not need to repeat the work that was already done
>> in the hypercall before the preemption.
>> (This design is based on the do_grant_table_op() in grant_table.c)
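
(For illustration, the resume pattern described above looks roughly
like this; a minimal sketch modelled on do_grant_table_op(), not the
actual patch code. process_one_vcpu() is a hypothetical per-vCPU work
function, and the sketch assumes the caller turns -ERESTART into a
hypercall continuation, as do_domctl does.)

    static long process_vcpus(struct domain *d,
                              struct xen_domctl_scheduler_op *op)
    {
        unsigned int index;

        for ( index = op->u.v.vcpu_index; index < op->u.v.nr_vcpus; index++ )
        {
            long rc = process_one_vcpu(d, op, index); /* hypothetical helper */

            if ( rc )
                return rc;

            if ( hypercall_preempt_check() )
            {
                /*
                 * Record how far we got, so that when the hypercall is
                 * restarted it resumes from here instead of redoing the
                 * vCPUs that have already been handled.
                 */
                op->u.v.vcpu_index = index + 1;
                return -ERESTART; /* caller creates the continuation */
            }
        }

        return 0;
    }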
>>

>

>> diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
>> index 3f1d047..34ae48d 100644
>> --- a/xen/common/sched_rt.c
>> +++ b/xen/common/sched_rt.c

>
>> +        for ( index = op->u.v.vcpu_index; index < op->u.v.nr_vcpus; index++ )
>> +        {
>> +            spin_lock_irqsave(&prv->lock, flags);
>> +            if ( copy_from_guest_offset(&local_sched,
>> +                          op->u.v.vcpus, index, 1) )
>> +            {
>> +                rc = -EFAULT;
>> +                spin_unlock_irqrestore(&prv->lock, flags);
>> +                break;
>> +            }
>> +            if ( local_sched.vcpuid >= d->max_vcpus ||
>> +                          d->vcpu[local_sched.vcpuid] == NULL )
>> +            {
>> +                rc = -EINVAL;
>> +                spin_unlock_irqrestore(&prv->lock, flags);
>> +                break;
>> +            }
>> +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
>> +            period = MICROSECS(local_sched.s.rtds.period);
>> +            budget = MICROSECS(local_sched.s.rtds.budget);
>> +            if ( period > RTDS_MAX_PERIOD || budget < RTDS_MIN_BUDGET ||
>> +                 budget > period )
>>
> Isn't checking against RTDS_MIN_PERIOD missing?

Because RTDS_MIN_PERIOD == RTDS_MIN_BUDGET, checking budget <
RTDS_MIN_BUDGET and budget > period already covers the check against
RTDS_MIN_PERIOD: any pair of values that passes both checks satisfies
period >= budget >= RTDS_MIN_BUDGET == RTDS_MIN_PERIOD.
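
Spelled out as a (purely illustrative) assertion:

    /*
     * Illustrative only: any values accepted by the two checks in the
     * patch already satisfy the minimum-period bound, because
     * period >= budget >= RTDS_MIN_BUDGET == RTDS_MIN_PERIOD.
     */
    if ( !(budget < RTDS_MIN_BUDGET) && !(budget > period) )
        ASSERT(period >= RTDS_MIN_PERIOD);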


>> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
>> index c195129..f4a4032 100644
>> --- a/xen/common/schedule.c
>> +++ b/xen/common/schedule.c
>> @@ -1148,10 +1148,19 @@ long sched_adjust(struct domain *d, struct xen_domctl_scheduler_op *op)
>>      if ( ret )
>>          return ret;
>>
>> -    if ( (op->sched_id != DOM2OP(d)->sched_id) ||
>> -         ((op->cmd != XEN_DOMCTL_SCHEDOP_putinfo) &&
>> -          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo)) )
>> +    if ( op->sched_id != DOM2OP(d)->sched_id )
>>          return -EINVAL;
>> +    else
>> +        switch ( op->cmd )
>> +        {
>> +        case XEN_DOMCTL_SCHEDOP_putinfo:
>> +        case XEN_DOMCTL_SCHEDOP_getinfo:
>> +        case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
>> +        case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
>> +            break;
>> +        default:
>> +            return -EINVAL;
>> +        }
>>
> Maybe I'm misremembering (in which case, sorry), but was using a switch
> like this instead of an if suggested during one of the previous rounds?
>
Yes, Jan suggested it during the v3 review.


-- 
Chong Li
Department of Computer Science and Engineering
Washington University in St. Louis

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

