Re: [Xen-devel] [PATCH v2 for Xen 4.6 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
>>> On 29.05.15 at 15:01, <dario.faggioli@xxxxxxxxxx> wrote:
> On Wed, 2015-05-27 at 12:41 +0100, Jan Beulich wrote:
>> >>> On 26.05.15 at 19:18, <lichong659@xxxxxxxxx> wrote:
>> > On Tue, May 26, 2015 at 4:08 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>> >>>>> On 26.05.15 at 02:05, <lichong659@xxxxxxxxx> wrote:
>> >>> +
>> >>> +    switch ( op->cmd )
>> >>> +    {
>> >>> +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
>> >>> +        spin_lock_irqsave(&prv->lock, flags);
>> >>> +        list_for_each( iter, &sdom->vcpu )
>> >>> +        {
>> >>> +            svc = list_entry(iter, struct rt_vcpu, sdom_elem);
>> >>> +            vcpuid = svc->vcpu->vcpu_id;
>> >>> +
>> >>> +            local_sched.budget = svc->budget / MICROSECS(1);
>> >>> +            local_sched.period = svc->period / MICROSECS(1);
>> >>> +            if ( copy_to_guest_offset(op->u.rtds.vcpus, vcpuid,
>> >>
>> >> What makes you think that the caller has provided enough slots?
>> >
>> > This code works in a similar way as the one for XEN_SYSCTL_numainfo.
>> > So if there are not enough slots in guest space, an error code is
>> > returned.
>>
>> I don't think I saw any place you returned an error here. The
>> only error being returned is -EFAULT when the copy-out fails.
>>
> Look again at XEN_SYSCTL_numainfo, in particular, at this part:
>
>     if ( ni->num_nodes < num_nodes )
>     {
>         ret = -ENOBUFS;
>         i = num_nodes;
>     }
>     else
>         i = 0;
>
> Which, as you'll appreciate if you check where ni->num_nodes and
> num_nodes come from, is where the boundary checking we're asking for
> happens.
>
> Note how the 'i' variable is used in the rest of the block, and you'll
> see what we mean, and can do something similar.
I think this was targeted at Chong rather than me (while I was
listed as To, and Chong only on Cc)?
>> >>> +        }
>> >>> +        hypercall_preempt_check();
>> >>
>> >> ???
>> >
>> > We've talked about this in patch v1
>> >
> (http://lists.xenproject.org/archives/html/xen-devel/2015-05/msg01152.html).
>> > When a domain has too many VCPUs, we need to make sure the spin lock
>> > is not held for too long.
>>
>> Right, but the call above does nothing like that. Please look at other
>> uses of it to understand what needs to be done here.
>>
> This being said, would it make sense to put down a threshold, below
> which we don't need to check for preemption (nor to ask for the
> hypercall to be reissued)?
>
> Something like, if we're dealing with a request for the parameters of N
> (== 16 ? 32 ?) vcpus, it's fine to do them all at once. Above that
> limit, we just do bunches of N vcpus, and then do a preempt check. What
> do you think Jan? And what do you think a suitable limit would be?
I have no problem with that, or with checking for preemption only
every so many iterations.
> In fact, apart from the fact that it's used incorrectly, I don't think
> that checking for preemption after _each_ step of the loop makes much
> sense...
Whether checking at the beginning (but not for the first iteration)
or at the end (but not for the last one) doesn't really matter.
>> >>> --- a/xen/common/schedule.c
>> >>> +++ b/xen/common/schedule.c
>> >>> @@ -1104,6 +1104,30 @@ long sched_adjust(struct domain *d, struct xen_domctl_scheduler_op *op)
>> >>>      return ret;
>> >>>  }
>> >>>
>> >>> +/* Adjust scheduling parameter for the vcpus of a given domain. */
>> >>> +long sched_adjust_vcpu(
>> >>> +    struct domain *d,
>> >>> +    struct xen_domctl_scheduler_vcpu_op *op)
>> >>> +{
>> >>> +    long ret;
>> >>
>> >> I see no reason for this and the function return type to be "long".
>> >
>> > The reason is stated in the beginning.
>>
>> In the beginning of what? I don't see any such, and it would also be
>> odd for there to be an explanation of why a particular type was chosen.
>>
> As Chong is saying elsewhere, he's mostly following suit. Should we
> consider a pre-patch making both sched_adjust() and
> sched_adjust_global() return int?
Perhaps. But the main point here is that people copying existing code
should sanity check what they copy (so as not to spread oddities or
even bugs).
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel