Re: [Xen-devel] [PATCH v2 for Xen 4.6 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler
On Fri, 2015-05-29 at 14:14 +0100, Jan Beulich wrote:
> >>> On 29.05.15 at 15:01, <dario.faggioli@xxxxxxxxxx> wrote:
> > On Wed, 2015-05-27 at 12:41 +0100, Jan Beulich wrote:
> >> >>> On 26.05.15 at 19:18, <lichong659@xxxxxxxxx> wrote:
> >> > On Tue, May 26, 2015 at 4:08 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >> >>>>> On 26.05.15 at 02:05, <lichong659@xxxxxxxxx> wrote:
> >> >>> +
> >> >>> +    switch ( op->cmd )
> >> >>> +    {
> >> >>> +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
> >> >>> +        spin_lock_irqsave(&prv->lock, flags);
> >> >>> +        list_for_each( iter, &sdom->vcpu )
> >> >>> +        {
> >> >>> +            svc = list_entry(iter, struct rt_vcpu, sdom_elem);
> >> >>> +            vcpuid = svc->vcpu->vcpu_id;
> >> >>> +
> >> >>> +            local_sched.budget = svc->budget / MICROSECS(1);
> >> >>> +            local_sched.period = svc->period / MICROSECS(1);
> >> >>> +            if ( copy_to_guest_offset(op->u.rtds.vcpus, vcpuid,
> >> >>
> >> >> What makes you think that the caller has provided enough slots?
> >> >
> >> > This code works in a similar way as the one for XEN_SYSCTL_numainfo.
> >> > So if there are not enough slots in guest space, an error code is
> >> > returned.
> >>
> >> I don't think I saw any place you returned an error here. The
> >> only error being returned is -EFAULT when the copy-out fails.
> >>
> > Look again at XEN_SYSCTL_numainfo, in particular, at this part:
> >
> >     if ( ni->num_nodes < num_nodes )
> >     {
> >         ret = -ENOBUFS;
> >         i = num_nodes;
> >     }
> >     else
> >         i = 0;
> >
> > Which, as you'll appreciate if you check where ni->num_nodes and
> > num_nodes come from, is where the boundary checking we're asking for
> > happens.
> >
> > Note how the 'i' variable is used in the rest of the block, and you'll
> > see what we mean, and can do something similar.
>
> I think this was targeted at Chong rather than me (while I was
> listed as To, and Chong only on Cc)?
>
It was indeed targeted at him.
:-) I replied to your email to re-use and quote the points you made, but
did not think about changing what my MUA does by default when hitting
'Reply'... I never do that, actually, but I now see how it could be a
good idea to do so... I'll try to remember and fit that into my
workflow.

> >> >>> +/* Adjust scheduling parameter for the vcpus of a given domain. */
> >> >>> +long sched_adjust_vcpu(
> >> >>> +    struct domain *d,
> >> >>> +    struct xen_domctl_scheduler_vcpu_op *op)
> >> >>> +{
> >> >>> +    long ret;
> >> >>
> >> >> I see no reason for this and the function return type to be "long".
> >> >
> >> > The reason is stated in the beginning.
> >>
> >> In the beginning of what? I don't see any such, and it would also be
> >> odd for there to be an explanation of why a particular type was chosen.
> >>
> > As Chong is saying elsewhere, he's mostly following suit. Should we
> > consider a pre-patch making both sched_adjust() and
> > sched_adjust_global() return int?
>
> Perhaps. But the main point here is that people copying existing code
> should sanity check what they copy (so as not to spread oddities or
> even bugs).
>
Agreed, but in this case, he'd have ended up with:

    long sched_adjust(...)
    int  sched_adjust_vcpu(...)
    long sched_adjust_global(...)

So I think I see why Chong just went with 'long'; that was my point.

Then of course, one could have spotted that it's the two existing
functions that are wrong/suboptimal, but that's more our responsibility
than Chong's fault, IMO (which does not mean that we should not point
the thing out and fix/ask to fix).

In any case, as far as this series is concerned, there should be no need
for a new function like this any longer. For "fixing" _adjust and
_adjust_global, I'll take care of it.
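For illustration, the XEN_SYSCTL_numainfo-style boundary check being
asked for above can be sketched as follows. This is not actual Xen
code: guest handles and copy_to_guest_offset() are replaced with plain
arrays, and the names vcpu_params/get_vcpu_params are hypothetical, so
only the ENOBUFS check pattern carries over.

```c
#include <errno.h>
#include <stdint.h>

/* Hypothetical per-vCPU parameter record (values in microseconds). */
struct vcpu_params {
    uint64_t period;
    uint64_t budget;
};

/*
 * 'nr_slots' is how many slots the caller provided; 'nr_vcpus' is how
 * many the domain actually has. If the caller's buffer is too small,
 * copy nothing and return -ENOBUFS so it can retry with enough slots;
 * otherwise copy every entry and return 0.
 */
static int get_vcpu_params(struct vcpu_params *dst, unsigned int nr_slots,
                           const struct vcpu_params *src,
                           unsigned int nr_vcpus)
{
    unsigned int i;

    if ( nr_slots < nr_vcpus )
        return -ENOBUFS;   /* caller did not provide enough slots */

    for ( i = 0; i < nr_vcpus; i++ )
        dst[i] = src[i];   /* stands in for copy_to_guest_offset() */

    return 0;
}
```

The point of the pattern is that the capacity check happens before any
copy-out, so a too-small buffer is reported as -ENOBUFS rather than
silently truncating or faulting with -EFAULT.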
Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

Attachment: signature.asc

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel