
Re: [Xen-devel] [PATCH v6 for Xen 4.7 1/4] xen: enable per-VCPU parameter settings for RTDS scheduler



On Sun, Mar 06, 2016 at 11:55:55AM -0600, Chong Li wrote:
[...]
> @@ -1163,6 +1173,96 @@ rt_dom_cntl(
>          }
>          spin_unlock_irqrestore(&prv->lock, flags);
>          break;
> +    case XEN_DOMCTL_SCHEDOP_getvcpuinfo:
> +        if ( guest_handle_is_null(op->u.v.vcpus) )
> +        {
> +            rc = -EINVAL;
> +            break;
> +        }
> +        while ( index < op->u.v.nr_vcpus )
> +        {
> +            if ( copy_from_guest_offset(&local_sched,
> +                          op->u.v.vcpus, index, 1) )

Indentation. Continuation lines should line up with the first argument, i.e.:

            if ( copy_from_guest_offset(&local_sched,
                                        op->u.v.vcpus, index, 1) )

> +            {
> +                rc = -EFAULT;
> +                break;
> +            }
> +            if ( local_sched.vcpuid >= d->max_vcpus ||
> +                          d->vcpu[local_sched.vcpuid] == NULL )

Ditto.

> +            {
> +                rc = -EINVAL;
> +                break;
> +            }
> +
> +            spin_lock_irqsave(&prv->lock, flags);
> +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
> +            local_sched.s.rtds.budget = svc->budget / MICROSECS(1);
> +            local_sched.s.rtds.period = svc->period / MICROSECS(1);
> +            spin_unlock_irqrestore(&prv->lock, flags);
> +
> +            if ( __copy_to_guest_offset(op->u.v.vcpus, index,
> +                    &local_sched, 1) )

Here as well.

You might want to check your editor settings. I won't point out the other
places that need fixing.

> +            {
> +                rc = -EFAULT;
> +                break;
> +            }
> +            if ( (++index > 0x3f) && hypercall_preempt_check() )
> +                break;
> +        }
> +
> +        if ( !rc && (op->u.v.nr_vcpus != index) )
> +            op->u.v.nr_vcpus = index;
> +        break;
> +    case XEN_DOMCTL_SCHEDOP_putvcpuinfo:
> +        if ( guest_handle_is_null(op->u.v.vcpus) )
> +        {
> +            rc = -EINVAL;
> +            break;
> +        }
> +        while ( index < op->u.v.nr_vcpus )
> +        {
> +            if ( copy_from_guest_offset(&local_sched,
> +                          op->u.v.vcpus, index, 1) )
> +            {
> +                rc = -EFAULT;
> +                break;
> +            }
> +            if ( local_sched.vcpuid >= d->max_vcpus ||
> +                          d->vcpu[local_sched.vcpuid] == NULL )
> +            {
> +                rc = -EINVAL;
> +                break;
> +            }
> +
> +            period = MICROSECS(local_sched.s.rtds.period);
> +            budget = MICROSECS(local_sched.s.rtds.budget);
> +            if ( period > RTDS_MAX_PERIOD || budget < RTDS_MIN_BUDGET ||
> +                          budget > period || period < RTDS_MIN_PERIOD )
> +            {
> +                rc = -EINVAL;
> +                break;
> +            }
> +
> +            /*
> +             * We accept period/budget less than 100 us, but will warn users
> +             * about the large scheduling overhead due to it
> +             */
> +            if ( period < MICROSECS(100) || budget < MICROSECS(100) )
> +                printk("Warning: period or budget set to less than 100us.\n"
> +                       "This may result in high scheduling overhead.\n");
> +

I'm not the maintainer, but I think having a printk here is a bad idea,
because the toolstack can then use it to DoS the hypervisor.


> +            spin_lock_irqsave(&prv->lock, flags);
> +            svc = rt_vcpu(d->vcpu[local_sched.vcpuid]);
> +            svc->period = period;
> +            svc->budget = budget;
> +            spin_unlock_irqrestore(&prv->lock, flags);
> +

And this locking pattern seems sub-optimal. You might be able to move
the lock and unlock outside the while loop?

Note that I merely looked at this snippet. I don't know the other
constraints you have, so take what I said with a grain of salt.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

