
Re: [Xen-devel] [PATCH v1 1/4] xen: enabling XL to set per-VCPU parameters of a domain for RTDS scheduler



Hey,

Two things first:
 - can you avoid HTML emails?
 - can you trim the quotes, i.e., leave in your reply only the bits of
   the conversation that are useful for moving it forward, and cut the
   rest? (I'm doing this in this mail too.)

On Thu, 2015-05-14 at 17:27 -0500, Chong Li wrote:


> On Mon, May 11, 2015 at 1:57 AM, Jan Beulich <JBeulich@xxxxxxxx>
> wrote:
>         >>> On 11.05.15 at 00:04, <lichong659@xxxxxxxxx> wrote:
>         > On Fri, May 8, 2015 at 2:49 AM, Jan Beulich
>         <JBeulich@xxxxxxxx> wrote:
>         
>         >> Considering a maximum size guest, these two nested loops
>         could
>         >> require a couple of million iterations. That's too much
>         without any
>         >> preemption checks in the middle.
>         >>
>         >
>         > The section protected by the lock is only the
>         "list_for_each" loop, whose
>         > running time is limited by the number of vcpus of a domain
>         (32 at most).

>         Since when is 32 the limit on the number of vCPU-s in a
>         domain?

> Based on Dario's suggestion, I'll use vcpu_id to locate the vcpu to
> set, which costs much less time.
>
It will indeed be quicker. However, this is probably not enough to
invalidate Jan's point: a guest can have enough vCPUs to make it
undesirable for the hypervisor to go and do something on each of them
without interruption/preemption.

In fact, even with only one for loop, the number of iterations may still
be high and, more importantly, is not under our (read: Xen's) complete
control, as it depends on how many vCPUs the user is operating on.

For this reason, I think we also want to make this preemptable, at least
when the number of vCPUs passed in is above some threshold that we can
define. Or perhaps we should process a batch of vCPUs at a time, tell
the toolstack how far we got, and have it reissue the hypercall.

Here are some (rather random) references:
 * http://xenbits.xen.org/xsa/advisory-77.html
 * http://xenbits.xen.org/xsa/advisory-125.html
 * http://lists.xenproject.org/archives/html/xen-devel/2014-12/msg00735.html
 * how the recently introduced XEN_SYSCTL_pcitopoinfo has been implemented

Regards,
Dario

>  
>         
>         > If this does cause problems, I think adding a
>         "hypercall_preempt_check()"
>         > at the outside "for" loop may help. Is that right?
>         
>         Yes.
>         
>         >> > --- a/xen/common/schedule.c
>         >> > +++ b/xen/common/schedule.c
>         >> > @@ -1093,7 +1093,9 @@ long sched_adjust(struct domain *d,
>         struct
>         >> xen_domctl_scheduler_op *op)
>         >> >
>         >> >      if ( (op->sched_id != DOM2OP(d)->sched_id) ||
>         >> >           ((op->cmd != XEN_DOMCTL_SCHEDOP_putinfo) &&
>         >> > -          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo)) )
>         >> > +          (op->cmd != XEN_DOMCTL_SCHEDOP_getinfo) &&
>         >> > +          (op->cmd != XEN_DOMCTL_SCHEDOP_putvcpuinfo) &&
>         >> > +          (op->cmd != XEN_DOMCTL_SCHEDOP_getvcpuinfo)) )
>         >>
>         >> Imo this should become a switch now.
>         >>
>         >
>         > Do you mean "switch ( op->cmd )" ? I'm afraid that would
>         make it look more
>         > complicated.
>         
>         This may be a matter of taste to a certain degree, but I
>         personally
>         don't think a series of four almost identical comparisons
>         reads any
>         better than its switch() replacement. But it being a style
>         issue, the
>         ultimate decision is with George as the maintainer anyway.
>         
>         Jan
> 
> 
> 
> 
> -- 
> Chong Li
> Department of Computer Science and Engineering
> Washington University in St. Louis
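For the record, a sketch of the switch() form discussed above (using
stand-in enum values, not the actual XEN_DOMCTL_SCHEDOP_* constants)
could read:

```c
/* Illustrative stand-ins for the XEN_DOMCTL_SCHEDOP_* commands quoted
 * in the patch, to show the shape of the switch() replacing the
 * chained comparisons. */
enum sched_op {
    SCHEDOP_putinfo,
    SCHEDOP_getinfo,
    SCHEDOP_putvcpuinfo,
    SCHEDOP_getvcpuinfo,
    SCHEDOP_other,
};

/* Return 1 for commands sched_adjust() should accept, 0 otherwise. */
static int cmd_is_valid(enum sched_op cmd)
{
    switch ( cmd )
    {
    case SCHEDOP_putinfo:
    case SCHEDOP_getinfo:
    case SCHEDOP_putvcpuinfo:
    case SCHEDOP_getvcpuinfo:
        return 1;
    default:
        return 0;
    }
}
```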


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
