Re: [Xen-devel] [PATCH RFC 1/2] xen: credit2: flexible configuration of runqueues
Ok, now about the code.

On Fri, 2017-03-10 at 23:56 +0530, Praveen Kumar wrote:
> The user can create one runqueue per CPU using a Xen boot parameter like
> below:
>
>   credit2_runqueue=cpu
>
> which would mean the following:
>  - pCPU 0 belongs to runqueue 0
>  - pCPU 1 belongs to runqueue 1
>  - pCPU 2 belongs to runqueue 2
> and so on.
>
> Signed-off-by: Praveen Kumar <kpraveen.lkml@xxxxxxxxx>
>
> ---
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index af457c1..2bc0013 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -301,6 +301,9 @@ integer_param("credit2_balance_over", opt_overload_balance_tolerance);
>   * want that to happen basing on topology. At the moment, it is possible
>   * to choose to arrange runqueues to be:
>   *
> + * - per-cpu: meaning that there will be one runqueue per logical cpu. This
> + *            will happen if the opt_runqueue parameter is set to 'cpu'.
> + *
>   * - per-core: meaning that there will be one runqueue per each physical
>   *             core of the host. This will happen if the opt_runqueue
>   *             parameter is set to 'core';
>
This is ok, but you also need to modify the "credit2_runqueue" section in
docs/misc/xen-command-line.markdown. In fact, in order not to end up with
outdated and incorrect documentation, we require that the docs are updated
in the same patch that introduces something new (or changes something
existing).

Thanks and Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel