Re: [Xen-devel] [PATCH v11 05/23] x86: refactor psr: L3 CAT: implement Domain init/free and schedule flows.
On 17-05-30 07:26:55, Jan Beulich wrote:
> >>> On 03.05.17 at 10:44, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
> > +static unsigned int get_max_cos_max(const struct psr_socket_info *info)
> > +{
> > +    unsigned int cos_max = 0, i;
> > +
> > +    for ( i = 0; i < PSR_SOCKET_FEAT_NUM; i++ )
> > +    {
> > +        const struct feat_node *feat = info->features[i];
> > +        if ( !feat )
> 
> Blank line between declaration(s) and statement(s) please.
> 
> > +            continue;
> > +
> > +        cos_max = max(feat->cos_max, cos_max);
> 
> And you're likely better off inverting the condition and dropping
> the "continue".
> 

Will modify the above points.

> > +static void psr_assoc_cos(uint64_t *reg, unsigned int cos,
> > +                          uint64_t cos_mask)
> > +{
> > +    *reg = (*reg & ~cos_mask) |
> > +           (((uint64_t)cos << PSR_ASSOC_REG_SHIFT) & cos_mask);
> > +}
> 
> Indirection is normally only needed if a function needs to return
> more than one value. Is there a reason you can't have this one
> return the computed result?
> 

This is inherited from the old code. I think it was done to avoid an
indentation issue in the caller below. Anyway, I will modify it according
to your comment.

> > @@ -376,6 +412,14 @@ void psr_ctxt_switch_to(struct domain *d)
> >      if ( psr_cmt_enabled() )
> >          psr_assoc_rmid(&reg, d->arch.psr_rmid);
> >  
> > +    /* IDLE domain's 'psr_cos_ids' is NULL so we set default value for it. */
> > +    if ( psra->cos_mask )
> > +        psr_assoc_cos(&reg,
> > +                      d->arch.psr_cos_ids ?
> > +                      d->arch.psr_cos_ids[cpu_to_socket(smp_processor_id())] :
> > +                      0,
> 
> While this doesn't really conflict with our coding style, it makes
> reading harder than necessary. Please use either
> 
>     if ( psra->cos_mask )
>         psr_assoc_cos(&reg,
>                       (d->arch.psr_cos_ids ?
>                        d->arch.psr_cos_ids[cpu_to_socket(smp_processor_id())] :
>                        0),
> 
> or
> 
>     if ( psra->cos_mask )
>         psr_assoc_cos(&reg,
>                       d->arch.psr_cos_ids
>                       ?
>                         d->arch.psr_cos_ids[cpu_to_socket(smp_processor_id())]
>                       : 0,
> 
> to allow immediate recognition that it is a single argument's expression
> that spans three lines.
> 

Thank you!

> As to the idle domain aspect - is there a strict need to write a new
> value for the idle domain at all? I.e. can't you just skip the write in
> that case, knowing you'll write a proper value anyway once the
> next non-idle vCPU gets scheduled here? Which then raises the
> question on d->arch.psr_cos_ids being NULL - is that strictly only
> possible for the idle domain, or are there also other cases? This
> determines how the if() condition should be re-written ...
> 

I agree with you that the IDLE domain can be skipped. But considering
that some domains' 'psr_cos_ids' may fail to be allocated, we still have
to set a default value for those domains. So, I think we should keep this
but skip the IDLE domain in the 'psr_ctxt_switch_to' caller.

> > @@ -401,14 +445,37 @@ int psr_set_l3_cbm(struct domain *d, unsigned int socket,
> >      return 0;
> >  }
> >  
> > -int psr_domain_init(struct domain *d)
> > +/* Called with domain lock held, no extra lock needed for 'psr_cos_ids' */
> > +static void psr_free_cos(struct domain *d)
> > +{
> > +    xfree(d->arch.psr_cos_ids);
> > +    d->arch.psr_cos_ids = NULL;
> > +}
> > +
> > +static int psr_alloc_cos(struct domain *d)
> >  {
> > +    d->arch.psr_cos_ids = xzalloc_array(unsigned int, nr_sockets);
> > +    if ( !d->arch.psr_cos_ids )
> > +        return -ENOMEM;
> > +
> >      return 0;
> >  }
> >  
> > +int psr_domain_init(struct domain *d)
> > +{
> > +    /* Init to success value */
> > +    int ret = 0;
> > +
> > +    if ( psr_alloc_feat_enabled() )
> > +        ret = psr_alloc_cos(d);
> > +
> > +    return ret;
> > +}
> 
> Along the lines of the above - do we really need to fail domain
> creation if we can't alloc psr_cos_ids? Granted there'll be other
> allocation failures, but from an abstract pov this is an optional
> feature, and hence the domain could do fine without.
> 

Ok, will modify the related code.
> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel