Re: [PATCH v2 2/3] xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()
On 15.08.22 14:47, Juergen Gross wrote:
> On 15.08.22 14:34, Jan Beulich wrote:
>> On 15.08.2022 14:16, Juergen Gross wrote:
>>> On 15.08.22 14:00, Jan Beulich wrote:
>>>> On 15.08.2022 13:55, Juergen Gross wrote:
>>>>> On 15.08.22 13:52, Jan Beulich wrote:
>>>>>> On 15.08.2022 13:04, Juergen Gross wrote:
>>>>>>> --- a/xen/common/sched/core.c
>>>>>>> +++ b/xen/common/sched/core.c
>>>>>>> @@ -3237,6 +3237,65 @@ out:
>>>>>>>      return ret;
>>>>>>>  }
>>>>>>>
>>>>>>> +static struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
>>>>>>> +{
>>>>>>> +    struct cpu_rm_data *data;
>>>>>>> +    const struct sched_resource *sr;
>>>>>>> +    unsigned int idx;
>>>>>>> +
>>>>>>> +    rcu_read_lock(&sched_res_rculock);
>>>>>>> +
>>>>>>> +    sr = get_sched_res(cpu);
>>>>>>> +    data = xmalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);
>>>>>>> +    if ( !data )
>>>>>>> +        goto out;
>>>>>>> +
>>>>>>> +    data->old_ops = sr->scheduler;
>>>>>>> +    data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
>>>>>>> +    data->ppriv_old = sr->sched_priv;
>>>>>>
>>>>>> Repeating a v1 comment:
>>>>>>
>>>>>> "At least from an abstract perspective, doesn't reading fields from
>>>>>>  sr require the RCU lock to be held continuously (i.e. not dropping
>>>>>>  it at the end of this function and re-acquiring it in the caller)?"
>>>>>>
>>>>>> Initially I thought you did respond to this in some way, but when
>>>>>> looking for a matching reply I couldn't find one.
>>>>>
>>>>> Oh, sorry.
>>>>>
>>>>> The RCU lock is protecting only the sr, not any data the pointers in
>>>>> the sr are referencing. So it is fine to drop the RCU lock after
>>>>> reading some of the fields from the sr and storing them in the
>>>>> cpu_rm_data memory.
>>>>
>>>> Hmm, interesting. "Protecting only the sr" then means what exactly?
>>>> Just its allocation, but not its contents?
>>>
>>> Correct.
>>>
>>>> Plus it's not just the pointers - sr->granularity also would better
>>>> not increase in the meantime ... Quite likely there's a reason why
>>>> that also cannot happen, yet even then I think a brief code comment
>>>> might be helpful here.
>>>
>>> Okay, will add something like:
>>>
>>> "Between schedule_cpu_rm_alloc() and the real cpu removal action the
>>>  relevant contents of struct sched_resource can't change, as the cpu
>>>  in question is locked against any other movement to or from cpupools,
>>>  and the data copied by schedule_cpu_rm_alloc() is cpupool specific."
>>>
>>> Is that okay?
>>
>> Well, I guess I need to leave this to the scheduler maintainers then.
>> I have to admit that it's not clear to me why all of sr->granularity,
>> sr->scheduler, or sr->sched_priv would be "cpupool specific". I may be
>
> sr->scheduler is the pointer to the scheduler ops array which is set
> when a cpu is added to a cpupool (the scheduler is a cpupool property).
> The same applies to sr->granularity: this value is per-cpupool, too.
>
> sr->sched_priv is only changed when a cpu is added to or removed from a
> cpupool, as this is the per-cpu data of the scheduler, which needs to
> stay in place while scheduling is happening on the cpu; thus it may be
> replaced only when the cpu is removed from or added to a cpupool.
>
>> able to agree for sr->granularity, but the other two I thought were
>> scheduler data, not cpupool data. For sr->granularity in turn (but
>> perhaps also the other two fields) it's not obvious to me that pool
>> properties can't change in a racing manner.
>
> They can't. Otherwise the scheduler would explode.

BTW, I'll rework the comment a little bit to:

"Between schedule_cpu_rm_alloc() and the real cpu removal action the
 relevant contents of struct sched_resource can't change, as the cpu
 in question is locked against any other movement to or from cpupools,
 and the data copied by schedule_cpu_rm_alloc() is modified only in
 case the cpu in question is being moved from or to a cpupool."

Will resend the series tomorrow morning if nobody objects.


Juergen
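For readers following along, here is a minimal sketch of how the allocation
side discussed above could look with the reworked comment folded in. It
assumes only what is quoted in the thread (the helpers and fields from the
patch fragment: get_sched_res(), xmalloc_flex_struct(), the sched_res_rculock
RCU read lock, struct cpu_rm_data) and elides the setup of the replacement
per-cpu resources; it is not necessarily the code as committed.

/*
 * Sketch only: mirrors the quoted fragment of schedule_cpu_rm_alloc()
 * with the comment wording proposed in this thread added.
 */
static struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
{
    struct cpu_rm_data *data;
    const struct sched_resource *sr;

    rcu_read_lock(&sched_res_rculock);

    sr = get_sched_res(cpu);
    data = xmalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);
    if ( !data )
        goto out;

    /*
     * Between schedule_cpu_rm_alloc() and the real cpu removal action the
     * relevant contents of struct sched_resource can't change, as the cpu
     * in question is locked against any other movement to or from cpupools,
     * and the data copied by schedule_cpu_rm_alloc() is modified only in
     * case the cpu in question is being moved from or to a cpupool.
     */
    data->old_ops = sr->scheduler;
    data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
    data->ppriv_old = sr->sched_priv;

    /* Allocation of the replacement per-cpu resources is elided here. */

 out:
    /*
     * Only the sched_resource itself is RCU protected, not the data it
     * points to, so the snapshot taken above stays valid after dropping
     * the lock here and re-acquiring it later in the caller.
     */
    rcu_read_unlock(&sched_res_rculock);

    return data;
}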