Re: [PATCH 2/3] xen/sched: carve out memory allocation and freeing from schedule_cpu_rm()
On 03.08.22 11:25, Jan Beulich wrote:
On 02.08.2022 15:27, Juergen Gross wrote:
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3190,6 +3190,66 @@ out:
return ret;
}
+static struct cpu_rm_data *schedule_cpu_rm_alloc(unsigned int cpu)
+{
+ struct cpu_rm_data *data;
+ struct sched_resource *sr;
const?
Yes.
+ int idx;
While code is supposedly only being moved, I still question this not
being "unsigned int", especially since sr->granularity is "unsigned int"
as well. (The same applies to the retained instance of the variable in
the original function.) Of course the loop in the error path then needs
writing differently.
I considered that and didn't want to change the loop. OTOH this seems
to be rather trivial, so I can do the switch.
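For illustration, a minimal stand-alone sketch of the switch being discussed
(function and variable names here are stand-ins, not the actual Xen code):
with a signed index the unwind loop can be "for ( idx--; idx >= 0; idx-- )",
but with "unsigned int idx" that condition is always true, so the loop has
to count down differently:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: undo the first "idx" successfully set-up entries
 * using an unsigned index.  "while ( idx > 0 ) { idx--; ... }" replaces
 * the signed "for ( idx--; idx >= 0; idx-- )" form. */
static unsigned int unwind(void *ptrs[], unsigned int idx)
{
    unsigned int freed = 0;

    while ( idx > 0 )
    {
        idx--;
        ptrs[idx] = NULL;   /* stand-in for sched_res_free(&data->sr[idx]->rcu) */
        freed++;
    }

    return freed;
}
```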
+ rcu_read_lock(&sched_res_rculock);
+
+ sr = get_sched_res(cpu);
+ data = xzalloc_flex_struct(struct cpu_rm_data, sr, sr->granularity - 1);
Afaict xmalloc_flex_struct() would do here, as you fill all fields.
Okay.
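As a side note, the difference between the two allocators is only the
zero-filling: xzalloc_flex_struct() clears the memory, while the xmalloc
variant does not, which is fine here because every field is assigned
afterwards. A hypothetical user-space sketch of such a flexible-array
allocation (struct and function names are illustrative, not Xen's):

```c
#include <stdlib.h>
#include <stddef.h>

/* Hypothetical sketch of a flexible-array-member allocation in the style
 * of Xen's x[mz]alloc_flex_struct(type, member, nr): size of the struct
 * header plus nr trailing elements.  No zeroing, mirroring the xmalloc
 * (rather than xzalloc) variant. */
struct flex
{
    int count;
    void *elems[];   /* flexible array member */
};

static struct flex *flex_alloc(size_t nr)
{
    struct flex *f = malloc(offsetof(struct flex, elems) +
                            nr * sizeof(f->elems[0]));

    if ( f )
        f->count = (int)nr;   /* caller fills all fields, so no memset needed */

    return f;
}
```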
+ if ( !data )
+ goto out;
+
+ data->old_ops = sr->scheduler;
+ data->vpriv_old = idle_vcpu[cpu]->sched_unit->priv;
+ data->ppriv_old = sr->sched_priv;
At least from an abstract perspective, doesn't reading fields from
sr require the RCU lock to be held continuously (i.e. not dropping
it at the end of this function and re-acquiring it in the caller)?
+ for ( idx = 0; idx < sr->granularity - 1; idx++ )
+ {
+ data->sr[idx] = sched_alloc_res();
+ if ( data->sr[idx] )
+ {
+ data->sr[idx]->sched_unit_idle = sched_alloc_unit_mem();
+ if ( !data->sr[idx]->sched_unit_idle )
+ {
+ sched_res_free(&data->sr[idx]->rcu);
+ data->sr[idx] = NULL;
+ }
+ }
+ if ( !data->sr[idx] )
+ {
+ for ( idx--; idx >= 0; idx-- )
+ sched_res_free(&data->sr[idx]->rcu);
+ xfree(data);
+ data = NULL;
XFREE()?
Oh, right. Forgot about that possibility.
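For readers unfamiliar with the macro: Xen's XFREE() frees an allocation
and NULLs the pointer in one step, so the "xfree(data); data = NULL;" pair
above collapses into a single statement. A hypothetical user-space
stand-in (the real macro lives in Xen's xmalloc header and wraps xfree()):

```c
#include <stdlib.h>

/* Hypothetical stand-in for Xen's XFREE(): free the allocation and set
 * the pointer to NULL so it cannot be used after free. */
#define XFREE(p) do { free(p); (p) = NULL; } while ( 0 )
```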
@@ -3198,53 +3258,22 @@ out:
*/
int schedule_cpu_rm(unsigned int cpu)
{
- void *ppriv_old, *vpriv_old;
- struct sched_resource *sr, **sr_new = NULL;
+ struct sched_resource *sr;
+ struct cpu_rm_data *data;
struct sched_unit *unit;
- struct scheduler *old_ops;
spinlock_t *old_lock;
unsigned long flags;
- int idx, ret = -ENOMEM;
+ int idx = 0;
unsigned int cpu_iter;
+ data = schedule_cpu_rm_alloc(cpu);
+ if ( !data )
+ return -ENOMEM;
+
rcu_read_lock(&sched_res_rculock);
sr = get_sched_res(cpu);
- old_ops = sr->scheduler;
- if ( sr->granularity > 1 )
- {
This conditional is lost afaict, resulting in potentially wrong behavior
in the new helper. Considering its purpose I expect there's a guarantee
that the field's value can never be zero, but then I guess an ASSERT()
would be nice next to the potentially problematic uses in the helper.
I'll add the ASSERT().
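The concern above is that sr->granularity, being unsigned, would wrap if it
were ever zero when the helper computes "sr->granularity - 1". A minimal
sketch of the invariant check being proposed (names are stand-ins; Xen's
real ASSERT() only fires in debug builds):

```c
#include <assert.h>

/* Hypothetical sketch: granularity counts the CPUs in a scheduling
 * resource and is expected to be at least 1, so "granularity - 1" cannot
 * wrap.  The assertion documents and checks that invariant right next to
 * the subtraction. */
static unsigned int extra_cpus(unsigned int granularity)
{
    assert(granularity > 0);   /* stand-in for Xen's ASSERT() */
    return granularity - 1;    /* extra sched_resource entries to allocate */
}
```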
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -598,6 +598,14 @@ struct affinity_masks {
cpumask_var_t soft;
};
+/* Memory allocation related data for schedule_cpu_rm(). */
+struct cpu_rm_data {
+ struct scheduler *old_ops;
const?
Yes.
Juergen