Re: [Xen-devel] [PATCH 4/7] xen: sched: get rid of the per domain vCPU list in RTDS
On 08/10/15 13:52, Dario Faggioli wrote:
> @@ -319,14 +317,16 @@ rt_dump(const struct scheduler *ops)
> }
>
> printk("Domain info:\n");
> - list_for_each( iter_sdom, &prv->sdom )
> + list_for_each( iter, &prv->sdom )
> {
> - sdom = list_entry(iter_sdom, struct rt_dom, sdom_elem);
> + struct vcpu *vc;
> +
> + sdom = list_entry(iter, struct rt_dom, sdom_elem);
> printk("\tdomain: %d\n", sdom->dom->domain_id);
>
> - list_for_each( iter_svc, &sdom->vcpu )
> + for_each_vcpu( sdom->dom, vc )
Space before the bracket, please, since you are already touching the line anyway (see below the hunk for what I mean).
> {
> - svc = list_entry(iter_svc, struct rt_vcpu, sdom_elem);
> + svc = rt_vcpu(vc);
> rt_dump_vcpu(ops, svc);
> }
> }
>
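That is, just to spell the nit out (this is only how I would write it, going by the usual treatment of pseudo-keywords like for_each_vcpu in the Xen coding style):

    for_each_vcpu ( sdom->dom, vc )
    {
        svc = rt_vcpu(vc);
        rt_dump_vcpu(ops, svc);
    }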
> @@ -1145,7 +1135,7 @@ rt_dom_cntl(
> {
> case XEN_DOMCTL_SCHEDOP_getinfo:
> spin_lock_irqsave(&prv->lock, flags);
> - svc = list_entry(sdom->vcpu.next, struct rt_vcpu, sdom_elem);
> + svc = rt_vcpu(sdom->dom->vcpu[0]);
This change swaps one potentially bad pointer for another.
In the former case, there was no guarantee that sdom->vcpu had any
entries in it, potentially making svc a wild pointer.
In the latter case, there is no guarantee that dom->vcpu has been
allocated yet. You must check d->max_vcpus > 0 before dereferencing
d->vcpu[].
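Something like the below is the kind of guard I have in mind (only a sketch: the exact error value, and whether you test d->max_vcpus or sdom->dom->max_vcpus, which should be the same thing here, is up to you):

    case XEN_DOMCTL_SCHEDOP_getinfo:
        if ( sdom->dom->max_vcpus == 0 )
        {
            /* No vCPUs allocated yet, so d->vcpu[] must not be touched. */
            rc = -EINVAL;  /* illustrative error code, pick whatever fits */
            break;
        }
        spin_lock_irqsave(&prv->lock, flags);
        svc = rt_vcpu(sdom->dom->vcpu[0]);
        ...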
~Andrew