Re: [Xen-devel] [PATCH] x86/S3: Restore broken vcpu affinity on resume (v3)
On 03/27/2013 08:50 AM, Jan Beulich wrote:
On 27.03.13 at 13:36, Ben Guthro <benjamin.guthro@xxxxxxxxxx> wrote:
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -96,7 +96,11 @@ static void thaw_domains(void)
     rcu_read_lock(&domlist_read_lock);
     for_each_domain ( d )
+    {
+        if (system_state == SYS_STATE_resume)
I don't think there's a way to get here with system_state other
than SYS_STATE_resume.
Also, should there be a need to re-submit, there are spaces
missing inside the parentheses.
OK, I'll remove this 'if' entirely.
+            restore_vcpu_affinity(d);
         domain_unpause(d);
+    }
     rcu_read_unlock(&domlist_read_lock);
 }
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -541,6 +541,38 @@ void vcpu_force_reschedule(struct vcpu *v)
     }
 }
+void restore_vcpu_affinity(struct domain *d)
+{
+    struct vcpu *v;
+
+    for_each_vcpu ( d, v )
+    {
+        vcpu_schedule_lock_irq(v);
+
+        if (v->affinity_broken)
And here again.
ACK. Will resolve in v4
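(For reference, the v4 form with Xen-style spacing would simply be

    if ( v->affinity_broken )

i.e. spaces immediately inside the parentheses, as in the surrounding code.)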
+        {
+            printk("Restoring vcpu affinity for domain %d vcpu %d\n",
+                   v->domain->domain_id, v->vcpu_id);
XENLOG_DEBUG perhaps? Otherwise this can get pretty noisy
even without loglvl= override during resume if there are many
and/or big domains. To conserve on ring and transmit buffer space,
I'd also suggest shortening the text to "Restoring affinity for
d%dv%d\n" (and using d->domain_id).
Jan
I modeled this after the printk emitted where the affinity is broken, so
the two messages can be matched up in the log by anyone looking.
Should I also change that printk to XENLOG_DEBUG?
Ben
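Putting Jan's suggestions together, the v4 message would presumably end up
looking something like this (a sketch of the suggested form, not committed
code):

    printk(XENLOG_DEBUG "Restoring affinity for d%dv%d\n",
           d->domain_id, v->vcpu_id);

with the matching "breaking" printk presumably given the same treatment, so
the pair can still be correlated in the log.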
+            cpumask_copy(v->cpu_affinity, v->cpu_affinity_saved);
+            v->affinity_broken = 0;
+        }
+
+        if ( v->processor == smp_processor_id() )
+        {
+            set_bit(_VPF_migrating, &v->pause_flags);
+            vcpu_schedule_unlock_irq(v);
+            vcpu_sleep_nosync(v);
+            vcpu_migrate(v);
+        }
+        else
+        {
+            vcpu_schedule_unlock_irq(v);
+        }
+    }
+
+    domain_update_node_affinity(d);
+}
+
 /*
  * This function is used by cpu_hotplug code from stop_machine context
  * and from cpupools to switch schedulers on a cpu.
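Not quoted in the hunks above, but implied by them: for v->cpu_affinity_saved
and the call from power.c to compile, the patch presumably also touches
xen/include/xen/sched.h roughly as follows (my sketch of the assumed
declarations, not an actual hunk from the patch):

    /* Presumed new member of struct vcpu: affinity saved before it is broken. */
    cpumask_var_t    cpu_affinity_saved;

    /* Presumed prototype so power.c can call into the scheduler. */
    void restore_vcpu_affinity(struct domain *d);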
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel