Re: [Xen-devel] [PATCH] x86/S3: Restore broken vcpu affinity on resume
On Tue, Mar 26, 2013 at 5:20 PM, Ben Guthro <benjamin.guthro@xxxxxxxxxx> wrote:
> When in SYS_STATE_suspend, and going through the cpu_disable_scheduler
> path, save a copy of the current cpu affinity, and mark a flag to
> restore it later.
>
> Later, in the resume process, when enabling nonboot cpus restore these
> affinities.
>
> This is the second submission of this patch.
> The primary difference from the first submission is that the formatting
> problems have been fixed. While doing so, I also tested another change
> in the cpu_disable_scheduler() path that is appropriate here.
>
> Signed-off-by: Ben Guthro <benjamin.guthro@xxxxxxxxxx>
Overall looks fine to me; just a few comments below.
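For illustration, the save-and-flag approach the patch description outlines could be sketched as below. This is a simplified, hypothetical model (a cpumask reduced to a 64-bit mask, invented field names), not the actual Xen structures or API:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified stand-in for Xen's struct vcpu; a cpumask
 * is modeled here as a plain 64-bit mask. */
struct vcpu {
    uint64_t cpu_affinity;        /* current hard affinity */
    uint64_t cpu_affinity_saved;  /* copy taken before breaking it */
    bool     affinity_broken;     /* flag: restore on resume */
};

/* Suspend path: before widening affinity to keep the vcpu runnable
 * while its CPUs go offline, remember the original mask and mark it
 * for later restoration. */
void break_affinity(struct vcpu *v, uint64_t all_cpus)
{
    v->cpu_affinity_saved = v->cpu_affinity;
    v->affinity_broken = true;
    v->cpu_affinity = all_cpus;
}

/* Resume path (after non-boot CPUs are brought back): undo the break. */
void restore_affinity(struct vcpu *v)
{
    if ( v->affinity_broken )
    {
        v->cpu_affinity = v->cpu_affinity_saved;
        v->affinity_broken = false;
    }
}
```

The flag matters because only vcpus whose affinity was actually broken during suspend should be touched on resume; vcpus whose mask was left alone keep whatever the administrator configured.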
> diff --git a/xen/common/cpupool.c b/xen/common/cpupool.c
> index 10b10f8..7a04f5e 100644
> --- a/xen/common/cpupool.c
> +++ b/xen/common/cpupool.c
> @@ -19,13 +19,10 @@
> #include <xen/sched-if.h>
> #include <xen/cpu.h>
>
> -#define for_each_cpupool(ptr) \
> - for ((ptr) = &cpupool_list; *(ptr) != NULL; (ptr) = &((*(ptr))->next))
> -
You're taking this out because it's not used, I presume?
Since you'll probably be sending another patch anyway (see below), I
think it would be better if you pull this out into a specific
"clean-up" patch.
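For context, the macro being removed uses the pointer-to-pointer linked-list iteration idiom: walking via a pointer to the *link field* rather than to the node lets the loop body splice nodes in or out without tracking a "prev" pointer. A minimal standalone sketch with a hypothetical node type (not the real struct cpupool):

```c
#include <stddef.h>

/* Hypothetical stand-in for struct cpupool's singly linked list. */
struct pool {
    int id;
    struct pool *next;
};

/* Same shape as the removed for_each_cpupool(): iterate via a pointer
 * to the link field, so *ptr can be rewritten to unlink a node. */
#define for_each_pool(ptr, head) \
    for ( (ptr) = (head); *(ptr) != NULL; (ptr) = &((*(ptr))->next) )

int pool_sum(struct pool **head)
{
    struct pool **q;
    int sum = 0;

    for_each_pool(q, head)
        sum += (*q)->id;
    return sum;
}

void pool_remove(struct pool **head, int id)
{
    struct pool **q;

    for_each_pool(q, head)
        if ( (*q)->id == id )
        {
            *q = (*q)->next;   /* unlink without a prev pointer */
            return;
        }
}
```

Deleting the macro in a dedicated clean-up patch, as suggested above, keeps the functional change easier to review.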
> @@ -569,6 +609,13 @@ int cpu_disable_scheduler(unsigned int cpu)
> {
> printk("Breaking vcpu affinity for domain %d vcpu %d\n",
> v->domain->domain_id, v->vcpu_id);
> +
> + if (system_state == SYS_STATE_suspend)
> + {
This appears to have two tabs instead of 16 spaces?
-George
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel