Re: [Xen-devel] [PATCH 4/7] xen/arm: When CPU dies, free percpu area immediatelly
On 17/04/18 11:52, Mirela Simonovic wrote:
> Hi Julien,

Hi Mirela,

> On Mon, Apr 16, 2018 at 5:21 PM, Julien Grall <julien.grall@xxxxxxx> wrote:
>> On 16/04/18 14:41, Mirela Simonovic wrote:
>>> On Mon, Apr 16, 2018 at 3:14 PM, Julien Grall <julien.grall@xxxxxxx> wrote:
>>>> On 12/04/18 22:31, Stefano Stabellini wrote:
>>>>> On Thu, 12 Apr 2018, Julien Grall wrote:
>>>>>> On 12/04/18 00:46, Stefano Stabellini wrote:
>>>>>>> On Wed, 11 Apr 2018, Julien Grall wrote:
>>>>>>>> On 11/04/18 14:19, Mirela Simonovic wrote:
>>
>> They have nothing very x86 specific AFAICT so they could potentially be
>> used for Arm when XEN_SYSCTL_hotplug will be implemented.
>
> cpu_up_helper() does call rcu_barrier() prior to calling cpu_up().
That's not true. Below is the code for cpu_up_helper():
int ret = cpu_up(cpu); <- First call
if ( ret == -EBUSY )
{
rcu_barrier(); <- RCU barrier
ret = cpu_up(cpu); <- Second call
}
return ret;
So rcu_barrier() is called after the first cpu_up(), and only when that call returns -EBUSY.
> So calling rcu_barrier() is expected to be done prior to calling cpu_up()
> (or enable_nonboot_cpus(), which is just a wrapper for cpu_up()). I believe
> this is the right way to do it, because cpu_up() is used for enabling
> non-boot CPUs in both the boot and suspend/hotplug scenarios, while
> rcu_barrier() is not required in the boot scenario. Therefore, I'll add
> rcu_barrier() prior to calling enable_nonboot_cpus(). If I missed
> something please let me know.

See above, this is exactly why I asked for Andrew's & Jan's input on how rcu work is flushed when using cpu_up_helper()/cpu_down_helper(), because I don't understand whether it is meant to work. So I would like to see whether it would make sense to put the rcu_barrier() somewhere else to cover every call of cpu_up().

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel