
Re: [Xen-devel] [PATCH v3] [RFC] arm: use PSCI if available

On Wed, 27 Mar 2013, Will Deacon wrote:
> On Wed, Mar 27, 2013 at 04:23:15PM +0000, Stefano Stabellini wrote:
> > OK, let's see if I can make this acceptable to you.
> > 
> > 
> > Would you agree on a patch that moves virt_smp_ops out of mach-virt and
> > renames them to psci_smp_ops (maybe to arch/arm/kernel/psci_smp_ops.c)?
> Moving the code out of psci.c is certainly a good first step, yes.

OK, I'll do that.

> > Would you agree on initializing psci from setup_arch, right after the
> > call to arm_dt_init_cpu_maps()?
> Hmmm. An early_initcall runs before SMP is up, so why do you need this
> earlier than that? Is it because you don't want to set the SMP ops later on?

Because I need to set the right smp_ops before any of the smp_ops
functions are called.
If I wait until early_initcall, the wrong smp_init_cpus is going to be
called first.

> > Finally the most controversial point: would you agree on using
> > psci_smp_ops by default if they are available?
> > If not, would you at least agree on letting Xen overwrite the default
> > machine smp_ops?
> > We need one or the other for dom0 support.
> Again, I think there needs to be a dummy layer between the smp_ops and PSCI,
> rather than assigning the things directly if we're going to use this as a
> default implementation. I still question whether default PSCI operations make
> any sense though... I understand that you're currently saying 'yes, Xen can
> use the same firmware interface as KVM' but will that always be true? What
> happens when we want to run virtual machines on multi-cluster platforms, for
> example? Will KVM and Xen make sure that CPU affinities are described in the
> same way? What if one or the other decides to pass side-band information in
> the power_state parameters?
> In all of these cases, we'd have to split the code back up, so I don't see
> any long-term value in consolidating everything just because it might be
> possible today. The real problem you're trying to solve seems to stem from
> patching the smp_ops in your dom0 kernel.

That is correct.
Although having a clean generic solution would be preferable to me, if
we cannot reach an agreement or if it just cannot be done, a hack that
patches smp_ops would also solve the Xen problem. That said, I dislike
hacks as much as anybody else, so I would rather solve the issue in a
different way.

> Can you elaborate a bit more on
> what's going on here please? How would having PSCI default smp_ops help you?

As Arnd quickly explained
(http://marc.info/?l=linux-arm-kernel&m=136440746215589&w=2), the
hardware environment provided by Xen to Dom0 looks very similar to the
native hardware but with a few important differences:

- the amount of memory is probably lower, in fact Xen is going to edit
the device tree and only tell Dom0 about the memory that Dom0 can
actually see;

- the number of cpus might be different, in fact Xen is going to edit
the device tree and tell Dom0 about the vcpus that Dom0 can actually
use;

- not all the devices present might actually be exported to Dom0, in
fact Xen is going to remove these devices from the device tree so that
Dom0 won't try to access them;

- the interface to bring up secondary cpus is different and based on
PSCI, in fact Xen is going to add a PSCI node to the device tree so that
Dom0 can use it.
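
For reference, the PSCI node Xen would add follows the existing PSCI
device-tree binding; a minimal sketch could look like this (the method
and function IDs below are illustrative values, not what Xen actually
emits):

```dts
psci {
	compatible = "arm,psci";
	/* secondary CPUs are brought up via hypervisor calls */
	method = "hvc";
	/* function IDs the guest passes in r0/x0 for each operation */
	cpu_off = <0x95c10001>;
	cpu_on = <0x95c10002>;
};
```

With such a node present, a kernel that probes PSCI from the device tree
could route its CPU bring-up through the hypervisor instead of the
platform-specific mechanism.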

Oh wait, Dom0 is not going to use the PSCI interface even if the node is
present in the device tree, because it's going to prefer the platform
smp_ops.


Xen can, however, add other nodes to the device tree besides PSCI; it
already adds a Xen hypervisor node. Unfortunately the smp_ops are not
detected via device tree at the moment, so that won't help.

So I guess what we really need is a way to export a set of firmware
operations for cpu bring up via device tree. I thought that PSCI was
exactly meant for this, but if it isn't, then we need to come up with
something else.

Xen-devel mailing list


