Re: [Xen-devel] [PATCH] xen: arm: ignore CPUs which are not marked available in the DT
On Thu, 2014-07-24 at 11:36 +0100, Julien Grall wrote:
> Hi Ian,
>
> On 23/07/14 19:36, Ian Campbell wrote:
> > On Wed, 2014-07-23 at 18:15 +0100, Julien Grall wrote:
> >> On 07/23/2014 05:45 PM, Ian Campbell wrote:
> >>> Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
> >>> ---
> >>>  xen/arch/arm/smpboot.c | 3 +++
> >>>  1 file changed, 3 insertions(+)
> >>>
> >>> diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
> >>> index cf149da..4b0a738 100644
> >>> --- a/xen/arch/arm/smpboot.c
> >>> +++ b/xen/arch/arm/smpboot.c
> >>> @@ -134,6 +134,9 @@ void __init smp_init_cpus(void)
> >>>         if ( !dt_device_type_is_equal(cpu, "cpu") )
> >>>             continue;
> >>>
> >>> +        if ( !dt_device_is_available(cpu) )
> >>> +            continue;
> >>> +
> >>
> >> I can't find such a thing in the Linux device tree bindings.
> >
> > status is a generic property which is common to all nodes; it comes
> > from ePAPR.
> >
> >> Do you have a use case where CPUs are marked disabled?
> >
> > I use it locally when booting with models -- it allows me to turn off
> > cpus in the base .dts file using a wrapper instead of having to edit
> > the original.
>
> I read the ePAPR and the "status" property on CPU nodes. AFAIU, the
> property doesn't have this meaning for such a node.
>
> This property means the CPU is in a quiescent state, and the property
> needed to bring up the CPU is provided in the device tree node.
>
> Section 5.5.2.2:
>
>   Before starting a client program on the boot cpu, the boot program
>   shall set certain properties in the device tree passed to the client
>   as follows:
>   - Each secondary CPU's cpu node shall have a status property with a
>     value of "disabled".

Oh well, it was only a convenience for me anyway.

Ian

> Regards,
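For reference, the availability test the patch relies on follows the generic
ePAPR "status" rule: a node counts as available when the property is absent
or set to "okay" (or the historical "ok"), and unavailable for any other
value. Below is a minimal, standalone C sketch of that rule, modeled on
Linux's of_device_is_available(); the real Xen helper reads the property
from the unflattened device tree rather than taking a plain string, so treat
this as an illustration of the semantics, not Xen's actual code.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Sketch of the generic ePAPR "status" availability rule that a helper
 * such as dt_device_is_available() is assumed to implement. In the DT
 * source the property would look like, e.g.:
 *
 *     cpu@1 { device_type = "cpu"; status = "disabled"; ... };
 */
static bool node_is_available(const char *status /* NULL if absent */)
{
    /* A missing "status" property is treated as "okay", i.e. available. */
    if ( status == NULL )
        return true;

    /* "okay" (and the historical "ok") means operational. */
    if ( !strcmp(status, "okay") || !strcmp(status, "ok") )
        return true;

    /* Anything else -- "disabled", "fail", "fail-sss" -- is unavailable. */
    return false;
}

int main(void)
{
    printf("%d\n", node_is_available(NULL));       /* 1: no property */
    printf("%d\n", node_is_available("okay"));     /* 1 */
    printf("%d\n", node_is_available("disabled")); /* 0: skipped by the patch */
    return 0;
}

Under this rule the single added continue in smp_init_cpus() is enough for
Ian's wrapper use case, even though, as Julien notes, ePAPR 5.5.2.2 gives
status = "disabled" a narrower meaning on cpu nodes specifically: a
quiescent secondary CPU that the client program is expected to bring up.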