
Re: [Xen-devel] [PATCH 1/5] x86: allow specifying the NUMA nodes Dom0 should run on



On 05/03/15 16:43, Jan Beulich wrote:
>>>> On 05.03.15 at 17:11, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 26/02/15 13:52, Jan Beulich wrote:
>>> ---
>>> I'm uncertain whether range restricting the PXMs permitted for Dom0 is
>>> the right approach (matching what other NUMA code did until recently),
>>> or whether we would instead want to simply limit the number of PXMs we
>>> can handle there (i.e. using a static array instead of a static
>>> bitmap).
>> I am not quite sure what you mean by this.
> Since we somehow need to store the information provided on the
> command line, we have basically two options (taking into consideration
> that command line parsing happens very early): Either we use a bitmap
> (as done here) to track the PXMs provided (which puts an upper bound
> on the PXM values) or we store each PXM in a (static) array, which puts
> an upper bound on how many PXMs may be specified (and requires
> more storage per PXM).

Oh I see.

In which case I suggest a static array with a bound based on
NUMA_NODE_SHIFT.  It is __initdata, so storage size is not a big deal,
and its upper bound will be the same as Xen will tolerate with distinct
NUMA nodes.

>>> @@ -1230,11 +1284,11 @@ int __init construct_dom0(
>>>  
>>>      printk("Dom0 has maximum %u VCPUs\n", d->max_vcpus);
>>>  
>>> -    cpu = cpumask_first(cpupool0->cpu_valid);
>>> +    cpu = v->processor;
>>>      for ( i = 1; i < d->max_vcpus; i++ )
>>>      {
>>> -        cpu = cpumask_cycle(cpu, cpupool0->cpu_valid);
>>> -        (void)alloc_vcpu(d, i, cpu);
>>> +        cpu = cpumask_cycle(cpu, &dom0_cpus);
>>> +        setup_vcpu(d, i, cpu);
>> I know this is a preexisting bug, but you might want to fix it as you
>> are changing the affected codepath.
>>
>> Construction of dom0 should fail if any alloc_vcpu() call fails (and now
>> setup_vcpu()).   Nothing currently catches a failure to allocate vcpu
>> 1..$N, and this patch introduces a way for dom0 to be more
>> memory-constrained than previously.
> I'm not sure - starting Dom0 with (in the worst case) just one vCPU
> would seem better to me than not starting it at all.

Having d->max_vcpus set and any of d->vcpu[1 ... d->max_vcpus - 1] as NULL
pointers will quickly cause all sorts of misery.

In the case that the failure is -ENOMEM, it is likely that something
else on the construction path will also blow up, so killing dom0
completely is not really an overreaction.  If, however, the failure is
something else, booting really should be halted.

An alternative course of action would be to reduce d->max_vcpus down a
bit on failure, but then the order of operations will have to change so
no other construction code makes decisions based on an overly large
max_vcpus.

The max_vcpus hypercall specifically avoids letting this happen when
constructing domUs.
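The alternative described above can be sketched as follows.  This is a
toy model, not Xen code: struct domain, the stub allocator, and
construct_vcpus are hypothetical stand-ins for alloc_vcpu()/setup_vcpu()
and the dom0 construction loop; the point is only the ordering, i.e. that
max_vcpus gets trimmed before anything else can read the overly large value:

```c
#include <stddef.h>

/* Minimal stand-ins for Xen's structures, for illustration only. */
struct vcpu { unsigned int id; };
struct domain {
    unsigned int max_vcpus;
    struct vcpu *vcpu[8];
};

/* Stub allocator: fails (returns NULL) once 'fail_after' vcpus exist,
 * mimicking an -ENOMEM from alloc_vcpu()/setup_vcpu(). */
static unsigned int fail_after = 8;
static struct vcpu pool[8];
static struct vcpu *alloc_vcpu_stub(struct domain *d, unsigned int i)
{
    (void)d;
    if ( i >= fail_after )
        return NULL;
    pool[i].id = i;
    return &pool[i];
}

/* On allocation failure, shrink d->max_vcpus so no NULL entries remain
 * below it; only give up entirely if not even vCPU 0 can be allocated. */
static int construct_vcpus(struct domain *d)
{
    unsigned int i;

    for ( i = 0; i < d->max_vcpus; i++ )
    {
        d->vcpu[i] = alloc_vcpu_stub(d, i);
        if ( d->vcpu[i] == NULL )
        {
            if ( i == 0 )
                return -1;      /* no vCPUs at all: fatal */
            d->max_vcpus = i;   /* trim before anyone reads it */
            break;
        }
    }
    return 0;
}
```

With this ordering, later construction code that sizes structures by
d->max_vcpus never sees a count larger than the number of vcpus that
actually exist.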

~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

