
Re: [Xen-devel] [PATCH 7/7] xen: sched_rt: print useful affinity info when dumping



On 03/16/2015 08:30 PM, Meng Xu wrote:
> Hi Dario,
> 
> 2015-03-16 13:05 GMT-04:00 Dario Faggioli <dario.faggioli@xxxxxxxxxx>:
>> In fact, printing the cpupool's CPU online mask
>> for each vCPU is just redundant, as that is the
>> same for all the vCPUs of all the domains in the
>> same cpupool, while hard affinity is already part
>> of the output of dumping domains info.
>>
>> Instead, print the intersection between hard
>> affinity and online CPUs, which is --in case of this
>> scheduler-- the effective affinity always used for
>> the vCPUs.
> 
> Agree.
> 
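For illustration, the effective-affinity computation being described is
roughly the following.  This is only a sketch with made-up names
(dump_effective_affinity and its scratch argument), assuming Xen's usual
cpumask and cpupool helpers; it is not copied from the patch:

/*
 * Sketch: print the affinity a vCPU can actually run on, i.e. its
 * hard affinity restricted to the CPUs online in its cpupool.
 */
static void dump_effective_affinity(const struct vcpu *v, cpumask_t *scratch)
{
    char cpustr[1024];

    /* Effective affinity = hard affinity & cpupool's online CPUs. */
    cpumask_and(scratch, v->cpu_hard_affinity,
                cpupool_online_cpumask(v->domain->cpupool));

    cpumask_scnprintf(cpustr, sizeof(cpustr), scratch);
    printk("cpus=%s\n", cpustr);
}
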
>>
>> This change also takes the opportunity to add a scratch
>> cpumask, so as to avoid creating one more cpumask_var_t
>> on the stack of the dumping routine.
> 
> Actually, I have a question about the merit of this design. On a
> machine with many cpus, we will end up allocating a cpumask for each
> cpu. Is this better than having a cpumask_var_t on the stack of the
> dumping routine, given that the dumping routine is not in the hot
> path?

The reason for taking this off the stack is that the hypervisor stack is
a fairly limited resource -- IIRC it's only 8k (for each cpu).  If the
call stack gets too deep, the hypervisor will triple-fault.  Keeping
really large variables like cpumasks off the stack is key to making sure
we don't get close to that.
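
(For a rough sense of scale: a cpumask_t is NR_CPUS bits, so on a build
with, say, NR_CPUS=4096 each one is 512 bytes; a handful of those on the
stack across a deep call chain is already a noticeable slice of an 8k
stack.)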

Systems with a lot of cpus can also, I think, be assumed to have a lot
of memory, so having one cpumask per cpu shouldn't really be that bad.
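
To make the trade-off concrete, the per-cpu scratch pattern being
discussed could look something like the sketch below.  The names
(scratch_mask, alloc_scratch_masks) are made up, the error unwinding is
omitted, and the allocation point (scheduler init vs. per-pCPU setup) is
only an assumption:

/*
 * One scratch cpumask per physical CPU, allocated up front so that
 * code like the dump routine can borrow the current pCPU's mask
 * instead of putting a large cpumask on its stack.
 */
static cpumask_var_t scratch_mask[NR_CPUS];

static int alloc_scratch_masks(void)
{
    unsigned int cpu;

    for_each_online_cpu ( cpu )
        if ( !zalloc_cpumask_var(&scratch_mask[cpu]) )
            return -ENOMEM;  /* a real version would free the earlier ones */

    return 0;
}

A dump routine running on a given pCPU would then simply use
scratch_mask[smp_processor_id()] as its temporary mask, with no stack
cost and no allocation in the dump path itself.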

 -George


