Re: [ImageBuilder] Add support for Xen boot-time cpupools
Hi Stefano,
On 07/09/2022 03:43, Stefano Stabellini wrote:
>
> On Tue, 6 Sep 2022, Michal Orzel wrote:
>> Introduce support for creating boot-time cpupools in the device tree and
>> assigning them to dom0less domUs. Add the following options:
>> - CPUPOOL[number]="cpu1_path,...,cpuN_path scheduler" to specify the
>> list of cpus and the scheduler to be used to create a cpupool
>> - NUM_CPUPOOLS to specify the number of cpupools to create
>> - DOMU_CPUPOOL[number]="<id>" to specify the id of the cpupool to
>> assign to domU
>>
>> Example usage:
>> CPUPOOL[0]="/cpus/cpu@1,/cpus/cpu@2 null"
>> DOMU_CPUPOOL[0]=0
>> NUM_CPUPOOLS=1
>>
>> The above example will create a boot-time cpupool (id=0) with 2 cpus:
>> cpu@1, cpu@2 and the null scheduler. It will assign the cpupool with
>> id=0 to domU0.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@xxxxxxx>
>
> Great patch in record time, thanks Michal!
>
>
> On the CPUPOOL string format: do you think we actually need the device
> tree path or could we get away with something like:
>
> CPUPOOL[0]="cpu@1,cpu@2 null"
>
> All the cpus have to be under the top-level /cpus node per the device
> tree spec, so maybe the node name should be enough?
>
According to the spec, passing only the node names should be enough,
so I will modify it.
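In the script I would then just expand a bare node name to its full path
under /cpus before using it, roughly along these lines (untested sketch;
the helper name expand_cpu_path is only an example):

# All cpu nodes live under the top-level /cpus node per the device tree
# spec, so a bare node name can be expanded to a full path.
function expand_cpu_path()
{
    local cpu=$1

    # Leave absolute paths untouched, prepend /cpus/ to bare node names.
    if [[ "$cpu" == /* ]]
    then
        echo "$cpu"
    else
        echo "/cpus/$cpu"
    fi
}

That way both the full path and the short form would keep working.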
>
>
>> ---
>> README.md | 10 +++++
>> scripts/uboot-script-gen | 80 ++++++++++++++++++++++++++++++++++++++++
>> 2 files changed, 90 insertions(+)
>>
>> diff --git a/README.md b/README.md
>> index bd9dac924b44..44abb2193142 100644
>> --- a/README.md
>> +++ b/README.md
>> @@ -181,6 +181,9 @@ Where:
>> present. If set to 1, the VM can use PV drivers. Older Linux kernels
>> might break.
>>
>> +- DOMU_CPUPOOL[number] specifies the id of the cpupool (created using
>> + CPUPOOL[number] option, where number == id) that will be assigned to domU.
>> +
>> - LINUX is optional but specifies the Linux kernel for when Xen is NOT
>> used. To enable this set any LINUX\_\* variables and do NOT set the
>> XEN variable.
>> @@ -223,6 +226,13 @@ Where:
>> include the public key in. This can only be used with
>> FIT_ENC_KEY_DIR. See the -u option below for more information.
>>
>> +- CPUPOOL[number]="cpu1_path,...,cpuN_path scheduler"
>> + specifies the list of cpus (separated by commas) and the scheduler to be
>> + used to create boot-time cpupool. If no scheduler is set, the Xen default
>> + one will be used.
>> +
>> +- NUM_CPUPOOLS specifies the number of boot-time cpupools to create.
>> +
>> Then you can invoke uboot-script-gen as follows:
>>
>> ```
>> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
>> index 18c0ce10afb4..2e1c80a92ce1 100755
>> --- a/scripts/uboot-script-gen
>> +++ b/scripts/uboot-script-gen
>> @@ -176,6 +176,81 @@ function add_device_tree_static_mem()
>> dt_set "$path" "xen,static-mem" "hex" "${cells[*]}"
>> }
>>
>> +function add_device_tree_cpupools()
>> +{
>> + local num=$1
>> + local phandle_next="0xfffffff"
>
> I think phandle_next is a good idea, and I would make it a global
> variable at the top of the uboot-script-gen file or at the top of
> scripts/common.
>
> The highest valid phandle is actually 0xfffffffe.
>
This was my original idea, so I will do the following to properly handle phandles:
- create a global variable phandle_next in scripts/common, set to 0xfffffffe,
- create a function get_next_phandle in scripts/common that returns the next
  available phandle, properly formatted in hex, and decrements phandle_next.
I will push this as a prerequisite patch for boot-time cpupools.
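Something along these lines (a rough, untested sketch; the exact shape may
differ in the actual patch):

# scripts/common

# Highest valid phandle, decremented each time a new one is handed out.
phandle_next="0xfffffffe"

# Store the next available phandle (hex formatted) in the variable whose
# name is passed as $1 and decrement phandle_next so that every caller
# gets a unique value.
function get_next_phandle()
{
    local var=$1

    eval $var=$(printf "0x%x" $phandle_next)
    phandle_next=$(( phandle_next - 1 ))
}

Callers would then do e.g. "get_next_phandle phandle" and use $phandle both
when setting the "phandle" property and when referencing the node.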
>
>
>> + local cpus
>> + local scheduler
>> + local cpu_list
>> + local phandle
>> + local cpu_phandles
>> + local i
>> + local j
>> +
>> + i=0
>> + while test $i -lt $num
>
> I don't think there is much value in passing NUM_CPUPOOLS as argument to
> this function given that the function is also accessing CPUPOOL[]
> directly. I would remove $num and just do:
>
> while test $i -lt $NUM_CPUPOOLS
ok
~Michal