
Re: [PATCH v2 5/5] xen/memory, tools: Make init-dom0less consume XEN_DOMCTL_get_mem_map


  • To: Henry Wang <xin.wang2@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 2 Apr 2024 10:51:38 +0200
  • Cc: Wei Liu <wl@xxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Alec Kwapis <alec.kwapis@xxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • Delivery-date: Tue, 02 Apr 2024 08:51:46 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 02.04.2024 10:43, Henry Wang wrote:
> On 4/2/2024 3:05 PM, Jan Beulich wrote:
>> On 29.03.2024 06:11, Henry Wang wrote:
>>> On 3/12/2024 1:07 AM, Jan Beulich wrote:
>>>>> +/*
>>>>> + * Flag to force populate physmap to use pages from domheap instead of 1:1
>>>>> + * or static allocation.
>>>>> + */
>>>>> +#define XENMEMF_force_heap_alloc  (1<<19)
>>>>>    #endif
>>>> If this is for populate_physmap only, then other sub-ops need to reject
>>>> its use.
>>>>
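
For illustration only, such a rejection could live in do_memory_op()'s common
handling of the reservation-based sub-ops; the placement and error code below
are assumptions, not something the patch contains:

    /* Hypothetical: the flag is only meaningful for populate_physmap. */
    if ( (reservation.mem_flags & XENMEMF_force_heap_alloc) &&
         op != XENMEM_populate_physmap )
        return -EINVAL;
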
>>>> I have to admit I'm a little wary of allocating another flag here and ...
>>>>
>>>>> --- a/xen/include/xen/mm.h
>>>>> +++ b/xen/include/xen/mm.h
>>>>> @@ -205,6 +205,8 @@ struct npfec {
>>>>>    #define  MEMF_no_icache_flush (1U<<_MEMF_no_icache_flush)
>>>>>    #define _MEMF_no_scrub    8
>>>>>    #define  MEMF_no_scrub    (1U<<_MEMF_no_scrub)
>>>>> +#define _MEMF_force_heap_alloc 9
>>>>> +#define  MEMF_force_heap_alloc (1U<<_MEMF_force_heap_alloc)
>>>>>    #define _MEMF_node        16
>>>>>    #define  MEMF_node_mask   ((1U << (8 * sizeof(nodeid_t))) - 1)
>>>>>    #define  MEMF_node(n)     ((((n) + 1) & MEMF_node_mask) << _MEMF_node)
>>>> ... here - we don't have that many left. Since other sub-ops aren't
>>>> intended to support this flag, did you consider adding another (perhaps
>>>> even arch-specific) sub-op instead?
>>> While revisiting this comment when trying to come up with a V3, I
>>> realized that adding a sub-op at the same level as
>>> XENMEM_populate_physmap would basically duplicate the function
>>> populate_physmap() with just the "else" (the non-1:1 allocation) part,
>>> and a similar xc_domain_populate_physmap_exact() & co would also be
>>> needed on the toolstack side to call the new sub-op. So I am concerned
>>> about the code duplication, and I am not sure I understand you
>>> correctly. Would you please elaborate a bit more or clarify? Thanks!
>> Well, the goal is to avoid both code duplication and introduction of a new,
>> single-use flag. The new sub-op suggestion, I realize now, would mainly have
>> helped with avoiding the new flag in the public interface. That's still
>> desirable imo. Internally, have you checked which MEMF_* are actually used
>> by populate_physmap()? Briefly looking, e.g. MEMF_no_dma and MEMF_no_refcount
>> aren't. It therefore would be possible to consider re-purposing one that
>> isn't (likely to be) used there. Of course doing so requires care to avoid
>> passing that flag down to other code (page_alloc.c functions in particular),
>> where the meaning would be the original one.
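
To make that concrete, a rough sketch of what populate_physmap() might look
like with a borrowed bit (MEMF_no_refcount is picked purely for illustration,
and the call shape is simplified, so this is a sketch rather than a worked-out
change):

    if ( is_domain_direct_mapped(d) &&
         !(a->memflags & MEMF_no_refcount) )   /* borrowed bit, see below */
    {
        /* existing 1:1 / static allocation path */
    }
    else
        /* Strip the borrowed bit so page_alloc.c never sees it with its
         * original meaning. */
        page = alloc_domheap_pages(d, a->extent_order,
                                   a->memflags & ~MEMF_no_refcount);
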
> 
> I think you made a good point. However, to be honest, I am not sure
> about repurposing flags such as MEMF_no_dma and MEMF_no_refcount,
> because I think the name and the purpose of a flag should be clear and
> not misleading. Reusing either one for another meaning, namely forcing
> a heap (non-1:1) allocation in populate_physmap(), would be confusing
> in the future. Also, if one day these flags are needed in
> populate_physmap(), the current repurposing approach will lead to an
> even more confusing code base.

For the latter - hence "(likely to be)" in my earlier reply.

For the naming - of course an aliasing #define ought to be used then, to
make the purpose clear at the use sites.
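
For instance (both the name and the choice of underlying bit are illustrative
assumptions only):

    /* populate_physmap() only: purpose-revealing alias for a borrowed bit. */
    #define MEMF_force_heap_alloc MEMF_no_refcount

The use sites in the sketch further up would then test MEMF_force_heap_alloc
rather than the misleading original name.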

Jan

> I also very much agree that we need to avoid code duplication, so
> compared to the other two suggested approaches, adding a new MEMF_*
> flag would be the cleanest solution IMHO, as it is just one bit and
> MEMF_* flags are not added very often.
> 
> I would also be curious what the other common code maintainers say on
> this issue: @Andrew, @Stefano, @Julien, any ideas? Thanks!
> 
> Kind regards,
> Henry




 

