[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header


  • To: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 18 Feb 2026 14:12:04 +0100
  • Cc: Romain Caritey <Romain.Caritey@xxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • Delivery-date: Wed, 18 Feb 2026 13:12:18 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 18.02.2026 13:58, Oleksii Kurochko wrote:
> 
> On 2/17/26 8:34 AM, Jan Beulich wrote:
>> On 16.02.2026 19:42, Stefano Stabellini wrote:
>>> On Mon, 16 Feb 2026, Jan Beulich wrote:
>>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>>> domain_use_host_layout() is generic enough to be moved to the
>>>>> common header xen/domain.h.
>>>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
>>>>
>>>>> --- a/xen/include/xen/domain.h
>>>>> +++ b/xen/include/xen/domain.h
>>>>> @@ -62,6 +62,22 @@ void domid_free(domid_t domid);
>>>>>   #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
>>>>>   #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
>>>>>   
>>>>> +/*
>>>>> + * Is the domain using the host memory layout?
>>>>> + *
>>>>> + * A direct-mapped domain will always have its RAM mapped with
>>>>> + * GFN == MFN.  To avoid any trouble finding space, it is easier to
>>>>> + * force the use of the host memory layout.
>>>>> + *
>>>>> + * The hardware domain will use the host layout regardless of
>>>>> + * whether it is direct-mapped, because some OSes may rely on
>>>>> + * specific address ranges for their devices.
>>>>> + */
>>>>> +#ifndef domain_use_host_layout
>>>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>>>>> +                                    is_hardware_domain(d))
>>>> ... is_domain_direct_mapped() isn't something that I'd like to see further
>>>> proliferate in common (non-DT) code.
>>> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
>>> domain) on x86 as well. In fact, we already have a working prototype,
>>> although it is not suitable for upstream yet.
>>>
>>> In addition to the PSP use case that we discussed a few months ago,
>>> where the PSP is not behind an IOMMU and therefore exchanged addresses
>>> must be 1:1 mapped, we also have a new use case. We are running the full
>>> Xen-based automotive stack on an Azure instance where SVM (vmentry and
>>> vmexit) is available, but an IOMMU is not present. All virtual machines
>>> are configured as PVH.
>> Hmm. Then adjustments need making for the commentary and macro to be
>> correct on x86. First and foremost, none of what is there is true for PV.
> 
> As is_domain_direct_mapped() always returns false on x86,
> domain_use_host_layout() will return an incorrect value for non-hardware
> domains (dom0?). And since PV domains are not auto-translated, they are
> always direct-mapped, so technically is_domain_direct_mapped() (or
> domain_use_host_layout()) should return true in that case.

Hmm? PV domains aren't direct-mapped. Direct-map was introduced on Arm for
a special purpose (the absence of an IOMMU, iirc).

> (I assume this also holds for every domain except HVM, judging by the
> comment /* HVM guests are translated.  PV guests are not. */ in
> xc_dom_translated and the comment above the definition of
> XENFEAT_direct_mapped: /* ...not auto_translated domains (x86 only) are
> always direct-mapped */.)
> 
> Is my understanding correct?
> 
> Then isn't that a problem with how is_domain_direct_mapped() is defined
> for x86? Shouldn't it be defined like:
>    #define is_domain_direct_mapped(d) \
>        (!paging_mode_translate(d) || ((d)->cdf & CDF_directmap))
> 
> Or would it be better to move "!paging_mode_translate(d) || " into the
> definition of domain_use_host_layout()?
> 
> Could you please explain what is wrong with the comment? Except perhaps
> for:
>    * To avoid any trouble finding space, it is easier to force using the
>    * host memory layout.
> everything else should be true for x86.

"The hardware domain will use ..." isn't true for PV Dom0.

Jan



 

