
Re: [PATCH] xen/x86: allow Dom0 PVH to call XENMEM_exchange


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 12 May 2025 11:14:02 +0200
  • Cc: Jason Andryuk <jason.andryuk@xxxxxxx>, "Ragiadakou, Xenia" <Xenia.Ragiadakou@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, agarciav@xxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxx, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Mon, 12 May 2025 09:14:13 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 02.05.2025 18:49, Stefano Stabellini wrote:
> On Fri, 2 May 2025, Jan Beulich wrote:
>> On 01.05.2025 15:44, Jason Andryuk wrote:
>>> On 2025-04-30 20:19, Stefano Stabellini wrote:
>>>> On Wed, 30 Apr 2025, Roger Pau Monné wrote:
>>>>> On Wed, Apr 30, 2025 at 08:27:55AM +0200, Jan Beulich wrote:
>>>>>> On 29.04.2025 23:52, Stefano Stabellini wrote:
>>>>>>> On Tue, 29 Apr 2025, Jan Beulich wrote:
>>>>>>>> On 28.04.2025 22:00, Stefano Stabellini wrote:
>>>>>>>>> On Mon, 28 Apr 2025, Jan Beulich wrote:
>>>>>>>>>> On 25.04.2025 22:19, Stefano Stabellini wrote:
>>>
>>>>>>>>>>> --- a/xen/common/memory.c
>>>>>>>>>>> +++ b/xen/common/memory.c
>>>>>>>>>>> @@ -794,7 +794,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
>>>>>>>>>>>              rc = guest_physmap_add_page(d, _gfn(gpfn), mfn,
>>>>>>>>>>>                                          exch.out.extent_order) ?: rc;
>>>>>>>>>>>  
>>>>>>>>>>> -            if ( !paging_mode_translate(d) &&
>>>>>>>>>>> +            if ( (!paging_mode_translate(d) || is_hardware_domain(d)) &&
>>>>>>>>>>>                   __copy_mfn_to_guest_offset(exch.out.extent_start,
>>>>>>>>>>>                                              (i << out_chunk_order) + j,
>>>>>>>>>>>                                              mfn) )
>>>>>>>>>>
>>>>>>>>>> Wait, no: A PVH domain (Dom0 or not) can't very well make use of MFNs,
>>>>>>>>>> can it?
>>>>>>>>>
>>>>>>>>> One way or another Dom0 PVH needs to know the MFN to pass it to the
>>>>>>>>> co-processor.
>>>>>>>>
>>>>>>>> I see. That's pretty odd, though. I'm then further concerned about the
>>>>>>>> order of the chunks. At present we're rather lax in permitting PVH and
>>>>>>>> PV Dom0 the same upper bound. With both CPU and I/O side translation
>>>>>>>> there is, in principle, no reason to permit any kind of contiguity. Of
>>>>>>>> course there's a performance aspect, but that hardly matters in the
>>>>>>>> specific case here. Yet at the same time, once we expose MFNs,
>>>>>>>> contiguity will start mattering as soon as any piece of memory needs to
>>>>>>>> be larger than PAGE_SIZE. Which means it will make tightening of the
>>>>>>>> presently lax handling prone to regressions in this new behavior you're
>>>>>>>> introducing. What chunk size does the PSP driver require?
>>>>>>>
>>>>>>> I don't know. The memory returned by XENMEM_exchange is contiguous,
>>>>>>> right? Are you worried that Xen cannot allocate the requested amount of
>>>>>>> memory contiguously?
>>>
>>> In the case I looked at, it is 8 pages.  The driver defines a ring of
>>> 32 * 1k entries.  I'm not sure if there are other paths or device
>>> versions where it might differ.
>>
>> As per this ...
>>
>>>>>> That would be Dom0's problem then. But really for a translated guest the
>>>>>> exchanged chunks being contiguous shouldn't matter, correctness-wise.
>>>>>> That is, within Xen, rather than failing a request, we could choose to
>>>>>> retry using discontiguous chunks (contiguous only in GFN space). Such an
>>>>>> (afaict) otherwise correct change would break your use case, as it would
>>>>>> invalidate the MFN information passed back. (This fallback approach would
>>>>>> similarly apply to other related mem-ops. It's just that during domain
>>>>>> creation the tool stack has its own fallback, so it may not be of much
>>>>>> use right now.)
>>>>>
>>>>> I think the description in the public header needs to be expanded to
>>>>> specify what the XENMEM_exchange does for translated guests, and
>>>>> clearly write down that the underlying MFNs for the exchanged region
>>>>> will be contiguous.  Possibly a new XENMEMF_ flag needs to be added to
>>>>> request contiguous physical memory for the new range.
>>>>>
>>>>> Sadly this also has the side effect of quite likely shattering
>>>>> superpages for dom0 EPT/NPT, which will result in decreased dom0
>>>>> performance.
>>>
>>> Yes, this appears to happen as memory_exchange seems to always replace
>>> the pages.  I tested returning the existing MFNs if they are already
>>> contiguous, since that was sufficient for this driver.  It worked, but
>>> it was messy: a big loop to copy in the GFN array and check contiguity,
>>> which duplicated much of the real loop.
>>
>> ... there may not be a need for the output range to be contiguous? In which
>> case - wouldn't a simple "give me the MFN for this GFN" hypercall do? I seem
>> to vaguely recall that we even had one, long ago; it was purged because it
>> violated the "no MFNs exposed" principle (and because it no longer had any
>> users). XENMEM_translate_gpfn_list looks to be what I mean; see commit
>> 2d2f7977a052.
> 
> Unfortunately I don't think that would work because I am pretty sure
> that the PSP needs a consecutive range. In other words, I think the
> output needs to be contiguous.

Hmm, higher up (context) you said "ring of 32 * 1k entries". That reads like
independent 1k areas, and hence no need for contiguity.
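
For illustration, a rough and untested sketch of the kind of call being
discussed: a PVH Dom0 driver hands eight scattered single pages to
XENMEM_exchange and gets back one order-3, machine-contiguous extent, with
the starting MFN copied back into out.extent_start as this patch arranges.
Structure and names follow the Xen public headers and the Linux guest-side
conventions (HYPERVISOR_memory_op, set_xen_guest_handle); RING_ORDER and the
function name are made up for the example, and error handling is trimmed.

#include <linux/errno.h>
#include <xen/interface/xen.h>      /* DOMID_SELF, guest handles */
#include <xen/interface/memory.h>   /* XENMEM_exchange, struct xen_memory_exchange */
#include <asm/xen/hypercall.h>      /* HYPERVISOR_memory_op() */

#define RING_ORDER 3                            /* 8 pages = 32 * 1k entries */

/*
 * in_gfns:  the 8 GFNs handed back to Xen.
 * out_slot: on entry, the base GFN where the new extent should appear;
 *           on success (with this patch) it holds the starting MFN instead.
 */
static int psp_ring_exchange(xen_pfn_t in_gfns[1 << RING_ORDER],
                             xen_pfn_t *out_slot)
{
    struct xen_memory_exchange exch = {
        .in = {
            .nr_extents   = 1 << RING_ORDER,    /* 8 order-0 pages in ...     */
            .extent_order = 0,
            .domid        = DOMID_SELF,
        },
        .out = {
            .nr_extents   = 1,                  /* ... one order-3 extent out */
            .extent_order = RING_ORDER,
            .domid        = DOMID_SELF,
        },
    };
    long rc;

    set_xen_guest_handle(exch.in.extent_start, in_gfns);
    set_xen_guest_handle(exch.out.extent_start, out_slot);

    rc = HYPERVISOR_memory_op(XENMEM_exchange, &exch);
    if (rc || exch.nr_exchanged != exch.in.nr_extents)
        return rc ?: -ENOMEM;

    /*
     * For a non-translated guest Xen always wrote the MFN back here; the
     * hunk above extends that to the translated hardware domain, so
     * *out_slot can now be handed to the PSP.
     */
    return 0;
}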

Jan