Re: [PATCH v2 12/17] xen/riscv: Implement p2m_free_entry() and related helpers
On 14.07.2025 18:01, Oleksii Kurochko wrote:
> On 7/14/25 9:15 AM, Jan Beulich wrote:
>> On 11.07.2025 17:56, Oleksii Kurochko wrote:
>>> On 7/1/25 4:23 PM, Jan Beulich wrote:
>>>> On 10.06.2025 15:05, Oleksii Kurochko wrote:
>>>>> +/* Put any references on the single 4K page referenced by mfn. */
>>>>> +static void p2m_put_4k_page(mfn_t mfn, p2m_type_t type)
>>>>> +{
>>>>> +    /* TODO: Handle other p2m types */
>>>>> +
>>>>> +    /* Detect the xenheap page and mark the stored GFN as invalid. */
>>>>> +    if ( p2m_is_ram(type) && is_xen_heap_mfn(mfn) )
>>>>> +        page_set_xenheap_gfn(mfn_to_page(mfn), INVALID_GFN);
>>>> Is this a valid thing to do? How do you make sure the respective uses
>>>> (in gnttab's shared and status page arrays) are / were also removed?
>>> As the grant table frame GFN is stored directly in struct page_info instead
>>> of being kept in standalone status/shared arrays, there is no need for
>>> status/shared arrays.
>> I fear I don't follow. Looking at Arm's header (which I understand you
>> derive from), I see
>>
>> #define gnttab_shared_page(t, i) virt_to_page((t)->shared_raw[i])
>>
>> #define gnttab_status_page(t, i) virt_to_page((t)->status[i])
>>
>> Are you intending to do things differently?
>
> I missed these arrays... Arm had different arrays:
> - (gt)->arch.shared_gfn = xmalloc_array(gfn_t, ngf_); \
> - (gt)->arch.status_gfn = xmalloc_array(gfn_t, nsf_); \
>
> I think I don't know the answer to your question, as I'm not deeply familiar
> with grant tables and would need to do some additional investigation.
>
> And just to be sure I understand your question correctly: are you asking
> whether I marked a page as INVALID_GFN while a domain might still be using
> it for grant table purposes?
Not quite. I'm trying to indicate that you may leave stale information around
when you update the struct page_info instance without also updating one of the
array slots. IOW I think both updates need to happen in sync, or it needs to
be explained why not doing so is still okay.
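For illustration only, the kind of in-sync update being asked about might look
roughly like the sketch below. It reuses page_set_xenheap_gfn() and
mfn_to_page() from the hunk quoted above; gnttab_clear_frame_slot() is a
made-up stand-in for whatever mechanism the grant-table code would actually
provide to drop its shared_raw[]/status[] reference, and is not an existing
Xen function.

/*
 * Purely illustrative sketch: when dropping the GFN stored in struct
 * page_info for a grant-table frame, the matching grant-table array slot
 * (from which gnttab_shared_page()/gnttab_status_page() recover the page
 * via virt_to_page()) would need to be invalidated as well, or it must be
 * explained why leaving it in place is safe.
 */
static void p2m_put_gnttab_frame(struct domain *d, mfn_t mfn)
{
    struct page_info *pg = mfn_to_page(mfn);

    /* Drop the GFN recorded in the page itself. */
    page_set_xenheap_gfn(pg, INVALID_GFN);

    /*
     * Hypothetical helper: clear the slot in d->grant_table that still
     * refers to this frame, so no stale pointer is left behind.
     */
    gnttab_clear_frame_slot(d->grant_table, pg);
}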
>>>>> +/* Put any references on the superpage referenced by mfn. */
>>>>> +static void p2m_put_2m_superpage(mfn_t mfn, p2m_type_t type)
>>>>> +{
>>>>> +    struct page_info *pg;
>>>>> +    unsigned int i;
>>>>> +
>>>>> +    ASSERT(mfn_valid(mfn));
>>>>> +
>>>>> +    pg = mfn_to_page(mfn);
>>>>> +
>>>>> +    for ( i = 0; i < XEN_PT_ENTRIES; i++, pg++ )
>>>>> +        p2m_put_foreign_page(pg);
>>>>> +}
>>>>> +
>>>>> +/* Put any references on the page referenced by pte. */
>>>>> +static void p2m_put_page(struct p2m_domain *p2m, const pte_t pte,
>>>>> +                         unsigned int level)
>>>>> +{
>>>>> +    mfn_t mfn = pte_get_mfn(pte);
>>>>> +    p2m_type_t p2m_type = p2m_type_radix_get(p2m, pte);
>>>> This gives you the type of the 1st page. What guarantees that all other
>>>> pages in a superpage are of the exact same type?
>>> Doesn't a superpage mean that all the 4KB pages within that superpage have
>>> the same type and are contiguous in memory?
>> If the mapping is a super-page one - yes. Yet I see nothing super-page-ish
>> here.
>
> Probably, I just misunderstood your reply, but there is a check below:
>     if ( level == 2 )
>         return p2m_put_l2_superpage(mfn, pte.p2m.type);
> And I expect that if level == 2, it means it is a superpage, which means that
> all the 4KB pages within that superpage share the same type and are contiguous
> in memory.
Let's hope that all of this is going to remain consistent then.
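Putting the quoted fragments together, the dispatch under discussion
presumably has roughly the shape below. This is a sketch based on the hunks
above, not the actual patch; the names p2m_put_2m_superpage(),
p2m_put_4k_page(), pte_get_mfn() and p2m_type_radix_get() are taken from the
quoted code. The point being relied on is that a level-2 entry maps one
contiguous, uniformly typed superpage, so reading the type once from the PTE
covers all of its 4K constituents.

static void p2m_put_page(struct p2m_domain *p2m, const pte_t pte,
                         unsigned int level)
{
    mfn_t mfn = pte_get_mfn(pte);
    p2m_type_t p2m_type = p2m_type_radix_get(p2m, pte);

    /*
     * A superpage mapping is uniform by construction, so one type read
     * is valid for all XEN_PT_ENTRIES pages it covers.
     */
    if ( level == 2 )
        return p2m_put_2m_superpage(mfn, p2m_type);

    /* Levels other than 2 are assumed here to reference a single 4K page. */
    p2m_put_4k_page(mfn, p2m_type);
}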
>>>>> +static void p2m_free_page(struct domain *d, struct page_info *pg)
>>>>> +{
>>>>> +    if ( is_hardware_domain(d) )
>>>>> +        free_domheap_page(pg);
>>>> Why's the hardware domain different here? It should have a pool just like
>>>> all other domains have.
>>> The hardware domain (dom0) should have no limit on the number of pages that
>>> can be allocated, so p2m pages for the hardware domain are allocated from
>>> the heap.
>>>
>>> The idea of the p2m pool is to provide a way to put a clear limit on the
>>> amount of p2m allocation.
>> Well, we had been there on another thread, and I outlined how I think
>> Dom0 may want handling.
>
> I'm afraid I don't remember. Could you please remind me which thread that was?
> Do you perhaps mean this reply:
> https://lore.kernel.org/xen-devel/cover.1749555949.git.oleksii.kurochko@xxxxxxxxx/T/#m4789842aaae1653b91d3368f66cadb0ef87fb17e
> ?
> But that one is not really about the Dom0 case.
It would have been where the allocation counterpart to the freeing here is,
I expect.
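For reference, the pool-based direction the review points toward would
presumably look roughly like Arm's freeing path, sketched below. The
d->arch.paging.lock and d->arch.paging.p2m_freelist fields are assumptions
borrowed from Arm's layout; RISC-V may end up naming or organising them
differently.

#include <xen/mm.h>
#include <xen/sched.h>
#include <xen/spinlock.h>

/* Sketch: free every P2M page back into the per-domain pool, hwdom included. */
static void p2m_free_page(struct domain *d, struct page_info *pg)
{
    spin_lock(&d->arch.paging.lock);
    page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
    spin_unlock(&d->arch.paging.lock);
}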
Jan