Re: [PATCH] x86/vmap: handle superpages in vmap_to_mfn()
On 02.12.2020 13:17, Hongyan Xia wrote:
> On Wed, 2020-12-02 at 11:04 +0100, Jan Beulich wrote:
>> On 30.11.2020 17:50, Hongyan Xia wrote:
>>> + l3page = virt_to_page(pl3e);
>>> + L3T_LOCK(l3page);
>>> +
>>> + ASSERT(l3e_get_flags(*pl3e) & _PAGE_PRESENT);
>>> + if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
>>> + {
>>> + ret = mfn_add(l3e_get_mfn(*pl3e),
>>> + (l2_offset << PAGETABLE_ORDER) + l1_offset);
>>> + L3T_UNLOCK(l3page);
>>> + return ret;
>>
>> To keep the locked region as narrow as possible
>>
>> mfn = l3e_get_mfn(*pl3e);
>> L3T_UNLOCK(l3page);
>> return mfn_add(mfn, (l2_offset << PAGETABLE_ORDER) +
>> l1_offset);
>>
>> ? However, in particular because of the recurring unlocks on
>> the exit paths I wonder whether it wouldn't be better to
>> restructure the whole function such that there'll be one unlock
>> and one return. Otoh I'm afraid what I'm asking for here is
>> going to yield a measurable set of goto-s ...
>
> I can do that.
>
> But what about the lock narrowing? That will be slightly trickier when
> there are goto-s. Naturally:
>
> ret = full return mfn;
> goto out;
>
> out:
> UNLOCK();
> return ret;
>
> but with narrowing, my first reaction is:
>
> ret = high bits of mfn;
> l2_offset = 0;
> l1_offset = 0;
> goto out;
>
> out:
> UNLOCK();
> return ret + (l2_offset << PAGETABLE_ORDER) + l1_offset;
>
> To be honest, I doubt it is really worth it and I prefer the first one.
That's why I said "However ..." - I did realize both won't fit
together very well.
>>> + }
>>> +
>>> + pl2e = map_l2t_from_l3e(*pl3e) + l2_offset;
>>> + ASSERT(l2e_get_flags(*pl2e) & _PAGE_PRESENT);
>>> + if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
>>> + {
>>> + ret = mfn_add(l2e_get_mfn(*pl2e), l1_offset);
>>> + L3T_UNLOCK(l3page);
>>> + return ret;
>>> + }
>>> +
>>> + pl1e = map_l1t_from_l2e(*pl2e) + l1_offset;
>>> + UNMAP_DOMAIN_PAGE(pl2e);
>>> + ASSERT(l1e_get_flags(*pl1e) & _PAGE_PRESENT);
>>> + ret = l1e_get_mfn(*pl1e);
>>> + L3T_UNLOCK(l3page);
>>> + UNMAP_DOMAIN_PAGE(pl1e);
>>> +
>>> + return ret;
>>> +}
>>
>> Now for the name of the function: The only aspect tying it
>> somewhat to vmap() is that it assumes (asserts) it'll find a
>> valid mapping. I think it wants renaming, and vmap_to_mfn()
>> then would become a #define of it (perhaps even retaining
>> its property of getting unsigned long passed in), at which
>> point it also doesn't need moving out of page.h. As to the
>> actual name, xen_map_to_mfn() to somewhat match up with
>> map_pages_to_xen()?
>
> I actually really like this idea. I will come up with something in the
> next rev. But if we want to make it generic, shouldn't we not ASSERT on
> pl*e and the PRESENT flag and just return INVALID_MFN? Then this
> function would work both on mapped addresses (the assumption of
> vmap_to_mfn()) and in other use cases.
Depends - we can still document that this function is going to
require a valid mapping. I did consider the generalization, too,
but this to a certain degree also collides with virt_to_xen_l3e()
allocating an L3 table, which isn't what we would want for a
fully generic lookup function. IOW - I'm undecided and will take
it from wherever you move it (albeit with no promise to not ask
for further adjustment).
Jan