
Re: [PATCH 3/7] xen/p2m: put reference for superpage





On 09/05/2024 09:13, Roger Pau Monné wrote:
On Wed, May 08, 2024 at 11:11:04PM +0100, Julien Grall wrote:
Hi,

CC-ing Roger as he is working on adding support for foreign mappings on
x86. Although, I am not expecting any implications, as only 4KB mappings
should be supported.

I don't think we have plans on x86 to support foreign mappings with
order != 0 ATM.

We would need a new interface to allow creating such mappings, and
it's also not clear to me how the domain that creates such mappings
can identify super-pages on the remote domain.  IOW: the mapping
domain could request a super-page in the foreign domain gfn space,
but that could end up being a range of lower order mappings.

I agree with this. But ...


Also the interactions with the remote domain would need to be audited,
as the remote domain shattering the superpage would need to be
replicated in the mapping side in order to account for the changes.

... I don't understand this one. How is this different from today, where a domain can foreign-map 2MB which may be backed by a superpage in the remote domain?


On 08/05/2024 22:05, Julien Grall wrote:
On 07/05/2024 14:30, Luca Fancellu wrote:
On 7 May 2024, at 14:20, Julien Grall <julien@xxxxxxx> wrote:

Hi Luca,

On 23/04/2024 09:25, Luca Fancellu wrote:
From: Penny Zheng <Penny.Zheng@xxxxxxx>
But today, p2m_put_l3_page() cannot handle superpages.

This was done on purpose. Xen is not preemptible and therefore
we need to be cautious how much work is done within the p2m
code.

With the below proposal, for a 1GB mapping, we may end up calling
put_page() up to 512 * 512 = 262144 times. put_page() can free
memory, so this could be a very long operation.

Have you benchmarked how long it would take?

I did not, since its purpose was unclear to me and it was not commented
on in the last series from Penny.

Honestly, I can't remember why it wasn't commented.

I skimmed through the code to check what we currently do for preemption.

{decrease, increase}_reservation() will handle at most a max_order() mapping
at a time. On a default configuration, the max would be 4MB.

relinquish_p2m_mapping() checks for preemption every 512 iterations. One
iteration is either a 4KB, 2MB, or 1GB mapping.

relinquish_memory() is checking for preemption after every page.

So I think it would be ok to allow 2MB mappings for static shared memory but
not 1GB. relinquish_p2m_mapping() would also need to be updated to take
into account the larger foreign mappings.

FWIW, relinquish_p2m_mapping() likely does more than what's strictly
needed, as you could just remove foreign mappings while leaving other
entries as-is?  The drain of the p2m pool and release of domain pages
should take care of dropping references to the RAM domain memory?
I believe the code was written in a way that we could easily introduce references for all the mappings. This has been discussed a few times in the past but we never implemented it.


I would consider checking for preemption if 't' is p2m_map_foreign and the
order is above 9 (i.e. 2MB).

How can those mappings be removed?

From any of the p2m_* functions. On Arm we don't (yet) care about which mapping is replaced.

 Is it possible for the guest to
modify such foreign super-pages?

Yes.

 Not sure all paths will be easy to
audit for preemption if it's more than relinquish_p2m_mapping() that
you need to adjust.

I thought about it yesterday. But I came to the conclusion that if we have any concern about removing a 1GB foreign superpage, then we would already have the problem today, as a domain can contiguously map 1GB worth of foreign mappings using small pages.

I went through the various code paths and, I believe, the only concern would be if an admin decides to change the value of max_order() using the command-line option "memop-max-order".

Cheers,

--
Julien Grall



 

