
Re: [Xen-devel] [PATCH RFC 1/4] x86/mm: Shadow and p2m changes for PV mem_access

>>> Especially with the above note about GFN space being effectively
>>> unbounded for PV guests, you need to find a different data structure
>>> to represent the information you need. Perhaps something similar to
>>> what log-dirty mode uses, except that you would need more levels in
>>> the extreme case, and hence it might be worthwhile making the
>>> actually used number of levels dynamic?
>> I am a little confused here. In my initial discussions about this I
>> was told that PV guests have bounded and contiguous memory. This is
>> why I took the approach of an indexable array.
>> "OTOH, all you need is a byte per pfn, and the great thing is that in
>> PV domains, the physmap is bounded and continuous. Unlike HVM and its
>> PCI holes, etc, which demand the sparse tree structure. So you can
>> allocate an easily indexable array, notwithstanding super page
>> concerns (I think/hope)."
>> http://lists.xenproject.org/archives/html/xen-devel/2013-11/msg03860.html
>> Did I misunderstand something here?
>> PS: I have CCed Andres too so that he can clarify.
> I can muddy things up further.
> In the good olden days that was the case. It seems that since Linux
> 3.x, PV dom0 is defined with a physmap as large as the host, with
> plenty of holes in it which are plugged with the m2p override. I am
> unfortunately hazy on the details. A situation like this will lead to
> max_pfn being very different from tot_pages. This is common in HVM.

So let me see if I understand correctly. Say I have a 4GB PV domain on a
32GB host, whose MFNs are spread across the 10-12GB and 14-16GB ranges.
If I call get_gpfn_from_mfn() on that 10-12GB / 14-16GB range, the
returned GFNs need not be 0-4GB, but could for example be 0-3GB / 6-7GB?
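If so, a toy model of the situation (all numbers made up; gfn_of[] just
stands in for what get_gpfn_from_mfn() might return for the domain's
pages, this is not Xen code) would be:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical PV physmap with a hole: 4 pages owned (tot_pages == 4),
 * but the GFNs are not contiguous, so the highest GFN is far beyond
 * tot_pages - 1. */
static const unsigned long gfn_of[] = {
    0x000000UL, 0x000001UL,   /* low GFNs...                 */
    0x180000UL, 0x180001UL,   /* ...then a large hole before */
};
#define TOT_PAGES (sizeof(gfn_of) / sizeof(gfn_of[0]))

/* Highest GFN in use: this, not tot_pages, bounds a flat array. */
static unsigned long max_gfn(void)
{
    unsigned long m = 0;
    for (size_t i = 0; i < TOT_PAGES; i++)
        if (gfn_of[i] > m)
            m = gfn_of[i];
    return m;
}
```

A flat byte-per-gfn array would then need max_gfn() + 1 entries (0x180002
here) for a 4-page domain, which is exactly why the indexable-array
approach falls over and a sparse structure is needed.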
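If that is the case, then for the dynamic-depth structure suggested
above, is something along these lines what you have in mind? A rough
sketch only (names like access_table/access_slot are made up, and this
allocates with calloc() rather than domheap pages): a radix-style table
with one byte per pfn, leaves allocated lazily like the log-dirty
bitmap, and the number of levels chosen from the highest pfn:

```c
#include <stdint.h>
#include <stdlib.h>

#define LEVEL_SHIFT   9                      /* 512 entries per level */
#define LEVEL_ENTRIES (1UL << LEVEL_SHIFT)

struct access_table {
    unsigned int levels;   /* depth chosen from highest pfn: dynamic */
    void *root;            /* interior levels: void *[]; leaf: uint8_t[] */
};

/* How many levels are needed to cover pfns up to max_pfn. */
static unsigned int levels_for(unsigned long max_pfn)
{
    unsigned int l = 1;
    while (max_pfn >> (l * LEVEL_SHIFT))
        l++;
    return l;
}

/* Walk to the byte for pfn, allocating intermediate nodes on demand
 * so holes in the physmap cost nothing. */
static uint8_t *access_slot(struct access_table *t, unsigned long pfn)
{
    void **node = &t->root;
    unsigned int l;

    for (l = t->levels; l > 1; l--) {
        if (!*node)
            *node = calloc(LEVEL_ENTRIES, sizeof(void *));
        if (!*node)
            return NULL;    /* allocation failure */
        node = (void **)*node +
               ((pfn >> ((l - 1) * LEVEL_SHIFT)) & (LEVEL_ENTRIES - 1));
    }
    if (!*node)
        *node = calloc(LEVEL_ENTRIES, sizeof(uint8_t));
    if (!*node)
        return NULL;
    return (uint8_t *)*node + (pfn & (LEVEL_ENTRIES - 1));
}
```

With that shape, a sparse physmap only pays for the paths actually
touched, and growing the guest just means recomputing levels_for().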


Xen-devel mailing list